Thursday, August 4, 2016

Java HotSpot(TM) Client VM warning: Cannot open file /cassandra37/logs/gc.log due to No such file or directory

Error
Java HotSpot(TM) Client VM warning: Cannot open file /cassandra37/logs/gc.log due to No such file or directory

Problem 
The log directory does not exist.

Solution
Create the log directory at the specified path.

Problem
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
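
This heap error is typical of a 32-bit (Client) JVM that cannot reserve a contiguous 2 GB heap. A minimal sketch of one workaround, assuming the heap is configured in conf/cassandra-env.sh (cassandra-env.ps1 on Windows); installing a 64-bit JDK is the other option:

# conf/cassandra-env.sh - the values below are only illustrative
MAX_HEAP_SIZE="1G"      # request less than the 2 GB the 32-bit JVM cannot reserve
HEAP_NEWSIZE="256M"     # young-generation size, typically a quarter of MAX_HEAP_SIZE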

No single argument constructor found for class [Ljava.lang.String;; in 'reader', line 10, column 1:

Error
No single argument constructor found for class [Ljava.lang.String;; in 'reader', line 10, column 1:
   cluster_name: 'Test Cluster'

Problem:
While modifying the cassandra.yaml file, you might have changed the format of the directory entries.

Solution:
Change the directory path format as given below (it must match exactly):

data_file_directories:
    - /cassandra/storage/data
commitlog_directory: /cassandra/storage/commitlog
saved_caches_directory: /cassandra/storage/saved_caches

Workaround
In my case I had written data_file_directories as below:
data_file_directories: /cassandra/storage/data
When I changed the path to the format given above, it worked fine.

Wednesday, August 3, 2016

WARNING! Powershell script execution unavailable.

Error

WARNING! Powershell script execution unavailable.
   Please use 'powershell Set-ExecutionPolicy Unrestricted'
   on this user-account to run cassandra with fully featured
   functionality on this platform.

 
Problem
PowerShell has four execution modes (Restricted, AllSigned, RemoteSigned, Unrestricted).
Your machine's PowerShell execution policy is currently set to Restricted.

Solution
Change the PowerShell execution policy. First open PowerShell with administrative rights ("Run as Administrator"), then run:

Set-ExecutionPolicy Unrestricted


Check
Check the status of the PowerShell execution policy using the command below:

Get-ExecutionPolicy

Friday, July 29, 2016

How to load data into Hadoop?

We need to use Hive to create the tables and load the data. I am using some pre-downloaded files that I found after a lot of searching on Google; all the files contain tab-separated data.


  1. Create the tables from Hive
  2. Go to HADOOP_HOME and copy the files from the local root directory to the target HDFS directory (hadoop fs -put)
  3. Open the Hive shell
  4. Load the data into the tables using the LOAD DATA command


create table users (id STRING, birth_date STRING, gender STRING) ROW FORMAT DELIMITED FIELDS TERMINATED by '\t' stored as textfile tblproperties ("skip.header.line.count"="1");

create table products (url STRING, category STRING) ROW FORMAT DELIMITED FIELDS TERMINATED by '\t' stored as textfile tblproperties ("skip.header.line.count"="1");

hadoop fs -put /urlmap.tsv /user/hadoop/products.tsv
hadoop fs -put /regusers.tsv /user/hadoop/users.tsv

hive>
LOAD DATA INPATH '/user/hadoop/products.tsv' OVERWRITE INTO TABLE products;
LOAD DATA INPATH '/user/hadoop/users.tsv' OVERWRITE INTO TABLE users;
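
To confirm that the load worked, a quick check from the Hive shell (a minimal sketch using the tables created above):

-- run at the hive> prompt after the LOAD DATA statements
SHOW TABLES;
SELECT * FROM users LIMIT 5;
SELECT COUNT(*) FROM products;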


Wednesday, July 27, 2016

How to clear the cached credentials (saved password) of a shared folder


Follow the steps below to resolve the problem:
  1. Open cmd
  2. Execute the command "net use"
  3. From the Remote column, copy the path (IP address) of the server whose cached credentials you want to delete
  4. Execute the command below
    • net use \\192.168.1.1\folder /delete   (replace the path with the string copied in step 3)
  5. Re-execute the net use command and verify that the entry has been removed
  6. Now go to Task Manager
    • kill explorer.exe and start it again (a consolidated command sketch follows this list)
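
For convenience, the same steps as a single command-prompt sketch (the share \\192.168.1.1\folder is only an example; use the Remote entry reported by net use):

:: list cached connections and note the Remote path
net use

:: remove the cached connection for that share
net use \\192.168.1.1\folder /delete

:: restart Explorer so the change takes effect
taskkill /f /im explorer.exe
start explorer.exe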

Tuesday, July 19, 2016

SecondaryNameNode Inconsistent checkpoint fields

2016-07-19 12:43:28,702 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -60 namespaceID = 1655575768 cTime = 0 ; clusterId = CID-90fb1076-4e1e-4ab6-bb76-37177e31ad64 ; blockpoolId = BP-532229130-127.0.0.1-1468908471777.
Expecting respectively: -60; 724011492; 0; CID-f9fc0705-4a77-49d1-9220-874fd5f30efe; BP-78957080-127.0.0.1-1456924944705.
at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:134)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:531)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:395)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:361)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:357)
at java.lang.Thread.run(Thread.java:745)


SOLUTION

  1. Stop all Hadoop services (stop-all.sh)
  2. Open the log file hadoop-root-secondarynamenode-localhost***.log
  3. Look for the path of the SecondaryNameNode checkpoint directory
  4. Delete the SecondaryNameNode directory
  5. Start the Hadoop services (a command sketch follows this list)
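
The same steps as a command sketch (the checkpoint path shown is only an assumption for a default setup; use the directory found in the SecondaryNameNode log or in dfs.namenode.checkpoint.dir):

stop-all.sh

# the path below is illustrative - take the real one from the secondarynamenode log
rm -rf /tmp/hadoop-root/dfs/namesecondary/*

start-all.sh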

Hadoop datanode/namenode error

java.io.IOException: Incompatible clusterIDs in /home/hadoop/hadoopdata/hdfs/datanode: namenode clusterID =  datanode clusterID = 
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
at java.lang.Thread.run(Thread.java:745);


SOLUTION

  1. Check your hdfs-site.xml file to see where dfs.data.dir is pointing to
  2. Delete that folder
  3. Then stop and start the DataNode (a command sketch follows this list)
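
A minimal command sketch of the same fix (the data directory path is taken from the error above; hadoop-daemon.sh is assumed to be on the PATH of a standard Hadoop 2.x install):

# stop the datanode, wipe its storage directory, then start it again
hadoop-daemon.sh stop datanode
rm -rf /home/hadoop/hadoopdata/hdfs/datanode/*
hadoop-daemon.sh start datanode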


Check all your log files for any errors!

There is a chance you will get the error below:

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /path/to/hadoop/storage/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-07-19 11:33:16,737 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-07-19 11:33:16,838 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-07-19 11:33:16,839 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-07-19 11:33:16,839 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-07-19 11:33:16,839 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.

PROBLEM
This happens because, in the steps above, you deleted both the namenode and datanode folders.

SOLUTION
You need to format the namenode.

Steps to follow:

  1. Stop the DataNode
  2. Delete the folder
  3. Execute "hdfs namenode -format"
  4. Start the DataNode (a command sketch follows this list)
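
The same sequence as a command sketch (paths and script names assume a standard Hadoop 2.x layout; adjust them to your hdfs-site.xml):

hadoop-daemon.sh stop datanode
rm -rf /home/hadoop/hadoopdata/hdfs/datanode/*
hdfs namenode -format
hadoop-daemon.sh start datanode
# if the NameNode was stopped as well, start it again: hadoop-daemon.sh start namenode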

Thursday, July 14, 2016

Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

ERROR
hive>show databases;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

SOLUTION
chown -R hdfs:hadoop *


If the above solution doesn't work, then try to remove the *.lck files using the command below:

[root@localhost sbin]# rm metastore_db/*.lck

RuntimeException org.apache.hadoop.security.AccessControlException

ERROR
RuntimeException org.apache.hadoop.security.AccessControlException: Permission denied: user=sa, access=WRITE, inode="/tmp/hive-root":root:supergroup:drwxr-xr-x

SOLUTION
The problem is solved by changing dfs.permissions. Locate the hdfs-site.xml file and, using the vim editor, change the property value to false:


 <property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>



Wednesday, July 6, 2016

The system cannot find the batch label specified - nodemanager

It seems you are trying to start YARN (or another Hadoop service) on Windows, using files downloaded from the internet and then copied into the "bin" and "etc/hadoop" directories.

PROBLEM
Some of the *.cmd files have LF as the line terminator (use Notepad++ to see this).

SOLUTION
Open the file in Notepad++ and convert the line endings as below:

Goto> Edit> EOL Conversion> Windows Format

After doing this, you can see that all line terminators in the file have been changed to CRLF.

Saturday, June 11, 2016

Want to save Kettle logging in a database?

We can easily save the logs generated by Kettle in database tables. The easiest way is to change the properties of the job.

Create the table at the database level. I am using SQL Server in my case, and the fields used in the table below are required fields.
We have to include all of these fields when creating the table.

Create table logging
(
 id_job int
, channel_id varchar(100)
, jobname  varchar(100)
, status varchar(100)
, lines_Read int
, lines_written int
, lines_updated int
, errors int
, startdate datetime
, enddate datetime
, logdate datetime
, depdate datetime
, replaydate datetime
, log_field varchar(max)
);

Now create a job in Kettle.
Double-click on the job (open area) - Job Properties - Log (tab)




Remember that the fields selected using the checkboxes in the Log Table Fields dialog box must exist in the table.
In addition, you can create any number of extra fields in the same table (subject to the limits of the table and database).

Tuesday, June 7, 2016

database url and driver

sqlserver 
database url
jdbc:sqlserver://192.168.1.1:1433;databaseName=XYZ-DatabaseName


spago bi
url - jdbc:sqlserver://192.168.1.1
driver - com.microsoft.sqlserver.jdbc.SQLServerDriver


jtds driver
database url - jdbc:jtds:sqlserver://192.168.1.1:1433/XYZ-DatabaseName


oracle
url - jdbc:oracle:thin:@localhost:1531:XYZ-DatabaseName
driver - oracle.jdbc.driver.OracleDriver
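
A minimal Java sketch showing how one of these URL/driver pairs is typically used (the host, credentials, and database name are placeholders; the matching JDBC driver jar must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcUrlTest {
    public static void main(String[] args) throws Exception {
        // SQL Server URL from the list above; swap in the jTDS or Oracle URL as needed
        String url = "jdbc:sqlserver://192.168.1.1:1433;databaseName=XYZ-DatabaseName";
        try (Connection con = DriverManager.getConnection(url, "sa", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select 1")) {
            while (rs.next()) {
                System.out.println("connected, got: " + rs.getInt(1));
            }
        }
    }
}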

Sunday, June 5, 2016

Playing With SQL Queries


1. Create triangle using SQL Query

SQL> select lpad(' ',8-level, ' ') ||rpad('*',level-2, '*')
  2  || rpad('*',level-1, '*')
  3   as  a
  4  from dual connect by level < 8
  5  order by level
  6  /

A
---------------

      *
     ***
    *****
   *******
  *********
 ***********

7 rows selected.


Monday, May 30, 2016

Create Folder/Subfolder in transformation java script

We have a step in the job with which we can create folders and sub-folders.

Magically, we also have the "Modified Java Script Value" step in transformations, with which we can create folders and subfolders on the filesystem.

Steps :

  • Create a new transformation and save it
  • Add "Table Input" - "Select Values" - "Modified Java Script Value" steps to the transformation
  • The Table Input step contains: select 'temp_folder' as path
  • The Select Values step contains an entry for the field path
  • The Modified Java Script Value step contains: createFolder('e:/'+path);
  • Now save and execute it


Sunday, May 8, 2016

Get the list of all tables using JDBC driver using kettle

If you need the list of tables, columns, and so on...

We can easily get all of this information using a JDBC connection.

Download the transformation from below link

http://wiki.pentaho.com/display/EAI/JDBC+Metadata

List of outputs and the functions used (a Java sketch follows the list):

  1. data types output - function used getTypeInfo()
  2. tables output - function used getTables()
  3. columns output - function used getColumns()
  4. primary key output - function used getPrimaryKeys()
  5. imported key output - function used getImportedKeys()
  6. index info output - function used getIndexInfo()
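
These outputs map to the standard java.sql.DatabaseMetaData calls. A minimal stand-alone Java sketch (the connection URL and credentials are placeholders) that lists the tables and their columns:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class JdbcMetadataDump {
    public static void main(String[] args) throws Exception {
        // placeholder connection details - use the URL/driver of your own database
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://192.168.1.1:1433;databaseName=XYZ-DatabaseName", "sa", "password")) {
            DatabaseMetaData md = con.getMetaData();
            // list all tables, then the columns of each one
            try (ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (tables.next()) {
                    String table = tables.getString("TABLE_NAME");
                    System.out.println("TABLE: " + table);
                    try (ResultSet cols = md.getColumns(null, null, table, "%")) {
                        while (cols.next()) {
                            System.out.println("  " + cols.getString("COLUMN_NAME")
                                    + " " + cols.getString("TYPE_NAME"));
                        }
                    }
                }
            }
        }
    }
}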


Saturday, May 7, 2016

Define Variable for Database Connection in kettle

Instead of using a static database connection in the Spoon UI, we can define database connection variables using the kettle.properties file.

To define the variables we need to make changes in two files:
1. kettle.properties
2. shared.xml

Content of shared.xml

<?xml version="1.0" encoding="UTF-8"?>
<sharedobjects>
  <connection>
    <name>TestConnection</name>
    <server>${DBHostname}</server>
    <type>MSSQL</type>
    <access>Native</access>
    <database>${DBName}</database>
    <port>1433</port>
    <username>${DBUsername}</username>
    <password>${DBPassword}</password>
    <servername/>
    <data_tablespace/>
    <index_tablespace/>
    <attributes>
      <attribute><code>FORCE_IDENTIFIERS_TO_LOWERCASE</code><attribute>N</attribute></attribute>
      <attribute><code>FORCE_IDENTIFIERS_TO_UPPERCASE</code><attribute>N</attribute></attribute>
      <attribute><code>IS_CLUSTERED</code><attribute>N</attribute></attribute>
      <attribute><code>MSSQL_DOUBLE_DECIMAL_SEPARATOR</code><attribute>N</attribute></attribute>
      <attribute><code>PORT_NUMBER</code><attribute>1433</attribute></attribute>
      <attribute><code>PRESERVE_RESERVED_WORD_CASE</code><attribute>N</attribute></attribute>
      <attribute><code>QUOTE_ALL_FIELDS</code><attribute>N</attribute></attribute>
      <attribute><code>SUPPORTS_BOOLEAN_DATA_TYPE</code><attribute>Y</attribute></attribute>
      <attribute><code>SUPPORTS_TIMESTAMP_DATA_TYPE</code><attribute>Y</attribute></attribute>
      <attribute><code>USE_POOLING</code><attribute>N</attribute></attribute>
    </attributes>
  </connection>
</sharedobjects>



Content of kettle.properties
#TestConnection
DBHostname=localhost
DBName=XYZ0001
DBUsername=sa
DBPassword=******

Now close your Spoon UI if it is open
Move to the .kettle directory
Delete any backup files if they exist
Start spoon.bat

Friday, April 1, 2016

ORA-00119: invalid specification for system parameter LOCAL_LISTENER

ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_ORCL'

Problem
Open the pfile at E:\app\*****\admin\orcl\pfile in an editor; it contains:

local_listener=LISTENER_ORCL

Solution
Update the entry as below (orcl is the local database SID):
local_listener=orcl



Then start the database with the pfile, recreate the spfile from it, and restart:

startup pfile='E:\app\*****\admin\orcl\pfile\init.ora'

create spfile from pfile='E:\app\*****\admin\orcl\pfile\init.ora';

shutdown immediate

startup

Thursday, March 17, 2016

ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes

I issued a command to take a database backup in Oracle and ended up with an error after a few minutes:

expdp scott/tiger dumpfile=scott.dmp directory=DATA_PUMP_DIR parallel=4 status=1

Worker 4 Status:
  Process Name: DW03
  State: EXECUTING
  Object Schema: scott
  Object Name: emp
  Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 1
  Total Objects: 1014
  Worker Parallelism: 3
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes


SOLUTION
Use %U in the dumpfile name; expdp will automatically create as many dump files as needed for the size of the database:

expdp scott/tiger dumpfile=scott_%U.dmp directory=DATA_PUMP_DIR parallel=4

Tuesday, March 15, 2016

Want to divide a table into 10 equal parts

If you want to divide a table into 10 equal parts, you need the min and max of a number field (the field which holds the primary key) for each part.
Instead of doing this manually, you can easily identify the min and max for each division using the query below:

select
min(object_id) as v_start
,max(object_id) as v_end
,num
from
(
select
object_id,
--// divide all rows into group numbers
ntile(10) over (order by object_id) num
from all_objects
)
group by num;

Sunday, March 13, 2016

How to auto delete macro code after execution

We can also auto-delete macro code written in an Excel workbook. Sometimes workbooks are generated through Java/.NET or other code; they can contain macro code or have compatibility issues. To remove those issues, we have to delete the macro code.

Let's edit the macro code from the previous post.

Sub Auto_Open()

    Dim wk As Worksheet
    ' Declare object variable
    Dim vbCom As Object
    Set vbCom = Application.VBE.ActiveVBProject.VBComponents
   
    MsgBox "Fill Values in cell of all sheets"
    For Each wk In ThisWorkbook.Worksheets
    wk.Activate
    With wk.Range("I5")
    .FormulaR1C1 = "testqqq"
    End With
    Next wk

vbCom.Remove VBComponent:= _
vbCom.Item("Module1")
   
End Sub

 
* Note - the code highlighted in yellow (the vbCom declaration and the vbCom.Remove call) is newly added compared with the previous post.

Now save the macro code, then close and reopen the workbook.

ERROR:
Run-time error '1004':

Programmatic access to Visual Basic Projects is not trusted


SOLUTION
Look at the screenshot below, follow the same process, and enable trusted access to the VBA project object model.

   

Close and reopen the workbook. It works!

Auto Open [Run] macro code - when workbook open

Let's have a look at an older post, which shows how to execute macro code for each individual sheet of a workbook.

http://j4info.blogspot.in/2013/05/execute-macro-for-each-sheet-in-workbook.html

In this post we will see how to make the macro code run automatically, immediately after the workbook is opened.

Sub Auto_Open()
    Dim wk As Worksheet
    MsgBox "Fill Values in cell of all sheets"
    For Each wk In ThisWorkbook.Worksheets
    wk.Activate
    With wk.Range("I5")
    .FormulaR1C1 = "testqqq"
    End With
    Next wk
End Sub


Use Auto_Open(), that's it.

Friday, February 26, 2016

Add/upload documents in image or varbinary datatype

create table documents(id int not null, doctype varchar(4), content varbinary(max));

INSERT INTO documents(id, doctype, content)
SELECT
    2 as id,
    '.pdf' AS doctype,
    * FROM OPENROWSET(BULK N'D:\common\ebooks2\India-Year-Book-2009.pdf', SINGLE_BLOB) AS Document;
GO

select * from documents;
go

How to pull blob content out to a directory using kettle

How to proceed:
1. Create a table and add some blob data
2. Create a transformation and execute it


1. Create a table and add blob data
Follow the steps below:

CREATE TABLE t (id NUMBER(10),doc BLOB, name varchar2(30) );

CREATE SEQUENCE t_seq;

CREATE OR REPLACE DIRECTORY documents AS 'd:\app';

CREATE OR REPLACE PROCEDURE upload_t (file_name_v IN varchar) AS
v_bfile BFILE;
v_blob BLOB;
BEGIN
INSERT INTO t (id, doc, name) VALUES (t_seq.NEXTVAL, empty_blob(), file_name_v) RETURN doc INTO v_blob;
v_bfile := BFILENAME('DOCUMENTS', file_name_v);
Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
Dbms_Lob.Loadfromfile(v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
Dbms_Lob.Fileclose(v_bfile);

COMMIT;
end;
/

EXEC upload_t('New Text Document.txt');


2. Build transformation as per below image:




Add the JavaScript below to the "Modified Java Script Value" step:


var File = java.io.File;
var FileOutputStream = java.io.FileOutputStream;
var BufferedOutputStream = java.io.BufferedOutputStream;
var out_msg;
var err_msg;
var filename = substr(getVariable('Internal.Transformation.Filename.Directory', '') +'/'+file_name, 7);
var outBytes = file;
try{
    var f = new File(filename);
      if (!f.exists()){
        f.createNewFile();
    }
 
    var fos = new FileOutputStream(f);
    var bos = new BufferedOutputStream(fos);
    bos.write(outBytes);
    bos.flush();
    bos.close();
 
    out_msg = 'succeed';
//    err_msg = 'OK';
 
}
catch (e){
    out_msg = 'Failed';
    err_msg = e.javaException.toString();
}


Save and execute transformation

It will fetch the blob data and create the file in the same path as the transformation. Moreover, you can modify the JavaScript to use a custom path.

NOTE* - file_name must contain the file name including its extension (format: filename.extension).

BENEFIT - this code works for both the Oracle and SQL Server databases (these two are the ones I have tested):
Oracle - Blob datatype
SQLServer - Image and Varbinary(max) datatype

Wednesday, February 24, 2016

Call job through kitchen, stop displaying error on batch [kettle]

Scenario -

I am calling a job through a batch file, and the job verifies whether a database connection is active or inactive. If the connection parameters are wrong, the 'Check Db connections' step throws an error in the batch window.

(Batch file code as below):


call kitchen /file:"%CD%\Data_Migration\zzzzzzzzz.kjb" /norep:Y /level:basic > ".\TEST%systemtime%.log"


Open CMD and execute the *.bat file:

E:\z_temp\kettle_5_4>test.bat
2016/02/24 13:36:56 - Check Db connections 2 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) :
2016/02/24 13:36:56 - Check Db connections 2 - Error occurred while trying to connect to the database
2016/02/24 13:36:56 - Check Db connections 2 -
2016/02/24 13:36:56 - Check Db connections 2 - Error connecting to database: (using class net.sourceforge.jtds.jdbc.Driver)
2016/02/24 13:36:56 - Check Db connections 2 - Login failed for user 'sa'.
2016/02/24 13:36:56 - Check Db connections 2 -
2016/02/24 13:36:56 - Check Db connections 2 - ]

Solution

If you don't want the errors displayed in the console window, redirect stderr into the log file by changing the batch call as below:


call kitchen /file:"%CD%\Data_Migration\zzzzzzzzz.kjb" /norep:Y /level:basic > ".\TEST%systemtime%.log" 2>&1


E:\z_temp\kettle_5_4>test.bat
E:\z_temp\kettle_5_4>

Sunday, February 14, 2016

Function to process each character of String [SQL server]

Sometimes nested REPLACE functions will not work, so what is the solution?

Want to process each character of a string?


--drop function dbo.replacechars

create FUNCTION replacechars (@InStr varchar(max))
RETURNS varchar(max)
AS
BEGIN
declare @alter_string varchar(max) , @length int , @cnt int , @StoreChar varchar(100)
SET @alter_string = ''
SET @length = 0
SET @cnt = 1
SET @length = LEN(@InStr)
IF @length > 0
BEGIN WHILE @cnt <= @length
BEGIN
      SET @StoreChar = SUBSTRING(@InStr, @cnt, 1)
      set @StoreChar =
            case when isnull(@StoreChar,' ') = ';' then '&#.x3b;'
                  when isnull(@StoreChar,' ') = '#' then '&#.x23;'
                  when isnull(@StoreChar,' ') = '&' then '&#.x26;'
                  when isnull(@StoreChar,' ') = '!' then '&#.x21;'
                  when isnull(@StoreChar,' ') = '@' then '&#.x40;'
                  when isnull(@StoreChar,' ') = '$' then '&#.x24;'
                  when isnull(@StoreChar,' ') = '%' then '&#.x25;'
                  when isnull(@StoreChar,' ') = '^' then '&#.x5e;'
                  when isnull(@StoreChar,' ') = '*' then '&#.x2a;'
                  when isnull(@StoreChar,' ') = '(' then '&#.x28;'
                  when isnull(@StoreChar,' ') = ')' then '&#.x29;'
                  when isnull(@StoreChar,' ') = '=' then '&#.x3d;'
                  when isnull(@StoreChar,' ') = '+' then '&#.x2b;'
                  when isnull(@StoreChar,' ') = '{' then '&#.x7b;'
                  when isnull(@StoreChar,' ') = '}' then '&#.x7d;'
                  when isnull(@StoreChar,' ') = '[' then '&#.x5b;'
                  when isnull(@StoreChar,' ') = ']' then '&#.x5d;'
                  when isnull(@StoreChar,' ') = '\' then '&#.x5c;'
                  when isnull(@StoreChar,' ') = '|' then '&#.x7c;'
                  when isnull(@StoreChar,' ') = '''' then '&#.x27;'
                  when isnull(@StoreChar,' ') = ':' then '&#.x3a;'
                  when isnull(@StoreChar,' ') = '"' then '&#.x22;'
                  when isnull(@StoreChar,' ') = '<' then '&#.x3c;'
                  when isnull(@StoreChar,' ') = '>' then '&#.x3e;'
                  when isnull(@StoreChar,' ') = '?' then '&#.x3f;'
                  when isnull(@StoreChar,' ') = '/' then '&#.x2f;'
                  when isnull(@StoreChar,' ') = CHAR(10) then '&#.xd;&#.xa;'
                  else isnull(@StoreChar,' ') end
BEGIN SET @alter_string = @alter_string + isnull(@StoreChar,' ') END
SET @cnt = @cnt + 1
END END
RETURN @alter_string END

Note - please remove the . (dot) from the strings written after the THEN clause (e.g. '&#.x3b;' should be the entity without the dot); the dots are only there to stop the entities rendering in this post.

select dbo.replacechars('a ; f5637# $ % ^ ')

Replace Special Char from string for "table input step" kettle transformation

I faced a requirement to write dynamic SQL scripts for a Kettle transformation. SQL scripts are generally written in the "Table Input" and "SQL script" steps.

I made the following observations with two different versions of the data integration tool (Kettle by Pentaho): 4.4 and 6.0.

Version 4.4
When we wrote an SQL script in a transformation (Table Input step) and opened the saved transformation in Notepad, I observed that the SQL script was in the same format, enclosed in HTML tags.

Version 5.4 and above (my observation; the version numbers are not exact)
When I wrote an SQL script in a transformation and opened the transformation in Notepad, I observed that the SQL script written in Spoon (developer mode) was now in a changed format: the special characters used in the SQL script had been converted to numeric codes.

I have listed out some special char conversions, as in below image:



Now the main task is how to replace all of the above characters in an incoming string from any source.

A nested REPLACE function will not work here, because the replacement strings themselves contain special characters.

So I created my own function, which processes each character of the incoming string.

To see the function please refer to below link

Function to process each character of String [SQL server]

Friday, February 12, 2016

Split string on the first occurrence of underscore using kettle

Given a string containing underscores (the string can have multiple underscores), we want to split it on the first occurrence of the underscore into two parts in Kettle. To achieve this I used the 'Modified Java Script Value' step, which is quite easy and can be done in a single step.


Below is the code for ‘Modified Java Script Value’ step:

var ToBeSplit = InString;
var IndexLevel = ToBeSplit.indexOf("_");
// left part: everything before the first underscore
var LeftTrim = ToBeSplit.substr(0, IndexLevel);
// right part: everything after the first underscore
var RightTrim = ToBeSplit.substr(IndexLevel + 1);

For example, "ABC_DEF_GH" is split into LeftTrim = "ABC" and RightTrim = "DEF_GH".


Saturday, January 9, 2016

SQLServer - Integer Format

Query1
select id from table_name

Query2
select cast(format(row_number() over (order by id)  ,'0000#') as varchar) from table_name

Results of above two query in below picture.




I ran the same query on SQL Server 2008; since the FORMAT function is not available in 2008, below is the alternative for the same query in SQL Server 2008.

Error
Msg 195, Level 15, State 10, Line 1
'format' is not a recognized built-in function name.


select RIGHT ('0000'+ CAST(row_number() over (order by id) AS varchar), 5) from table_name


Friday, January 8, 2016

SQLSERVER - licensed limit of 10240 MB per Database

After failed for Database (Microsoft.SqlServer.Smo)

Error:
An exception occurred while executing a Transact-SQL statement or batch.
(Microsoft.SqlServer.ConnectionInfo)

CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database
size would exceed your licensed limit of 10240 MB per database.
(Microsoft SQL Server, Error:1827)



Limitation:
If you are using the MS SQL Server Express edition, you will not be able to create a database larger than 10 GB.

Thursday, January 7, 2016

Kettle log contains string - ERROR (version 5.4.0.1-130, build 1 from ** by buildguy)

ERROR (version 5.4.0.1-130, build 1 from ** by buildguy)

In the Kettle log I found this string printing continuously until the job completed.

How to remove this string from logs?


  • Open the job for which you are generating logs (in my case I have another "Job" step in the main job and generate separate logs for that job)
  • Double-click on the "Job Entry" step
  • A popup appears (heading - Executing a Job)
  • Move to the "Logging Settings" tab
  • Change the Loglevel from "Error Logging Only" to "Basic"

Friday, January 1, 2016

Table Input/Evaluate rows number in a table/Execute SQL step, On error abort all job sequence

This error is related to my previous post:



kettle - Filter step not working initialize both input steps  link


Picture 1
I have a set of queries, as you can see in the picture above highlighted in the green box, bearing types of 's' and 't'. That means I have to filter records on the field "type":

If type - 'S' then - execute query with connection 1
If type - 'T' then - execute query with connection 2

In the previous post I suggested using the "Simple evaluation" step at the job level to filter rows, and it works fine,
but I faced another condition along the way:

REQUIREMENT
I have to execute all four queries with their respective type of connection, and log the result, i.e. query execution success or failure.

OOPS, what happened?
Look at the query highlighted in yellow in the picture; it has a wrong table name, so the query obviously fails during execution, and that aborts the whole job process.
This does not meet my scenario (the scenario is to log the result for both query execution success and failure).

SOLUTION
To meet the above condition I changed the orientation of my data set, as highlighted in the RED BOX.
I set a variable for each different column.

Test the query at the job level; see the image below (job in a loop).


