NiFi connection to SqlServer for ExecuteSQL - sql-server

I'm trying to import some data from different SqlServer databases using ExecuteSQL in NiFi, but it returns an error. I've already imported a lot of other tables from MySQL databases without any problem, and I'm trying to use the same workflow structure for the SqlServer DBs.
The structure is as follows:
There's a .txt file with the list of tables to be imported.
This file is fetched, split, and updated, so there's a FlowFile for each table of each DB that has to be imported.
These FlowFiles are passed into ExecuteSQL, which executes their contents.
For example:
file.txt
table1
table2
table3
is updated into 3 different FlowFiles:
FlowFile1
SELECT * FROM table1
FlowFile2
SELECT * FROM table2
FlowFile3
SELECT * FROM table3
which are passed to ExecuteSQL.
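For completeness, the update step could look like the following, a minimal sketch assuming a ReplaceText processor rewrites each single-table FlowFile into a query (the question doesn't show which processor actually does this):
Search Value (?s)^(.*)$
Replacement Value SELECT * FROM $1
Replacement Strategy Regex Replace
Evaluation Mode Entire text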
Here is the configuration of ExecuteSQL (identical for the SqlServer tables and the MySQL ones):
[ExecuteSQL configuration screenshot]
As the only difference with the import from MySQL db is in the connectors, this is how a generic MySQL connector has been configured:
PROPERTIES
Database Connection URL jdbc:mysql://00.00.00.00/DataBase?zeroDateTimeBehavior=convertToNull&autoReconnect=true
Database Driver Class Name com.mysql.jdbc.Driver
Database Driver Location(s) file:///path/mysql-connector-java-5.1.47-bin.jar
Database User user
Password Sensitive value set
Max Wait Time 500 millis
Max Total Connections 8
Validation query No value set
And this is how a SqlServer connector has been configured:
PROPERTIES
Database Connection URL jdbc:jtds:sqlserver://00.00.00.00/DataBase;useNTLMv2=true;integratedSecurity=true;
Database Driver Class Name net.sourceforge.jtds.jdbc.Driver
Database Driver Location(s) /path/connectors/jtds-1.3.1.jar
Database User user
Password Sensitive value set
Max Wait Time -1
Max Total Connections 8
Validation query No value set
Note that one (only one!) SqlServer connector works, and its ExecuteSQL processor imports the data without any problem. Even stranger, the database reached through this connector is located in the same place as two others (the connection URL and user/password are identical), but only the first one works.
Note that I've also tried appending ?zeroDateTimeBehavior=convertToNull&autoReconnect=true to the SqlServer connections, supposing it was a date-type problem, but it didn't make any difference.
Here is the error that is being returned:
12:02:46 CEST ERROR f1553b83-a173-1c0f-93cb-1c32f0f46d1d
00.00.00.00:0000 ExecuteSQL[id=****] ExecuteSQL[id=****] failed to process session due to null; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
Error retrieved from logs:
ERROR [Timer-Driven Process Thread-49] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=****] ExecuteSQL[id=****] failed to process session due to java.lang.AbstractMethodError; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
java.lang.AbstractMethodError: null
at net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor1696.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy449.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:195)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
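The top frames point at the cause: DBCP validates pooled connections with JDBC 4's Connection.isValid(), and jTDS 1.3.1 does not provide a working implementation of that method, hence the AbstractMethodError. A hedged workaround, commonly suggested for jTDS behind connection pools but not verified against this exact flow: set a Validation query on the SqlServer DBCPConnectionPool, e.g.
Validation query SELECT 1
so that DBCP validates connections by running real SQL instead of calling isValid(). This would also fit the MySQL connectors working fine, since the MySQL driver does implement isValid().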

Related

Error running transactions on HANA database

I am running bulk transactions on a HANA database and getting the following error:
2018-01-15 10:23:33,865 ERROR c.c.t.Payment [tpcc-thread-5] UPDATE district SET d_ytd = d_ytd + 1601.0 WHERE d_w_id = 1 AND d_id = 1
com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [138]: transaction serialization failure: TrexUpdate failed on table 'SYSTEM:DISTRICT' with error: transaction order error, rc=4614
at com.sap.db.jdbc.exceptions.SQLExceptionSapDB._newInstance(SQLExceptionSapDB.java:193)
.......
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-01-15 10:23:34,139 ERROR c.c.t.Payment [tpcc-thread-5] Payment error
java.lang.Exception: Payment update transaction error
Does anybody have any idea what could be wrong?
I found out that this was a valid error. With isolation level REPEATABLE_READ or SERIALIZABLE, this problem can occur when, before one transaction ends, another transaction modifies the same data the first one was working on.
To get rid of this error, we can use isolation level READ_COMMITTED, which takes a snapshot of the data at the beginning and uses the same data throughout the transaction. It also locks the rows it is working on so that other transactions cannot modify them.
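As a minimal sketch of that fix in plain JDBC (connection details are placeholders; the jdbc:sap:// URL scheme is assumed from the exception's com.sap.db.jdbc package):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PaymentUpdate {
    public static void main(String[] args) throws Exception {
        // Placeholder host and credentials for a HANA instance.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sap://hanahost:30015", "user", "password")) {
            conn.setAutoCommit(false);
            // READ_COMMITTED avoids the serialization failure seen above
            // when a concurrent transaction touches the same rows.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE district SET d_ytd = d_ytd + ? WHERE d_w_id = ? AND d_id = ?")) {
                ps.setBigDecimal(1, new BigDecimal("1601.0"));
                ps.setInt(2, 1);
                ps.setInt(3, 1);
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}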

Kafka-connect with sqlserver

These are commands which I am running:-
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
bin/kafka-server-start etc/kafka/server.properties &
bin/schema-registry-start etc/schema-registry/schema-registry.properties &
bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlserver.properties &
bin/kafka-avro-console-consumer --new-consumer --bootstrap-server localhost:9094 --topic test3-sqlserver-jdbc-ErrorLog --from-beginning
I am trying to connect to SQL Server using the Confluent Platform (Kafka Connect) and am facing the following issues:
When I connect to the default schema, i.e. dbo, the connection is built, but it is not able to fetch data into the Kafka consumer. The connection details I am using are:
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=ErrorLogID
topic.prefix=test3-sqlserver-jdbc-
table.whitelist=ErrorLog
schema.registry=dbo
When I try to connect to any other schema, the producer throws an error. The connection details I am using are:
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=AddressID
topic.prefix=test3-sqlserver-jdbc-
table.whitelist=Address
schema.registry=Person
Error :
INFO Source task WorkerSourceTask{id=test-sqlserver-jdbc-autoincrement-0} finished
initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:138)
[2017-03-07 17:55:47,041] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}:
com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'Address'.
io.confluent.connect.jdbc.JdbcSourceTask:239)
[2017-03-07 17:55:52,124] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}: com.microsoft.sqlserver.jdbc.SQLServerException:
Invalid object name 'Address'. (io.confluent.connect.jdbc.JdbcSourceTask:239)
[2017-03-07 17:55:53,684] INFO Reflections took 9299 ms to scan
262 urls, producing 12112 keys and 79402 values
(org.reflections.Reflections:229)
[2017-03-07 17:55:57,181] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}:
com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'Address'.
(io.confluent.connect.jdbc.JdbcSourceTask:239)
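The error itself narrows this down: the querier addresses the table by its bare name (TimestampIncrementingTableQuerier{name='Address', ...}), which SQL Server resolves against the default dbo schema, so objects in the Person schema are never found; as far as I can tell, schema.registry is not a property the JDBC source connector understands. A hedged workaround, assuming query mode is acceptable, is to qualify the schema yourself (note that query and table.whitelist are mutually exclusive, and in query mode the topic name becomes topic.prefix itself, so the prefix below is adjusted):
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=AddressID
query=SELECT * FROM Person.Address
topic.prefix=test3-sqlserver-jdbc-address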

Apache NiFi: ConvertJSONToSQL (Oracle Database)

I want to convert my JSON string to a SQL statement using the ConvertJSONToSQL processor.
example: JSON string -
{"cpuwait":"0.0","servernamee":"mywindows","cpusys":"5.3","cpuidle":"77.6","datee":"29-SEP-2016","timee":"00:01:33","cpucpuno":"CPU01","cpuuser":"17.1"}
Table structure in oracle db -
CREATE TABLE cpu (
datee varchar2(15) DEFAULT NULL,
timee varchar2(10) DEFAULT NULL,
servernamee varchar2(20) DEFAULT NULL,
cpucpuno varchar2(4) DEFAULT NULL,
cpuuser varchar2(5) DEFAULT NULL,
cpusys varchar2(5) DEFAULT NULL,
cpuwait varchar2(5) DEFAULT NULL,
cpuidle varchar2(5) DEFAULT NULL
);
Configuration used for MySQL Database:
Database connection url:jdbc:mysql://localhost:3306/testnifi
Database Driver classname:com.mysql.jdbc.Driver
I successfully connected to MySQL using the DBCP connection pool with the JDBC URL, username, and password.
The ConvertJSONToSQL processor worked there, and I'm getting a valid SQL INSERT statement as output.
But when I try the same with an Oracle database, I'm getting
ERROR [Timer-Driven Process Thread-6] o.a.n.p.standard.ConvertJSONToSQL
java.sql.SQLException: Stream has already been closed
My configuration for the Oracle DB connection: [configuration screenshot]
I searched for the error on Google and found that it occurs when LONG data types are used in database tables, but I'm not using them.
I went through the source code of the ConvertJSONToSQL processor (following the stack trace) and tried to implement the same logic in Eclipse, where I don't get any error; I can connect to the database and run queries.
So is there any mistake in my configuration?
NiFi version - 0.7.0/1.0 (I'm getting the same error in both)
Java version - Java 8
Oracle DB version - Oracle Database 11g Express Edition
Complete Stack trace:
2016-10-19 07:10:06,557 ERROR [Timer-Driven Process Thread-6] o.a.n.p.standard.ConvertJSONToSQL
java.sql.SQLException: Stream has already been closed
at oracle.jdbc.driver.LongAccessor.getBytesInternal(LongAccessor.java:156) ~[ojdbc6.jar:11.2.0.1.0]
at oracle.jdbc.driver.LongAccessor.getBytes(LongAccessor.java:126) ~[ojdbc6.jar:11.2.0.1.0]
at oracle.jdbc.driver.LongAccessor.getString(LongAccessor.java:201) ~[ojdbc6.jar:11.2.0.1.0]
at oracle.jdbc.driver.T4CLongAccessor.getString(T4CLongAccessor.java:427) ~[ojdbc6.jar:11.2.0.1.0]
at oracle.jdbc.driver.OracleResultSetImpl.getString(OracleResultSetImpl.java:1251) ~[ojdbc6.jar:11.2.0.1.0]
at oracle.jdbc.driver.OracleResultSet.getString(OracleResultSet.java:494) ~[ojdbc6.jar:11.2.0.1.0]
at org.apache.commons.dbcp.DelegatingResultSet.getString(DelegatingResultSet.java:263) ~[na:na]
at org.apache.nifi.processors.standard.ConvertJSONToSQL$ColumnDescription.from(ConvertJSONToSQL.java:677) ~[nifi-standard-processors-0.7.0.jar:0.7.0]
at org.apache.nifi.processors.standard.ConvertJSONToSQL$TableSchema.from(ConvertJSONToSQL.java:621) ~[nifi-standard-processors-0.7.0.jar:0.7.0]
at org.apache.nifi.processors.standard.ConvertJSONToSQL.onTrigger(ConvertJSONToSQL.java:267) ~[nifi-standard-processors-0.7.0.jar:0.7.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-0.7.0.jar:0.7.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1054) [nifi-framework-core-0.7.0.jar:0.7.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-0.7.0.jar:0.7.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-0.7.0.jar:0.7.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:127) [nifi-framework-core-0.7.0.jar:0.7.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.7.0_40]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [na:1.7.0_40]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [na:1.7.0_40]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [na:1.7.0_40]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_40]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0_40]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_40]
It seems to be a bug in the Oracle driver. See:
https://blog.jooq.org/2015/12/30/oracle-long-and-long-raw-causing-stream-has-already-been-closed-exception/
Hibernate custom type to avoid 'Caused by: java.sql.SQLException: Stream has already been closed'
The second item gave me the workaround: basically, add the following argument in bootstrap.conf:
java.arg.xx=-Doracle.jdbc.useFetchSizeWithLongColumn=true
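For reference, a hedged sketch of the same workaround outside NiFi (e.g., for the Eclipse test; URL, user, and password are placeholders). The jOOQ article above explains the mechanism: the column-metadata result set that ConvertJSONToSQL reads contains a LONG column, which the Oracle driver streams and discards as the cursor advances unless this property is set:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class OracleMetadataTest {
    public static void main(String[] args) throws Exception {
        // Equivalent to the bootstrap.conf java.arg; must be set before connecting.
        System.setProperty("oracle.jdbc.useFetchSizeWithLongColumn", "true");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password");
             // ConvertJSONToSQL builds its table schema from this metadata call;
             // without the property it can fail with "Stream has already been closed".
             ResultSet rs = conn.getMetaData().getColumns(null, "MYUSER", "CPU", null)) {
            while (rs.next()) {
                System.out.println(rs.getString("COLUMN_NAME") + " " + rs.getString("TYPE_NAME"));
            }
        }
    }
}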

How to resolve Application timeout issue due to SQL Query in 'Killed/ROLLBACK' Scenario

I have an application with a database in SQL Server 2012, and the application uses Entity Framework to communicate with the database. The application has a function that updates a record in a table based on a WHERE condition, and the primary key field is the one used in the WHERE condition. So the update happens only to a single record (there is no loop or anything, just an update to a single record). This is the background.
Here is the issue I'm now facing: I get a timeout error from the application when I invoke the functionality to update a record in the table (as mentioned above). I checked the query execution in SQL Server using Activity Monitor, and under the Processes tab I could see that the query's Command goes to KILLED/ROLLBACK after some wait types (like LOGBUFFER, PAGEIOLATCH_XX, etc.).
I tried to execute the update query directly in SQL Server, and it also does not respond. So it's clear the issue is with SQL Server, and it takes too much time to execute the update query. But the execution plan looks good, and the PK field is used in the WHERE condition. Is it something related to disk latency?
Note: This issue is not consistent; sometimes it works.
Here are the file stats. How can I interpret these numbers or reach a conclusion from them?
My DB
DbId FileId TimeStamp NumberReads BytesRead IoStallReadMS NumberWrites BytesWritten IoStallWriteMS IoStallMS BytesOnDisk FileHandle
2 1 -1152466625 21199845 1351872315392 2528572322 21869447 1424883785728 10201419039 12729991361 28266332160 0x0000000000000C64
2 2 -1152466625 1063 45187072 87119 1013000 61628433920 178901888 178989007 6945505280 0x0000000000000CC4
TempDB
DbId FileId TimeStamp NumberReads BytesRead IoStallReadMS NumberWrites BytesWritten IoStallWriteMS IoStallMS BytesOnDisk FileHandle
18 1 -1152466625 390905 27728437248 52640514 196501 6927843328 817347538 869988052 58378551296 0x0000000000001F3C
18 2 -1152466625 24840 1596173312 645024 56563 3335298048 2590871 3235895 938344448 0x00000000000012BC
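These columns match the output of sys.fn_virtualfilestats, so average latency per operation can be computed as stall time divided by operation count. For the data file of 'My DB' (FileId 1) that gives roughly 2528572322 / 21199845 ≈ 119 ms per read and 10201419039 / 21869447 ≈ 466 ms per write, both far above the commonly cited healthy range of about 10-20 ms, so disk latency does look like a plausible culprit. A hedged sketch of the calculation (the database name is a placeholder):

-- Average stall per read and per write for every file of the database.
SELECT DbId, FileId,
       IoStallReadMS  * 1.0 / NULLIF(NumberReads,  0) AS AvgReadStallMS,
       IoStallWriteMS * 1.0 / NULLIF(NumberWrites, 0) AS AvgWriteStallMS
FROM sys.fn_virtualfilestats(DB_ID('MyDb'), NULL);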

Oracle Identity Federation - RCU OID Schema Creation Failure

I am trying to install OIF (Oracle Identity Federation) as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle 11gR2 11.2.0.3 with charset = AL32UTF8, a db_block_size of 8K, and nls_length_semantics=CHAR, and created the database and listener needed.
Installed WebLogic 10.3.6.
Started the installation of OIM (Oracle Identity Management), choosing the install-and-configure option and the schema creation options.
The installation goes fine, but it fails during configuration. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. If someone could kindly shed some light on what is going wrong here, I'd appreciate it. Please let me know if you need more info on the setup...
_File : ...//oraInventory/logs/install2013-05-30_01-18-31AM.out_
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2
-------------------------------------------------
java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
_File : ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log_
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update:
I looked at the logs again and tracked down which SQL statements were leading to the above error:
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE" EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
EntryID NUMBER NOT NULL,
RDN varchar2(1024) NOT NULL,
ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
TABLESPACE OLTS_CT_STORE
PARALLEL COMPUTE STATISTICS;
I ran these statements from sqlplus and was able to create the index without issues, and as the tablespace creation statement shows, autoextend is on. But when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as before. Any pointers?
Setting NLS_LENGTH_SEMANTICS=BYTE worked
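That fix is consistent with the arithmetic: with NLS_LENGTH_SEMANTICS=CHAR and the AL32UTF8 charset (up to 4 bytes per character), the rp_dn key can need 2 × 1024 × 4 = 8192 bytes, above the ORA-01450 limit of 6398 bytes for an 8K block, while with BYTE semantics it needs at most 2 × 1024 = 2048 bytes. It would also explain why the manual sqlplus run succeeded, if that session happened to default to BYTE semantics. A hedged sketch of the change (run before re-running RCU; the SPFILE setting takes effect after an instance restart):

ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=BYTE SCOPE=SPFILE;
-- After restarting the instance, verify:
SELECT value FROM nls_instance_parameters WHERE parameter = 'NLS_LENGTH_SEMANTICS';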
