Error running transactions on HANA database

I am running bulk transactions on a HANA database and I am getting the following error:
2018-01-15 10:23:33,865 ERROR c.c.t.Payment [tpcc-thread-5] UPDATE district SET d_ytd = d_ytd + 1601.0 WHERE d_w_id = 1 AND d_id = 1
com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [138]: transaction serialization failure: TrexUpdate failed on table 'SYSTEM:DISTRICT' with error: transaction order error, rc=4614
at com.sap.db.jdbc.exceptions.SQLExceptionSapDB._newInstance(SQLExceptionSapDB.java:193)
.......
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-01-15 10:23:34,139 ERROR c.c.t.Payment [tpcc-thread-5] Payment error
java.lang.Exception: Payment update transaction error
Does anybody have any idea what could be wrong?

I found out that this was a valid error. For isolation level REPEATABLE_READ or SERIALIZABLE this problem can occur when, before the end of one transaction, another transaction modifies the same data that the first transaction was working on.
To get rid of this error, we can use isolation level READ_COMMITTED, under which each statement sees a snapshot of the committed data as of the start of that statement. It also locks the rows it is updating, so other transactions cannot modify them until it commits or rolls back.
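A minimal sketch of switching the isolation level on the JDBC connection before running the update (host, port and credentials are placeholders, not from the original setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PaymentUpdate {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sap://hana-host:30015", "SYSTEM", "secret")) {
            // Set the isolation level before starting the transaction:
            // READ_COMMITTED gives each statement its own snapshot of
            // committed data instead of the whole-transaction ordering
            // check that raises the serialization failure above.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE district SET d_ytd = d_ytd + ? WHERE d_w_id = ? AND d_id = ?")) {
                ps.setDouble(1, 1601.0);
                ps.setInt(2, 1);
                ps.setInt(3, 1);
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}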


Teradata Database TeraJDBC 16.20.00.04 Error 3610 SQLState HY000 Internal error: Please do not resubmit the last request

I'm requesting a report in the form of an Excel file in my application. The application generates the report by running several (31) SQL tasks every time I submit a request. But for one particular request, the execution of the 28th task doesn't complete and throws the error below.
[Teradata Database] [TeraJDBC 16.20.00.04] [Error 3610] [SQLState HY000] Internal error: Please do not resubmit the last request. SubCode, CrashCode: 0, 2693
01/28/2020 10:35:28,586 ERROR com.xxxx.yyyy.zz.Runner:generateBasicReport:174 - java.sql.SQLException: [Teradata Database] [TeraJDBC 16.20.00.04] [Error 3610] [SQLState HY000] Internal error: Please do not resubmit the last request. SubCode, CrashCode: 0, 2693
com.xxxx.yyyy.zz.exception.ReportGenerationException: java.sql.SQLException: [Teradata Database] [TeraJDBC 16.20.00.04] [Error 3610] [SQLState HY000] Internal error: Please do not resubmit the last request. SubCode, CrashCode: 0, 2693
at com.xxxx.yyyy.zz.generator.task.CancellableSqlTask.execute(CancellableSqlTask.java:79)
at com.xxxx.yyyy.zz.generator.task.TaskExecutor.executeNext(TaskExecutor.java:56)
at com.xxxx.yyyy.zz.generator.ReportGenerator.executeTasks(ReportGenerator.java:119)
at com.xxxx.yyyy.zz.generator.ReportGenerator.execute(ReportGenerator.java:83)
at com.xxxx.yyyy.zz.generator.task.CompositeCancellableTask.execute(CompositeCancellableTask.java:32)
at com.xxxx.yyyy.zz.Runner.executeGeneration(Runner.java:209)
at com.xxxx.yyyy.zz.Runner.generateBasicReport(Runner.java:171)
at com.xxxx.yyyy.zz.Runner.generateReports(Runner.java:137)
at com.xxxx.yyyy.zz.Runner.startReportGenerator(Runner.java:92)
at com.xxxx.yyyy.zz.Runner.main(Runner.java:69)
Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 16.20.00.04] [Error 3610] [SQLState HY000] Internal error: Please do not resubmit the last request. SubCode, CrashCode: 0, 2693
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:309)
at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:103)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200)
at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137)
at com.teradata.jdbc.jdbc_4.statemachine.PreparedStatementController.run(PreparedStatementController.java:46)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:389)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:331)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.doPrepExecuteUpdate(TDPreparedStatement.java:225)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeUpdate(TDPreparedStatement.java:2769)
at com.xxxx.yyyy.zz.generator.task.CancellableUpdate.perform(CancellableUpdate.java:36)
at com.xxxx.yyyy.zz.generator.task.CancellableSqlTask.execute(CancellableSqlTask.java:74)
... 9 more
That means that the request caused some exception that Teradata couldn't handle, and like the message says, "don't re-submit". You could try digging into the logs, but I don't know how fruitful that will be. I think they are somewhere in /var/opt/teradata/. Otherwise, you'd have to check with customer support to debug.
From the documentation:
2693 The fieldids in field 5 are not contiguous.
Explanation: The field descriptors in field 5 should be contiguous, they are not.
Generated By: FldLocat.
For Whom: Site support representative.
Notes: This is caused by an incorrect field 5 header.
Remedy: Contact your Support Representative.
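Since 3610 is explicitly non-retryable, an automatic retry policy in the report generator should fail fast on it instead of resubmitting. A minimal sketch of that distinction, assuming plain JDBC underneath (runTask and the retry counter are hypothetical helpers, not the actual CancellableSqlTask):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TaskRunner {
    // Teradata reports its error number through SQLException.getErrorCode();
    // 3610 is the internal "do not resubmit the last request" error.
    private static final int DO_NOT_RESUBMIT = 3610;

    static void runTask(Connection conn, String sql, int retries) throws SQLException {
        while (true) {
            try (Statement st = conn.createStatement()) {
                st.executeUpdate(sql);
                return;
            } catch (SQLException e) {
                // Fail fast on 3610 (or when retries are exhausted) so the
                // request is handed to support instead of being resubmitted.
                if (e.getErrorCode() == DO_NOT_RESUBMIT || retries-- <= 0) {
                    throw e;
                }
                // Other error codes are treated here as possibly transient.
            }
        }
    }
}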

NiFi connection to SqlServer for ExecuteSQL

I'm trying to import some data from different SqlServer databases using ExecuteSQL in NiFi, but it's returning me an error. I've already imported a lot of other tables from MySQL databases without any problem and I'm trying to use the same workflow structure for the SqlServer dbs.
The structure is as follows:
There's a .txt file with the list of tables to be imported.
This file is fetched, split and updated, so there's a FlowFile for each table of each db that has to be imported.
These FlowFiles are passed into ExecuteSQL, which executes their contents.
For example:
file.txt
table1
table2
table3
is split into 3 different FlowFiles:
FlowFile1
SELECT * FROM table1
FlowFile2
SELECT * FROM table2
FlowFile3
SELECT * FROM table3
which are passed to ExecuteSQL.
Here is the configuration of ExecuteSQL (identical for the SqlServer tables and the MySQL ones):
[screenshot: ExecuteSQL processor configuration]
As the only difference with the import from MySQL db is in the connectors, this is how a generic MySQL connector has been configured:
Database Connection URL: jdbc:mysql://00.00.00.00/DataBase?zeroDateTimeBehavior=convertToNull&autoReconnect=true
Database Driver Class Name: com.mysql.jdbc.Driver
Database Driver Location(s): file:///path/mysql-connector-java-5.1.47-bin.jar
Database User: user
Password: Sensitive value set
Max Wait Time: 500 millis
Max Total Connections: 8
Validation query: No value set
And this is how a SqlServer connector has been configured:
Database Connection URL: jdbc:jtds:sqlserver://00.00.00.00/DataBase;useNTLMv2=true;integratedSecurity=true;
Database Driver Class Name: net.sourceforge.jtds.jdbc.Driver
Database Driver Location(s): /path/connectors/jtds-1.3.1.jar
Database User: user
Password: Sensitive value set
Max Wait Time: -1
Max Total Connections: 8
Validation query: No value set
Note that one (and only one!) SqlServer connector works, and its ExecuteSQL processor imports the data without any problem. Even stranger, the database reached through this connector is located in the same place as two others (the connection URL and user/password are identical), yet only the first one works.
Note also that I've tried appending ?zeroDateTimeBehavior=convertToNull&autoReconnect=true to the SqlServer connections as well, suspecting a date-type problem, but it made no difference.
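As a sanity check, a minimal standalone program against the same jTDS URL (host, database and credentials are the masked placeholders above) can help separate driver problems from processor configuration; a sketch:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JtdsSmokeTest {
    public static void main(String[] args) throws Exception {
        // jtds-1.3.1.jar must be on the classpath for this driver class.
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:jtds:sqlserver://00.00.00.00/DataBase;useNTLMv2=true;integratedSecurity=true;",
                "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("connected, SELECT 1 returned " + rs.getInt(1));
        }
    }
}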
Here is the error that is being returned:
12:02:46 CEST ERROR f1553b83-a173-1c0f-93cb-1c32f0f46d1d
00.00.00.00:0000 ExecuteSQL[id=****] ExecuteSQL[id=****] failed to process session due to null; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
Error retrieved from logs:
ERROR [Timer-Driven Process Thread-49] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=****] ExecuteSQL[id=****] failed to process session due to java.lang.AbstractMethodError; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
java.lang.AbstractMethodError: null
at net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor1696.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy449.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:195)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Kafka Connect with SQL Server

These are the commands I am running:
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
bin/kafka-server-start etc/kafka/server.properties &
bin/schema-registry-start etc/schema-registry/schema-registry.properties &
bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlserver.properties &
bin/kafka-avro-console-consumer --new-consumer --bootstrap-server localhost:9094 --topic test3-sqlserver-jdbc-ErrorLog --from-beginning
I am trying to connect to SQL Server using the Confluent Platform (Kafka Connect) and am facing the following issues:
When I connect to the default schema, i.e. dbo, the connection is established, but no data is fetched into the Kafka consumer. The connector configuration I am using is:
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=ErrorLogID
topic.prefix=test3-sqlserver-jdbc-
table.whitelist=ErrorLog
schema.registry=dbo
When I try to connect to any other schema, the producer throws an error. The connector configuration I am using is:
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=AddressID
topic.prefix=test3-sqlserver-jdbc-
table.whitelist=Address
schema.registry=Person
Error:
INFO Source task WorkerSourceTask{id=test-sqlserver-jdbc-autoincrement-0} finished
initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:138)
[2017-03-07 17:55:47,041] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}:
com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'Address'.
(io.confluent.connect.jdbc.JdbcSourceTask:239)
[2017-03-07 17:55:52,124] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}: com.microsoft.sqlserver.jdbc.SQLServerException:
Invalid object name 'Address'. (io.confluent.connect.jdbc.JdbcSourceTask:239)
[2017-03-07 17:55:53,684] INFO Reflections took 9299 ms to scan
262 urls, producing 12112 keys and 79402 values
(org.reflections.Reflections:229)
[2017-03-07 17:55:57,181] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}:
com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'Address'.
(io.confluent.connect.jdbc.JdbcSourceTask:239)
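If the installed connector version supports it, one hedged workaround for tables outside dbo is to bypass table discovery with an explicit, schema-qualified query (a sketch, untested here; when query is set, table.whitelist must be dropped and topic.prefix is used as the complete topic name):

name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=AddressID
query=SELECT * FROM Person.Address
topic.prefix=test3-sqlserver-jdbc-Address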

Netezza DISK_FPGA_ERROR

I'm using Netezza with the following information:
Model: N3001-010
NPS Software version: 7.2.0.6-P1
When I try to query with simple statements such as:
select count(*) from table_name;
or
select * from table_name where date_col>='2015-12-01' and date_col<='2015-12-30' limit 1000;
the raised error is:
ERROR [HY000] ERROR: DISK_FPGA_ERROR : Status=0,0x4000
[COMPBADFIELDTYPE1] SPU=1010 Dev=31 Eng=11 LBA=32224515
This error does not occur systematically.
How can I fix this error?
This error can indicate disk page corruption, or some other fundamental issue, and your administrator should immediately open a support ticket to investigate and resolve it.

perl DBD::ODBC rollback ineffective with AutoCommit enabled

I currently have a block like the one below. With it we turn autocommit off and then do a commit or rollback. Now, at the rollback line, we are getting a failure saying "rollback ineffective with AutoCommit enabled at". How could this happen, since AutoCommit was indeed disabled by begin_work? This problem did not exist for a long time and is suddenly occurring.
On investigating further, I found that update_sql1 created a #temp table, and update_sql2, update_sql3 and update_sql4 query the same #temp table and are failing with an Invalid object name '#temp' error. Control then immediately flows to the if ($@) block, where $dbh->{AutoCommit} turns out to be set to 1. First of all, it's really weird that update_sql2 onwards could not find the object #temp when update_sql1 was indeed successful.
Any pointers ?
====
$dbh->db_Main()->begin_work;
eval {
    $dbh->do($update_sql1);
    $dbh->do($update_sql2);
    $dbh->do($update_sql3);
    $dbh->do($update_sql4);
    $dbh->commit;
    1;
};
if ($@) {
    $logger->info("inside catch");
    $logger->info("autocommit is $dbh->{AutoCommit}");
    $dbh->rollback;
}
===
Here is the full error message
Issuing rollback() due to DESTROY without explicit disconnect() of DBD::ODBC::db handle ..
rollback ineffective with AutoCommit enabled ...
Under AutoCommit, the begin starts a transaction which is then automatically committed; you have to turn AutoCommit off to get a real multi-statement transaction. Note also that in the posted code begin_work is called on $dbh->db_Main() while do, commit and rollback are called on $dbh; if those are not the same handle, AutoCommit was never turned off on $dbh, which would explain the error.
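For reference, a minimal sketch of the same block with everything on one handle ($update_sql1..4 and $logger are from the original; the handle setup is illustrative):

my $h = $dbh->db_Main();   # use the same handle throughout
$h->begin_work;            # AutoCommit off until commit/rollback
eval {
    $h->do($update_sql1);
    $h->do($update_sql2);
    $h->do($update_sql3);
    $h->do($update_sql4);
    $h->commit;
    1;
} or do {
    $logger->info("autocommit is $h->{AutoCommit}");   # expected to be 0 here
    $h->rollback;
};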
