iBATIS - auto-generated keys exception

I am using iBATIS version 2.3.4.
The database is MS SQL Server.
I am trying to insert a record into table T_PROFILE, whose primary key is an IDENTITY column with an auto-generated value.
My IBATIS configuration is
<insert id="insertProfile" parameterClass="profileDO" useGeneratedKeys="true" keyProperty="profileId">
INSERT INTO T_PROFILE (E_ID,PROFILE_NAME,DEFAULT_PROFILE)
VALUES(#eId#, #profileName#, #isDefaultProfile#)
<selectKey resultClass="java.lang.Long" keyProperty="profileId" >
SELECT ##IDENTITY AS profileId
</selectKey>
</insert>
It gives the following error:
Caused by: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: org.xml.sax.SAXParseException: Attribute "useGeneratedKeys" must be declared for element type "settings".
at com.ibatis.common.xml.NodeletParser.parse(NodeletParser.java:62)
at com.ibatis.sqlmap.engine.builder.xml.SqlMapConfigParser.parse(SqlMapConfigParser.java:62)
at com.ibatis.sqlmap.engine.builder.xml.SqlMapConfigParser.parse(SqlMapConfigParser.java:55)
at org.springframework.orm.ibatis.SqlMapClientFactoryBean.buildSqlMapClient(SqlMapClientFactoryBean.java:339)
... 160 more
Caused by: org.xml.sax.SAXParseException: Attribute "useGeneratedKeys" must be declared for element type "settings".

useGeneratedKeys is a MyBatis 3 attribute; the iBATIS 2.x DTD does not declare it, which is why the parser throws the SAXParseException. If the column is defined as IDENTITY in the database, the query below works on its own; useGeneratedKeys is not needed.
<insert id="insertProfile" parameterClass="profileDO" >
INSERT INTO T_PROFILE (E_ID,PROFILE_NAME,DEFAULT_PROFILE)
VALUES(#eId#, #profileName#, #isDefaultProfile#)
<selectKey resultClass="long" keyProperty="profileId" >
SELECT @@IDENTITY AS profileId
</selectKey>
</insert>
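For completeness, here is a minimal sketch of how the mapped statement might be called from Java. The ProfileDO class, its property names and the DAO wrapper are assumptions based on the parameterClass alias above; the point is that insert() runs the statement together with its selectKey and writes the generated key back into the keyProperty (profileId) on the parameter object, as well as returning it.
import com.ibatis.sqlmap.client.SqlMapClient;
import java.sql.SQLException;

// Minimal sketch; the SqlMapClient is assumed to be built elsewhere,
// e.g. by Spring's SqlMapClientFactoryBean as in the stack trace above.
public class ProfileDao {
    private final SqlMapClient sqlMapClient;

    public ProfileDao(SqlMapClient sqlMapClient) {
        this.sqlMapClient = sqlMapClient;
    }

    public Long insertProfile(ProfileDO profile) throws SQLException {
        // insert() executes the INSERT plus the selectKey; the generated key is
        // returned here and also set on profile.profileId via keyProperty.
        return (Long) sqlMapClient.insert("insertProfile", profile);
    }
}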

Related

IDENTITY_INSERT Issue in Apache Kafka - JDBC - MSSQL

I am using the Apache Kafka JDBC sink connector to insert/update data from SQL Server to SQL Server and I am getting the following error:
java.sql.BatchUpdateException: Cannot insert explicit value for identity column in table 'table_name'
when IDENTITY_INSERT is set to OFF.
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:2075)
The Source Configuration
name=jdbc-mssql-prod-5
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:sqlserver:
connection.user=
connection.password=
topic.prefix= source_topic.
mode=timestamp
table.whitelist=A,B,C
timestamp.column.name=ModifiedDateTime
connection.backoff.ms=60000
connection.attempts=300
validate.non.null= false
# enter timestamp in milliseconds
timestamp.initial= -1
The Sink Configuration
name=mysql-sink-prod-5
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics= sink_topic_a, sink_topic_b
connection.url=jdbc:sqlserver:
connection.user=
connection.password=
insert.mode=upsert
delete.enabled=true
pk.mode=record_key
errors.log.enable= true
errors.log.include.messages=true
In the table, the primary key column and the identity column are the same.
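For context, the error itself is the standard SQL Server identity restriction: the sink's INSERT supplies an explicit value for the identity column, which the database rejects unless IDENTITY_INSERT is switched on for that table. A minimal sketch of the behaviour, with made-up table and column names:
-- hypothetical table with an identity primary key
CREATE TABLE dbo.demo (
    id   INT IDENTITY(1,1) PRIMARY KEY,
    name NVARCHAR(50) NOT NULL
);

-- fails: cannot insert an explicit value for the identity column while IDENTITY_INSERT is OFF
INSERT INTO dbo.demo (id, name) VALUES (1, N'a');

-- succeeds only while IDENTITY_INSERT is explicitly enabled for this table
SET IDENTITY_INSERT dbo.demo ON;
INSERT INTO dbo.demo (id, name) VALUES (1, N'a');
SET IDENTITY_INSERT dbo.demo OFF;
So either the identity value has to be kept out of what the sink writes, or the target table has to accept explicit identity values.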

Can the JDBC sink connector for SQL Server skip a column that is not provided by my source?

I want to extract all data from my PostgreSQL database to SQL Server using Kafka Connect and the JDBC sink. I want to get rid of some manual queries and check whether I can stream the data using insert.mode=insert only.
This is my source config:
name=debezium_pg_connectors
connector.class=io.debezium.connector.postgresql.PostgresConnector
tasks.max=1
plugin.name=pgoutput
database.hostname=XXX.XXX.XXX.XX
database.port=5432
database.user=XXXXXX
database.password=XXXXXX
database.dbname=XXXXX
database.server.name=XXXXX
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=XXXXXX
table.whitelist=XXXXXXX
time.precision.mode=connect
transforms=unwrap
transforms.unwrap.type= io.debezium.transforms.ExtractNewRecordState
This is my sink config:
name=jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=pj_user
connection.url=<connection>
auto.create=true
auto.evolve=true
insert.mode=insert
pk.mode=record_key
table.name.format=<table>
transforms=unwrap,route
transforms.unwrap.type=io.debezium.transforms.UnwrapFromEnvelope
transforms=route
transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.route.regex=([^.]+)\\.([^.]+)\\.([^.]+)
transforms.route.replacement = $3
fields.whitelist=...
In my SQL Server table I have an auto-generated column called key, with the uniqueidentifier data type, set as the primary key. However, it fails every time I try to push my data to the sink:
[2020-03-03 12:45:11,487] ERROR WorkerSinkTask{id=jdbc-sink-0} RetriableException from SinkTask: (org.apache.kafka.connect.runtime.WorkerSinkTask:552)
org.apache.kafka.connect.errors.RetriableException: java.sql.SQLException: java.sql.BatchUpdateException: Cannot insert the value NULL into column 'key', table '<table>'; column does not allow nulls. INSERT fails.
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:93)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.sql.SQLException: java.sql.BatchUpdateException: Cannot insert the value NULL into column 'key', table '<table>'; column does not allow nulls. INSERT fails.
... 12 more
If anyone has any ideas, any help and advice is appreciated. Thanks.
Make sure that your key column has a default value:
ALTER TABLE [tableName] ADD DEFAULT NEWSEQUENTIALID() FOR [key]
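To illustrate the idea behind that suggestion (table and column names here are made up): once the uniqueidentifier column has a NEWSEQUENTIALID() default, an INSERT that omits the column succeeds and the database fills in the key.
-- hypothetical table; [key] is the auto-generated uniqueidentifier primary key
CREATE TABLE dbo.pj_user (
    [key] UNIQUEIDENTIFIER NOT NULL
          CONSTRAINT DF_pj_user_key DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
    name  NVARCHAR(100) NOT NULL
);

-- the sink's row data can now omit [key]; the default supplies it
INSERT INTO dbo.pj_user (name) VALUES (N'example');
SELECT [key], name FROM dbo.pj_user;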

DBUnit insists on inserting null for unspecified values, but I want the DB default value to be used

I'm having this problem with DBUnit causing a SQL insert error. Say I have this in my dbunit testdata.xml file:
<myschema.mytable id="1" value1="blah" value2="foo" />
I have a table like this (postgres)
myschema.mytable has an id, value1, value2, and a date field, say "lastmodified". The lastmodified column is a timestamp with the modifiers "not null default now()".
It appears that dbunit reads the table metadata and attempts to insert nulls for any column that isn't specified in my testdata.xml file. So the above xml results in an insert like this:
insert into myschema.mytable (id,value1,value2,lastmodified) values (1,'blah','foo',null)
When running tests (dbunit/maven plugin) I get an error like this:
Error executing database operation: REFRESH: org.postgresql.util.PSQLException: ERROR: null value in column "lastmodified" violates not-null constraint
Is there some way to tell DBUnit to NOT INSERT null values on fields that I don't specify?
Edit: Using DBUnit 2.5.3, JUnit 4.12, PostgreSQL driver 9.4.1208
Use the dbUnit "exclude column" feature:
How to exclude some table columns at runtime?
The FilteredTableMetaData class introduced in DbUnit 2.1 can be used in combination with the IColumnFilter interface to decide the inclusion or exclusion of table columns at runtime.
FilteredTableMetaData metaData = new FilteredTableMetaData(originalTable.getTableMetaData(), new MyColumnFilter());
ITable filteredTable = new CompositeTable(metaData, originalTable);
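As a sketch of what that column filter could look like for this question (the class name and the excluded column are assumptions based on the details above): the filter drops lastmodified from the table metadata, so the INSERT that DbUnit generates omits the column and the database default now() applies.
import org.dbunit.dataset.Column;
import org.dbunit.dataset.CompositeTable;
import org.dbunit.dataset.DataSetException;
import org.dbunit.dataset.FilteredTableMetaData;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.filter.IColumnFilter;

// Excludes "lastmodified" so DbUnit never inserts an explicit NULL into it.
public class ExcludeLastModifiedFilter implements IColumnFilter {
    public boolean accept(String tableName, Column column) {
        return !"lastmodified".equalsIgnoreCase(column.getColumnName());
    }

    // Usage, given the original ITable loaded from testdata.xml:
    public static ITable withoutLastModified(ITable originalTable) throws DataSetException {
        FilteredTableMetaData metaData = new FilteredTableMetaData(
                originalTable.getTableMetaData(), new ExcludeLastModifiedFilter());
        return new CompositeTable(metaData, originalTable);
    }
}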

fatfreeframework with SQL Server database using mapper copyFrom method with partial insert

I am attempting to insert a record using the copyFrom('POST') and save() methods of the Fat-Free Framework v3.5. The data from POST does not contain an id field, which for this table is an autoincrement (identity) column. The SQL from the logs is
SET IDENTITY_INSERT [xrefs] ON;
INSERT INTO [xrefs] ([status], [supply_id], [description], [unit], [unitcost], [cap], [rev], [buq])
VALUES ('test', 'Htest', 'test', 'test', '1', '1', 1, 1)
As you can see, Fat-Free adds the SET IDENTITY_INSERT statement despite the fact that there is no id column included in the insert. Is there a way to tell the mapper not to set this flag, or is there another workaround? I could get the current max id and then insert max+1, but that seems clunky.
I should add that this SQL fails because the id column is not included in the column list.
$this->db->exec(
    (preg_match('/mssql|dblib|sqlsrv/',$this->engine) &&
        array_intersect(array_keys($pkeys),$ckeys)?
        'SET IDENTITY_INSERT '.$this->table.' ON;':'').
    'INSERT INTO '.$this->table.' ('.$fields.') '.
    'VALUES ('.$values.')',$args
);
This is the code that sets IDENTITY_INSERT in the insert function of mapper.php.
$this->logger->write( 'xrefs schema:'.
json_encode( $this->tongpodb->schema( 'xrefs' ) ) );
Calling schema() on the db object gives back this array:
{"id":{"type":"int","pdo_type":1,"default":null,"nullable":false,"pkey":true},"changed_date":{"type":"datetime","pdo_type":2,"default":null,"nullable":true,"pkey":false},"status":{"type":"varchar","pdo_type":2,"default":null,"nullable":false,"pkey":false},"supply_id":{"type":"varchar","pdo_type":2,"default":null,"nullable":false,"pkey":true},"description":{"type":"varchar","pdo_type":2,"default":null,"nullable":true,"pkey":false},"unit":{"type":"varchar","pdo_type":2,"default":null,"nullable":false,"pkey":false},"hcpcs":{"type":"char","pdo_type":2,"default":null,"nullable":true,"pkey":false},"unitcost":{"type":"decimal","pdo_type":2,"default":null,"nullable":false,"pkey":false},"cap":{"type":"decimal","pdo_type":2,"default":null,"nullable":false,"pkey":false},"rev":{"type":"smallint","pdo_type":1,"default":null,"nullable":false,"pkey":false},"buq":{"type":"smallint","pdo_type":1,"default":null,"nullable":true,"pkey":false},"create_ts":{"type":"datetime","pdo_type":2,"default":null,"nullable":true,"pkey":false},"log_ts":{"type":"int","pdo_type":1,"default":null,"nullable":true,"pkey":false},"filename":{"type":"varchar","pdo_type":2,"default":null,"nullable":true,"pkey":false},"line_no":{"type":"smallint","pdo_type":1,"default":null,"nullable":true,"pkey":false},"file_ts":{"type":"datetime","pdo_type":2,"default":null,"nullable":true,"pkey":false}}
As you can see, id has a "pkey":true entry, so one could look at the fields coming from POST, compare them against this schema, and determine whether IDENTITY_INSERT needs to be set. Perhaps I will implement this, though I worry it is above my pay grade.
Update: upgrading to the latest version of Fat-Free fixed this issue.

INSERT statement not working when using it through a variable in Mule

My database component has the following configuration
<db:insert config-ref="Oracle_Configuration" bulkMode="true" doc:name="Database">
<db:dynamic-query><![CDATA[#[flowVars.dbquery]]]></db:dynamic-query>
</db:insert>
I have declared the "dbquery" variable as follows
<set-variable variableName="dbquery" value="INSERT INTO WBUSER.EMP VALUES('#[payload.FullName]','#[payload.SerialNumber]')" doc:name="Variable"/>
On running the application, the literal strings "#[payload.FullName]" and "#[payload.SerialNumber]" are inserted into the DB instead of the actual values.
But when my database component has the following configuration, the actual values of FullName and SerialNumber are inserted into the database.
<db:insert config-ref="Oracle_Configuration" bulkMode="true" doc:name="Database">
<db:dynamic-query><![CDATA[INSERT INTO WBUSER.EMP VALUES('#[payload.FullName]','#[payload.SerialNumber]')]]></db:dynamic-query>
</db:insert>
Here FullName and SerialNumber are not flow variables; they are keys of the maps in the payload list, e.g. [{FullName=yo, SerialNumber=129329}, {FullName=he, SerialNumber=129329}].
Can someone explain the difference here? And is there a way I can achieve the insertion using just the variable, as in the first case?
This is caused by the different ways the data is inserted. The second configuration works because the query sits directly inside db:insert, the payload is a List, and the Bulk Mode option is selected, so the expressions are evaluated against each record in the list.
To make the first configuration (the SQL query declared in a variable) work, do the following:
Split the payload into individual records with a collection-splitter, so that #[payload.FullName] and #[payload.SerialNumber] resolve against a single map when the variable is set.
Deselect Bulk Mode on the database connector.
The configuration should be:
<collection-splitter doc:name="Collection Splitter"/>
<set-variable variableName="dbquery" value="INSERT INTO WBUSER.EMP VALUES('#[payload.FullName]','#[payload.SerialNumber]')" doc:name="Variable"/>
<db:insert config-ref="MySQL_Configuration" doc:name="Database">
<db:dynamic-query><![CDATA[#[flowVars.dbquery]]]></db:dynamic-query>
</db:insert>
