I am using Hibernate to map our classes to tables in Oracle.
My class has a primary key, id, which is generated automatically by Hibernate:
<id name="jobId" type="long">
<column name="JOBID" />
<generator class="increment" />
</id>
In my code I do:
Job job = new Job();
// do some configuration for the job
saveOrUpdate(job);
At this saveOrUpdate call I encountered:
org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133)
at com.myCompany.BasicDaoImpl.saveOrUpdate(BasicDaoImpl.java:37)
at com.myCompany.JobRoutine.generateJob(JobRoutine.java:142)
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (DBGROUP.SYS_C0011345) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:343)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10700)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:1723)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
... 13 more
I found that the constraint is the primary key.
This error does not happen every time, only occasionally.
Could anyone please give me some suggestions about it?
Thanks so much!
The documentation says:
increment
generates identifiers of type long, short or int that are unique only when no other process is inserting data into the same table. Do not use in a cluster.
You probably have another process inserting rows into the same table, and Hibernate is unaware of it, because the increment generator just stores the next value in memory and assumes it is the only one inserting rows into this table.
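If you cannot rule out another writer, switching to a database-backed generator avoids the race entirely. A minimal sketch of the mapping above using an Oracle sequence; the sequence name JOB_SEQ is a placeholder you would have to create in the database:
<id name="jobId" type="long">
    <column name="JOBID" />
    <!-- the next value comes from the database, so every process inserting
         into this table gets a distinct id -->
    <generator class="sequence">
        <param name="sequence">JOB_SEQ</param>
    </generator>
</id>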
So we have a table called dim_merchant and a snapshot of this table called dim_merchant_snapshot:
{% snapshot dim_merchant_snapshot %}

{{
    config(
        target_schema='snapshots',
        unique_key='id',
        strategy='check',
        check_cols='all'
    )
}}

select * from {{ ref('dim_merchant') }}

{% endsnapshot %}
We never had any trouble with it, but since yesterday the snapshot run has been failing with the following error message:
Database Error in snapshot dim_merchant_snapshot (snapshots/dim_merchant_snapshot.sql)
100090 (42P18): Duplicate row detected during DML action
The error is happening during this step of the snapshot:
On snapshot.analytics.dim_merchant_snapshot: merge into "X"."SNAPSHOTS"."DIM_MERCHANT_SNAPSHOT" as DBT_INTERNAL_DEST
using "X"."SNAPSHOTS"."DIM_MERCHANT_SNAPSHOT__dbt_tmp" as DBT_INTERNAL_SOURCE
on DBT_INTERNAL_SOURCE.dbt_scd_id = DBT_INTERNAL_DEST.dbt_scd_id
when matched
and DBT_INTERNAL_DEST.dbt_valid_to is null
and DBT_INTERNAL_SOURCE.dbt_change_type in ('update', 'delete')
then update
set dbt_valid_to = DBT_INTERNAL_SOURCE.dbt_valid_to
when not matched
and DBT_INTERNAL_SOURCE.dbt_change_type = 'insert'
We realized that some values were being inserted and updated twice in the snapshot (since yesterday), which caused the snapshot to fail, but we are not sure why.
Note that the id key on dim_merchant is tested for uniqueness and has no duplicates. Meanwhile, the snapshot table has contained duplicates since our first snapshot run (which did not cause any failure), but the subsequent runs against the snapshot table, now containing duplicates, are failing.
We recently updated dbt from 0.20.0 to 1.0.3, but we didn't find any change in the snapshot definition between these versions.
SETUP:
dbt-core==1.0.3,
dbt-snowflake==1.0.0,
dbt-extractor==0.4.0,
Snowflake version: 6.7.1
Thanks!
I know it's been a while since this was posted, but I wanted to report that I'm seeing this weird behavior as well. I have a table that I'm snapshotting with the timestamp strategy, and the unique_key is made up of several columns. The intention is that this is a full snapshot each time the model is run. The table being snapshotted has all unique rows, meaning dbt_scd_id is a unique key. I resolved the issue by adding the updated_at column to the unique_key config. In theory this shouldn't matter, since dbt_scd_id is already a concatenation of unique_key and updated_at; regardless, it has resolved the issue.
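For reference, here is a minimal sketch of that workaround, assuming a hypothetical snapshot over my_table whose logical key is (col_a, col_b) and whose timestamp column is updated_at:
{% snapshot my_table_snapshot %}

{# Adding updated_at to unique_key resolved the duplicate-row merge failure,
   even though dbt_scd_id already hashes unique_key together with updated_at. #}
{{
    config(
        target_schema='snapshots',
        strategy='timestamp',
        updated_at='updated_at',
        unique_key="col_a || '-' || col_b || '-' || updated_at"
    )
}}

select * from {{ ref('my_table') }}

{% endsnapshot %}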
I've created an SP in SQL Server that returns as XML. I decided to do this as the information has contacts and addresses in it and I wanted to reduce the amount of data I get.
<Accounts>
  <Account>
    <Company />
    <ContactNumber />
    <Addresses>
      <Address>
        <Line1 />
        ....
      </Address>
    </Addresses>
    <Contacts>
      <Contact>
        <Name />
        ....
      </Contact>
    </Contacts>
  </Account>
</Accounts>
I have found SqlCommand.ExecuteXmlReader, but I'm confused as to how to deserialise this into my POCO. Can someone point me at what my next step is? (The POCO was created with the Paste XML As Classes menu item in VS2019.)
My Google fu is letting me down, as I'm not sure what I should be looking for to help understand how to deserialize the XML into something that will allow me to go with foreach Account in Accounts-type logic.
Any help is greatly appreciated.
PS: The XML above is just a sample to show what I'm doing. The actual data has over 70 fields, and with two FK joins the initial 40,000 rows are well in excess of 1.8 million once selected as a normal dataset.
EDIT: Just in case someone stumbles on this and is in the same situation I was in:
When preparing a sample record for Paste XML As Classes, make sure you have more than one record if you are expecting something similar to my example above. (The generated class changes to support more than one record.)
You also get very different results when searching for "deserialize" rather than "serialize"; that small change is what led me to figure out the issue.
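For completeness, here is a minimal sketch of one way to combine ExecuteXmlReader with XmlSerializer; the stored procedure name, the connection string and the generated Accounts class are placeholders for your own:

using System.Data;
using System.Data.SqlClient;
using System.Xml;
using System.Xml.Serialization;

// placeholder connection string
var connectionString = "Server=.;Database=MyDb;Integrated Security=true;";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetAccountsAsXml", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    connection.Open();

    using (XmlReader reader = command.ExecuteXmlReader())
    {
        // Accounts is the root class generated by Paste XML As Classes;
        // its Account property holds the repeated <Account> elements.
        var serializer = new XmlSerializer(typeof(Accounts));
        var accounts = (Accounts)serializer.Deserialize(reader);

        foreach (var account in accounts.Account)
        {
            // work with each account here, e.g. account.Company
        }
    }
}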
The CDC capture table has fewer columns than the source table. When Debezium tries to create an event in Kafka, it fails with an ArrayIndexOutOfBoundsException.
The history topic has a snapshot of the complete source table schema.
Is this a limitation of Debezium, i.e. that the CDC schema cannot be different from the source schema?
This connector will be stopped.
at io.debezium.connector.base.ChangeEventQueue.throwProducerFailureIfPresent(ChangeEventQueue.java:170)
at io.debezium.connector.base.ChangeEventQueue.poll(ChangeEventQueue.java:151)
at io.debezium.connector.sqlserver.SqlServerConnectorTask.poll(SqlServerConnectorTask.java:158)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 10 out of bounds for length 10
at io.debezium.relational.TableSchemaBuilder.lambda$createValueGenerator$2(TableSchemaBuilder.java:210)
at io.debezium.relational.TableSchema.valueFromColumnData(TableSchema.java:135)
at io.debezium.relational.RelationalChangeRecordEmitter.emitUpdateRecord(RelationalChangeRecordEmitter.java:89)
at io.debezium.relational.RelationalChangeRecordEmitter.emitChangeRecords(RelationalChangeRecordEmitter.java:46)
at io.debezium.pipeline.EventDispatcher.dispatchDataChangeEvent(EventDispatcher.java:125)
at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.lambda$execute$1(SqlServerStreamingChangeEventSource.java:203)
at io.debezium.jdbc.JdbcConnection.prepareQuery(JdbcConnection.java:485)
at io.debezium.connector.sqlserver.SqlServerConnection.getChangesForTables(SqlServerConnection.java:143)
at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:137)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:86)
... 5 more
The capture table must capture all columns that are defined in the source table.
There is a WIP PR https://github.com/debezium/debezium/pull/748 that removes this limitation.
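Until that lands, the practical workaround is to recreate the capture instance so that it covers every column of the source table. A sketch in T-SQL; the table and capture instance names are placeholders:

-- drop the capture instance that only covers a subset of the columns
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'dbo',
    @source_name      = N'my_table',
    @capture_instance = N'dbo_my_table';

-- re-enable CDC without @captured_column_list so all columns are captured
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'my_table',
    @role_name     = NULL;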
I am upgrading from ColdFusion 8 to ColdFusion 11, so I need to rebuild my search indices to work with Solr instead of Verity. I cannot find any reliable way to import my old Verity collections, so I'm attempting to build the new indices from scratch. I am using the following code to index some items along with their corresponding documents, which are located on the server:
<cfsetting requesttimeout="3600" />
<cfquery name="qDocuments" datasource="#APPLICATION.DataSource#">
SELECT DISTINCT
ID,
Status,
'C:\Documents\'
CONCAT ID
CONCAT '.PDF' AS File
FROM tblDocuments
</cfquery>
<cfindex
query="qDocuments"
collection="solrdocuments"
action="fullimport"
type="file"
key="document_file"
custom1="ID"
custom2="Status" />
A very similar setup was used with Verity for years without a problem.
When I run the above code, I get the following exception:
Attribute validation error for CFINDEX.
The value of the FULLIMPORT attribute is invalid.
Valid values are: UPDATE, DELETE, PURGE, REFRESH, FULL-IMPORT,
DELTA-IMPORT,STATUS, ABORT.
This makes absolutely no sense, since there is no "FULLIMPORT" attribute for CFINDEX.
I am running ColdFusion 11 Update 3 with Java 1.8.0_25 on Windows Server 2008R2/IIS7.5.
You should believe the error message. Try this:
<cfindex
query="qDocuments"
collection="solrdocuments"
action="FULL-IMPORT"
type="file"
key="document_file"
custom1="ID"
custom2="Status" />
It's referring to the value of the attribute action.
This is definitely a bug. In the ColdFusion documentation, fullimport is not an attribute of cfindex.
I know this is an old thread, but in case anyone else has the same question: it's just a poor description in the documentation. The action "fullimport" is only available when using type="dih" (i.e. the Data Import Handler). When using the query attribute, use action="refresh" instead.
Source: CFIndex Documentation:
...
When type="dih", these actions are used:
abort: Aborts an ongoing indexing task.
deltaimport: For partial indexing. For instance, for any updates in the database, instead of a full import, you can perform a delta import to update your collection.
fullimport: To index the full database. For instance, when you index the database for the first time.
status: Provides the status of indexing, such as the total number of documents processed and status such as idle or running.
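Applied to the code in the question, which indexes from a query rather than type="dih", that would look like this (everything except the action is unchanged):

<cfindex
    query="qDocuments"
    collection="solrdocuments"
    action="refresh"
    type="file"
    key="document_file"
    custom1="ID"
    custom2="Status" />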
I have a relatively simple table of Terms, and each Term can have multiple parents and children so there is a TermAssociation table.
Term              TermAssociation
----              ---------------
TermID            ParentTermID
TermName          ChildTermID
...
When mapped in EF, this generates a Term entity with a many-to-many association with itself. Everything's cool.
The problem is I work in an environment where all table updates must go through stored procedures. I can use stored procedure mapping just fine for the Term entity, but how do I map an SP to the TermAssociation table since it's modeled as an association and not an entity?
I haven't found a way to do this through the designer, but it is possible if you edit the XML of the edmx file directly. Find the association set mapping:
<AssociationSetMapping Name="TermAssociation" TypeName="TCPDataDictionaryModel.TermAssociation" StoreEntitySet="TermAssociation">
  <EndProperty Name="Term">
    <ScalarProperty Name="TermId" ColumnName="ParentTermId" />
  </EndProperty>
  <EndProperty Name="Term1">
    <ScalarProperty Name="TermId" ColumnName="ChildTermId" />
  </EndProperty>
</AssociationSetMapping>
Then add a ModificationFunctionMapping inside the AssociationSetMapping. InsertAssociation is my insert SP, and it takes @ParentTermId and @ChildTermId as parameters.
<ModificationFunctionMapping>
  <InsertFunction FunctionName="TCPDataDictionaryModel.Store.InsertAssociation">
    <EndProperty Name="Term">
      <ScalarProperty Name="TermId" ParameterName="ParentTermId" />
    </EndProperty>
    <EndProperty Name="Term1">
      <ScalarProperty Name="TermId" ParameterName="ChildTermId" />
    </EndProperty>
  </InsertFunction>
</ModificationFunctionMapping>
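If deletes also have to go through a stored procedure, a DeleteFunction can be declared next to the InsertFunction in exactly the same way. A sketch, assuming a hypothetical DeleteAssociation SP that takes the same two parameters:

<DeleteFunction FunctionName="TCPDataDictionaryModel.Store.DeleteAssociation">
  <EndProperty Name="Term">
    <ScalarProperty Name="TermId" ParameterName="ParentTermId" />
  </EndProperty>
  <EndProperty Name="Term1">
    <ScalarProperty Name="TermId" ParameterName="ChildTermId" />
  </EndProperty>
</DeleteFunction>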