When/how are Constraints Checked in a SqlBulkCopy w/ Check Constraints enabled? - sql-server

I am performing several SqlBulkCopy operations in a single transaction, and I need to be able to roll back easily if anything goes wrong. I'm bulk copying several tables that have foreign keys to each other, and I want these constraints checked. I'm copying the parent table first, then the child tables, but I'm receiving foreign key constraint errors. Does SqlBulkCopy include the rows already inserted in the transaction when checking constraints?

By default, constraints are not checked. Change the SqlBulkCopyOptions of the SqlBulkCopy to check them during insertion; the CheckConstraints option is documented as:
"Check constraints while data is being inserted. By default, constraints are not checked."
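If a bulk load has already run without that option, SQL Server leaves the skipped constraints marked as not trusted. A minimal sketch of how you could find and re-validate them afterwards (the table name is a placeholder):

-- Constraints skipped during a bulk load show up as not trusted.
SELECT name, is_not_trusted
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.ChildTable');

-- Re-check all existing rows and restore the trusted flag.
ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT ALL;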

Related

How to make constraints work in Snowflake?

Is there a way to make constraints actually work in Snowflake?
A primary key is created, yet duplicates can still be inserted into the table. Options like cascade update and cascade delete are not working with foreign keys either.
Can someone please help?
If you read the Snowflake documentation you will see that only NOT NULL constraints are enforced; all other constraint types are informational only.
I am guessing that the reason for this is that Snowflake is an analytical, rather than an OLTP, database and therefore the expectation is that constraints are enforced in your ELT processes (as is normal practice) rather than in the DB.
Snowflake does not enforce constraints except NOT NULL.
I think we cannot enforce a constraint in a Snowflake database, but you can apply the constraint in your ETL tool (if you are using one).
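A minimal sketch of the behaviour (table and column names are made up): Snowflake accepts the PRIMARY KEY definition but does not reject the duplicate insert, so duplicate detection has to happen in your own ELT checks.

-- Snowflake accepts the constraint definition but does not enforce it.
CREATE OR REPLACE TABLE customers (
    id INT PRIMARY KEY,
    name STRING
);

INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO customers VALUES (1, 'Bob');   -- succeeds; the duplicate key is stored

-- Uniqueness has to be checked in the ELT process instead, e.g.:
SELECT id, COUNT(*) AS cnt
FROM customers
GROUP BY id
HAVING COUNT(*) > 1;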

DbUnit: insert data into DB2 database after turning of foreign keys

I'm trying to insert initial data into a DB2 database in DbUnit using DatabaseOperation.INSERT.execute(...) which works fine with some datasets. In order to insert some datasets however, I need to disable foreign key constraints first (because the tables in some datasets can be listed in a 'wrong' order).
I'm disabling the foreign key constraints with command SET INTEGRITY FOR <table_name> OFF, but when I try to insert the data after calling that command, I get this error:
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-668, SQLSTATE=57016, SQLERRMC=1;SCHEMA.TABLE, DRIVER=4.17.30
The IBM error code explanation isn't much help here. Is there something I need to do after setting integrity on a table and before inserting data into that table?
EDIT:
I found this in the documentation for the OFF statement: "Specifies that the tables are placed in set integrity pending state. Only very limited activity is allowed on a table that is in set integrity pending state."
If I understand it correctly, this means that when I turn off the integrity checks on a table, I cannot perform any write/modify operations on it! What's the point of turning the integrity check off then? I need to find a way to do this.
You are not "disabling the foreign key constraints" with SET INTEGRITY. SET INTEGRITY OFF basically means "I'm not sure about the integrity of this table's data, so I'd rather restrict access to it until I figure out what's wrong".
To temporarily disable foreign key verification you might try ALTER TABLE foo ALTER FOREIGN KEY bar NOT ENFORCED.
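A sketch of how that could look for DB2 (table and constraint names are placeholders); turning enforcement back on only makes sense once the loaded data is consistent again:

-- Stop enforcing the foreign key so the datasets can be loaded in any order.
ALTER TABLE child_table ALTER FOREIGN KEY fk_parent NOT ENFORCED;

-- ... run the DbUnit INSERT operations here ...

-- Turn enforcement back on once all tables are populated.
ALTER TABLE child_table ALTER FOREIGN KEY fk_parent ENFORCED;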

How does SqlBulkCopy circumvent foreign key constraints?

I used SqlBulkCopy to insert a collection of rows into a table. I forgot to set an integer value on the rows. The missing column is used to reference another table and this is enforced with a foreign key constraint.
For every row inserted, that integer column ended up as zero, and zero didn't identify a row in the related table. When I updated the value to a valid one and then tried to switch it back to zero, it wouldn't accept it.
So my question is how does SqlBulkCopy manage to leave the database in an invalid state?
how does SqlBulkCopy manage to leave the database in an invalid state?
It disables foreign keys on the table you are inserting into.
Yes, this is a horrible default. Be sure to set the option CHECK_CONSTRAINTS (or CheckConstraints for SqlBulkCopy) if you can at all afford it.
It also by default does not fire triggers which is equally terrible for data consistency. The triggers are there for a reason.
By default, CHECK and FOREIGN KEY constraints are ignored during a bulk copy operation. SqlBulkCopy is a managed class providing functionality similar to the SQL Server bcp command. The bcp command has a -h hint, and unless you provide the CHECK_CONSTRAINTS hint, CHECK and FOREIGN KEY constraints are ignored during the bulk load, as the TechNet article states: http://technet.microsoft.com/en-us/library/ms162802.aspx
Similarly, the SqlBulkCopy class has a constructor that accepts a SqlBulkCopyOptions enum. You have to set the CheckConstraints option to ensure constraints are checked: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopyoptions(v=vs.110).aspx
Here is an article that covers controlling constraint checking during bulk import: http://technet.microsoft.com/en-us/library/ms186247(v=sql.105).aspx
Hope this helps.
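For comparison, the same default applies to the T-SQL BULK INSERT statement (the table and file names below are just placeholders), where the hint has to be spelled out as well:

-- Without the hint, CHECK and FOREIGN KEY constraints are ignored
-- and left marked as not trusted.
BULK INSERT dbo.ChildTable
FROM 'C:\data\child.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- With CHECK_CONSTRAINTS, every row is validated as it is loaded.
BULK INSERT dbo.ChildTable
FROM 'C:\data\child.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', CHECK_CONSTRAINTS);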

SQL Server Foreign Key cause cycles or multiple cascade paths

I'm having problems adding a cascade delete onto a foreign key in SQL Server. Table A has three columns. Column 1 and 2 in Table A are foreign key look ups to the same column in Table B. I want a delete of a row in Table B to cascade a delete on a row on Table A based on these foreign keys.
The other column in Table A has a foreign key lookup to table C. If a row in table C is deleted then I want the corresponding cell to be set to null in Table A.
When I add in these constraints I am thrown the error:
Introducing FOREIGN KEY constraint 'FK_RDU_TODELIVERABLEUNITREF' on table 'RelatedDeliverableUnit' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
I am a little stuck with this; Oracle seems perfectly happy with this logic. I am adding these constraints using Liquibase. I think the error is down to my logic and not syntax, but for completeness here is the Liquibase script that manages the foreign keys:
<addForeignKeyConstraint constraintName="FK_RDU_FROMDELIVERABLEUNITREF"
    baseTableName="relatedDeliverableUnit" baseColumnNames="FROMDELIVERABLEUNITREF"
    referencedTableName="DELIVERABLEUNIT" referencedColumnNames="DELIVERABLEUNITREF"
    onDelete="CASCADE"/>
<addForeignKeyConstraint constraintName="FK_RDU_TODELIVERABLEUNITREF"
    baseTableName="relatedDeliverableUnit" baseColumnNames="TODELIVERABLEUNITREF"
    referencedTableName="DELIVERABLEUNIT" referencedColumnNames="DELIVERABLEUNITREF"
    onDelete="CASCADE"/>
<addForeignKeyConstraint constraintName="FK_RDU_RELATIONSHIPREF"
    baseTableName="relatedDeliverableUnit" baseColumnNames="RELATIONSHIPREF"
    referencedTableName="RELATIONSHIPTYPES" referencedColumnNames="RELATIONSHIPREF"
    onDelete="SET NULL"/>
Thanks in advance for any help
I can't find corresponding documentation for later versions, but the SQL Server 2000 BOL addresses this issue:
The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree containing no circular references. No table can appear more than once in the list of all cascading referential actions that result from the DELETE or UPDATE. The tree of cascading referential actions must not have more than one path to any given table. Any branch of the tree is terminated when it encounters a table for which NO ACTION has been specified or is the default.
And later versions haven't changed this. You're falling foul of this:
The tree of cascading referential actions must not have more than one path to any given table
The only way I know of to accomplish this is to implement one of the cascades between B and A using an INSTEAD OF trigger, rather than using ON DELETE....
The relation between tables A and C shouldn't be impacted by any of this.
(2008 BOL)
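A sketch of what that could look like here: keep ON DELETE CASCADE on FK_RDU_FROMDELIVERABLEUNITREF, declare FK_RDU_TODELIVERABLEUNITREF with ON DELETE NO ACTION, and emulate the second cascade path in an INSTEAD OF DELETE trigger on DELIVERABLEUNIT (the trigger name is made up):

CREATE TRIGGER TRG_DELIVERABLEUNIT_DELETE
ON DELIVERABLEUNIT
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Emulate the second cascade path by hand.
    DELETE rdu
    FROM relatedDeliverableUnit AS rdu
    JOIN deleted AS d ON rdu.TODELIVERABLEUNITREF = d.DELIVERABLEUNITREF;

    -- Perform the original delete; the remaining FK cascades normally.
    DELETE du
    FROM DELIVERABLEUNIT AS du
    JOIN deleted AS d ON du.DELIVERABLEUNITREF = d.DELIVERABLEUNITREF;
END;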

Can DB constraints ignore existing records and apply only to new data?

I want to learn the answer for different DB engines, but in our case:
we have some records that are not unique for a column, and now we want to make that column unique, which forces us to remove the duplicate values.
We use Oracle 10g. Is this reasonable? Or is this something like a goto statement :) ? Should we really delete? What if we had millions of records?
To answer the question as posted: No, it can't be done on any RDBMS that I'm aware of.
However, like most things you can work around it, by doing the following.
Create a composite key, with a new column and the existing column
You can make it unique without deleting anything by adding a new column; call it PartialKey.
For existing rows, set PartialKey so that each (existing column, PartialKey) pair is unique, numbering the duplicates starting at zero.
Create a unique constraint on the existing column and PartialKey (you can do this because each of these rows is now unique).
For new rows, only ever use the default value of zero for PartialKey; because zero has already been used by the existing rows, this forces the existing column to have unique values for new data.
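A sketch of the idea in SQL Server syntax (the question is about Oracle 10g, but the approach is the same; all names here are made up):

-- Add the helper column; existing rows are filled with the default of 0.
ALTER TABLE myTable ADD PartialKey INT NOT NULL DEFAULT 0;

-- Number the existing duplicates so every (myColumn, PartialKey) pair is unique.
UPDATE t
SET PartialKey = t.rn - 1
FROM (SELECT PartialKey,
             ROW_NUMBER() OVER (PARTITION BY myColumn ORDER BY myColumn) AS rn
      FROM myTable) AS t;

-- New rows keep the default PartialKey of 0, so a duplicate myColumn value
-- collides with the existing (myColumn, 0) row and is rejected.
ALTER TABLE myTable ADD CONSTRAINT UQ_myTable_myColumn_PartialKey
    UNIQUE (myColumn, PartialKey);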
IMPORTANT EDIT
This is weak - if you delete a row with partial key 0, another row can then be added with a value that already exists in the column, because there is no longer a (value, 0) pair for it to collide with.
You would need to ensure that either:
- you never delete the row with partial key 0, or
- you always have a dummy row with partial key 0, and you never delete it (or you immediately reinsert it automatically).
Edit: Bite the bullet and clean the data
If, as you said, you've just realised that the column should be unique, then you should (if possible) clean up the data. The above approach is a hack, and you'll find yourself writing more hacks when accessing the table (you may find you have two sets of logic for dealing with queries against that table: one for where the column IS unique, and one for where it's NOT). I'd clean this now or it'll come back and bite you in the arse a thousand times over.
This can be done in SQL Server.
When you create a check constraint, you can set an option to apply it either to new data only or to existing data as well. The option of applying the constraint to new data only is useful when you know that the existing data already meets the new check constraint, or when a business rule requires the constraint to be enforced only from this point forward.
For example:
ALTER TABLE myTable
WITH NOCHECK
ADD CONSTRAINT myConstraint CHECK (myColumn > 100)
You can do this using the ENABLE NOVALIDATE constraint state, but deleting the duplicates is the much preferred way.
You have to set your records straight before adding the constraints.
In Oracle you can put a constraint in an enable novalidate state. When a constraint is in the enable novalidate state, all subsequent statements are checked for conformity to the constraint. However, any existing data in the table is not checked. A table with enable novalidated constraints can contain invalid data, but it is not possible to add new invalid data to it. Enabling constraints in the novalidated state is most useful in data warehouse configurations that are uploading valid OLTP data.
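A minimal sketch for Oracle (table, column, and index names are placeholders); a non-unique index is needed so the existing duplicate rows can stay in place while new rows are checked:

-- Non-unique index so the constraint can coexist with existing duplicates.
CREATE INDEX my_table_col_idx ON my_table (my_column);

-- Enforce uniqueness for new/changed rows only; existing rows are not validated.
ALTER TABLE my_table
  ADD CONSTRAINT my_table_col_uq UNIQUE (my_column)
  USING INDEX my_table_col_idx
  ENABLE NOVALIDATE;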
