Error when copying a check constraint using DTS - sql-server

I have a DTS package that is raising an error with a "Copy SQL Server Objects" task. The task is copying a table plus data from one SQL Server 2000 SP4 server to another (same version) and is giving the error:
Could not find CHECK constraint for 'dbo.MyTableName', although the table is flagged as having one.
The source table has one check constraint defined that appears to cause the problem. After running the DTS package, everything appears to have worked properly: the table, all constraints and data ARE created on the destination server. But the error above is raised, causing subsequent steps not to run.
Any idea why this error is raised?

This indicates that the metadata in the sys tables has gotten out of sync with your actual schema. If you aren't seeing any other signs of more generalized corruption, rebuilding the table should help: copy it to another table (select * into newtable from oldtable), drop the old table, rename the new one, and re-create the constraints. This is similar to what Enterprise Manager for 2000 does when you insert a column that isn't at the end of the table, so inserting a new column in the middle of the table and then removing it will achieve the same thing if you don't want to write the queries by hand.
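A minimal sketch of that rebuild, assuming a hypothetical table MyTableName with a single check constraint (the constraint name and definition below are placeholders; any keys, indexes and permissions would also need re-creating, since SELECT INTO copies only the data):
-- Sketch only: rebuild the table so that fresh metadata rows are written.
SELECT * INTO MyTableName_rebuild FROM MyTableName;
DROP TABLE MyTableName;
EXEC sp_rename 'MyTableName_rebuild', 'MyTableName';
-- Re-create the check constraint using your real definition.
ALTER TABLE MyTableName
    ADD CONSTRAINT CK_MyTableName_Amount CHECK (Amount >= 0);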
I would be somewhat concerned by the state of the database as a whole if you see other occurrences of this kind of error. (I'm assuming here that you have already done CHECKDB commands and that the error is persisting...)

This error started when a new column (with a check constraint) was added to an existing table. To investigate I have:
Copied the table to a different destination SQL Server and got the same error.
Created a new table with exactly the same structure but different name and copied with no error.
Dropped and re-created the check constraint on the problem table but still get the same error.
dbcc checktable ('MyTableName') with ALL_ERRORMSGS gives no errors.
dbcc checkdb in the source and destination database gives no errors.
Interestingly, the DTS package appears to:
Copy the table.
Copy the data.
Create the constraints.
I can tell because the check constraint's create time is 7 minutes after the table's create time, i.e. it creates the check constraint AFTER it has moved the data. That makes sense, as it then does not have to check the data while copying, presumably improving performance.
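For reference, those create times can be compared on SQL Server 2000 with a query along these lines (a sketch; 'MyTableName' stands in for the real table, and the constraint row is found through its parent object):
-- Sketch: list creation dates of the table and any check constraints on it (SQL Server 2000).
SELECT name, xtype, crdate
FROM sysobjects
WHERE name = 'MyTableName'
   OR (xtype = 'C' AND parent_obj = OBJECT_ID('MyTableName'));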
As Godeke suggests, I think something has become corrupt in the system tables, as a new table with the same columns works, even though the DBCC statements give no errors.

Related

SQL Server errors out prematurely when starting a batch, for an error that will not actually occur

The code below results in an error when a table [mytable] exists and has a columnstore index, even though the table will have been dropped and recreated without the columnstore index before page compression is applied to it.
drop table if exists [mytable]
select top 0
*
into [mytable]
from [myexternaltable]
alter table [mytable] rebuild partition = all with (data_compression = page)
The error thrown:
This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.
At this point [mytable] has not been dropped so SQL Server has apparently not started executing any code.
The code runs just fine when I run the drop table statement first and the rest of the code after. SQL Server seemingly stops with an error prematurely when, at the start of a batch, it detects an inconsistency with an existing table, even one that will not necessarily persist. Yet it is perfectly happy with table [mytable] not existing at all, although a table not existing can hardly be seen as consistent with applying compression to it. SQL Server's consistency checking does not look particularly consistent itself.
I recall having had similar issues with references to columns that did not exist yet but were to be created by the code; if only SQL Server would allow the code to run instead of terminating on a wrongly predicted error.
What would be the most straightforward solution to this issue? I would not mind suppressing the error altogether - if possible - since it is obviously wrong.
I am trying to avoid work-arounds such as running the code as 2 separate batches, putting part of the code in an EXEC statement, or trying and catching the error. The code is used in hundreds of stored procedures, so the simpler the solution the better.
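For reference, the dynamic-SQL deferral mentioned above (one of the work-arounds the asker would rather avoid) looks roughly like this; wrapping the ALTER in EXEC postpones its compilation until after the table has been dropped and recreated, so the columnstore check no longer fires. A sketch only, using the same table names as the question:
drop table if exists [mytable]
select top 0
*
into [mytable]
from [myexternaltable]
-- deferring compilation with dynamic SQL sidesteps the compile-time columnstore check
exec ('alter table [mytable] rebuild partition = all with (data_compression = page)')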

"Attempting to set a non-NULL-able column's value to NULL."

Azure SQL Server. But I've seen this with SQL Server 2017 as well. I am creating system-versioned tables. After I create the table, I create a MERGE statement to deposit data into it (the MERGE statement will be used again to keep the table updated). Many times I will get an error message stating: Attempting to set a non-NULL-able column's value to NULL. If I simply drop the table and recreate it, I don't see the error again.
This doesn't happen with every table I create, but it's frequent. Until recently I have never seen this error before. Any ideas what's causing it?
I have had the same problem.
"The only way I got around this was to just remove the index from the temporal history table" - that was my solution as well.

How can I check if a column already exists to avoid ALTER TABLE in an SQL script file for SQLite

I am adding versioning to my database a bit later than I should have, and as such I have some tables in inconsistent states. I have a table that a column was added to from Java code, but not every existing database is guaranteed to have that column at this point.
What I had been doing was, on the first run of the program, checking whether the column existed and adding it if it did not.
The library I am using to deal with versioning (flyway.org) takes in a bunch of .sql files in order to set up the database. For many tables this is simple: I just have an SQL file containing "CREATE TABLE IF NOT EXISTS XXX", which is easily handled, so those can still be run.
I am wondering if there is some way to handle these ALTER TABLE statements without SQLite generating an error, one that I haven't thought of or haven't found yet.
I've looked for a command that adds a column only if it doesn't exist, but there doesn't seem to be one. I've also tried to find a way to handle errors in SQLite, for example running the ALTER TABLE anyway and just ignoring the error, but there doesn't seem to be a way of doing that (as far as I can tell). Does anyone have any suggestions? I want a solution 100% in a .sql script if possible.
There is no "IF NOT EXIST" clause for Alter Tables in SQLite, it doesn't exist.
There is a way to interrogate the database on what columns a table contains with PRAGMA table_info(table_name);. But there is no 100% SQL way to take that information and apply it to an Alter Table statement.
Maybe one day, but not today.
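To illustrate, the inspection part is plain SQLite, but acting on its result has to happen outside pure SQL (a sketch with hypothetical table and column names):
-- Sketch: list the columns of a hypothetical table "users".
-- The result has cid, name, type, notnull, dflt_value and pk columns;
-- application code must read it and decide whether to run the ALTER TABLE.
PRAGMA table_info(users);
-- If the column is missing, the application would then issue something like:
-- ALTER TABLE users ADD COLUMN version INTEGER DEFAULT 0;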

Ignore duplicate records in SSIS' OLE DB destination

I'm using an OLE DB Destination to populate a table with values from a web service.
The package will be scheduled to run in the early AM for the prior day's activity. However, if this fails, the package can be executed manually.
My concern is that if the operator chooses a date range that overlaps existing data, the whole package will fail (verified).
I would like it to:
INSERT the missing values (works as expected if there are no duplicates)
ignore the duplicates, not cause the package to fail, and raise an exception that can be captured by the Windows application log (logged as a warning)
collect the number of successfully inserted records and the number of duplicates
If it matters, I'm using Data access mode = Table or view - fast load.
Suggestions on how to achieve this are appreciated.
That's not a feature.
If you don't want errors (duplicates), then you need to defend against them, much as you would in your favorite language. Instead of relying on error handling, you test for the existence of the error-inducing thing (a Lookup Transform to identify whether the row already exists in the destination) and then filter the duplicates out (Redirect No Match Output).
The technical solution you absolutely should not implement
Change the access mode from "Table or View Name - Fast Load" to "Table or View Name". This changes the method of insert from a bulk/set-based operation to singleton inserts. By inserting one row at a time, the SSIS package can evaluate the success or failure of each row's save. You then need to go into the advanced editor (as in your screenshot) and change the Error disposition from Fail Component to Ignore Failure.
This solution should not be used, as it yields poor performance, generates unnecessary workload, and has the potential to mask save errors beyond just duplicates - referential integrity violations, for example.
Here's how I would do it:
Point your SSIS destination to a staging table that will be empty when the package is run.
Insert all rows into the staging table.
Run a stored procedure that uses SQL to import records from the staging table to the final destination table, WHERE the records don't already exist in the destination table.
Collect the desired meta-data and do whatever you want with it.
Empty the staging table for the next use.
(Those last 3 steps would all be done in the same stored procedure.)
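As a hedged sketch of that stored procedure, assuming hypothetical tables dbo.StagingTable and dbo.DestinationTable with a business key column BusinessKey (adapt the names and the duplicate test to the real schema):
CREATE PROCEDURE dbo.ImportFromStaging
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @staged int, @inserted int;

    SELECT @staged = COUNT(*) FROM dbo.StagingTable;

    -- Insert only the rows that are not already in the destination.
    INSERT INTO dbo.DestinationTable (BusinessKey, SomeValue)
    SELECT s.BusinessKey, s.SomeValue
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.DestinationTable AS d
                      WHERE d.BusinessKey = s.BusinessKey);

    SET @inserted = @@ROWCOUNT;

    -- Meta-data: rows inserted and rows skipped as duplicates.
    SELECT @inserted AS RowsInserted, @staged - @inserted AS DuplicatesSkipped;

    -- Empty the staging table for the next run.
    TRUNCATE TABLE dbo.StagingTable;
END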

Auto Created Statistics not getting deleted by SQL Server 2008

I ran into an error in SQL Server and, after resolving it, I am looking for the reason why it was happening.
The situation is that I tried to alter a column in a table like this
Alter Table tblEmployee
Alter Column empDate Date
But while running this script, I get the error -
The statistics 'empDate' is dependent on column 'empDate'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN empDate failed because one or more objects access this column.
It turns out that this error was because of a statistic referencing this column. I have no script that explicitly creates a statistic, and the error occurred in the production environment, so it must have been auto-created. If it is auto-created, then why isn't SQL Server deleting it by itself? My error was resolved when I dropped the statistic.
I have looked in other places and was not able to find anything relevant.
I haven't looked hard at SQL Server statistics for a few versions, but back when, auto-generated statistics had fairly distinctive names (like "_WA_Sys_00000005_00000037"). If your statistic literally had the name "empDate", then it was almost certainly not an auto-created statistic, but something someone created deliberately.
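To verify, on SQL Server 2005 and later the auto_created flag in sys.stats shows how a statistic was created (a sketch; dbo.tblEmployee and empDate are the names from the question):
-- Sketch: check whether the statistics on the table were auto-created.
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.tblEmployee');

-- If a manually created statistic is blocking the ALTER, it can be dropped:
-- DROP STATISTICS dbo.tblEmployee.empDate;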
