"Attempting to set a non-NULL-able column's value to NULL." - sql-server

This is on Azure SQL, but I've also seen it with SQL Server 2017. I am creating system-versioned (temporal) tables. After I create a table, I run a MERGE statement to load data into it (the same MERGE statement is then reused to keep the table up to date). Many times I get an error message stating: Attempting to set a non-NULL-able column's value to NULL. If I simply drop the table and recreate it, I don't see the error again.
This doesn't happen with every table I create, but it's frequent. Until recently I had never seen this error. Any ideas what's causing it?
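For context, the pattern in question looks roughly like this; the table, staging source, and column names here are hypothetical stand-ins:

CREATE TABLE dbo.MyTable
(
    Id INT NOT NULL PRIMARY KEY CLUSTERED,
    Name NVARCHAR(100) NOT NULL,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTable_History));

-- The period columns are GENERATED ALWAYS, so the MERGE touches only the data columns
MERGE dbo.MyTable AS tgt
USING dbo.MyStagingTable AS src
    ON tgt.Id = src.Id
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name) VALUES (src.Id, src.Name);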

I have had the same problem. "The only way I got around this was to just remove the index from the temporal history table" was my solution as well.

Related

SSIS returns an incorrect error

I created an SSIS package which creates a table MyTable in SQL Server with a column BaseVariantVersionID. The package first inserts data into this table.
At the end of the package, I drop the column BaseVariantVersionID from the table.
The first debug run is OK. But on the second attempt, SSIS returns a validation error: it won't let me recreate the table with BaseVariantVersionID, because it thinks the next step cannot insert into a column that is not present in the current table.
Maybe you know some property to disable validation against the current database?
Update
I drop the column after all the other steps, and in the first step I recreate the table with the column. But the system still returns the error - it looks like it uses the existing table for validation.
There are several possible issues I can think of.
First, you must make sure to drop the table if it already exists before creating it at the beginning of the package.
Example Execute SQL Task:
IF OBJECT_ID('MyTable', 'U') IS NOT NULL
    DROP TABLE MyTable
GO
CREATE TABLE MyTable (etc...)
GO
ALTER TABLE MyTable ADD (etc...)  -- note: T-SQL uses ADD, not ADD COLUMN
GO
Second, you can set DelayValidation = True in the Data Flow Task's Properties window (usually at the bottom right after clicking a Data Flow Task in the design area). That delays validation until run time.
If at the moment you have a missing-field error, you can add the column manually in SQL Server Management Studio, then double-click the task with the error, and the missing-field error should disappear (now that the column exists). After that you can save the package and exit.
For an Execute SQL Task, you can set BypassPrepare to True. Sometimes that will allow you to design and build a package which doesn't validate at design time, but will validate okay at run time.
But I have to question the need to create columns and tables at run time. Are you sure you need to do this? It is a more typical use case to have SSIS move data around in existing table structures, not create them at run time.
If I read your description correctly, you're dropping the column on the first pass, then getting an error trying to recreate the table on the second pass? You will get an error on run #2 if you try to create a table that already exists, even if it doesn't have "SomeColumn" in it.
You'll need to drop the table if you're going to want to recreate it or change your code to add the column back if the table exists.
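A hedged sketch of the "add the column back if the table exists" branch; COL_LENGTH returns NULL when a column is missing, and the names are taken from the question:

IF OBJECT_ID('dbo.MyTable', 'U') IS NOT NULL
   AND COL_LENGTH('dbo.MyTable', 'BaseVariantVersionID') IS NULL
BEGIN
    -- Re-add the column only when the table survives from the previous run
    ALTER TABLE dbo.MyTable ADD BaseVariantVersionID INT NULL;
END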

Sql Server rows being auto inserted without a trigger

I have a table in SQL Server for which, if I delete a row, a new row is inserted with the same data and userid as the one I deleted. There are no triggers on this table. In addition, I did a search of all database objects that reference this table, and there are no triggers anywhere in the database that reference it, only some stored procedures, none of which have any code that would cause this behavior.
To be clear, if I run this query:
delete from my_table where id = 1
the row with the id of 1 is deleted, but a new row is inserted with the same userid and date as the deleted row. No application code is involved; a straight SQL delete statement run directly against the database causes this.
What else besides a trigger could be causing this to happen? I've never encountered something like this before.
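For anyone checking the same thing, one way to confirm that no DML triggers exist on the table (sys.triggers is available on SQL Server 2005 and later; the table name is from the question):

-- Returns one row per trigger defined on the table; empty result = no triggers
SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.my_table');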
It took me a long time, but I discovered this was being caused by a "rogue" linq-to-sql DLL that was still running in spite of its parent app being killed.
The good news is, there isn't some weird non-trigger way to insert rows on delete in SQL, so we can all resume our normal lives now, knowing all is as it was.

Auto Created Statistics not getting deleted by Sql Server 2008

I ran into an error in Sql Server and after resolving it, I am looking for the reason why this was happening.
The situation is that I tried to alter a column in a table like this
Alter Table tblEmployee
Alter Column empDate Date
But while running this script, I get the error -
The statistics 'empDate' is dependent on column 'empDate'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN empDate failed because one or more objects access this column.
It turns out this error was caused by a statistic that referenced the column. I have no script that explicitly creates a statistic, and the error occurred in the production environment, so it must have been auto-created. But if it was auto-created, why isn't SQL Server deleting it by itself? My error was resolved when I dropped the statistic.
I have looked elsewhere and was not able to find anything relevant.
I haven't looked hard at SQL Server statistics for a few versions, but back when, auto-generated statistics had fairly distinctive names (like "_WA_Sys_00000005_00000037"). If your statistic was literally named "empDate", then it was almost certainly not an auto-created statistic, but something someone created deliberately.
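If you'd rather check than infer from the name, sys.stats exposes an auto_created flag; a quick sketch using the table from the question (the dbo schema is an assumption):

-- 1 in auto_created means SQL Server generated the statistic itself
SELECT s.name, s.auto_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.tblEmployee');

-- Drop it, as the asker did, to unblock the ALTER COLUMN
DROP STATISTICS dbo.tblEmployee.empDate;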

Error when copying a check constraint using DTS

I have a DTS package that is raising an error with a "Copy SQL Server Objects" task. The task copies a table plus data from one SQL Server 2000 SP4 server to another (same version) and gives the error:
Could not find CHECK constraint for 'dbo.MyTableName', although the table is flagged as having one.
The source table has one check constraint defined, which appears to cause the problem. After running the DTS package, everything appears to have worked: the table, all constraints, and the data ARE created on the destination server. But the error above is raised, causing subsequent steps not to run.
Any idea why this error is raised?
This indicates that the metadata in the sys tables has gotten out of sync with your actual schema. If you aren't seeing any other signs of more generalized corruption, rebuilding the table will help: copy it to another table (SELECT * INTO newtable FROM oldtable), drop the old table, rename the new one, and re-create the constraints (see the sketch below). This is similar to what Enterprise Manager for 2000 does when you insert a column that isn't at the end of the table, so inserting a new column in the middle of the table and then removing it will achieve the same thing if you don't want to write the queries manually.
I would be somewhat concerned about the state of the database as a whole if you see other occurrences of this kind of error. (I'm assuming here that you have already run CHECKDB commands and that the error persists...)
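In script form, that rebuild looks roughly like this; the CHECK constraint at the end is a placeholder, so script out the real constraint definitions before dropping the old table:

-- Copy the data out, swap the tables, then re-create the constraints
SELECT * INTO dbo.MyTableName_rebuild FROM dbo.MyTableName;
DROP TABLE dbo.MyTableName;
EXEC sp_rename 'dbo.MyTableName_rebuild', 'MyTableName';
-- Re-add constraints from their scripted definitions; this one is a placeholder:
ALTER TABLE dbo.MyTableName ADD CONSTRAINT CK_MyTableName_Example CHECK (SomeColumn IS NOT NULL);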
This error started when a new column (with a check constraint) was added to an existing table. To investigate, I have:
Copied the table to a different destination SQL Server and got the same error.
Created a new table with exactly the same structure but different name and copied with no error.
Dropped and re-created the check constraint on the problem table but still get the same error.
dbcc checktable ('MyTableName') with ALL_ERRORMSGS gives no errors.
dbcc checkdb in the source and destination database gives no errors.
Interestingly, the DTS package appears to:
Copy the table.
Copy the data.
Create the constraints.
I can tell because the check constraint's create time is 7 minutes after the table's create time, i.e. it creates the check constraint AFTER it has moved the data. That makes sense, as it does not have to check the data while copying it, presumably improving performance.
As Godeke suggests, I think something has become corrupt in the system tables, since a new table with the same columns works. Even though the DBCC statements give no errors?

Deleting Rows from a SQL Table marked for Replication

I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients but the DTS operation bypasses the replication triggers so the imported rows are not marked for insertion on the subscribers. In effect the subscribers lose the data although it is on the publisher.
So I thought "no worries" I will just delete the rows again and then add them correctly via an insert statement and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ windows mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"
Have you tried truncating the table?
You may have to truncate the table and reset the ID field back to 0 if you need the inserted rows to have the same ID. If not, just truncate and it should be fine.
You also could look into temporarily dropping the unique index and adding it back when you're done.
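(Per the question's edit, TRUNCATE itself is blocked on a published table, but for reference, the identity reseed mentioned above would look like this, with a placeholder table name:)

DBCC CHECKIDENT ('dbo.MyTable', RESEED, 0);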
Look into sp_mergedummyupdate
Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename your second table. This should give you the data with the right keys...and it should all consist of SQL statements that are allowed to trickle down the replication. It just isn't probably the best on performance...and definitely would impose some risk.
I haven't tried this first hand in a replicated environment...but it may be at least worth trying out.
Thanks for the tips...I eventually found a solution:
I deleted the merge delete trigger from the table
Deleted the DTSed rows
Recreated the merge delete trigger
Added my rows correctly using an insert statement.
I was a little worried about fiddling with the merge triggers, but everything appears to be working correctly.
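In script form, the sequence looks roughly like this. The trigger name is hypothetical (merge replication generates its triggers with names along the lines of MSmerge_del_<GUID>), so script the real definition out before dropping it:

-- SQL Server 2000: list the triggers on the table to identify the merge delete trigger
SELECT name FROM sysobjects
WHERE type = 'TR' AND parent_obj = OBJECT_ID('dbo.MyTable');

-- Script out the trigger definition first so it can be recreated, then:
DROP TRIGGER dbo.MSmerge_del_ABC123;  -- hypothetical name for the merge delete trigger

DELETE FROM dbo.MyTable;  -- remove the DTSed rows without writing tombstone records

-- Re-run the scripted CREATE TRIGGER statement to restore the merge delete trigger,
-- then INSERT the rows normally so replication marks them for the subscribers.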
