I ran into an error in SQL Server and, after resolving it, I am trying to understand why it was happening.
The situation is that I tried to alter a column in a table like this:
Alter Table tblEmployee
Alter Column empDate Date
But while running this script, I get this error:
The statistics 'empDate' is dependent on column 'empDate'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN empDate failed because one or more objects access this column.
It turns out that this error was caused by a statistic that references this column. I have no script that explicitly creates a statistic, and the error occurred in the production environment, so it must have been auto-created. If it is auto-created, then why isn't SQL Server dropping it by itself? The error was resolved when I dropped the statistic.
I have looked elsewhere but was not able to find anything relevant.
I haven't looked hard at SQL statistics for a few versions, but back when, auto-generated statistics had fairly distinctive names (like "_WA_Sys_00000005_00000037"). If your statistic was literally named "empDate", then it was almost certainly not an auto-created statistic, but something someone created deliberately.
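One way to check, before dropping anything, is to look at the auto_created flag in sys.stats for that column; a minimal sketch, assuming the table lives in the dbo schema:
-- Is the statistic on empDate auto-created? (auto_created = 1 means SQL Server made it)
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
    ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.object_id = OBJECT_ID('dbo.tblEmployee')
  AND c.name = 'empDate';

-- Once you know which statistic is in the way, drop it so the ALTER COLUMN can run:
DROP STATISTICS dbo.tblEmployee.empDate;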
Related
The code below results in an error when a table [mytable] exists and has a columnstore index, even though the table will have been dropped and recreated without a columnstore index before page compression is applied to it.
drop table if exists [mytable]
select top 0
*
into [mytable]
from [myexternaltable]
alter table [mytable] rebuild partition = all with (data_compression = page)
The error thrown:
This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.
At this point [mytable] has not been dropped, so SQL Server has apparently not started executing any of the code.
The code runs just fine when I run the drop table statement first and the rest of the code after (see the sketch below). SQL Server seemingly stops with an error prematurely when, at the start of a batch, it detects an inconsistency with an existing table (one that will not necessarily persist), yet it is perfectly happy with table [mytable] not existing at all, even though a table not existing can hardly be seen as consistent with applying compression to it. SQL Server's consistency checking does not look particularly consistent itself.
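For reference, this is the split described above, with the drop in its own batch; a sketch assuming the script is run from a client that understands GO as a batch separator (SSMS, sqlcmd):
drop table if exists [mytable]
-- batch boundary: the statements below are compiled only after the drop has run
go

select top 0
*
into [mytable]
from [myexternaltable]

alter table [mytable] rebuild partition = all with (data_compression = page)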
I recall having had similar issues with references to columns that did not exist yet but were to be created by the code, if only SQL Server would allow the code to run instead of terminating on a wrongly predicted error.
What would be the most straightforward solution to this issue? I would not mind suppressing the error altogether - if possible - since it is obviously wrong.
I am trying to avoid work-arounds such as running the code as 2 separate batches, putting part of the code in an exec phrase, or trying and catching the error. The code is being used in hundreds of stored procedures, so the simpler the solution the better.
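For completeness, the "exec phrase" work-around mentioned above would look roughly like this; deferring the rebuild to dynamic SQL means it is not compiled until it executes, by which point [mytable] has been recreated without the columnstore index (a sketch, not a recommendation):
drop table if exists [mytable]

select top 0
*
into [mytable]
from [myexternaltable]

-- the rebuild is compiled only when sp_executesql runs, after the re-create above
exec sp_executesql N'alter table [mytable] rebuild partition = all with (data_compression = page)'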
This is on Azure SQL, but I've seen it with SQL Server 2017 as well. I am creating system-versioned tables. After I create a table, I create a MERGE statement to deposit data into it (the MERGE statement will be used again to keep the table updated). Many times I will get an error message stating: Attempting to set a non-NULL-able column's value to NULL. If I simply drop the table and recreate it, I don't see the error again.
This doesn't happen with every table I create, but it's frequent. Until recently I had never seen this error. Any ideas what's causing it?
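For context, a minimal sketch of the kind of setup described; the table, columns, and staging source below are made up for illustration:
-- Hypothetical system-versioned (temporal) table; all names are placeholders
CREATE TABLE dbo.Employee
(
    EmployeeId   int           NOT NULL PRIMARY KEY,
    EmployeeName nvarchar(100) NOT NULL,
    ValidFrom    datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo      datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- MERGE used for the initial load and for subsequent refreshes
MERGE dbo.Employee AS tgt
USING (SELECT EmployeeId, EmployeeName FROM dbo.EmployeeStaging) AS src
    ON tgt.EmployeeId = src.EmployeeId
WHEN MATCHED THEN
    UPDATE SET EmployeeName = src.EmployeeName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (EmployeeId, EmployeeName)
    VALUES (src.EmployeeId, src.EmployeeName);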
I have had the same problem.
"The only way I got around this was to just remove the index from the temporal history table" - that was my solution as well.
I am adding versioning to my database a bit later than I should have, and as such I have some tables in inconsistent states. I have a table that a column was added to from Java, but not all tables are guaranteed to have that column at this point.
What I had been doing was checking, on the first run of the program, whether the column existed, and adding it if it did not.
The library I am using to deal with versioning (flyway.org) takes in a bunch of .sql files in order to set up the database. For many tables this is simple: I just have an .sql file containing "CREATE TABLE IF NOT EXISTS XXX", so those can still be run as-is.
I am wondering if there is some way I haven't thought of to handle these ALTER TABLE statements without SQLite generating an error.
I've tried looking to see if there is a command to add a column only if it doesn't exist, but there doesn't seem to be one. I've also tried to find a way to handle errors in SQLite, for example running the ALTER TABLE anyway and just ignoring the error, but there doesn't seem to be a way of doing that (as far as I can tell). Does anyone have any suggestions? I want a solution that is 100% in a .sql script if possible.
There is no "IF NOT EXIST" clause for Alter Tables in SQLite, it doesn't exist.
There is a way to interrogate the database on what columns a table contains with PRAGMA table_info(table_name);. But there is no 100% SQL way to take that information and apply it to an Alter Table statement.
Maybe one day, but not today.
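As an illustration of the interrogation mentioned above, assuming a hypothetical table named mytable and a hypothetical new_column to be added:
-- One row per column: cid | name | type | notnull | dflt_value | pk
PRAGMA table_info(mytable);

-- The decision based on that output has to be made by the calling code
-- (Java, a shell script, etc.), not by SQLite itself, e.g.:
-- ALTER TABLE mytable ADD COLUMN new_column TEXT;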
I'm troubleshooting a nasty stored procedure and noticed that after running it and closing my session, lots of temp tables are still left in tempdb. They have names like the following:
#000E262B
#002334C4
#004E1D4D
#00583EEE
#00783A7F
#00832777
#00CD403A
#00E24ED3
#00F75D6C
If I run this code:
if object_id('tempdb..#000E262B') is null
print 'Does NOT exist!'
I get:
Does NOT exist!
If I do:
use tempdb
go
drop TABLE #000E262B
I get an error:
Msg 3701, Level 11, State 5, Line 1
Cannot drop the table '#000E262B', because it does not exist or you do not have permission.
I am connected to SQL Server as sysadmin. Using SP3 64-bit. I currently have over 1100 of these tables in tempdb, and I can't get rid of them. There are no other users on the database server.
Stopping and starting SQL Server is not an option in my case.
Thanks!
http://www.sqlservercentral.com/Forums/Topic456599-149-1.aspx
If temp tables or table variables are used frequently then, instead of dropping them, SQL Server just 'truncates' them, leaving the definition in place. This saves the effort of recreating the table the next time it is needed.
Tables created with the # prefix are only available to the connection that created them, so any new connection you open will not be able to see them and therefore will not be able to drop them.
How can you tell that they still exist? What command are you running to find this out?
Is the connection that created them closed properly? If not then this may be why they still exist.
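One way to list what is actually sitting in tempdb (a sketch) is to query its catalog views directly:
-- Temp-table objects physically present in tempdb; cached and other-session
-- temp tables show up here even though object_id('tempdb..#000E262B')
-- returns NULL from a new session.
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#%'
ORDER BY create_date;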
I have a DTS package that is raising an error with a "Copy SQL Server Objects" task. The task is copying a table plus data from one SQL Server 2000 SP4 server to another (same version) and is giving the error: -
Could not find CHECK constraint for 'dbo.MyTableName', although the table is flagged as having one.
The source table has one check constraint defined that appears to cause the problem. After running the DTS package, everything appears to have worked properly - the table, all constraints and data ARE created on the destination server. But the error above is raised, causing subsequent steps not to run.
Any idea why this error is raised?
This indicates that the metadata in the sys tables has gotten out of sync with your actual schema. If you aren't seeing any other signs of more generalized corruption, rebuilding the table by copying it to another table (select * into newtable from oldtable), dropping the old table, renaming the new one, and replacing the constraints should help. This is similar to what Enterprise Manager for 2000 does when you insert a column that isn't at the end of the table, so inserting a new column in the middle of the table and then removing it will achieve the same thing if you don't want to write the queries manually.
I would be somewhat concerned by the state of the database as a whole if you see other occurrences of this kind of error. (I'm assuming here that you have already done CHECKDB commands and that the error is persisting...)
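In case it helps, the rebuild described above would look roughly like this; the new-table suffix and the constraint definition are placeholders to adapt:
-- Copy the data into a fresh table (constraints and indexes are not copied)
SELECT * INTO dbo.MyTableName_rebuild FROM dbo.MyTableName;

-- Swap the tables
DROP TABLE dbo.MyTableName;
EXEC sp_rename 'dbo.MyTableName_rebuild', 'MyTableName';

-- Re-create the check constraint; the definition below is a made-up example
ALTER TABLE dbo.MyTableName
    ADD CONSTRAINT CK_MyTableName_NewColumn CHECK (NewColumn >= 0);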
This error started when a new column (with a check constraint) was added to an existing table. To investigate I have: -
Copied the table to a different destination SQL Server and got the same error.
Created a new table with exactly the same structure but a different name, and copied it with no error.
Dropped and re-created the check constraint on the problem table, but still got the same error.
dbcc checktable ('MyTableName') with ALL_ERRORMSGS gives no errors.
dbcc checkdb in the source and destination database gives no errors.
Interestingly the DTS package appears to: -
Copy the table.
Copy the data.
Create the constraints.
I say this because the check constraint's create time is 7 minutes after the table's create time, i.e. it creates the check constraint AFTER it has moved the data. That makes sense, as it then does not have to check the data while copying, presumably improving performance.
As Godeke suggests, I think something has become corrupted in the system tables, since a new table with the same columns works - even though the DBCC statements give no errors?