I've got the following error in DBCC CHECKDB output from our customer for one table (the output contains many very similar lines):
Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 212503619, index ID 1, partition ID 72057594046251008, alloc unit ID
72057594048675840 (type LOB data). The off-row data node at page (1:705), slot 0, text ID 328867287793664 is not referenced.
CHECKDB found 0 allocation errors and 49 consistency errors in table 'X' (object ID 2126630619).
This error was created when running an upgrade of our software (if the customer restores the DB from backup and runs the upgrade again, the same issue reappears).
My question is: how could my app possibly create this kind of error? I always thought this kind of error had to be caused by some environmental (HDD) problem, but I've seen the same issue on the same table in another environment. I tried the same steps as the customer, but without success.
Thanks!
You're right, this is probably a severe bug in SQL Server. It is not possible to cause corruption using documented and supported T-SQL. To cause corruption you need one of:
- hardware problems
- OS-level file system problems (filter drivers, ...)
- undocumented commands like DBCC WRITEPAGE
- a severe bug
Can you single-step through the upgrade script? If not, try tracing it with SQL Profiler. Find the statement that first makes the corruption appear.
Here is a simpler, less noisy command:
DBCC CHECKDB([AdventureWorks2012]) WITH NO_INFOMSGS, ALL_ERRORMSGS
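To find the statement that first introduces the corruption, the upgrade can be bisected by interleaving checks between batches. A sketch (the table name 'X' is taken from the CHECKDB output; the step procedures are placeholders for batches of the upgrade script):

```sql
-- Run one batch of the upgrade, then re-check the affected table.
-- Repeat, moving the check earlier, until the first failing batch is found.
EXEC upgrade_step_1;  -- placeholder for the first batch of the upgrade script
DBCC CHECKTABLE('X') WITH NO_INFOMSGS, ALL_ERRORMSGS;

EXEC upgrade_step_2;  -- placeholder for the next batch
DBCC CHECKTABLE('X') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

CHECKTABLE is much cheaper than a full CHECKDB, so it can be run after every batch without making the upgrade unbearably slow.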
Related
When I try to create a #temp table, I get this:
A severe error occurred on the current command. The results, if any, should be discarded.
Any ideas how to solve this error? Thank you!
In one of my update queries, I had the same problem. I realized that the problem was caused by a memory leak.
Restarting the MSSQL service flushed the tempdb resources and freed a large amount of memory, which solved the problem.
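Before restarting the service, it can help to confirm whether tempdb is actually under pressure. A sketch using a DMV available since SQL Server 2005:

```sql
-- Check how tempdb data file space is being used (counts are in 8 KB pages)
USE tempdb;
SELECT SUM(unallocated_extent_page_count)        * 8 / 1024.0 AS free_mb,
       SUM(internal_object_reserved_page_count)  * 8 / 1024.0 AS internal_objects_mb,
       SUM(user_object_reserved_page_count)      * 8 / 1024.0 AS user_objects_mb,
       SUM(version_store_reserved_page_count)    * 8 / 1024.0 AS version_store_mb
FROM sys.dm_db_file_space_usage;
```

If free_mb is near zero while one of the other buckets is huge, that points at what is consuming tempdb, and a restart is only a temporary fix.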
SQL SERVER – Msg 0, Level 11 – A Severe Error Occurred on the Current Command. The results, if Any, Should be Discarded
CHECKLIST for this error
First and foremost, check database consistency. This can be done by running the command below in SQL Server:
DBCC CHECKDB('database_name');
If you have narrowed it down to a table, then check table consistency. We can execute the command below:
DBCC CHECKTABLE('table_name');
Check the LOG folder, which contains ERRORLOG, and look for any file named 'SQLDump*' created at the time the error was reported. If you find one, you can either contact Microsoft or use self-service by getting the dump analyzed using diagnostic preview.
If you are getting this while using an extended stored procedure, then you need to debug by running the exact pieces of code one by one. Here is one such error which had none of these three causes.
In case you want to see the error yourself, feel free to use the code below:
create table #ErrorLog (column1 char(8000))
go
insert into #ErrorLog
exec xp_readerrorlog 'SQL Error'
drop table #ErrorLog
Reference: Pinal Dave (https://blog.sqlauthority.com)
I'm using SQL Server 2008 R2, trying to reverse-engineer an opaque application and duplicate some of its operations, so that I can automate some massive data loads.
I figured it should be easy to do -- just go into SQL Server Profiler, start a trace, do the GUI operation, and look at the results of the trace. My problem is that the filters aren't working as I'd expect. In particular, the "Writes" column often shows "0", even on statements that are clearly making changes to the database, such as INSERT queries. This makes it impossible to set a Writes >= 1 filter, as I'd like to do.
I have verified that this is exactly what's happening by setting up an all-inclusive trace, and running the app. I have checked the table beforehand, run the operation, and checked the table afterward, and it's definitely making a change to the table. I've looked through the trace, and there's not a single line that shows any non-zero number in the "Writes" column, including the line showing the INSERT query. The query is nothing special... Just something like
exec sp_executesql
N'INSERT INTO my_table([a], [b], [c])
values(@newA, @newB, @newC)',
N'@newA int,@newB int,@newC int', @newA=1, @newB=2, @newC=3
(if there's an error in the above, it's my typo here -- the statement is definitely inserting a record in the table)
I'm sure the key to this behavior is in the description of the "Writes" column: "Number of physical disk writes performed by the server on behalf of the event." Perhaps the server is caching the write, and it happens outside of the Profiler's purview. I don't know, and perhaps it's not important.
Is there a way to reliably find and log all statements that change the database?
Have you tried a server-side trace? It also documents reads and writes, which, if I'm reading you correctly, is exactly what you want for the writes.
I ran into an error in SQL Server and, after resolving it, I am looking for the reason why it was happening.
The situation is that I tried to alter a column in a table like this
Alter Table tblEmployee
Alter Column empDate Date
But while running this script, I get the error -
The statistics 'empDate' is dependent on column 'empDate'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN empDate failed because one or more objects access this column.
It turns out that this error was caused by a statistic referencing this column. I have no script that explicitly creates a statistic, and the error occurred in the production environment, so it must have been auto-created. If it is auto-created, then why isn't SQL Server deleting it by itself? My error was resolved when I dropped the statistic.
I looked in other places and was not able to find anything relevant.
I haven't looked hard at SQL statistics for a few versions, but back when, auto-generated statistics had fairly distinctive names (like "_WA_Sys_00000005_00000037"). If your statistic was literally named "empDate", then it was almost certainly not an auto-created statistic, but something someone created deliberately.
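To see which statistics reference the column, and whether they were auto-created, a query along these lines over the sys.stats catalog views should work:

```sql
-- List statistics that include the empDate column, with their origin flags
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c
    ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.tblEmployee')
  AND c.name = 'empDate';

-- If the statistic must go before the ALTER COLUMN:
-- DROP STATISTICS dbo.tblEmployee.empDate;
```

auto_created = 1 marks the optimizer's _WA_Sys_* statistics; user_created = 1 marks ones created with CREATE STATISTICS.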
How can I effectively troubleshoot this error?
The query processor ran out of stack space during query optimization.
Please simplify the query.
Msg 8621, Level 17, State 2
I've tried attaching Profiler, but I'm not sure I have the right events selected. I do see the error in there. The Estimated Execution Plan gives this error as well.
The sproc I am calling just does a really simple UPDATE on one table. There is one UPDATE trigger, but I disabled it, and it still gives me this error. I even took the same UPDATE statement out and supplied the values manually. It doesn't return as fast, and still gives me the error.
Edit:
OK, my generated script is setting the PK. So if I set the PK and another column, I get this error. Any suggestions along those lines?
There's a Microsoft KB article about this.
Basically it's a bug and you need to update. I'm assuming you are running SQL Server 2005 SP2?
There are a great number of FKs that were being referenced by this PK. I changed our code to not update that PK any further.
This error frequently appears when the number of foreign keys relating to a table exceeds the Microsoft recommended maximum of 253.
You can disable the constraints temporarily by the following line of code:
EXEC sp_MSforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
YOUR DELETE/UPDATE COMMAND
and after executing your command, enable them again as follows:
EXEC sp_MSforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
Hope that helps.
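To check whether you are anywhere near that 253-foreign-key limit, counting the referencing FKs is a one-liner (the table name is a placeholder):

```sql
-- Count foreign keys that reference a given table
SELECT COUNT(*) AS referencing_fk_count
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('dbo.YourTable');
```

If the count is in the hundreds, updating or deleting the referenced key forces the engine to build a plan touching every one of those tables, which is what blows the optimizer's stack.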
This isn't always a bug! Sounds like Daniel was able to come to the conclusion that the query wasn't as simple as he originally thought.
This article seems to answer a similar question to the one Daniel had. I just ran into the same error for a different (legitimate) reason as well: dynamic SQL being run against a database with data no one anticipated resulted in a single SELECT statement referencing hundreds of tables.
I have a SQL Server 2005 database that could only be restored using
Restore Database The_DB_Name
From Disk = 'C:\etc\etc'
With Continue_After_Error
I am told the source database was fine. The restore reports
Warning: A column nullability inconsistency was detected in the metadata of index "IDX_Comp_CompanyId" (index_id = 2) on object ID nnnnn in database "The_DB_Name". The index may be corrupt. Run DBCC CHECKTABLE to verify consistency.
DBCC CHECKTABLE (Company)
gives
Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC that prevented further processing. Contact Customer Support Services.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.
Alter Index IDX_Comp_CompanyId On dbo.Company
Rebuild
gives me
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:77467; actual 45:2097184). It occurred during a read of page (1:77467) in database ID 20 at offset 0x00000025d36000 in file 'C:\etc\etc.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
How much trouble am I in?
A corruption in an index is not nearly as bad as corruption in the base table, since an index can be rebuilt.
Compare the table and index definitions between the source and destination databases.
Check the versions of both servers as well (was the backup automatically upgraded when restored to your server?).
Drop and recreate the index and rerun the CheckTable.
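The drop, recreate, and re-check step might look like the sketch below (the key column is assumed to be CompanyId, since the index definition isn't shown in the question):

```sql
-- Drop the suspect nonclustered index, recreate it, then re-check the table
DROP INDEX IDX_Comp_CompanyId ON dbo.Company;

-- CompanyId is an assumption; use the real key column(s) from the source DB
CREATE NONCLUSTERED INDEX IDX_Comp_CompanyId ON dbo.Company (CompanyId);

DBCC CHECKTABLE('Company') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Note that the Msg 824 above was raised while reading a page, so if that page belongs to the index, the DROP/CREATE may fail the same way; in that case the corruption has to be dealt with via CHECKDB (or a restore from a clean backup) first.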