Troubleshooting SQL Server Stack Overflow error - sql-server

How can I effectively troubleshoot this error?
Msg 8621, Level 17, State 2
The query processor ran out of stack space during query optimization. Please simplify the query.
I've tried attaching a Profiler trace, but I'm not sure I have the right events selected. I do see the error in there. The Estimated Execution Plan gives this error as well.
The sproc I am calling is just doing a really simple UPDATE on one table. There is one UPDATE trigger, but I disabled it, yet it still gives me this error. I even took the same UPDATE statement out and manually supplied the values. It doesn't return as fast, and it still gives me the error.
Edit:
OK, my generated script is setting the PK. So if I set the PK and another column, I get this error. Any suggestions along those lines?
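For context, the failing statement is shaped roughly like this (table and column names are placeholders, not the real schema):
-- Updating the PK column together with another column triggers Msg 8621
UPDATE dbo.mytable
SET    id   = 556,         -- primary key column
       name = 'new value'  -- any other column
WHERE  id   = 555;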

There's a Microsoft KB article about this.
Basically it's a bug and you need to update. I'm assuming you are running SQL Server 2005 SP2?
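To confirm the exact build and service-pack level you're on, you can run:
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level;  -- e.g. 'SP2'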

There were a great number of FKs referencing this PK. I changed our code not to update that PK any further.

This error frequently appears when the number of foreign keys relating to a table exceeds the Microsoft recommended maximum of 253.
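To see whether you're near that limit, you can count the incoming foreign keys (the table name below is a placeholder):
-- Count the foreign keys that reference a given table (SQL Server 2005 and later)
SELECT COUNT(*) AS referencing_fks
FROM   sys.foreign_keys
WHERE  referenced_object_id = OBJECT_ID('dbo.mytable');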
You can disable the constraints temporarily with the following line of code:
EXEC sp_MSforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
YOUR DELETE/UPDATE COMMAND
and after executing your command, enable them again as follows:
EXEC sp_MSforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
Hope that it helps.

This isn't always a bug! Sounds like Daniel was able to come to the conclusion that the query wasn't as simple as he originally thought.
This article seems to answer a similar question as the one Daniel had. I just ran into the same error for a different (legitimate) reason as well. Dynamic SQL being run on a database with data no one anticipated resulted in a single select statement with hundreds of tables.

Related

SQL Server errors out prematurely when starting a batch, for an error that will not actually occur

The code below results in an error when a table [mytable] exists and has a columnstore index, even though the table will have been dropped and recreated without a columnstore index before page compression is applied to it.
drop table if exists [mytable]
select top 0
*
into [mytable]
from [myexternaltable]
alter table [mytable] rebuild partition = all with (data_compression = page)
The error thrown:
This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.
At this point [mytable] has not been dropped so SQL Server has apparently not started executing any code.
The code runs just fine when I run the DROP TABLE statement first and the rest of the code after. SQL Server seemingly fails the batch prematurely if it detects an inconsistency (one that will not necessarily persist) with an existing table when compiling the batch, yet it is perfectly happy with table [mytable] not existing at all, even though a table not existing can hardly be seen as consistent with applying compression to it. SQL Server's consistency checking does not look particularly consistent itself.
I recall having had similar issues with column references that did not exist yet and were to be created in code; if only SQL Server would allow the code to run instead of terminating on a wrongly predicted error.
What would be the most straightforward solution to this issue? I would not mind suppressing the error altogether - if possible - since it is obviously wrong.
I am trying to avoid work-arounds such as running the code as 2 separate batches, putting part of the code in an EXEC statement, or trying and catching the error. The code is used in hundreds of stored procedures, so the simpler the solution the better.
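For reference, the deferred-compilation work-around mentioned above looks like this: the ALTER is moved into dynamic SQL so it is not compiled until the table has already been recreated.
drop table if exists [mytable]
select top 0 * into [mytable] from [myexternaltable]
-- compiled only at execution time, after [mytable] has been recreated
exec ('alter table [mytable] rebuild partition = all with (data_compression = page)')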

SQL Server: error when creating a # temp table

When I try to create a #temp table, I get this:
A severe error occurred on the current command. The results, if any, should be discarded.
Any ideas how to solve this error? Thank you!
In one of my update queries, I had the same problem. I realized that the problem was caused by a memory leak.
Restarting the MSSQL service flushes the tempdb resources and frees a large amount of memory. This solved the problem.
SQL SERVER – Msg 0, Level 11 – A Severe Error Occurred on the Current Command. The results, if Any, Should be Discarded
CHECKLIST for this error
First and foremost, check database consistency. This can be done by running the command below in SQL Server:
DBCC CHECKDB('database_name');
If you have narrowed it down to a table, then check table consistency. We can execute the command below:
DBCC CHECKTABLE('table_name');
Check the LOG folder, which contains ERRORLOG, and look for any file named 'SQLDump*' created around the time the error was reported. If you find any, you can either contact Microsoft or use the self-service route and get the dump analyzed using the diagnostic preview.
If you are getting this while using an extended stored procedure, then you need to debug by running the exact pieces of code one by one. Here is one such error which had none of these three causes.
In case you want to see the error yourself, feel free to use the code below:
create table #ErrorLog (column1 char(8000))
go
insert into #ErrorLog
exec xp_readerrorlog 'SQL Error'
drop table #ErrorLog
Reference: Pinal Dave (https://blog.sqlauthority.com)

How can I check If column already exists to avoid ALTER TABLE in an sql script file for SQLite

I am adding versioning to my database a bit later than I should have, and as such I have some tables in inconsistent states. I have a table that a column was added to in Java, but not every copy of that table is guaranteed to have the column at this point.
What I had been doing is on the first run of the program, checking if the column existed, and adding it if it did not exist.
The library I am using to deal with versioning (flyway.org) takes in a bunch of .sql files in order to set up the database. For many tables this is simple: I just have an SQL file containing "CREATE TABLE IF NOT EXISTS XXX", which is easily handled; those can still be run.
I am wondering if there is some way I haven't thought of to handle these ALTER TABLE statements without SQLite generating an error.
I've tried looking to see if there is a command to add a column only if it doesn't exist, but there doesn't seem to be one. I've also tried to find a way to handle errors in SQLite, for example running the ALTER TABLE anyway and just ignoring the error, but there doesn't seem to be a way of doing that (as far as I can tell). Does anyone have any suggestions? I want a solution 100% in a .sql script if possible.
There is no "IF NOT EXIST" clause for Alter Tables in SQLite, it doesn't exist.
There is a way to interrogate the database on what columns a table contains with PRAGMA table_info(table_name);. But there is no 100% SQL way to take that information and apply it to an Alter Table statement.
Maybe one day, but not today.
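For illustration, the interrogation step looks like this; the conditional ALTER TABLE still has to happen in application code, since SQLite SQL has no conditional statement (table and column names are placeholders):
-- Each returned row describes one column: cid, name, type, notnull, dflt_value, pk
PRAGMA table_info(mytable);
-- If no returned row has name = 'mycolumn', the application then runs:
ALTER TABLE mytable ADD COLUMN mycolumn TEXT;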

Mechanism in T-SQL similar to SAVE EXCEPTIONS in Oracle

Is there a mechanic similar to Oracle PL/SQL's SAVE EXCEPTIONS in Microsoft T-SQL?
Currently I am doing the update using a cursor and it is extremely slow.
The description of SAVE EXCEPTIONS from Oracle's site:
SAVE EXCEPTIONS allows an UPDATE, INSERT, or DELETE statement to continue executing after it issues an exception. When the statement finishes, an error is issued to signal that at least one exception occurred. Exceptions are collected into an array that you can examine using %BULK_EXCEPTIONS after the statement has executed.
Link to the SAVE EXCEPTIONS definition:
http://download.oracle.com/docs/cd/E11882_01/timesten.112/e13076/sqlexamples.htm#TTPLS364
If you are importing a large number of records, use an SSIS package and send the failed rows to an exception table. If you can't use SSIS for some reason, consider cleaning your data before trying to insert it, so that you have no failed rows. For instance, delete any records that have a null where you are required to have a value, null out bad dates, etc.
If you are coming from Oracle, you need to stop using cursors and use set-based logic instead. SQL Server does not perform well with cursors.
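As a sketch of what set-based means here (table and column names are made up), a cursor loop that updates one row per iteration collapses into a single joined UPDATE:
-- One statement replaces the whole cursor loop
UPDATE t
SET    t.status = s.status
FROM   dbo.TargetTable AS t
JOIN   dbo.SourceTable AS s
    ON s.id = t.id;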
I think the closest you could come to simulating this behavior would be to disable/enable (with check) the constraints. The downside with this approach is that the bad data is now in your table and you can't enable the constraints until it's cleaned up. You'd need to decide if this is an acceptable risk in your particular case.
ALTER TABLE YourTable NOCHECK CONSTRAINT ALL
/* Perform your DML operations */
ALTER TABLE YourTable WITH CHECK CHECK CONSTRAINT ALL
/* Deal with any errors that are thrown:
'The ALTER TABLE statement conflicted with the CHECK constraint ...'
clean up the bad data then enable constraints again */
Not sure exactly what kind of exceptions you are expecting. Some more detail along this line might be helpful.
I don't believe there is anything equivalent in MS SQL to what you are describing. A few ideas to do something somewhat similar:
You can use a TRY ... CATCH in SQL, but that's going to fail the whole batch if something goes wrong, not just the problematic rows.
An SSIS bulk insert task can be configured to have a separate path for "failed" rows, which you can then treat however you want.
If you are talking about unique index duplicates (insert all these rows, and if any are dups then just ignore them, but don't fail the whole batch), then you can declare the unique index with the IGNORE_DUP_KEY option (see this SO question)
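A minimal sketch of that last idea (table and index names are made up):
CREATE TABLE dbo.Target (id int NOT NULL);
CREATE UNIQUE INDEX IX_Target_id ON dbo.Target (id) WITH (IGNORE_DUP_KEY = ON);
-- The duplicate 1 is skipped with the warning 'Duplicate key was ignored.'
-- and rows 1 and 2 are inserted; the batch does not fail
INSERT INTO dbo.Target (id) VALUES (1), (1), (2);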
Anything further, you'd probably need to be more explicit about what kinds of errors you imagine encountering.

Cannot delete from the database...?

So, I have 2 database instances, one is for development in general, another was copied from development for unit tests.
Something changed in the development database that I can't figure out, and I don't know how to see what is different.
When I try to delete from a particular table, with for example:
delete from myschema.mytable where id = 555
I get the following normal response from the unit test DB indicating no row was deleted:
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000
However, the development database fails to delete at all with the following error:
DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884
My best guess is that some trigger or view was added or changed and is causing the problem, but I have no idea how to go about finding it... has anyone had this problem, or does anyone know how to figure out its root cause?
(note that this is a DB2 database)
Hmm, applying the great oracle to this question, I came up with:
http://bytes.com/forum/thread830774.html
It seems to suggest that another table has a foreign key pointing at the problematic one; when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key afterwards.)
Does that help any?
You might have an open transaction on the dev db...that gets me sometimes on SQL Server
Is the type of id compatible with 555? Or has it been changed to a non-integer type?
Alternatively, does the 555 argument somehow go missing (e.g. if you are using JDBC and the prepared statement did not get its arguments set before executing the query)?
Can you add more to your question? That error sounds like the sql statement parser is very confused about your statement. Can you do a select on that table for the row where id = 555 ?
You could try running a RUNSTATS and a REORG TABLE on that table; those are supposed to sort out wonky tables.
@castaway
A select with the same "where" condition works just fine, just not the delete. Neither RUNSTATS nor REORG TABLE has any effect on the problem.
@castaway
We actually just solved the problem, and indeed it is just what you said (a coworker found that exact same page too).
The solution was to drop foreign key constraints and re-add them.
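In case it helps anyone else, the fix looked roughly like this (constraint, table, and column names below are placeholders):
-- DB2: drop the suspect foreign key, then re-create it
ALTER TABLE myschema.childtable DROP CONSTRAINT fk_to_mytable;
ALTER TABLE myschema.childtable
  ADD CONSTRAINT fk_to_mytable FOREIGN KEY (mytable_id)
      REFERENCES myschema.mytable (id);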
Another post on the subject:
http://www.ibm.com/developerworks/forums/thread.jspa?threadID=208277&tstart=-1
It indicates that the problem is referential constraint corruption, which is actually (or supposedly, anyway) fixed in a later version of DB2 V9 (which we are not yet using).
Thanks for the help!
Please check
1. the arguments of your triggers, procedures, functions, etc.
2. the datatypes of those arguments.
