I'm looking at an error from one of our web applications, which was calling a stored procedure responsible for updating a record in the database.
This stored procedure had worked for weeks with no issues. Then one day it started throwing errors; while debugging, we found the cause to be inside the stored procedure.
It basically had a statement like this
Begin
-- Do Stuff
Set
End
So the SET never actually set anything. For some reason this runs perfectly fine on our server, and was running fine on a client server until earlier today, when it decided to start complaining (Incorrect syntax error).
Is there any type of SQL Server setting that would cause this sudden change in behaviour?
Clarification - the SET has always been in the procedures. And running a SET by itself, or as the sole statement in a stored procedure, does in fact work for me. That is the problem: it shouldn't work. So is there anything that would cause it to work when it should be failing?
A procedure with a SET like that would normally fail to compile, even if the SET cannot be reached:
alter procedure dbo.testproc as
begin
return 1;
set
end
Incorrect syntax near the keyword 'SET'.
Since the ALTER fails, I can't see how the procedure could have ended up in your database in the first place.
Or maybe you were running in compatibility mode for SQL Server 2000 (which still allowed this). Changing the compatibility mode to SQL Server 2005 or higher would then break the procedure.
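For reference, here's a quick way to check and change a database's compatibility level (the database name below is just a placeholder):
-- Check the current compatibility level (replace YourDatabase with the actual name)
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDatabase';
-- Raise it to SQL Server 2005 behaviour (level 90); the bare SET should then fail to compile
EXEC sp_dbcmptlevel 'YourDatabase', 90;
-- On SQL Server 2008 and later you can instead use:
-- ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 90;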
Executing "SET", by itself, will generate an error. I was originally going to suggest that you might have branching code (IFs, RETURNs, GOTOs, etc.) that caused the line to never be reached... but I find that I cannot create a stored procedure that contains this as a stand-alone statement.
If you script out the procedure and try to recreate it (with a different name), can it be created?
Might be worth posting that script, or as much of it as you are comfortable making public.
Related
I have a stored proc that is called by a .NET application and is passed an XML parameter - this is then shredded and forms the WHERE section of the query.
So in my query, I look for records with a documentType matching that contained in the XML. The table contains many more records with a documentType of C than P.
The query will run fine for a number of weeks, regardless of whether the XML contains P or C for documentType. Then it stops working for documentType C.
I can run both queries from SSMS with no errors (using Profiler to capture the exact call that was made). Profiler shows that when run from the application, the documentType C query starts a statement, then finishes before the statement ends and before completing the outstanding steps of the query.
I ran another profiler session to capture all errors and warnings. All I can see is error 3621 - The statement has been terminated.
There are no other errors relating to this spid; the only other things picked up were warnings about changing database context.
I've checked the SQL logs and Extended Events and can find nothing. I don't think the problem relates to the data content, as the query runs in SSMS without problems - I've also checked the range of values for the other fields in the WHERE clause and there is nothing unusual or untoward there. I also know that if I drop and recreate the procedure (i.e. the exact same code) the problem will be fixed.
Does anyone know how I can trace the error that is causing the 3621 failure? Profiling does not pick this up.
In some situations, SQL Server raises two error messages: one is the actual error message saying exactly what went wrong, and the other is 3621, which just says The statement has been terminated.
Sometimes the first message gets lost, especially when you are calling a SQL query or object from a script.
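To illustrate the two-message behaviour, here is a minimal repro (the table is made up purely for demonstration) - the constraint violation raises the specific error first, then 3621 follows as the companion message:
CREATE TABLE dbo.Demo3621 (Id int PRIMARY KEY);
INSERT INTO dbo.Demo3621 (Id) VALUES (1);
INSERT INTO dbo.Demo3621 (Id) VALUES (1); -- error 2627 (PK violation), followed by "The statement has been terminated." (3621)
DROP TABLE dbo.Demo3621;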
I suggest you go through each of your SQL statements and run them individually.
Another guess is that you have a timeout error on the client side. If you include the Attention event in your SQL Server trace, you can follow the timeout error messages.
I have an existing SQL 2005 stored procedure that, for some reason, outputs its results in the Messages pane in SSMS instead of the Results pane. (It's actually a CLR procedure already compiled and deployed to all our servers, and used for another daily process. So I can't change it, I just want to use its output.)
For the sake of discussion, here's a stored proc that behaves the same way:
CREATE PROCEDURE [dbo].[OutputTest]
@Param1 int, @Param2 varchar(100)
AS
BEGIN
SET NOCOUNT ON;
PRINT 'C,10000,15000';
PRINT 'D,30000,90000';
PRINT 'E,500,50000';
END
So no actual SELECT statement in there, and if you run this, you'll see these results only on the Messages pane.
Is there any way for me to use these results as part of a larger query? Put them in a temp table or something, so I can parse them out?
None of the "normal stuff" works, because there is no true "output" here:
INSERT INTO #output
EXEC OutputTest 100, 'bob'
just shows
C,10000,15000
D,30000,90000
E,500,50000
(0 row(s) affected)
on the messages pane, and the temp table doesn't actually get anything put into it.
Can you execute the stored proc from C# code? If so, you might be able to hook into the SqlConnection event called InfoMessage:
SqlConnection _con = new SqlConnection("server=.;database=Northwind;Integrated Security=SSPI;");
_con.InfoMessage += new SqlInfoMessageEventHandler(_con_InfoMessage);
The event handler will look like this:
static void _con_InfoMessage(object sender, SqlInfoMessageEventArgs e)
{
string myMsg = e.Message;
}
The "e.Message" is the message printed out to the message window in SQL Server Mgmt Studio.
While it won't be pretty and might require some ugly parsing code, at least you could get a hold of those messages that way, I hope!
Marc
You cannot trap, catch or use these messages from within SQL Server. You can, however, receive them from within a client application.
I don't think there is a way, but even if there is, I think you should seriously consider whether it is a good idea. This sounds like a fudge which can only cause you pain in the long term. Creating an alternative proc that does exactly what you want sounds to me like a better plan.
There is no way to get messages from the Messages pane into your result.
If you think about it, SSMS is just a client that renders those messages the way you see them.
If you want to use them in your app, take a look at Connection Events in ADO.NET.
The only way I could think this might be possible is if the output is printed via the RAISERROR command. In that case, you might be able to capture it elsewhere using TRY/CATCH.
But that's just an idea: I've never done it. In fact, the only thing we do that's remotely close is that we have a command line tool to run stored procedures in batch jobs rather than using SQL Server Agent to schedule them. This way all of our nightly jobs are scheduled in one place (the Windows Task Scheduler) rather than two, and the command line tool captures anything printed to the message window into a common logging system that we monitor. So some of the procedures will output quite a lot of detail to that window.
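Going back to the RAISERROR idea above, here is a minimal sketch of what that capture might look like, assuming the inner procedure raised its output with RAISERROR at severity 11 or higher rather than PRINT (which TRY/CATCH cannot see):
BEGIN TRY
    EXEC dbo.OutputTest 100, 'bob';
END TRY
BEGIN CATCH
    -- only the last raised message is available here, not the full stream
    SELECT ERROR_MESSAGE() AS CapturedMessage;
END CATCH;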
I know of some ways to determine whether our own stored procedure has executed successfully or not (using an output parameter, putting a SELECT such as SELECT 1 at the end of the stored procedure if it ran without any error, ...).
so which one is better and why?
Using RAISERROR in case of an error in the procedure integrates better with most clients than using fake output parameters. They simply call the procedure, and the RAISERROR translates into an exception in the client application; exceptions are hard for application code to ignore, they have to be caught and dealt with.
Having a print statement that clearly states whether the SP has been created or not would be more readable.
e.g.
CREATE PROCEDURE CustOrdersDetail @OrderID int
AS
...
...
...
GO
IF OBJECT_ID('dbo.CustOrdersDetail') IS NOT NULL
PRINT '<<< CREATED PROCEDURE dbo.CustOrdersDetail >>>'
ELSE
PRINT '<<< FAILED CREATING PROCEDURE dbo.CustOrdersDetail >>>'
GO
An SP is very much like a method/subroutine/procedure, and they all have a task to complete. The task could be as simple as computing and returning a result, or just a simple manipulation of a record in a table. Depending on the task, you could return an output value indicating whether the task was a success or a failure, or return the actual results.
If you need a common T-SQL solution for your entire project/database, you can use an output parameter for all procedures. But RAISERROR is the way to handle errors in your client code, not T-SQL.
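A minimal sketch of that combination, with entirely made-up table and procedure names:
CREATE PROCEDURE dbo.DoWork
    @RowsAffected int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        UPDATE dbo.SomeTable SET Processed = 1 WHERE Processed = 0;  -- hypothetical table
        SET @RowsAffected = @@ROWCOUNT;
        RETURN 0;   -- success by convention
    END TRY
    BEGIN CATCH
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR (@msg, 16, 1);   -- surfaces as an exception in the client
        RETURN 1;
    END CATCH
END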
Why not use different return values, which can then be handled in code?
Introducing an extra output parameter or an extra select is unnecessary.
If the only thing you need to know is whether there was a problem, a successful execution is a good enough signal. Have a look at the discussions of XACT_ABORT and TRY...CATCH here and here.
If you want to know the specific error, a return code is the right way to pass this information to the caller.
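As an illustration of the return-code pattern (the procedure name here is hypothetical):
DECLARE @rc int;
EXEC @rc = dbo.SomeProc;   -- 0 = success by convention
IF @rc <> 0
    PRINT 'dbo.SomeProc reported failure, return code ' + CAST(@rc AS varchar(11));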
In the majority of production scenarios I tend to deploy a custom error reporting component within the database tier, as part of the solution. Nothing fancy, just a handful of log tables and a few stored procedures that manage the error logging process.
All stored procedure code that is executed on a production server is then encapsulated using the TRY-CATCH-BLOCK feature available within SQL Server 2005 and above.
This means that in the unlikely event that a given stored procedure were to fail, the details of the error that occurred and of the stored procedure that generated it are recorded in a log table. A simple stored procedure call is made from within the CATCH block in order to record the relevant details.
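A bare-bones sketch of that pattern (the procedure names here are invented for illustration, not the actual implementation):
BEGIN TRY
    -- the procedure's real work goes here
    EXEC dbo.DoTheActualWork;   -- hypothetical
END TRY
BEGIN CATCH
    -- hypothetical logging proc that reads ERROR_NUMBER(), ERROR_MESSAGE(), ERROR_PROCEDURE(), etc.
    EXEC dbo.usp_LogError;
END CATCH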
The foundations for this implementation are explained in Books Online here.
Should you wish, you can easily extend this implementation further, for example by incorporating email notification to a DBA, or even sending an SMS alert depending on the severity of the error.
An implementation of this sort ensures that if your stored procedure did not report failure then it was of course successful.
Once you have a simple and robust framework in place, it is then straightforward to duplicate and rollout your base implementation to other production servers/application platforms.
Nothing special here, just simple error logging and reporting that works.
If on the other hand you also need to record the successful execution of stored procedures then again, a similar solution can be devised that incorporates log table/s.
I think this question is screaming out for a blog post…
Been working with SQL Server since it was Sybase (early 90s for the greenies) and I'm a bit stumped on this one.
In Oracle and DB2, you can pass a SQL batch or script to a stored procedure to test if it can be parsed, then execute conditional logic based on the result, like this pseudocode example:
if (TrySQLParse(LoadSQLFile(filename)) == 1)
{ execute logic if parse succeeds }
else
{ execute logic if parse fails }
I'm looking for a system proc or similar function in SQL Server 2008 -- not SHOWPLAN or the like -- to parse a large set of scripts from within a TSQL procedure, then conditionally control exception handling and script execution based on the results. But, I can't seem to find a similar straightforward gizmo in TSQL.
Any ideas?
The general hacky way to do this in any technology that does a full parse/compile before execution is to prepend the code in question with something that causes execution to stop. For example, to check if a vbscript passes syntax checking without actually running it, I prepend:
WScript.Quit(1)
This way I see a syntax error if there are any, or if there are none then the first action is to exit the script and ignore the rest of the code.
I think the analog in the sql world is to raise a high severity error. If you use severity 20+ it kills the connection, so if there are multiple batches in the script they are all skipped. I can't confirm that there is 100.00000% no way some kind of sql injection could make it past this prepended error, but I can't see any way that there could be. An example is to stick this at the front of the code block in question:
raiserror ('syntax checking, disregard error', 20, 1) with log
So this errors out from syntax error:
raiserror ('syntax checking, disregard error', 20, 1) with log
create table t1()
go
create table t2()
go
While this errors out from the runtime error (and t1/t2 are not created)
raiserror ('syntax checking, disregard error', 20, 1) with log
create table t1(i int)
go
create table t2( i int)
go
And to round out your options, you could reference the assembly C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.SqlServer.SqlParser.dll in a CLR utility (outside of the db) and do something like:
SqlScript script = Parser.Parse(@"create proc sp1 as select 'abc' as abc1");
You could call EXEC(), passing in the script as a string, and wrap it in a TRY/CATCH.
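A rough sketch of that approach - keep in mind that EXEC() actually runs the script if it does parse, so this is only suitable when executing is acceptable:
DECLARE @script nvarchar(max);
SET @script = N'create table t1()';   -- deliberately invalid syntax
BEGIN TRY
    EXEC (@script);
    PRINT 'script parsed (and executed)';
END TRY
BEGIN CATCH
    DECLARE @err nvarchar(2048);
    SET @err = 'script failed: ' + ERROR_MESSAGE();
    PRINT @err;
END CATCH;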
There isn't a mechanism in SQL Server to do this. You might be able to do it with a CLR component and SMO, but it seems like a lot of work for questionable gain.
How about wrapping the script in a try/catch block, and executing the "if fails" code in the catch block?
Potentially very dangerous. Google "SQL injection" and see for yourself.
I am still learning SQL Server somewhat, and recently came across a SELECT query in a stored procedure which was causing a very slow fill of a DataSet in C#. At first I thought this was to do with .NET, but then found a suggestion to put this in the stored procedure:
set implicit_transactions off
this seems to cure it but I would like to know why also I have seen other options such as:
set nocount off
set arithabort on
set concat_null_yields_null on
set ansi_nulls on
set cursor_close_on_commit off
set ansi_null_dflt_on on
set ansi_padding on
set ansi_warnings on
set quoted_identifier on
Does anyone know where to find good info on what each of these does and what is safe to use when I have stored procedures set up just to query data for viewing?
I should note, just to stop the usual use/don't-use stored procedures debate: these queries are complex SELECT statements used by multiple programs in multiple languages, so a stored procedure is the best place for them.
Edit: Got my answer. I didn't end up fully reviewing all the options, but did find that
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
sped up the complex queries dramatically; I am not worried about dirty reads in this instance.
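For anyone following along, a minimal sketch of where that statement typically goes, using a made-up read-only reporting procedure (the real complex SELECT would replace the stand-in query):
CREATE PROCEDURE dbo.ComplexReport   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;   -- accept dirty reads for this read-only query
    SELECT TOP 10 name, create_date
    FROM sys.objects
    ORDER BY create_date DESC;   -- stand-in for the real complex SELECT
END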
This is the page out of SQL Server Books Online (BOL) that you want. It explains all the SET statements that can be used in a session.
http://msdn.microsoft.com/en-us/library/ms190356.aspx
Ouch, someone, somewhere is playing with fire big-time.
I have never had a production scenario where I had to enable implicit transactions. I always open transactions when I need them and commit them when I am done. The problem with implicit transactions is that it's really easy to "leak" an open transaction, which can lead to horrible issues. What this setting means is "please open a transaction for me the first time I run a statement if there is no transaction open; don't worry about committing it".
For example, have a look at the following two snippets:
set implicit_transactions on
go
select top 10 * from sysobjects
And
set implicit_transactions off
go
begin tran
select top 10 * from sysobjects
They both do the exact same thing; however, in the second snippet it's pretty clear someone forgot to commit the transaction. This can get very complicated to track down if you have this set in an obscure place.
The best place to get documentation for all the SET statements is the trusty old SQL Server Books Online. That, together with a bit of experimentation in Query Analyzer, is usually all that is required to get a grasp of most settings.
I would strongly recommend you find out who is setting up implicit transactions, find out why they are doing it, and remove the setting if it's not really required. Also, you must confirm that whoever uses this setting commits their implicitly opened transactions.
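Two quick checks that can help confirm nothing has been left open on a session:
SELECT @@TRANCOUNT AS OpenTranCount;   -- anything above 0 means a transaction is still open on this session
DBCC OPENTRAN;                         -- reports the oldest active transaction in the current database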
What was probably going on is that you had an open transaction that was blocking a bit of your stored proc, and somewhere a timeout was occurring, raising an error that was handled in code; when that timeout happens your stored proc continues running. My guess is that the delay is usually exactly 30 seconds.
I think you need to look deeper into your stored procedure. I don't think that SET IMPLICIT_TRANSACTIONS is really what sped up your procedure; I think it's probably a coincidence.
One thing that may be worth looking at is what is passed from the client to the server, using Profiler.
We had an odd situation where the default SET options for the ADO connection were causing an SP to take ages to run from the client. We resolved it by looking at exactly what the server was receiving from the client, complete with its default SET options, and comparing that to what was sent when executing from SSMS. We then made the client pass the same SET statements as those sent by SSMS.
This may be way off track but it is a useful method to use when the SP executes in a timely fashion on the server but not from the client.
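One way to make that comparison, assuming you can run a query on each connection: DBCC USEROPTIONS reports the SET options in effect for the current session, and Profiler's ExistingConnection event shows the options the application's connection started with.
-- Run this from SSMS and (if possible) from the application's connection, then compare the two lists
DBCC USEROPTIONS;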