SPROC hangs in SQL Server 2005 - sql-server

I have a problem with SQL Server 2005 where a stored procedure seems to randomly hang/lock and never returns any result.
What the stored procedure does is to call a function, which in turn makes a union of two different functions – returning the same type of data, but from different criteria. Nothing advanced. I don’t think it’s the functions hanging, because there are other SPROCs that call the same functions without a problem, even when the first one has locked up.
After the SPROC hangs, any further attempts to call it will result in a timeout – not in the call itself, but because no result is ever returned, the response takes too long and the code throws an exception.
It has happened at least three times in two months in a relatively low-load system. Restarting SQL Server solves the situation, but I don’t regard that as a “solution” to the problem.
I’ve looked for information, and found something about the query cache going corrupt. However, that was in regard to dynamic SQL strings, which my problem is not. I guess it could still be the query cache.
Has anyone had the same problem, and if so, what did you do about it (don’t say “restart SQL Server every morning” ;) )? Is there any way of debugging the issue to try and find exactly what and where things go wrong? I can’t recreate the problem, but when it appears again it would be good if I knew where to take a closer look.
I don't think it makes any difference, but just for the record, the SPROC is called from .NET 3.5 code using the Entity Framework. I say it doesn't make a difference because when I've tried executing the SPROC directly from SQL Server Management Studio, no result is returned either.

It's most likely parameter sniffing
Restarting SQL Server clears the plan cache. Rebuilding indexes or updating statistics ("ALTER INDEX" and "sp_updatestats") will also make the problem go away, because either one invalidates the cached plan.
I suggest using "parameter masking" (not WITH RECOMPILE!) to get around it – see the sketch below.
SO answer already by me:
One
Two
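In case it's useful, here is a minimal sketch of parameter masking; the procedure, table and column names are hypothetical. The incoming parameter is copied into a local variable, so the optimizer cannot sniff an atypical value when it compiles the plan:

CREATE PROCEDURE dbo.GetOrders
    @CustomerId int
AS
BEGIN
    -- "Mask" the parameter by copying it into a local variable.
    -- The optimizer then estimates from average statistics instead of the sniffed value.
    DECLARE @LocalCustomerId int;
    SET @LocalCustomerId = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END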

Are your STATISTICS up to date? One of the common causes of an incorrect cached query plan is out-of-date statistics.
Do you have a regularly scheduled index rebuild job?

Did you check the SQL Server error log? The cause of the problem has most likely been logged there, so at the very least you may get a hint about what is going on. Please check that.
This excellent MSDN article, SQL Server technical bulletin - How to resolve a deadlock,
explains in detail the steps needed to identify and resolve deadlock issues.

Related

Force a Plan Guide in SQL Server 2008

I have a query embedded in an application which I cannot access to change without contacting the original developers and getting them to change it.
The query that I am trying to alter is very slow to run and yields incomplete data. I have an improved version of this query, and I am looking for a way in SQL Server 2008 to essentially substitute the original query with the improved one whenever the original query is run through the application.
I have tried to create and force a Plan Guide based on the original query to force the new query, following this article - https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms190772(v=sql.105) (as well as others).
So far every attempt to use plan forcing seems to have failed and the original query still gets executed. Does anyone know if I'm taking the right approach here? Or is there a better solution to the problem that I've described?
As others have said, this is not possible. If you can get the developers to change the query in the app, ask them to call a stored procedure. That way you can update the proc whenever you need to - it gives you much more flexibility in how the query operates and what it does.
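For reference, a plan guide can only attach query hints (or a fixed XML plan) to a statement whose text exactly matches what the application sends; it cannot substitute different query text, which is why this approach cannot swap in the improved query. A rough sketch of the syntax, with the statement text and hint as placeholders:

EXEC sp_create_plan_guide
    @name = N'SlowAppQueryGuide',
    @stmt = N'SELECT ... the exact text of the query the application sends ...',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (RECOMPILE)';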

Random timeouts on 1 specific stored procedure from SQL Azure

We have a database hosted in SQL Azure that we connect to through a Cloud Service webapp. Every once in a while, one specific stored procedure that returns 100 rows throws a timeout exception when run from the actual webapp. When we run the same stored procedure with the same parameters from SQL Server Management Studio, we get the actual results. This issue persists for a while and sometimes disappears as quickly as it appeared.
Other stored procedures and data retrieval from our application work like a charm, but this one specific SP has the issue, which is weird. When the issue occurs, we can temporarily fix it by adding something like WHERE 1=1 into the where clause. Then it works for a while, but at some point the whole thing starts all over again. I cannot get a grip on what's going wrong here or what could be causing this. We also added WITH RECOMPILE to the stored proc, but to no avail.
I experienced the same thing with mine. I checked the execution plan and everything looked fine. So I copied the database to my local machine and it ran there without any problems. Finally I decided to drop and recreate the stored procedure that was timing out. After doing that, the SP ran normally again with no timeouts.
We have the same issue, especially after upgrading or downgrading a server. We found that a quick ALTER including WITH RECOMPILE fixes the problem (sketched below); we turn the recompile off after we've altered it once. Not sure if Azure gets some kind of corruption in the execution plan or what is happening. I'm not convinced you need WITH RECOMPILE – you may just need to do the ALTER.
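A minimal sketch of that quick fix, with a made-up procedure name and body:

-- Re-issue the procedure definition with WITH RECOMPILE to discard the cached plan
ALTER PROCEDURE dbo.GetCustomerOrders
    @CustomerId int
WITH RECOMPILE
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;

-- Once behaviour is back to normal, alter it again without WITH RECOMPILE
-- so the plan can be cached as usual.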

sp_prepare and sp_execute

I've been consulting google for some time now, but it hasn't been able to give me a satisfactory answer...
In a SQL Server 2005 trace, I've got lots of "exec sp_execute" statements. I know they are connected to a corresponding "exec sp_prepare" statement which specifies the actual SQL.
But...
One: Is it possible to find out the SQL behind the sp_execute without finding the sp_prepare?
Two: What type of construct would typically hide behind the sp_execute? That is, is it a stored procedure? Is it just a string in code? Or what?
Three: Should I fear bad performance seeing these in the trace?
Any input is appreciated
Use
select * from sys.dm_exec_query_plan(PlanHandle)
to generate an XML document that shows what SQL the sp_execute is running.
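If you don't have a plan handle to hand, you can also pull prepared statements out of the plan cache; something along these lines should work (requires VIEW SERVER STATE permission):

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Prepared';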
Those are API Server Cursors, most probably used by an old (or not so old, but badly developed) application.
99% of the time, cursors hurt performance on your server. Disk and network I/O are the usual victims.
Read this, it helped me understand how server side cursors work.
Late answer, but I recently had an application with bad performance executing sp_prepare and sp_execute.
One: Answered before
Two: It could be anything: stored procedures, basically any valid SQL query.
Three: I had problems with SQL Server failing to generate good execution plans when the application was using sp_prepare. Basically, SQL Server analyzes the incoming parameters to generate a good execution plan, but with sp_prepare no values for the parameters are supplied, since they are only added when sp_execute runs. So in the meantime SQL Server applies generic costs to the different operators and might very well generate a suboptimal plan.
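For reference, the pattern the application is most likely generating looks roughly like this (table, column and parameter names are made up):

DECLARE @handle int;
-- Prepare: only the parameter types are known at this point, not their values
EXEC sp_prepare @handle OUTPUT,
     N'@CustomerId int',
     N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId';
-- Execute: the value is supplied only now, possibly after the plan has been built
EXEC sp_execute @handle, 42;
EXEC sp_unprepare @handle;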
If you look at your reads/cpu-usage for your traces, you should be able to determine if your queries are behaving badly or as expected.
Also see http://blogs.infosupport.com/speeding-up-ssis-literally

SQL Server Performance and Update Statistics

We have a site in development that when we deployed it to the client's production server, we started getting query timeouts after a couple of hours.
This was with a single user testing it and on our server (which is identical in terms of Sql Server version number - 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics; the problem magically went away - the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and, again, the problem went away. We've checked the database properties and, sure enough, auto update statistics is TRUE.
As a temporary measure, we've set a task to update stats periodically, but clearly, this isn't a good solution.
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their db server and it's not what I would regard as normal. Although they have SQL 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (MSSQL.4 being where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Some times, it's wrong and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: plan and performance changed dramatically.
A statistics update will invalidate the plan. The query will thus be compiled and cached when next used
The workarounds are one of the following:
parameter masking
the OPTIMIZE FOR UNKNOWN hint (sketched after this list)
duplicating the "default"
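A rough sketch of the OPTIMIZE FOR UNKNOWN variant (SQL Server 2008 and later; the procedure, table and column names are hypothetical):

CREATE PROCEDURE dbo.GetOrdersByStatus
    @Status int = NULL
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE (@Status IS NULL) OR (Status = @Status)
    -- Estimate from average statistics instead of the sniffed parameter value
    OPTION (OPTIMIZE FOR (@Status UNKNOWN));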
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and Google search on SO
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair.
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. What happens is that when you update statistics, all plans get invalidated and the bad cached plan gets evicted. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans to start with. We can get into lengthy technical and philosophical arguments about whether a query processor should create a bad plan in the first place, but the fact is that, when applications are written in a certain way, bad plans can happen. The typical example is having a where clause like (@somevariable is null) or (somefield = @somevariable). Ultimately, 99% of bad plans can be traced to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you've identified the bad plan, you can take corrective measures, which vary from query to query.
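Something along these lines will surface the worst offenders (a sketch, sorted by total elapsed time):

SELECT TOP (20)
    qs.total_elapsed_time,
    qs.total_logical_reads,
    qs.execution_count,
    st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;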

Query times out when executed from web, but super-fast when executed from SSMS

I'm trying to debug the source of a SQL timeout in a web application that I maintain. I have the source code of the C# code behind, so I know exactly what code is running. I have debugged the application right down to the line that executes the SQL code that times out, and I watch the query running in SQL profiler.
When this query executes from the web, it times out after 30 seconds. However, when I cut/paste the query exactly as presented in Profiler, and I put it into SSMS and run it, it returns almost instantly. I have traced the problem to ARITHABORT being set to OFF in the connection that the web is using (that is, if I turn ARITHABORT OFF in the SSMS session, it runs for a long time, and if I turn it back ON then it runs very quickly). However, reading the description of ARITHABORT, it doesn't seem to apply... I'm only doing a simple SELECT, and there is NO arithmetic being performed at all.. just a single INNER JOIN with a WHERE condition:
Why would ARITHABORT OFF be causing this behavior in this context?? Is there any way I can alter the ARITHABORT setting for that connection from SSMS? I'm using SQL Server 2008.
So your C# code is sending an ad hoc SQL query to SQL Server, using what method? Have you considered using a stored procedure? That would probably ensure the same performance (at least in the engine) regardless of who called it.
Why? The ARITHABORT setting is one of the things the optimizer looks at when it is determining how to execute your query (more specifically, for plan matching). It is possible that the plan in cache has the same setting as SSMS, so it uses the cached plan, but with the opposite setting your C# code is forcing a recompile (or perhaps you are hitting a really BAD plan in the cache), which can certainly hurt performance in a lot of cases.
If you are already calling a stored procedure (you didn't post your query, though I think you meant to), you can try adding OPTION (RECOMPILE) to the offending query (or queries) in the stored procedure. This will mean those statements will always recompile, but it could prevent the use of the bad plan you seem to be hitting. Another option is to make sure that when the stored procedure is compiled, the batch is executed with SET ARITHABORT ON.
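For example, the hint can be applied only to the statement that misbehaves rather than recompiling the whole procedure; a sketch with placeholder names:

ALTER PROCEDURE dbo.SearchOrders
    @CustomerId int
AS
BEGIN
    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
    WHERE o.CustomerId = @CustomerId
    OPTION (RECOMPILE);  -- only this statement gets a fresh plan on every execution
END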
Finally, you seem to be asking how you can change the ARITHABORT setting in SSMS. I think what you meant to ask is how you can force the ARITHABORT setting in your code. If you decide to continue sending ad hoc SQL from your C# app, then of course you can send a command as text that has multiple statements separated by semi-colons, e.g.:
SET ARITHABORT ON; SELECT ...
For more info on why this issue occurs, see Erland Sommarskog's great article:
Slow in the Application, Fast in SSMS? Understanding Performance Mysteries
This answer includes a way to resolve this issue:
By running the following commands as administrator on the database, all queries run as expected regardless of the ARITHABORT setting.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Update
It seems that most people end up having this problem occur very rarely, and the above technique is a decent one-time fix. But if a specific query exhibits this problem more than once, a more long-term solution to this problem would be to use Query Hints like OPTIMIZE FOR and OPTION(Recompile), as described in this article.
Update 2
SQL Server has had some improvements made to its query execution plan algorithms, and I find problems like this are increasingly rare on newer versions. If you are experiencing this problem, you might want to check the Compatibility Level setting on the database that you're executing against (not necessarily the one you're querying, but rather the default database--or "InitialCatalog"--for your connection). If you are stuck on an old compatibility level, you'll be using the old query execution plan generation techniques, which have a much higher chance of producing bad plans.
I've had this problem many times before. If you have a stored procedure with the same problem, dropping and recreating the stored proc will solve the issue.
It's called parameter sniffing.
You need to always localize the parameters (copy them into local variables) in the stored proc to avoid this issue in the future.
I understand this might not be what the original poster wants but might help someone with the same issue.
If you are using Entity Framework, be aware that query parameters for string values are sent to the database as nvarchar by default. If the database column being compared is typed varchar then, depending on your collation, the query execution plan may require an "IMPLICIT CONVERSION" step, which forces a full scan. I could confirm it by looking at the expensive queries option in database monitoring, which displays the execution plan.
Finally, an explanation of this behavior is in this article:
https://www.sqlskills.com/blogs/jonathan/implicit-conversions-that-cause-index-scans/
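A stripped-down illustration of the mismatch; the table and column names are made up, and the column is assumed to be varchar(20) with an index on it:

-- Parameter typed nvarchar (what EF sends by default):
-- the implicit conversion can force an index scan
DECLARE @Code nvarchar(20) = N'ABC123';
SELECT CustomerId
FROM dbo.Customers
WHERE CustomerCode = @Code;

-- Parameter typed to match the varchar column: an index seek is possible
DECLARE @Code2 varchar(20) = 'ABC123';
SELECT CustomerId
FROM dbo.Customers
WHERE CustomerCode = @Code2;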
Just using ARITHABORT won't solve the problem, especially if you use parameterised stored procedures,
because parameterised stored procedures are subject to "parameter sniffing", which reuses a cached query plan.
So, before jumping to conclusions, please check the link below.
the-elephant-and-the-mouse-or-parameter-sniffing-in-sql-server
I had the same problem and it was fixed by executing the procedure WITH RECOMPILE. You should also read up on parameter sniffing. My issue was related to the SQL plan cache.
If you can change your code, fixing the parameter sniffing with the OPTIMIZE FOR UNKNOWN hint is your best option. If you cannot change your code, the best option is exec sp_recompile 'name of proc', which will force only that one stored proc to get a new execution plan. Dropping and recreating a proc would have a similar effect, but could cause errors if someone tries to execute the proc while you have it dropped. DBCC FREEPROCCACHE drops all your cached plans, which can wreak havoc on your system, up to and including causing lots of timeouts in a heavy-transaction production environment. Setting ARITHABORT is not a solution to the problem, but it is a useful tool for discovering whether parameter sniffing is the issue.
I had the same problem: calling the SP from SSMS took 2 seconds, while from the webapp (ASP.NET) it took about 3 minutes.
I tried all the suggested solutions – sp_recompile, DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS – but nothing fixed my problem. When I applied the parameter sniffing workaround, though, it did the trick and worked just fine.
