sp_prepare and sp_execute - database

I've been consulting Google for some time now, but it hasn't been able to give me a satisfactory answer...
In a SQL Server 2005 trace, I've got lots of "exec sp_execute" statements. I know they are connected to a corresponding "exec sp_prepare" statement which specifies the actual SQL.
But...
One: Is it possible to find out the SQL behind the sp_execute without finding the sp_prepare?
Two: What type of construct would typically hide behind the sp_execute? That is, is it a stored procedure? Is it just a string in code? Or what?
Three: Should I fear bad performance seeing these in the trace?
Any input is appreciated

Use
select * from sys.dm_exec_query_plan(PlanHandle)
to generate an XML document showing the SQL that sp_execute is running.
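To find the plan handles in the first place, you can go through the plan cache. A sketch (SQL Server 2005 and later, assuming you have VIEW SERVER STATE permission) that lists every cached prepared statement together with its SQL text and XML plan:

```sql
-- Prepared statements created via sp_prepare are cached with
-- objtype = 'Prepared'; join the handle back to text and plan.
SELECT  st.text       AS sql_text,
        cp.usecounts, -- how often the prepared plan was reused
        qp.query_plan
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE   cp.objtype = 'Prepared';
```

This lets you map the anonymous "exec sp_execute 1, ..." entries in the trace back to the statement text that was prepared.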

Those are API server cursors, most probably used by an old (or not so old, but badly developed) application.
99% of the time, cursors hurt performance on your server. Disk and network I/O are the usual victims.
Read this; it helped me understand how server-side cursors work.

Late answer, but I recently had an application with bad performance executing sp_prepare and sp_execute.
One: Answered above.
Two: It could be anything, stored procedures, any valid sql query basically.
Three: I had problems with SQL Server failing to generate good execution plans when the application was using sp_prepare. Basically, SQL Server analyzes the incoming parameters to generate a good execution plan, but with sp_prepare no values for the parameters are supplied, since they are only added when executing sp_execute. In the meantime, SQL Server applies generic costs for different operators and might very well generate a suboptimal plan.
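You can see this deferred binding directly from SSMS; a minimal sketch of the round trip the client API performs under the covers (the statement and parameter are just examples):

```sql
DECLARE @handle int;

-- Compile happens here: the optimizer sees only the parameter's
-- type (@p1 int), never a value, so it must cost it generically.
EXEC sp_prepare @handle OUTPUT,
     N'@p1 int',
     N'SELECT name FROM sys.objects WHERE object_id = @p1';

-- The actual value arrives only now, after the plan already exists.
EXEC sp_execute @handle, 1;

EXEC sp_unprepare @handle;
```

This is why plans produced through sp_prepare can differ from what you get when you paste the same statement with literal values into SSMS.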
If you look at your reads/cpu-usage for your traces, you should be able to determine if your queries are behaving badly or as expected.
Also see http://blogs.infosupport.com/speeding-up-ssis-literally

SQL Server 2000 caching

I have a question about speeding up SQL Server 2000.
I want to use a caching mechanism, but I don't know how to use one.
I found some articles about it, but can you give an example of how to use it?
For example:
there is a stored procedure - sp_stackOverFlow - that executes every time a user enters the program/web site, and it clearly slows things down.
Is there a way to cache the results of sp_stackOverFlow, refreshing them every 2 minutes or so?
Your question isn't clear, not least because it isn't obvious what the stored procedure does. If the results are different for every execution and/or user then they cannot easily be cached anyway.
But more fundamentally, "I have a slow stored procedure" does not automatically mean "I need caching"; the database engine itself already caches data when it can. You need to understand why the stored procedure is running slowly: underpowered hardware, poor TSQL code, poor data model design and poor indexing are all very common issues that have major effects on performance.
You can find a lot of information on this site and by Googling about how to troubleshoot slow execution times for procedures, but you can start by reviewing the execution plan for the procedure in Query Analyzer and tracing the execution using Profiler. That will immediately tell you which statements are taking the most time, if there are table scans happening etc.
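On SQL Server 2000 you can get the plan from Query Analyzer without even executing the procedure; a sketch (sp_stackOverFlow is the procedure named in the question):

```sql
-- SET SHOWPLAN_TEXT returns the estimated plan as text instead of
-- running the statements. It must be the only statement in its
-- batch, hence the GO separators.
SET SHOWPLAN_TEXT ON
GO
EXEC sp_stackOverFlow
GO
SET SHOWPLAN_TEXT OFF
GO
```

Table scans and expensive operators show up directly in the output; if you want actual I/O numbers instead, run the procedure with SET STATISTICS IO ON.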
Because performance troubleshooting is potentially complex, if you need more assistance please post short, specific questions about individual issues. If the code for your stored procedure is very short (< 30 lines formatted) people may be willing to comment on it directly, otherwise it would be better to post only the individual SQL statements that are causing a problem.
Finally, mainstream support for MSSQL 2000 stopped 3 years ago, so you should definitely look into upgrading to a newer version. The performance tools in newer versions will make resolving your issue much easier.

SQL Server Performance and Update Statistics

We have a site in development that when we deployed it to the client's production server, we started getting query timeouts after a couple of hours.
This was with a single user testing it and on our server (which is identical in terms of Sql Server version number - 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics; the problem magically went away - the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and again, the problem went away. We've checked the database properties and sure enough, auto update statistics is TRUE.
As a temporary measure, we've set a task to update stats periodically, but clearly, this isn't a good solution.
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their DB server and it's not what I would regard as normal. Although they have SQL 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (the last of which is where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Sometimes it's wrong, and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: the plan and the performance changed dramatically.
A statistics update will invalidate the plan, so the query will be recompiled and cached when next used.
The workarounds are one of the following:
parameter masking
use the OPTIMIZE FOR UNKNOWN hint
duplicate the "default"
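Parameter masking, the first workaround, means copying each parameter into a local variable so the optimizer cannot sniff the caller's value at compile time. A sketch with hypothetical procedure and table names:

```sql
CREATE PROCEDURE dbo.GetOrders  -- hypothetical procedure
    @CustomerId int
AS
BEGIN
    -- Mask the parameter: the optimizer sees only a local variable,
    -- so it builds a plan from average density, not a sniffed value.
    DECLARE @CustomerIdMasked int;
    SET @CustomerIdMasked = @CustomerId;

    SELECT *
    FROM   dbo.Orders                        -- hypothetical table
    WHERE  CustomerId = @CustomerIdMasked;
END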
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and Google search on SO
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair:
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. When you update statistics, all plans get invalidated and the bad cached plan gets evicted. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans to start with. We can get into lengthy technical and philosophical arguments about whether a query processor should ever create a bad plan, but the fact is that, when applications are written in a certain way, bad plans can happen. The typical example is a WHERE clause like (@somevariable IS NULL) OR (somefield = @somevariable). Ultimately, 99% of bad plans can be traced to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you've identified the bad plan, you can take corrective measures, which vary from query to query.
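A sketch of such a query against sys.dm_exec_query_stats (SQL Server 2005 and later), resolving each entry back to its statement text:

```sql
-- Top offenders by cumulative elapsed time; logical reads included
-- so I/O-heavy queries stand out too.
SELECT TOP (20)
        qs.total_elapsed_time,
        qs.total_logical_reads,
        qs.execution_count,
        SUBSTRING(st.text,
                  qs.statement_start_offset / 2 + 1,
                  (CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2 + 1)
            AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

The offset arithmetic extracts the individual statement from the containing batch or procedure, since one sql_handle can cover many statements.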

Sql Server 2000 Stored Procedure Prevent Parallelism or something?

I have a huge disgusting stored procedure that wasn't slow a couple months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it.
I do know that if I take the body of the stored procedure and then declare/set the values of the parameters and run it in query analyzer that it runs more than 20x faster.
From the internet, I've read that this is probably due to a bad cached query plan. So, I've tried running the sp with "WITH RECOMPILE" after the EXEC and I've also tried putting "WITH RECOMPILE" inside the sp, but neither of those helped even a little bit.
When I look at the execution plan of the sp vs the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds?
Thank you, any ideas would be great... I'm stuck.
If the only difference between the two query plans is parallelism, try putting OPTION (MAXDOP 1) at the end of the query to limit it to a serial plan.
As to why they are different, I'm not sure, but I remember the SQL Server 2000 optimizer as being, um, finicky. Similar to your case, what we usually saw was that ad-hoc query batches would be fast and the same query via sp_executesql would be slow. Never did fully figure out what was going on.
Serial v parallel can definitely explain the difference in speeds, though. On SQL Server 2000, parallel plans use all the processors on the machine, not just the ones it needs:
If SQL Server chooses to use parallelism, it must use all the configured processors (as determined by the MAXDOP query hint configuration) for the execution of a parallel plan. For example, if you use MAXDOP=0 on a 32-way server, SQL Server tries to use all 32 processors even if seven processors might perform the job more efficiently as compared to a serial plan that only uses one processor. Because of this all-or-nothing behavior, if SQL Server chooses the parallel plan and you do not restrict the MAXDOP query hint[...], the time that it takes SQL Server to coordinate all the processors on a high-end server outweighs the advantages of using a parallel plan.
By default, I believe the server-wide setting of MAXDOP is 0, meaning use as many as possible. If you recently upgraded your database server with more processors to help performance, that could ironically explain why your performance is suffering. If that's the case, you might try setting the MAXDOP hint to the number of processors you had before and see if that helps.
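Both suggestions can be sketched as follows; the query and the processor count are hypothetical:

```sql
-- Per-query: force a serial plan for just this statement.
SELECT c.Name, SUM(o.Amount) AS Total
FROM   dbo.Customers AS c                 -- hypothetical tables
JOIN   dbo.Orders    AS o ON o.CustomerId = c.CustomerId
GROUP BY c.Name
OPTION (MAXDOP 1);

-- Server-wide: cap parallelism at, say, the 4 processors the box
-- had before the upgrade. This is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```

The per-query hint is the safer experiment, since it affects only the statement you are testing.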
Try adding SET ARITHABORT ON at the top of the procedure,
as seen here: https://stackoverflow.com/questions/2465887/why-would-set-arithabort-on-dramatically-speed-up-a-query
If you have made many changes to the tables in question and have not re-indexed or defragmented them, you probably should. Check out this article. The reason I suggest this is that the procedure was fast at one time and its performance has degraded over time. I don't think an existing procedure that was tested and worked well should be changed on account of performance degrading over time; that usually only treats the symptoms, not the actual problem.
I do know that if I take the body of the stored procedure and then declare/set the values of the parameters and run it in query analyzer that it runs more than 20x faster.
Are you sure that it is not the fetching of these params ahead of the SP's execution that's causing your slowness? By bypassing the population of the params you could be oversimplifying your issue.
Where do these params come from? How are they populated? It seems from your question that you've isolated the stored proc and found out that it might not be the issue.
Could it be a problem with contention? Does this store procedure run at a particular time when other heavy lifting is also happening?

Query times out when executed from web, but super-fast when executed from SSMS

I'm trying to debug the source of a SQL timeout in a web application that I maintain. I have the source code of the C# code behind, so I know exactly what code is running. I have debugged the application right down to the line that executes the SQL code that times out, and I watch the query running in SQL profiler.
When this query executes from the web, it times out after 30 seconds. However, when I cut/paste the query exactly as presented in Profiler into SSMS and run it, it returns almost instantly. I have traced the problem to ARITHABORT being set to OFF in the connection that the web is using (that is, if I turn ARITHABORT OFF in the SSMS session, it runs for a long time, and if I turn it back ON then it runs very quickly). However, reading the description of ARITHABORT, it doesn't seem to apply... I'm only doing a simple SELECT, and there is NO arithmetic being performed at all... just a single INNER JOIN with a WHERE condition.
Why would ARITHABORT OFF be causing this behavior in this context?? Is there any way I can alter the ARITHABORT setting for that connection from SSMS? I'm using SQL Server 2008.
So your C# code is sending an ad hoc SQL query to SQL Server, using what method? Have you considered using a stored procedure? That would probably ensure the same performance (at least in the engine) regardless of who called it.
Why? The ARITHABORT setting is one of the things the optimizer looks at when it is determining how to execute your query (more specifically, for plan matching). It is possible that the plan in cache has the same setting as SSMS, so it uses the cached plan, but with the opposite setting your C# code is forcing a recompile (or perhaps you are hitting a really BAD plan in the cache), which can certainly hurt performance in a lot of cases.
If you are already calling a stored procedure (you didn't post your query, though I think you meant to), you can try adding OPTION (RECOMPILE) to the offending query (or queries) in the stored procedure. This will mean those statements will always recompile, but it could prevent the use of the bad plan you seem to be hitting. Another option is to make sure that when the stored procedure is compiled, the batch is executed with SET ARITHABORT ON.
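A sketch of the OPTION (RECOMPILE) suggestion, with hypothetical procedure and table names:

```sql
ALTER PROCEDURE dbo.SearchOrders   -- hypothetical procedure
    @CustomerId int
AS
BEGIN
    SELECT o.*
    FROM   dbo.Orders AS o         -- hypothetical table
    WHERE  o.CustomerId = @CustomerId
    OPTION (RECOMPILE);  -- fresh plan on every execution of this
                         -- statement, using the actual parameter value
END
```

The rest of the procedure keeps its cached plans; only the hinted statement pays the recompile cost.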
Finally, you seem to be asking how you can change the ARITHABORT setting in SSMS. I think what you meant to ask is how you can force the ARITHABORT setting in your code. If you decide to continue sending ad hoc SQL from your C# app, then of course you can send a command as text that has multiple statements separated by semi-colons, e.g.:
SET ARITHABORT ON; SELECT ...
For more info on why this issue occurs, see Erland Sommarskog's great article:
Slow in the Application, Fast in SSMS? Understanding Performance Mysteries
This answer includes a way to resolve this issue:
By running the following commands as administrator on the database, all queries run as expected regardless of the ARITHABORT setting.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Update
It seems that most people end up having this problem occur very rarely, and the above technique is a decent one-time fix. But if a specific query exhibits this problem more than once, a more long-term solution to this problem would be to use Query Hints like OPTIMIZE FOR and OPTION(Recompile), as described in this article.
Update 2
SQL Server has had some improvements made to its query execution plan algorithms, and I find problems like this are increasingly rare on newer versions. If you are experiencing this problem, you might want to check the Compatibility Level setting on the database that you're executing against (not necessarily the one you're querying, but rather the default database--or "InitialCatalog"--for your connection). If you are stuck on an old compatibility level, you'll be using the old query execution plan generation techniques, which have a much higher chance of producing bad queries.
I've had this problem many times before, but if you have a stored procedure with the same problem, dropping and recreating the stored proc will solve the issue.
It's called parameter sniffing.
You need to always localize the parameters in the stored proc to avoid this issue in the future.
I understand this might not be what the original poster wants but might help someone with the same issue.
If you're using Entity Framework, be aware that query parameters for string values are sent to the database as nvarchar by default. If the database column being compared is typed varchar, then depending on your collation the query execution plan may require an "IMPLICIT CONVERSION" step, which forces a full scan. I could confirm this by looking at the expensive queries option in database monitoring, which displays the execution plan.
Finally, there's an explanation of this behavior in this article:
https://www.sqlskills.com/blogs/jonathan/implicit-conversions-that-cause-index-scans/
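The mismatch can be reproduced without Entity Framework; a sketch assuming a hypothetical dbo.Users table with an indexed varchar(100) Email column:

```sql
-- nvarchar parameter (EF's default for .NET strings): the varchar
-- column must be implicitly converted, which can defeat the index
-- and force a scan under many collations.
DECLARE @emailN nvarchar(100);
SET @emailN = N'a@example.com';
SELECT * FROM dbo.Users WHERE Email = @emailN;

-- varchar parameter: types match, so an index seek is possible.
DECLARE @emailV varchar(100);
SET @emailV = 'a@example.com';
SELECT * FROM dbo.Users WHERE Email = @emailV;
```

Comparing the two actual execution plans side by side shows the CONVERT_IMPLICIT on the column in the first case.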
Just using ARITHABORT won't solve the problem, especially if you use parameterised stored procedures,
because parameterised stored procedures can cause "parameter sniffing", which uses a cached query plan.
So, before jumping to conclusions, please check the link below.
the-elephant-and-the-mouse-or-parameter-sniffing-in-sql-server
I had the same problem and it was fixed by executing the procedure WITH RECOMPILE. You can also try one of the parameter sniffing workarounds. My issue was related to the SQL plan cache.
If you can change your code to fix parameter sniffing, the OPTIMIZE FOR UNKNOWN hint is your best option. If you cannot change your code, the best option is EXEC sp_recompile 'name of proc', which will force only that one stored proc to get a new execution plan. Dropping and recreating a proc has a similar effect but can cause errors if someone tries to execute the proc while you have it dropped. DBCC FREEPROCCACHE drops all your cached plans, which can wreak havoc on your system, up to and including causing lots of timeouts in a heavy-transaction production environment. Setting ARITHABORT is not a solution to the problem, but it is a useful tool for discovering whether parameter sniffing is the issue.
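The targeted invalidation described above is a one-liner; the procedure name here is hypothetical:

```sql
-- Flag one procedure's cached plan(s) for recompilation; the new
-- plan is built on the next execution, nothing else is touched.
EXEC sp_recompile N'dbo.MySlowProc';

-- It also accepts a table name, flagging every cached plan that
-- references that table.
EXEC sp_recompile N'dbo.Orders';
```

This is far less disruptive than DBCC FREEPROCCACHE, which empties the entire plan cache for the instance.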
I had the same problem: calling an SP from SSMS took 2 seconds, while from the web app (ASP.NET) it took about 3 minutes.
I tried all the suggested solutions - sp_recompile, DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS - but nothing fixed my problem. A parameter sniffing workaround, however, did the trick and worked just fine.

SPROC hangs in SQL Server 2005

I get a problem with SQL Server 2005, where a stored procedure seems to randomly hang/lock, and never return any result.
What the stored procedure does is to call a function, which in turn makes a union of two different functions – returning the same type of data, but from different criteria. Nothing advanced. I don’t think it’s the functions hanging, because there are other SPROCs that call the same functions without a problem, even when the first one has locked up.
After the SPROC hangs, any further attempts to call it will result in a time out – not in the call itself, but because the response time is too great; as no result is returned, the code throws an exception.
It has happened at least three times in two months in a relatively low-load system. Restarting SQL Server solves the situation, but I don’t regard that as a “solution” to the problem.
I’ve looked for information, and found something about the query cache going corrupt. However, that was in regard to dynamic SQL strings, which my problem is not. I guess it could still be the query cache.
Has anyone had the same problem, and if so, what did you do about it (don’t say “restart SQL Server every morning” ;) )? Is there any way of debugging the issue to try and find exactly what and where things go wrong? I can’t recreate the problem, but when it appears again it would be good if I knew where to take a closer look.
I don't think it makes any difference, but just for the record, the SPROC is called from .NET 3.5 code, using the Entity Framework. I say it doesn't make a difference because when I've tried executing the SPROC directly from SQL Server Management Studio, no result is returned either.
It's most likely parameter sniffing.
Restarting SQL Server clears the plan cache. The problem will also go away if you rebuild statistics or indexes ("ALTER INDEX" and "sp_updatestats").
I suggest using "parameter masking" (not WITH RECOMPILE!) to get around it.
SO answer already by me:
One
Two
Are your STATISTICS up to date? One of the common causes of an incorrect cached query plan is out-of-date statistics.
Do you have a regularly scheduled index rebuild job?
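A sketch of how to check and refresh statistics (SQL Server 2005; dbo.Orders is a hypothetical table):

```sql
-- When were the statistics on this table last updated?
SELECT  name,
        STATS_DATE(object_id, stats_id) AS last_updated
FROM    sys.stats
WHERE   object_id = OBJECT_ID('dbo.Orders');

-- Refresh one table's statistics with a full scan...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- ...or refresh statistics for every table in the database.
EXEC sp_updatestats;
```

If the last_updated dates are old despite auto-update being enabled, that points to the modification threshold simply not being reached rather than to corruption.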
Did you check the SQL Server log? The cause of the problem has most likely been logged, so at least you may get some hint there. Please check it.
This excellent MSDN article, SQL Server technical bulletin - How to resolve a deadlock,
explains the steps needed to identify and resolve deadlock issues in great detail.
