SQL procedure running time widely divergent - sql-server

I have an application that runs a huge stored procedure on SQL Server 2000. Usually it takes about 1 minute to complete, but occasionally it will take MUCH longer.
Just now I ran it three times in a row in my test system. It took 1:12, 1:23, and 55:25.
What would cause that behavior? There are other things going on in the database, so I wonder if it has something to do with locks. How can I catch this in the act?

Create a trace and examine it in Profiler. That should at least point towards where the problem lies - in your procedure or elsewhere.
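If you want to catch blocking in the act without first building a trace, a quick and general check on SQL Server 2000 is to run the standard blocking procs in a second connection while the slow execution is in progress:

-- While the proc is running slowly, in another connection:
EXEC sp_who2   -- the BlkBy column shows which SPID is blocking which
EXEC sp_lock   -- lists the locks each SPID currently holds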

It's probably parameter sniffing: based on the input parameters, SQL Server chose a different query plan.
Another possibility is that a separate query was running at the same time and locked everything up.
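A cheap way to test the parameter-sniffing theory on SQL Server 2000 is to force a fresh plan with WITH RECOMPILE (the procedure and parameter below are made up for illustration); if the recompiled run is consistently fast, a stale sniffed plan is the likely culprit:

-- Compiles a fresh plan for this execution only:
EXEC dbo.usp_HugeProc @StartDate = '20090101' WITH RECOMPILE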

Related

Drastic difference of the time cost for the same stored procedure

I am using SQL Server 2008 R2.
The process is actually like this:
First, about 2 million records are pulled from a remote server,
then a join is done locally,
the final result is thousands of records.
The time cost varies from less than 1 min to 30 mins.
And after I experienced the 30-min delay, subsequent runs all took only around 3 mins.
It is the same data, same SP.
What could cause this drastic difference?
Update
I deleted the SP, restarted the SQL Server service, and re-created the SP. The execution took only 50 seconds!
What's wrong?
The behaviour you describe seems extreme - but (if you exclude the client) there are a few logical places to look.
The first is the query execution on the database server. It's worth looking at the execution plan (Management Studio can display it) to see whether the query is using any indexes - by far the most common reason for variable performance of database queries is that the query is not using (the right) indexes, and therefore the buffer cache plays a big part. SQL Server will cache a lot of data, and the first run of your proc populates that cache; the second run is faster because it hits the cache. After a while the cached pages get evicted by other activity, and running the proc slows down again.
The second possibility is that the database server is wobbly - it may just not be powerful enough to do all the work it's supposed to do. In that case, one moment you get lucky and have all the server resources to yourself; the next, someone else is running a query and yours slows down. That would make all queries slow, not just this one - so it doesn't sound likely.
Third possibility is networking weirdness - as Phil says, "thousands of records" is nothing too scary, but if they're big, and your network is saturated with pictures of kittens, it might have an impact. Again, that would manifest in general network slowness, and is unlikely to explain a delay of 30 minutes...
Fourth, is anything going on at the same time?
Fifth, does your SP use dynamically generated SQL statements? Those are compiled at execution time, so the procedure's cached plan doesn't cover them. If possible, separate such statements into child SPs, or parameterize them (as sketched below).
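On that fifth point, here is a hedged sketch of the difference (table, column, and variable names are invented). A concatenated string compiles a new ad hoc plan for every distinct value, whereas sp_executesql lets one cached plan be reused:

-- Compiles a new ad hoc plan for each distinct @id value:
DECLARE @sql nvarchar(4000), @id int
SET @id = 42
SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerId = '
         + CAST(@id AS nvarchar(10))
EXEC (@sql)

-- Parameterized: one cached plan, reused across values:
SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerId = @cid'
EXEC sp_executesql @sql, N'@cid int', @cid = @id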

Strange problem with SQL Server procedure execution plan

I was wondering if you guys could help me get to the bottom of a weird problem I have recently had on SQL Server.
I have a stored procedure (let's call it SPold) which is reasonably large with a lot of calculations (can't possibly do this in the app, as info for around 6,000 users needs to come back in one go (I reduce this to 1,000 based on surname)). The stored procedure usually executes in a couple of seconds, and is called once every couple of minutes.
Now this morning, the stored procedure was suddenly taking 4-10 times as long to execute, causing a number of timeouts. I discovered that by making a copy of the procedure with a new name (SPnew) and executing it, I would get the fast execution times again. This indicated to me that the execution plan was the problem with the original, SPold, so I decided to execute it WITH RECOMPILE. This returned the results quicker (although not as fast as SPnew), but subsequent calls from users to SPold were once again slow. It was as if the new plan wasn't being kept.
To fix this, what I have done is put EXEC SPnew inside SPold, and now calls to SPold are returning fast again.
Does anyone have any idea what is going on here? The only thing that updated overnight was the statistics, although I think that this should affect both SPold and SPnew.
Sounds like you are experiencing an incorrectly cached query plan due to parameter sniffing.
Can you post the stored procedure?
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005
I Smell a Parameter!
In SQL Server 2005, you can use the OPTIMIZE FOR query hint with preferred values of parameters to remedy some of the problems associated with parameter sniffing:

OPTIMIZE FOR instructs the query optimizer to use a particular value for a local variable when the query is compiled and optimized. The value is used only during query optimization, and not during query execution. OPTIMIZE FOR can counteract the parameter detection behavior of the optimizer or can be used when you create plan guides. For more information, see Recompiling Stored Procedures and Optimizing Queries in Deployed Applications by Using Plan Guides.
SQL Server 2005 does not support OPTIMIZE FOR UNKNOWN (introduced in SQL Server 2008), which eliminates parameter sniffing for a given parameter:

OPTION (OPTIMIZE FOR (@myParam UNKNOWN))

You can, however, achieve the same effect in SQL Server 2005 by copying the parameter into a local variable and then using the local variable in the query.
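A minimal sketch of that workaround, with invented procedure and table names. Because the optimizer can't sniff a local variable's value at compile time, it builds the plan from the column's average density instead:

CREATE PROCEDURE dbo.usp_GetOrders (@CustomerId int)
AS
BEGIN
    DECLARE @cid int
    SET @cid = @CustomerId    -- the optimizer cannot sniff this assignment

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @cid   -- plan based on average density, not the caller's value
END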
I've also encountered two "strange" cases with SQL Server 2005, which might relate to your problem as well.
In the first case my procedure executed pretty fast when run as dbo, and it was slow when run from the application under a different user account.
In the second case the query plan of the procedure got optimized for the parameter values with which the procedure was called for the first time, and this plan was then reused later for other parameter values as well, resulting in a slow execution.
For this second case the solution was to copy the parameter values into local variables in the procedure, and then using the variables in the queries instead of the parameters.

SQL Server 2000 Stored Procedure Prevent Parallelism or something?

I have a huge disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it.
I do know that if I take the body of the stored procedure and then declare/set the values of the parameters and run it in query analyzer that it runs more than 20x faster.
From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the SP with WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the SP, but neither of those helped even a little bit.
When I look at the execution plan of the sp vs the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds?
Thank you, any ideas would be great... I'm stuck.
If the only difference between the two query plans is parallelism, try putting OPTION (MAXDOP 1) at the end of the query to limit it to a serial plan.
As to why they are different, I'm not sure, but I remember the SQL Server 2000 optimizer as being, um, finicky. Similar to your case, what we usually saw was that ad-hoc query batches would be fast and the same query via sp_executesql would be slow. Never did fully figure out what was going on.
Serial vs. parallel can definitely explain the difference in speeds, though. On SQL Server 2000, parallel plans use all the configured processors on the machine, not just the ones they need:
If SQL Server chooses to use parallelism, it must use all the configured processors (as determined by the MAXDOP query hint configuration) for the execution of a parallel plan. For example, if you use MAXDOP=0 on a 32-way server, SQL Server tries to use all 32 processors even if seven processors might perform the job more efficiently as compared to a serial plan that only uses one processor. Because of this all-or-nothing behavior, if SQL Server chooses the parallel plan and you do not restrict the MAXDOP query hint[...], the time that it takes SQL Server to coordinate all the processors on a high-end server outweighs the advantages of using a parallel plan.
By default, I believe the server-wide setting of MAXDOP is 0, meaning use as many as possible. If you recently upgraded your database server with more processors to help performance, that could ironically explain why your performance is suffering. If that's the case, you might try setting the MAXDOP hint to the number of processors you had before and see if that helps.
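A hedged sketch of both suggestions (the query and table are placeholders); the hint goes on each expensive statement inside the procedure:

-- Force a serial plan for this statement only:
SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY CustomerId
OPTION (MAXDOP 1)

-- Or cap parallelism at the old processor count, e.g. OPTION (MAXDOP 4)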
Try adding SET ARITHABORT ON at the top of the procedure, as seen here: https://stackoverflow.com/questions/2465887/why-would-set-arithabort-on-dramatically-speed-up-a-query
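For context: Query Analyzer connects with SET ARITHABORT ON by default while most client libraries leave it OFF, and the two settings get separate cached plans - which can explain the 20x difference. A sketch of the suggested placement (procedure name is hypothetical):

ALTER PROCEDURE dbo.usp_HugeDisgustingProc
AS
BEGIN
    SET ARITHABORT ON   -- match the setting Query Analyzer uses by default
    -- ... original procedure body unchanged ...
END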
If you have made many changes to the tables in question and haven't run a reindex or defragment on them, you probably should. Check out this article. The reason I suggest this is that the procedure was fast at one time and its performance has degraded over time. I don't think a procedure that was tested and worked well at one time should be changed on account of performance that has degraded over time - that usually only treats the symptoms, not the actual problem.
I do know that if I take the body of the stored procedure and then declare/set the values of the parameters and run it in query analyzer that it runs more than 20x faster.
Are you sure that it is not the fetching of these params ahead of the SP's execution that's causing your slowness? By bypassing the population of the params you could be oversimplifying your issue.
Where do these params come from? How are they populated? It seems from your question that you've isolated the stored proc and found out that it might not be the issue.
Could it be a problem with contention? Does this stored procedure run at a particular time when other heavy lifting is also happening?

What are these sys.sp_* Stored Procedures doing?

I'm tracking down an odd and massive performance problem in my SQL server installation. On my system, a particular stored procedure takes 2 minutes to execute; on a colleague's system it takes less than 1 second. We have similar databases/data and configurations, but there's obviously something very different.
I ran the SP in question through the Profiler on both systems and noticed something odd. On my system, I see 9 entries with the following properties:
The Duration is way high relative to other rows. I have values as high as 37,698 and as low as 1,734. On the "fast" system the maximum duration (for the entire SP call) is 259.
They are executed for two databases related to the one that contains the SP I'm running. (This SP makes calls via Linked Servers to these two databases).
They are executions of one of the following system SPs:
sp_tables_info_90_rowset
sp_check_constbytable_rowset
sp_columns_90_rowset
sp_table_statistics2_rowset
sp_indexes_90_rowset
I can't find any Googleable documentation on what these are, why they would be so slow, or why they would run on one system but not the other. Does anyone know what they're all about?
Try manually updating statistics on that table.
UPDATE STATISTICS [TableName]
Then double-check that the database option Auto Update Statistics is TRUE. Even if it is, though, I've seen cases where adding large amounts of data to a table doesn't cause the statistics to update in a timely way, and queries can be slow.
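One way to double-check that option (DATABASEPROPERTYEX has been around since SQL Server 2000):

-- Returns 1 if auto update statistics is ON for the current database:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsAutoUpdateStatistics')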
I don't know the answer to your question. But to try to fix the problem you're having (which, I assume, is what you're actually interested in), the first thing I'd do is run a re-index on the tables you're querying. This frequently will fix any kind of slowness when the conditions are as you described (same database structure, different data/database, same query).
These are run when you have linked server calls: the remote server executes them to return metadata, and the engine also creates work tables in tempdb for temporary operations like spooling.
Those sp's mean your query is hitting linked servers by using synonyms. This should be avoided whenever possible.
I'm not familiar with those specific procedures, but you can try running:
SELECT object_definition(object_id('Procedure Name'))
To get a better idea of what's going on under the hood.
When was the last index rebuild? The last statistics update?
Otherwise, these stored procs are used by the SQL Server client too, no? They probably aren't the cause of the slowness themselves.

Usage history of Stored Procedures in SQL Server 2008

I work with legacy systems that have tens of thousands of lines of stored procedure code, where many of the stored procedures are obsolete and no longer used. There doesn't seem to be a way to check execution history, so my question is: might it be a good idea to start each stored procedure by inserting a row into a table that keeps a record of each execution?
It could be very simple, something like:

INSERT INTO executionHistory (name, date)
SELECT 'spName', GETDATE()
-- then rest of procedure
I imagine this could be very useful for doing cleanups of old unused code, and might also be handy when trying to decide where to optimize. I mean, it's better to shave 10 seconds off the execution time of a procedure that runs 50 times a day than to save 10 minutes on a procedure that runs only once a year.
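For what it's worth, the logging table behind that snippet could be as simple as this (column names match the question's example; the default is an assumption):

CREATE TABLE dbo.executionHistory (
    name  sysname  NOT NULL,
    date  datetime NOT NULL DEFAULT GETDATE()
)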
There is a tracing option (SQL Profiler) in SQL Server. You could take a trace of a day's SQL activity and see which sprocs are executed there.
This will give you a good idea of where to focus your optimisations.
Because you're using SQL Server 2008, I wouldn't do what rwmnau suggests, since that would mean modifying all your stored procedures.
SQL Server 2008 introduces a feature called Extended Events, and SQL Server Auditing built on top of them. Extended Events are a high-performance tracing system.
By using SQL Server Auditing you can trace your system without the overhead of SQL Trace.
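A rough sketch of such a session on SQL Server 2008 (session name and file paths are placeholders; note that the file target was renamed to event_file in later versions, so check the syntax for your build):

CREATE EVENT SESSION ProcUsage ON SERVER
ADD EVENT sqlserver.module_end   -- fires each time a proc/function/trigger completes
    (ACTION (sqlserver.database_id, sqlserver.session_id))
ADD TARGET package0.asynchronous_file_target
    (SET filename = N'C:\Traces\ProcUsage.xel',
         metadatafile = N'C:\Traces\ProcUsage.xem')
GO
ALTER EVENT SESSION ProcUsage ON SERVER STATE = START
GO

The captured events can be read back with sys.fn_xe_file_target_read_file.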
I think your idea is simple enough and would accomplish your goal. Though it would involve modifying every SP, it's the route I would choose. Then you can ensure that you're getting an accurate recording of all activity on the database.
Another poster suggested you do a trace - while this works for short periods, it's only going to catch the time you're watching. You'd have to make sure you trace across any important, high-traffic periods, like month-end financial closing, and even then you're missing the times you don't think are a big deal, so you're being subjective.
