Why do we see large swings in Stored Procedure Performance?

I'm working on a Stored Procedure where we're seeing high variance in performance. All the procedure does is read from a set of tables.
The issue is that it often starts off with <1s performance. Then, later in the week, the same queries run in >7s. We'd like to keep this <1s. The issue may be related to Parameter Sniffing, but recompile statements do not positively OR negatively influence performance in this case. We've also reproduced this behavior on 2 independent systems with identical hardware.
Here's what we know:
3 of the 8 tables used have a high update/insert rate but only small amounts of data. Covering indexes are in place everywhere a table is touched. Most tables are very small and fast to access.
Queries require similar plan behavior: look up 1 project and read its 12-month summary at 1 row/month, then pivot all 12 months of cost, rev, and budget. (Each measure gets 12 columns, so 36 columns are output.)
Here's what we've tried to no avail:
Query hints
(ForceSeek)
Ran queries with recompile hints, and forced all queries against the Stored Procedure to recompile:
CREATE PROCEDURE usp_ProjectSummary @ProjectID INT
WITH RECOMPILE
Specified the parameters as unknown using OPTIMIZE FOR UNKNOWN hints
Here's what's helped and where we have landed so far (a sketch combining these is shown after this list):
Temp tables
Force Fixed Plan with
OPTION (KEEPFIXED PLAN)
Routinely run the following 2 statements to force a good query plan:
EXEC sp_recompile usp_ProjectSummary
EXEC usp_ProjectSummary @ProjectID
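For reference, a rough sketch of the shape we have landed on, combining the temp table and KEEPFIXED PLAN ideas (the table and column names here are placeholders, not our real schema):
CREATE PROCEDURE usp_ProjectSummary @ProjectID INT
AS
BEGIN
    -- Stage the 12 monthly rows in a small temp table with its own statistics,
    -- instead of pivoting directly off the frequently-updated base tables.
    SELECT MonthNo, Cost, Rev, Budget
    INTO #MonthlySummary
    FROM ProjectMonthlySummary
    WHERE ProjectID = @ProjectID
    OPTION (KEEPFIXED PLAN);

    -- Pivot the 12 months into the 36 output columns from #MonthlySummary here,
    -- also with OPTION (KEEPFIXED PLAN) so statistics churn on the busy tables
    -- does not trigger a recompile.
    SELECT * FROM #MonthlySummary
    OPTION (KEEPFIXED PLAN);
END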

Related

Is recompiling a long running query a good habit

I have some long running (a few hours) stored procedures which contain queries that go against tables with millions of records in a distributed environment. These stored procedures take a date parameter and filter those tables according to that date parameter.
I've been thinking that, because of the parameter sniffing feature of SQL Server, the first time my stored procedure gets called the query execution plan will be cached according to that specific date, and any future calls will use that exact plan. And I think that since creating an execution plan takes only a few seconds, why would I not use the RECOMPILE option in my long running queries, right? Does it have any cons that I have missed?
If the query would run within your acceptable performance limits and you suspect parameter sniffing is the cause, I suggest you add a RECOMPILE hint to the query.
Also, if the query is part of a stored proc, instead of recompiling the entire proc you can do a statement-level recompilation like:
create proc procname
(
@a int
)
as
begin
select * from t1 where a = @a
option (recompile)
--no recompile here
select * from t1
join t2 on t1.id = t2.id
end
Also bear in mind that recompiling a query has a cost. But to quote Paul White:
There is a price to pay for the plan compilation on every execution, but the improved plan quality often repays this cost many times over.
Query Store in SQL Server 2016 helps you track these issues and also stores plans for the queries over time, so you will be able to see which ones are performing worse.
If you are not on 2016, William Durkin has developed Open Query Store for versions 2008-2014, which works more or less the same and helps you troubleshoot these issues.
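For example, a rough sketch of the kind of Query Store lookup I mean (2016+), listing the slowest queries by average duration together with the plans stored for them (durations are in microseconds):
-- Slowest queries by average duration, with their stored plans
SELECT TOP 20
       qt.query_sql_text,
       q.query_id,
       p.plan_id,
       rs.avg_duration,
       rs.last_execution_time
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC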
Further reading:
Parameter Sniffing, Embedding, and the RECOMPILE Options

Extremely High Estimated Number of Rows in Execution Plan

I have a stored procedure running 10 times slower in production than in staging. I took a look at the execution plan, and the first thing I noticed was that the cost of the Table Insert (into a table variable @temp) was 100% in production and 2% in staging.
The estimated number of rows in production showed almost 200 million rows, but in staging it was only about 33.
The production DB is running SQL Server 2008 R2 while staging is on SQL Server 2012, but I don't think this difference could cause such a problem.
What could be the cause of such a huge difference?
UPDATED
Added the execution plan. As you can see, the large number of estimated rows shows up in the Nested Loops (Inner Join), but all it does is a clustered index seek into another table.
UPDATED2
Link for the plan XML included
plan.xml
And SQL Sentry Plan Explorer view (with estimated counts shown)
This looks like a bug to me.
There are an estimated 90,991.1 rows going into the nested loops.
The table cardinality of the table being seeked on is 24,826.
If there are no statistics for a column and the equality operator is used, SQL Server can't know the density of the column, so it uses a fixed 10 percent estimate.
90,991.1 * 24,826 * 10% = 225,894,504.86, which is pretty close to your estimated row count of 225,894,000.
But the execution plan shows that only 1 row is estimated per seek. Not the 24,826 from above.
So these figures don't add up. I would assume that it starts off from an original 10% ball park estimate and then later adjusts it to 1 because of the presence of a unique constraint without making a compensating adjustment to the other branches.
I see that the seek is calling a scalar UDF [dbo].[TryConvertGuid]. I was able to reproduce similar behavior on SQL Server 2005, where seeking on a unique index on the inside of a nested loops join with the predicate being a UDF produced a result where the estimated number of rows coming out of the join was much larger than would be expected from multiplying the estimated seeked rows by the estimated number of executions.
But in your case the operators to the left of the problematic part of the plan are pretty simple and not sensitive to the number of rows (neither the rowcount top operator nor the insert operator will change), so I don't think this quirk is responsible for the performance issues you noticed.
Regarding the point in the comments to another answer that switching to a temp table helped the performance of the insert: this may be because it allows the read part of the plan to operate in parallel (inserting into a table variable would block this).
Run EXEC sp_updatestats; on the production database. This updates statistics on all tables. It might produce more sane execution plans if your statistics are screwed up.
Please don't run EXEC sp_updatestats; on a large system it could take hours, or days, to complete. What you may want to do instead is look at the query plan that is being used in production. Try to see if there is an index that could be used but is not being used. Try rebuilding that index (as a side effect, it rebuilds the statistics on the index). After rebuilding, look at the query plan and note whether it is using the index. Perhaps you may need to add an index to the table. Does the table have a clustered index?
As a general rule, since 2005 SQL Server manages statistics on its own rather well. The only time you need to explicitly update statistics is when you know the query would execute a lot faster if SQL Server used an index, but it's not using it. You may want to run (on a nightly or weekly basis) scripts that automatically check every table and every index to see whether the index needs to be reorganized or rebuilt, depending on how fragmented it is; a sketch of such a check is shown below. These kinds of scripts may take a long time to run on a large, active OLTP system, so consider carefully when you have a window to run them. There are quite a few versions of this script floating around, but I have used this one often:
https://msdn.microsoft.com/en-us/library/ms189858.aspx
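A rough sketch of what the fragmentation check looks like (the 5%/30% thresholds are the usual rule of thumb, not a hard rule; the index and table names in the final command are placeholders):
-- List indexes in the current database with more than 30% fragmentation,
-- as candidates for REBUILD (REORGANIZE is the usual suggestion between ~5% and ~30%).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC
-- Then, per index (rebuilding also refreshes the statistics on that index):
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD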
Sorry this is probably too late to help you.
Table Variables are impossible for SQL Server to predict. They always estimate one row and exactly one row coming back.
To get accurate estimates, so that a better plan can be created, you need to switch your table variable to a temp table or a CTE.
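A minimal sketch of that switch (the column list here is made up for illustration):
-- Before: a table variable, which the optimizer assumes contains exactly one row.
DECLARE @temp TABLE (id INT PRIMARY KEY, amount DECIMAL(18, 2));
-- After: a temp table, which gets real statistics and row counts.
CREATE TABLE #temp (id INT PRIMARY KEY, amount DECIMAL(18, 2));
-- ... INSERT INTO #temp ... and join against #temp exactly as before.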

Non-optimal execution plan using sp_executesql

I'm having problems with slow performance in a SQL SELECT statement with some parameters. For the same query, executing the select via sp_executesql takes double the time of the inline way.
The problem is that in the sp_executesql way SQL Server is not using the optimal execution plan. Although the plans are different, it seems that in both cases the indexes of the tables are being used correctly. I really don't understand why the performance is so different.
My original query is more complex, but to try to figure out what's happening I have simplified it to a select with 3 tables and 2 joins. The main difference is the use of a Hash Match in the optimal plan; I really don't know the significance of this, but it is the only difference I can see.
Optimal plan (hash match, over 3 seconds)
Wrong plan (no hash match, same indexes as above, over 12 seconds)
I don't think my problem is "parameter sniffing"; in my case the query is always slow for all distinct parameter values, because the execution plan is always incorrect.
OPTION (RECOMPILE) doesn't help: sp_executesql stays slow, and the inline way takes more time (because the query now compiles an execution plan on every run).
Statistics for the tables are up to date.
I have to use the sp_executesql way because it seems that Reporting Services wraps the select in sp_executesql calls.
Does anybody know why sp_executesql generates a different (wrong) execution plan than the inline query?
EDIT: The queries weren't using the same indexes; I guess that's because the execution trees are not the same and SQL Server picks indexes as it pleases. Attached you can find new execution plans with hints forcing the same indexes; performance is now even worse, from 12 seconds to more than 15 minutes (I cancelled it) for the slow query. I'm really not interested in making this specific query run faster; as I said, this is not the real query I'm dealing with. What I'm trying to figure out is why the execution plans are so different between the inline query and the sp_executesql query.
Is there any magic option in sp_executesql that makes this work properly? :)
Optimal
Slow
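For reference, the two ways of running the statement look roughly like this (table and column names here are placeholders, simplified to two tables; the real query has 3 tables and 2 joins):
-- Inline way: the literal value is embedded in the statement.
SELECT o.OrderID, c.Name
FROM Orders AS o
JOIN Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.Status = 3;
-- sp_executesql way: roughly what Reporting Services sends instead.
EXEC sp_executesql
     N'SELECT o.OrderID, c.Name
       FROM Orders AS o
       JOIN Customers AS c ON c.CustomerID = o.CustomerID
       WHERE o.Status = @Status',
     N'@Status int',
     @Status = 3;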
My understanding is that sp_executesql keeps a cached plan after the first execution. Subsequent queries may be using a bad cached plan. You can use the following command to clear out the ENTIRE SQL Server procedure cache:
DBCC FREEPROCCACHE
http://msdn.microsoft.com/en-us/library/ms174283.aspx
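If clearing the entire cache is too heavy-handed for a shared server, it is also possible (on SQL Server 2008 and later) to evict just one plan by its plan handle. A rough sketch, where the text filter is a placeholder:
-- Find the cached plan for the offending statement.
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%some distinctive fragment of the slow query%'
-- Then pass that handle to DBCC FREEPROCCACHE to drop just that plan:
-- DBCC FREEPROCCACHE (0x06000500...)  -- plan_handle copied from the result above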

SQL performance: Recompile versus drop and re-apply for stored procedure

I have a stored procedure used for a DI report that contains 62 sub-queries combined with UNION ALL. Recently, performance went from under 1 minute to over 8 minutes, and SQL Profiler was showing very high CPU and Reads. The procedure currently copies the passed-in parameters into local variables to prevent parameter sniffing.
Running the contents of the procedure as a plain SELECT statement, performance was back to under a minute.
Calling the procedure via EXEC in Management Studio, performance was horrible and over 8 minutes.
Calling the procedure via EXEC with WITH RECOMPILE, performance did not improve. I ran DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS and still saw no improvement.
In the end, I dropped the procedure and re-applied it and performance is now back.
Can anyone help explain to me why the initial steps did not correct the performance of the procedure but dropping and re-applying the procedure did?
Sounds like blocking parameter sniffing produced a bad plan. When you use local variables, the query optimizer uses the density of each column to come up with cardinality estimates, essentially optimizing for the average value. If your data distribution is significantly skewed, this estimate will be significantly off for some values. This theory explains why your initial steps did not work: using WITH RECOMPILE or running DBCC FREEPROCCACHE will not help if parameter sniffing is blocked; it will just produce the same plan every time. The fact that running the contents of the procedure as a SELECT statement made it faster suggests you actually need parameter sniffing. However, you should also try WITH RECOMPILE if the compilation time is acceptable; otherwise there's a risk of getting stuck with a bad plan based on atypical sniffed values.
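A rough sketch of the difference (the procedure, table, and column names are made up for illustration; OPTION (RECOMPILE) at the statement level is the same idea as WITH RECOMPILE on the whole procedure):
-- Current pattern: copying the parameter into a local variable blocks sniffing,
-- so the optimizer estimates from average density rather than the actual value.
CREATE PROCEDURE usp_DIReport_Blocked @FromDate date
AS
BEGIN
    DECLARE @LocalFromDate date = @FromDate;
    SELECT COUNT(*) FROM Orders WHERE CreatedOn >= @LocalFromDate;
END
GO
-- Letting the optimizer sniff the actual value on every call instead:
CREATE PROCEDURE usp_DIReport_Sniffed @FromDate date
AS
BEGIN
    SELECT COUNT(*) FROM Orders WHERE CreatedOn >= @FromDate
    OPTION (RECOMPILE);  -- statement-level recompile, per-call plan
END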

Stored procedures eating CPU SQL Server 2005

I am trying to resolve 100% CPU usage by the SQL Server process on the database server. While investigating, I have found that the stored procedures are taking the most worker time.
For the following query of DMVs to find the queries taking the most worker time,
SELECT TOP 20 st.text
,st.dbid
,st.objectid
,qs.total_worker_time
,qs.last_worker_time
,qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC
most of them are stored procedures. The weird thing is that all these stored procedures are querying different tables, and yet they are at the top by worker time, even though when I look in Profiler at the queries with the top CPU, Reads, and Duration, the stored procedures don't appear at the top there.
Why could this be happening?
==Edit==
The application actually uses ad hoc queries more than stored procedures. Some of these procedures are to be migrated to ad hoc queries. The thing is that these procedures are not called as often as some of the other queries, which are CPU intensive and are called very frequently.
Also, it does strike me as odd that a stored procedure which does a simple select a, b, c from tbl where id = @id would have a higher total worker time than a query with multiple joins, user-defined functions in the WHERE clause, a sort, and a ROW_NUMBER OVER, while the simple one queries a table with 20,000 records and the complex query is on a table with over 200,000 records.
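For comparison, a variant of the DMV query above that divides by execution count, so rarely-called and frequently-called statements are compared per execution rather than in total (total_worker_time and execution_count both come from sys.dm_exec_query_stats):
SELECT TOP 20 st.text
    ,qs.execution_count
    ,qs.total_worker_time
    ,qs.total_worker_time / qs.execution_count AS avg_worker_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_worker_time DESC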
It depends on what you are doing in that stored procedure and how you are tuning your tables, indexes, etc.
For example, if you create a stored procedure that uses loops such as cursors, you will max out your CPU usage. If you don't set up your indexes and you are using selects with many different joins, you'll have your CPU overloaded.
It all comes down to how you use your database.
Some tips:
I recommend creating stored procedures most of the time.
When you create your stored procedures, run them with the execution plan shown to get some suggestions.
If the tables have a lot of records (more than 1 or 2 million), think about creating indexed views (see the sketch after this list).
Do updates or deletes only when necessary; sometimes it is better to only insert records and run a job daily or weekly to update or remove the records you don't need (it depends on the case).
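A minimal sketch of an indexed view (the table, column, and view names are made up; the summed column is assumed NOT NULL, and the view must be schema-bound with a unique clustered index before the optimizer can use it):
-- Pre-aggregate order totals per customer so large scans can be answered from the view.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*) AS OrderCount,  -- COUNT_BIG(*) is required in an indexed view with GROUP BY
       SUM(TotalDue) AS TotalDue
FROM dbo.Orders
GROUP BY CustomerID;
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (CustomerID);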
