Recently, one of my colleagues working in SQL development ran into a problem like this: a procedure ran fine in every environment except production, which has the most resources. A typical case of parameter sniffing, but the profiler indicated that only one query in the whole procedure took a very long time to execute:
UPDATE a
SET status_id = 6
FROM usr.tpt_udef_article_grouping_buffer a
LEFT JOIN (SELECT DISTINCT buying_domain_id, suppl_no FROM usr.buyingdomain_supplier_article) b ON a.buying_domain_id = b.buying_domain_id
AND a.suppl_no = b.suppl_no
WHERE a.tpt_file_id = @tpt_file_id
AND a.status_id IS NULL
AND b.suppl_no IS NULL
As I am biased towards development (I have little administration experience), I suggested that this query should be rewritten:
replace LEFT JOIN (SELECT DISTINCT ...) with NOT EXISTS (SELECT 1 ...)
put an appropriate index on usr.tpt_udef_article_grouping_buffer (SSMS suggested a 95% cost reduction when the query was run outside the procedure); a sketch of both changes follows below
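A sketch of what the two suggestions could look like. The NOT EXISTS form is logically equivalent to the LEFT JOIN ... IS NULL anti-join above; the index name and column order are my assumptions, not necessarily what SSMS proposed:

UPDATE a
SET status_id = 6
FROM usr.tpt_udef_article_grouping_buffer a
WHERE a.tpt_file_id = @tpt_file_id
  AND a.status_id IS NULL
  AND NOT EXISTS (SELECT 1
                  FROM usr.buyingdomain_supplier_article b
                  WHERE b.buying_domain_id = a.buying_domain_id
                    AND b.suppl_no = a.suppl_no);

-- Hypothetical supporting index (adjust to what SSMS actually suggested):
CREATE INDEX IX_grouping_buffer_file_status
    ON usr.tpt_udef_article_grouping_buffer (tpt_file_id, status_id)
    INCLUDE (buying_domain_id, suppl_no);

The NOT EXISTS form lets the optimizer run an anti-semi-join directly instead of first materializing the DISTINCT projection of the subquery.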
Also, multiple queries from the procedure shared the same pattern.
I know that parameter sniffing is more related to plan construction when the procedure runs for the first time after its (re)creation, and I think it is also favored by high cyclomatic complexity.
My question is:
Does the way the queries in the procedure are written (bad execution plans from the start) favor the appearance of parameter sniffing, or does it just worsen its effects?
Your only parameter here is a.tpt_file_id = @tpt_file_id, and if this is parameter sniffing, then it must be the case that for certain tpt_file_id values there are thousands (or more) of records, and for others there are few (or none).
The other reason you get different plans in production than in the test environment is that the machines are different. You usually have a lot more memory and more CPUs/cores in the production environment, causing the optimizer to choose a different plan; and if your row counts in the tables are not the same, that can of course also lead to a totally different plan.
You can check this by using OPTION (RECOMPILE) to see if the plan changes, or by looking in the plan cache to find the parameter value that was used to compile the plan. It can be seen in the properties of the leftmost operator in the plan.
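For illustration only, here is one way to pull the cached plan and inspect the compiled parameter value; the LIKE filter is just a placeholder for locating the procedure's plan:

-- Find cached plans that reference the table and open the XML plan;
-- in the plan XML, the ParameterList element shows ParameterCompiledValue.
SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%tpt_udef_article_grouping_buffer%';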
Changing the SELECT DISTINCT into an EXISTS clause is probably a good idea, as is, of course, indexing the tables properly.
Related
I've recently hit a bottleneck situation: if I keep the current version of a query inside a report (designed in Report Builder, SSRS 2008), it generates loading times of up to 15 minutes for specific parameters. The culprit is a JOIN to a sub-query, which I join to the main query on a non-indexed column. Let's call this sub-query "Units".
If I delete the "Units" JOIN from the SQL query and set it up as a separate dataset inside the report, linking it to the main dataset (query) using the SSRS Lookup function (same as the JOIN in SQL), the report runs smoothly, in under a minute (approximately 3 to 5 milliseconds).
Keep in mind that the "Units" sub-query, when run separately, finishes in under 5 milliseconds for the same parameters that previously took 15 minutes, but when it is attached to the main query it causes severe performance issues.
Is there a clear benefit to this type of separation, or should I investigate further how to improve the query? What are the performance benefits/downsides of using Lookup versus improving the current query?
My concern is that this is a situational improvement and this will not represent a long term solution. I've used this alternative in the past to avoid tweaking the query and it did not backfire, but I do not fully understand the performance implications of using this workaround.
Thanks,
Radu.
There are a lot of things that could be causing the performance issues, but here are a few simple things that might get the dataset back up to speed again with very little effort.
1. Parameter sniffing
You mention with specific parameters. If you mean that the query only performs badly with some parameters and performs well with others, and assuming that the size of the data does not vary significantly based on these parameters, then it's likely a parameter sniffing issue. This is caused by a query plan that was generated based on one set of parameters and is not suitable for other parameters. The easiest way to prove this is to simply add OPTION (RECOMPILE) to the end of the query, as in the sketch below. This is not a permanent fix, but it will force a new query plan to be generated; if you see an instant improvement, parameter sniffing is the most likely cause.
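As a minimal illustration, assuming a hypothetical dataset query that joins the "Units" sub-query on made-up table and column names, the only change is the final hint:

-- Hypothetical dataset query; OPTION (RECOMPILE) forces a fresh plan for the current parameter values.
SELECT m.*, u.*
FROM dbo.MainTable AS m
JOIN dbo.Units AS u ON u.UnitKey = m.UnitKey
WHERE m.ReportDate = @ReportDate
OPTION (RECOMPILE);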
2. Refactor dataset query
The other option is to redesign your query. I don't know what your query looks like, but if we take a simple example based on the information you posted...
If your query looks something like:
SELECT * FROM tableA a
JOIN (SELECT * FROM tableB WHERE someValue=someOtherValue) b
ON a.FieldA = b.FieldB
then you could refactor it by putting the subquery into a temp table and joining to that, something like
SELECT *
INTO #t
FROM tableB
WHERE someValue = someOtherValue;

SELECT *
FROM tableA a
JOIN #t b ON a.FieldA = b.FieldB;
This is an approach I often take and it can get round exactly these types of performance issues.
I have the following queries:
DECLARE @application_number CHAR(8) = '37832904';
SELECT
la.LEASE_NUMBER AS lease_number,
la.[LEASE_APPLICATION] AS application_number,
tnu.[FOLLOWUP_CODE] AS note_type_code -- catch codes not in codes table
FROM [dbo].[lease_applications] la
LEFT JOIN [dbo].tickler_notes_uniq tnu ON tnu.[ACCOUNT_NUMBER] = la.[ACCOUNT_NUMBER]
WHERE la.LEASE_APPLICATION = @application_number
OR @application_number IS NULL;
SELECT
la.LEASE_NUMBER AS lease_number,
la.[LEASE_APPLICATION] AS application_number,
tnu.[FOLLOWUP_CODE] AS note_type_code -- catch codes not in codes table
FROM [dbo].[lease_applications] la
LEFT JOIN [dbo].tickler_notes_uniq tnu ON tnu.[ACCOUNT_NUMBER] = la.[ACCOUNT_NUMBER]
WHERE la.LEASE_APPLICATION = @application_number;
The only difference between these 2 queries is that the first one also checks whether the variable is NULL.
The execution plans of these queries are:
You can find the graphical plan here
So the question is: why are the plans so different?
UPDATE:
The actual execution plan of the first query can be found here
OPTION (RECOMPILE) changed the actual execution plan to the good one. However, the downside is that my main goal is to create a TVF with these parameters, and then everybody who uses that function would be expected to provide that option.
It is also worth mentioning that my main goal is to create a TVF with 2 parameters. Each of them might or might not be NULL, but at least one of them is supposed to be NOT NULL. The parameters are more or less equivalent; they are just different keys in the 2 tables that would give the same result anyway (the same number of rows and so on). That's why I wanted to do something like
WHERE (col1 = @param1 OR @param1 IS NULL) AND (col2 = @param2 OR @param2 IS NULL) AND (@param1 IS NOT NULL OR @param2 IS NOT NULL)
So, basically, I am not interested in ALL records at all.
You have two different plans for two different queries.
It makes sense that when you have an equality condition in the WHERE clause (la.LEASE_APPLICATION = @application_number), and indexes are in place, you get an index seek: working as expected!
On the other hand, when you write both conditions into one WHERE clause (la.LEASE_APPLICATION = @application_number OR @application_number IS NULL), the query optimizer has chosen to do a scan.
Even though the parameter value has been supplied and it is not null, the plan that is used is the cached one, and it cannot know at compile time the actual value of your parameter.
This is the case if you have a stored procedure and you are calling it with parameters. This is not the case when executing a simple query using a variable.
As @sepupic has stated, variable values do not get sniffed.
The plan is generated to handle both cases: when you have a value for your parameter as well as when you have none.
One option to fix your problem would be using OPTION(RECOMPILE) as it has been stated already in the comments.
Another option would be to separate your queries (for example, having two different stored procedures called by a third "wrapper" procedure), so that each one gets optimized accordingly, on its own.
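As an illustration only (all procedure names here are made up), the wrapper pattern could look like this:

-- Hypothetical wrapper: each branch calls a procedure with its own, stable plan.
CREATE PROCEDURE dbo.usp_GetApplications
    @application_number CHAR(8) = NULL
AS
BEGIN
    IF @application_number IS NOT NULL
        EXEC dbo.usp_GetApplications_ByNumber @application_number; -- seek-friendly query
    ELSE
        EXEC dbo.usp_GetApplications_All;                          -- scan-friendly query
END;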
I would suggest you take a look at this article by Kimberly L. Tripp: Building High Performance Stored Procedures, and this other one by Aaron Bertrand: An Updated "Kitchen Sink" Example. I think these are the best articles explaining these kinds of scenarios.
Both articles explain this situation, the possible problems with it, and possible solutions such as OPTION (RECOMPILE), dynamic SQL, or separate stored procedures.
Good luck!
Your queries do not use parameters; they use a variable. The variable is not sniffed at the moment the batch is compiled (compilation = making a plan) because the batch is seen as one whole thing. So the server has no idea whether the variable is null or not null, and it must make a plan that will be suitable in both cases.
The first query might filter out no rows at all, so a scan is selected.
The second query does filter, but the value is unknown, so if you use SQL Server 2014 and the filtered column is not unique, the estimate is C^(3/4) (C = table cardinality).
The situation can be different if you use the RECOMPILE query option. When you add it to your query, the statement is recompiled AFTER the variable assignment is done. In this case the variable value is known, and you'll get another plan: one based on column statistics for the known value of your filter.
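Applied to the first query from the question, the sketch below is all that changes; the rest of the statement stays as posted:

DECLARE @application_number CHAR(8) = '37832904';

SELECT
    la.LEASE_NUMBER AS lease_number,
    la.[LEASE_APPLICATION] AS application_number,
    tnu.[FOLLOWUP_CODE] AS note_type_code
FROM [dbo].[lease_applications] la
LEFT JOIN [dbo].tickler_notes_uniq tnu ON tnu.[ACCOUNT_NUMBER] = la.[ACCOUNT_NUMBER]
WHERE la.LEASE_APPLICATION = @application_number
   OR @application_number IS NULL
OPTION (RECOMPILE); -- recompiled after the variable assignment, so the value is known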
I have some long-running (a few hours) stored procedures that contain queries against tables with millions of records in a distributed environment. These stored procedures take a date parameter and filter the tables according to that parameter.
I've been thinking that, because of SQL Server's parameter sniffing, the first time my stored procedure gets called the query execution plan will be cached according to that specific date, and any future calls will use that exact plan. Since creating an execution plan takes only a few seconds, why would I not use the RECOMPILE option in my long-running queries? Does it have any cons that I have missed?
If the query should run within your acceptable performance limits and you suspect parameter sniffing is the cause, I suggest you add a RECOMPILE hint to the query.
Also, if the query is part of a stored proc, instead of recompiling the entire proc you can do a statement-level recompilation, like:
CREATE PROC procname
(
    @a INT
)
AS
BEGIN
    SELECT * FROM [table] WHERE a = @a
    OPTION (RECOMPILE);

    -- no recompile here
    SELECT *
    FROM [table] t1
    JOIN t2 ON t1.id = t2.id;
END
Also, keep in mind that recompiling the query will cost you. But to quote Paul White:
There is a price to pay for the plan compilation on every execution, but the improved plan quality often repays this cost many times over.
Query Store in SQL Server 2016 helps you track these issues and also stores plans for the queries over time, so you will be able to see which ones are performing worse.
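If you are on 2016 or later, a minimal sketch of turning it on and listing the heaviest plans (the database name is a placeholder, and the aggregation is deliberately rough):

-- Enable Query Store on the database.
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;

-- Rough list of query/plan pairs by total accumulated duration;
-- join sys.query_store_query_text if you also want the statement text.
SELECT TOP (20)
       q.query_id,
       p.plan_id,
       SUM(rs.count_executions * rs.avg_duration) AS total_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY q.query_id, p.plan_id
ORDER BY total_duration DESC;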
If you are not on 2016, William Durkin has developed Open Query Store for versions 2008-2014, which works more or less the same way and helps you troubleshoot these issues.
Further reading:
Parameter Sniffing, Embedding, and the RECOMPILE Options
I'm having trouble understanding the behavior of the estimated query plans for my statement in SQL Server when I change from a parameterized query to a non-parameterized query.
I have the following query:
DECLARE @p0 UNIQUEIDENTIFIER = '1fc66e37-6eaf-4032-b374-e7b60fbd25ea'
SELECT [t5].[value2] AS [Date], [t5].[value] AS [New]
FROM (
SELECT COUNT(*) AS [value], [t4].[value] AS [value2]
FROM (
SELECT CONVERT(DATE, [t3].[ServerTime]) AS [value]
FROM (
SELECT [t0].[CookieID]
FROM [dbo].[Usage] AS [t0]
WHERE ([t0].[CookieID] IS NOT NULL) AND ([t0].[ProductID] = @p0)
GROUP BY [t0].[CookieID]
) AS [t1]
OUTER APPLY (
SELECT TOP (1) [t2].[ServerTime]
FROM [dbo].[Usage] AS [t2]
WHERE ((([t1].[CookieID] IS NULL) AND ([t2].[CookieID] IS NULL))
OR (([t1].[CookieID] IS NOT NULL) AND ([t2].[CookieID] IS NOT NULL)
AND ([t1].[CookieID] = [t2].[CookieID])))
AND ([t2].[CookieID] IS NOT NULL)
AND ([t2].[ProductID] = @p0)
ORDER BY [t2].[ServerTime]
) AS [t3]
) AS [t4]
GROUP BY [t4].[value]
) AS [t5]
ORDER BY [t5].[value2]
This query is generated by a Linq2SQL expression and extracted from LINQPad. It produces a nice query plan (as far as I can tell) and executes in about 10 seconds on the database. However, if I replace the two uses of the parameter with the exact value, that is, replace the two '= @p0' parts with '= '1fc66e37-6eaf-4032-b374-e7b60fbd25ea'', I get a different estimated query plan and the query now runs much longer (more than 60 seconds; I haven't seen it through).
Why does this seemingly innocent replacement produce a much less efficient query plan and execution? I have cleared the procedure cache with DBCC FREEPROCCACHE to ensure that I was not caching a bad plan, but the behavior remains.
My real problem is that I can live with the 10-second execution time (at least for a good while), but I can't live with the 60+ second execution time. My query will (as hinted above) be produced by Linq2SQL, so it is executed on the database as
exec sp_executesql N'
...
WHERE ([t0].[CookieID] IS NOT NULL) AND ([t0].[ProductID] = @p0)
...
AND ([t2].[ProductID] = @p0)
...
',N'@p0 uniqueidentifier',@p0='1FC66E37-6EAF-4032-B374-E7B60FBD25EA'
which produces the same poor execution time (which I think is doubly strange, since this seems to be using parameterized queries).
I'm not looking for advise on which indexes to create or the like, I'm just trying to understand why the query plan and execution are so dissimilar on three seemingly similar queries.
EDIT: I have uploaded execution plans for the non-parameterized and the parameterized query as well as an execution plan for a parameterized query (as suggested by Heinz) with a different GUID here
Hope it helps you help me :)
If you provide an explicit value, SQL Server can use statistics of this field to make a "better" query plan decision. Unfortunately (as I've experienced myself recently), if the information contained in the statistics is misleading, sometimes SQL Server just makes the wrong choices.
If you want to dig deeper into this issue, I recommend you check what happens if you use other GUIDs: if it uses a different query plan for different concrete GUIDs, that's an indication that statistics data is being used. In that case, you might want to look at sp_updatestats and related commands.
EDIT: Have a look at DBCC SHOW_STATISTICS: the "slow" and the "fast" GUID are probably in different buckets of the histogram. I've had a similar problem, which I solved by adding an INDEX table hint to the SQL, which "guides" SQL Server towards finding the "right" query plan. Basically, I looked at which indexes are used during a "fast" query and hard-coded those into the SQL. This is far from an optimal or elegant solution, but I haven't found a better one yet...
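As a sketch of both steps (the index name is borrowed from the other answer below; adjust to your actual statistics object, and @p0 is assumed to be declared as in the question):

-- Look at the histogram to see which bucket each GUID falls into.
DBCC SHOW_STATISTICS ('dbo.Usage', IX_NonCluster_ProductID_CookieID_With_ServerTime);

-- Hypothetical table hint that pins the index choice for the inner query.
SELECT t0.CookieID
FROM dbo.Usage AS t0 WITH (INDEX (IX_NonCluster_ProductID_CookieID_With_ServerTime))
WHERE t0.CookieID IS NOT NULL
  AND t0.ProductID = @p0
GROUP BY t0.CookieID;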
I'm not looking for advise on which indexes to create or the like, I'm just trying to understand why the query plan and execution are so dissimilar on three seemingly similar queries.
You seem to have two indexes:
IX_NonCluster_Config (ProductID, ServerTime)
IX_NonCluster_ProductID_CookieID_With_ServerTime (ProductID, CookieID) INCLUDE (ServerTime)
The first index does not cover CookieID but is ordered on ServerTime, and hence is more efficient for the less selective ProductIDs (i.e. those of which you have many).
The second index does cover all columns but is not ordered, and hence is more efficient for the more selective ProductIDs (those of which you have few).
On average, your ProductID cardinality is such that SQL Server expects the second method to be efficient, which is what it uses when you use parameterized queries or explicitly provide selective GUIDs.
However, your original GUID is considered less selective, which is why the first method is used.
Unfortunately, the first method requires additional filtering on CookieID which is why it's less efficient in fact.
My guess is that when you take the non-parameterized route, your GUID has to be converted from a varchar to a uniqueidentifier, which may cause an index not to be used, while it will be used when taking the parameterized route.
I've seen this happen with queries that have a smalldatetime in the WHERE clause against a column that uses a datetime.
It's difficult to tell without looking at the execution plans, but if I were going to guess at a reason I'd say that it's a combination of parameter sniffing and poor statistics. In the case where you hard-code the GUID into the query, the query optimiser attempts to optimise the query for that value of the parameter. I believe that the same thing happens with the parameterised/prepared query (this is called parameter sniffing: the execution plan is optimised for the parameters used the first time the prepared statement is executed); however, this definitely doesn't happen when you declare the parameter as a variable and use it in the query.
Like I said, SQL Server attempts to optimise the execution plan for that value, so usually you should see better results. It seems here that the information it is basing its decisions on is incorrect/misleading, and you are better off (for some reason) when it optimises the query for a generic parameter value.
This is mostly guesswork, however; it's impossible to tell really without the execution plan. If you can upload the execution plan somewhere then I'm sure someone will be able to help you with the real reason.
I have a query that has been running every day for a little over 2 years now and has typically taken less than 30 seconds to complete. All of a sudden, yesterday, the query started taking 3+ hours to complete and was using 100% CPU the entire time.
The SQL is:
SELECT
@id,
alpha.A, alpha.B, alpha.C,
beta.X, beta.Y, beta.Z,
alpha.P, alpha.Q
FROM
[DifferentDatabase].dbo.fnGetStuff(@id) beta
INNER JOIN vwSomeData alpha ON beta.id = alpha.id
alpha.id is a BIGINT and beta.id is an INT. dbo.fnGetStuff() is a simple SELECT statement with 2 INNER JOINs on tables in the same DB, using WHERE id = @id. The function returns approximately 11000 results.
The view vwSomeData is a simple SELECT statement with two INNER JOINs that returns about 590000 results.
Both the view and the function will complete in less than 10 seconds when executed by themselves. Selecting the results of the function into a temporary table first and then joining on that makes the query finish in < 10 seconds.
How do I troubleshoot what's going on? I don't see any locks in the activity manager.
Look at the query plan. My guess is that there is a table scan (or more than one) in the execution plan. This will cause huge amounts of I/O for the few records you get in the result.
You could use the SQL Server Profiler tool to monitor what queries are running on SQL Server. It doesn't show the locks, but it can for instance also give you hints on how to improve your query by suggesting indexes.
If you've got a reasonably recent version of SQL Server Management Studio, it has a Database Engine Tuning Advisor as well, under Tools. It takes a trace from the Profiler and makes some, sometimes highly useful, suggestions. Make sure there aren't too many queries; it takes a long time to build advice.
I'm not an expert on it, but have had some luck with it in the past.
Do you need to use a function? Can you rewrite the entire thing as a stored procedure in which you pass in @id as a parameter?
Even if your table has indexes, because you pass @id as a variable in the WHERE clause they may not be used, potentially greatly increasing the amount of time the query takes to run.
The reason the indexes may not be used is that the query optimizer does not know the value of the variable when it selects an access method to perform the query. Because this is a batch, only one pass is made over the Transact-SQL code, preventing the query optimizer from knowing what it needs to know in order to select an access method that uses the indexes.
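A minimal sketch of that rewrite (the procedure name is made up; the body is the query from the question with @id as a real parameter):

CREATE PROCEDURE dbo.usp_GetStuffReport
    @id INT
AS
BEGIN
    SELECT
        @id,
        alpha.A, alpha.B, alpha.C,
        beta.X, beta.Y, beta.Z,
        alpha.P, alpha.Q
    FROM [DifferentDatabase].dbo.fnGetStuff(@id) AS beta
    INNER JOIN vwSomeData AS alpha ON beta.id = alpha.id;
END;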
You might want to consider an INDEX query hint if you cannot re-write the SQL.
It might also be possible, since this just started happening, that the indexes have become fragmented and might need to be rebuilt.
I've had similar problems with joining functions that return large datasets. I had to do what you've already suggested. Put the results in a temp table and join on that.
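For completeness, a sketch of that temp-table approach with the query from the question (column names as posted; @id is assumed to be declared):

-- Materialize the ~11000 rows from the function first...
SELECT beta.*
INTO #stuff
FROM [DifferentDatabase].dbo.fnGetStuff(@id) AS beta;

-- ...then join the temp table to the view.
SELECT @id,
       alpha.A, alpha.B, alpha.C,
       s.X, s.Y, s.Z,
       alpha.P, alpha.Q
FROM #stuff AS s
INNER JOIN vwSomeData AS alpha ON s.id = alpha.id;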
Look at the estimated plan, this will probably shed some light. Typically when query cost gets orders of magnitude more expensive it is because a loop or merge join is being used where a hash join is more appropriate. If you see a loop or merge join in the estimated plan, look at the number of rows it expects to process - is it far smaller than the number of rows you know will actually be in play? You can also specify a hint to use a hash join and see if it performs much better. If so, try updating statistics and see if it goes back to a hash join without a hint.
SELECT
@id,
alpha.A, alpha.B, alpha.C,
beta.X, beta.Y, beta.Z,
alpha.P, alpha.Q
FROM
[DifferentDatabase].dbo.fnGetStuff(#id) beta
INNER HASH JOIN vwSomeData alpha ON beta.id = alpha.id
-- having no idea what type of schema is in place and just trying to throw out ideas:
Like others have said... use Profiler and find the source of pain... but I'm thinking it is the function on the other database. Since that function might be the source of pain, have you thought about a little denormalization or anything on [DifferentDatabase]? I think you'll find a bit more scalability in joining to a flatter, indexed table than to a costly function.
Run this command:
SET SHOWPLAN_ALL ON
Then run your query. It will display the execution plan, look for a "SCAN" on an index or a table. That is most likely what is happening to your query now. If that is the case, try to figure out why it is not using indexes now (refresh statistics, etc)