When are execution plans in SQL Server generated? - sql-server

For a stored procedure in SQL Server, when are the actual query plans generated? When the SQL is run for the first time, or when the stored procedure is compiled? Any idea how expensive the generation of query plans is in comparison to Oracle?

When a query is run, SQL Server will check to see if an execution plan already exists for that query in the execution plan cache. If it finds one, it can reuse that execution plan. If it doesn't find one in the cache, it then generates a plan, puts it in the cache ready for subsequent calls to reuse, and then executes the query. So it does this at the time the query is executed.
How long a plan stays in the cache for is down to a number of factors, including:
- how often that plan is used
- how much "value" that plan offers
- memory pressure on the server
So a given query could have an execution plan generated multiple times over a given period, if its plan does not manage to stay in the cache. Also, when SQL Server restarts, the cache is cleared.
There's a good MSDN article on Execution Plan Caching and Reuse
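If you want to check whether a plan is already in the cache for a particular query (and how often it has been reused), you can look at the plan cache DMVs directly. A minimal sketch; the LIKE filter is just a hypothetical placeholder for your own query text:
SELECT TOP (20)
       cp.usecounts,                         -- how many times the cached plan has been reused
       cp.objtype,                           -- Adhoc, Prepared, Proc, ...
       st.text AS query_text,
       qp.query_plan                         -- XML showplan, clickable in SSMS
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%YourQueryText%'         -- hypothetical filter
ORDER BY cp.usecounts DESC;
A high usecounts value means the plan generated on the first execution is being reused rather than regenerated on every call.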

Related

How to clear SQL Server cache to get the correct execution plan

I had a query that was running slow (2.5 mins) on SQL Server.
I got the actual execution plan, and there was a suggestion for an index. I created the index and now the execution time is < 2 seconds.
Then we had to restart SQL Server.
The query went back to being slow (2.5 mins). Again, I looked at the execution plan. This time there was a suggestion for a different index!
It would appear that the first execution plan's index suggestion was taking into account some sort of cached index, maybe?
How can I clear the cache (if this is the issue) before looking at the execution plan?
The symptoms suggest parameter sniffing, where the query plan was generated for the initially supplied parameter values but is suboptimal for subsequent executions with different values. You can invalidate the currently cached plan for a specific query by providing the plan handle to DBCC FREEPROCCACHE:
DBCC FREEPROCCACHE(plan_handle);
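To find the plan handle to pass in, you can search the plan cache by query text. A sketch; the LIKE filter is a hypothetical placeholder:
-- Locate the plan_handle of the problem query, then pass it to DBCC FREEPROCCACHE as above
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourSlowQuery%';        -- hypothetical filter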
There are a number of ways to avoid parameter sniffing. If the query is not executed frequently, a recompile query hint will provide the optimal plan for the parameter values supplied. Otherwise, you could specify an optimize for unknown hint or use the Query Store (depending on your SQL Server version) to force a specific plan or have SQL Server automatically identify plan regression and select a plan.
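As a rough sketch of those hints (dbo.GetOrders, dbo.Orders, and CustomerID are hypothetical names):
CREATE PROCEDURE dbo.GetOrders @CustomerID int
AS
BEGIN
    -- Recompile on every execution: optimal plan for the supplied value, at a compile cost per call
    SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID OPTION (RECOMPILE);

    -- Alternatively, build the plan from average density statistics instead of the sniffed value:
    -- SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID OPTION (OPTIMIZE FOR UNKNOWN);
END;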
Don't clear the cache in a PRODUCTION environment. It will lead to serious performance issues.
If you want to generate a new plan instead of reusing the existing one, you can go for the RECOMPILE option as part of the stored procedure execution to see whether the new index is considered in the new plan.
EXEC dbo.Procedure WITH RECOMPILE;
Or you can mark the procedure for recompilation by using the command below. The next execution will use a newly generated plan.
EXEC sp_recompile N'dbo.Procedure';
If you want to measure the performance improvement repeatedly in a test environment, you can go with the clearing approaches below:
DBCC FREEPROCCACHE;     -- clears the plan cache completely
DBCC DROPCLEANBUFFERS;  -- removes clean (unmodified) data pages that were brought from disk into memory
A more thorough approach is to write the dirty pages to disk first and then clear the clean pages:
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DBCC FREEPROCCACHE;
GO

Does the execution plan depend on data?

I ran my SQL statement twice, with 'Include Actual Execution Plan' turned on. However, I got two different execution plans for the same query. The only difference is that I changed the ID.
First, I ran the SQL with an ID that relates to only 5 records.
Second, I ran the SQL with an ID that relates to 5000+ records.
Does the execution plan change depending on the data?
I'm using SQL Server 2008 R2
Does the execution plan change depending on the data?
Strictly speaking, No. The execution plan will change if the query changes or the statistics (which do depend on the data) change.
If the data changes but the stats do not, then your execution plan will stay the same.
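You can see how the statistics drive that choice by looking at the histogram the optimizer uses for its row estimates. A sketch, assuming a hypothetical dbo.Orders table with an index IX_Orders_ID on the ID column:
-- Show the histogram used to estimate how many rows match a given ID value
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_ID') WITH HISTOGRAM;
An ID the histogram estimates at ~5 rows will typically get a seek-plus-lookup plan, while one estimated at 5000+ rows may tip over into a scan, which is consistent with the two different plans you saw.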

Clear SQL Azure execution plan / query cache

I have a few "inefficient" queries that I am trying to debug on Azure SQL (v12). The problem I have is that after the query executes for the first time (albeit, many seconds) Azure appears to cache the query / execution plan. I have done some research and several people have suggested adding and removing a column will clear the cache but this doesn't seem to work. If I leave the server alone for a few hours / overnight and re-run the query it takes its usual time to execute but once again the cache is in place - this makes it very hard to optimise my query. Does anyone know how to force Azure SQL to not cache my queries / execution plans?
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE is designed to help with this problem.
https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql?view=sql-server-2017
This is closest to the DBCC FREEPROCCACHE you have in SQL Server but is scoped to a database instead of the server instance. This does not prevent caching of query plans - it just invalidates the current cache entries.
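For example (run in the context of the Azure SQL database you are tuning):
-- Invalidates all plans currently cached for this database only
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;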
Please note that the Query Store is there to help you in SQL Azure (on by default). It stores a history of plan choices and per-plan performance. So, if a better-performing prior plan is available in your application's history, you can force it using SSMS if you'd prefer to have the query optimizer pick that plan each time your query compiles. One common reason for what you are seeing is parameter sensitivity in the plan choice: the optimizer uses the passed parameter value to generate the query plan, assuming it represents a common pattern for that query. If that value is actually not close to a common value (in terms of how frequently it occurs in the table), you can end up compiling and caching a plan that is not the best choice on average for your application.
Query store has an overview here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
Note that SQL Azure also has an automated mechanism to try forcing prior plans if it notices a performance regression. It is somewhat conservative, however, so it may not kick in for every single regression until it sees an obvious pattern over time. So, while you can force things in SSMS, you can also potentially just wait (assuming this is the issue you were seeing).
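If you'd rather force a plan with T-SQL than through SSMS, the Query Store catalog views plus sp_query_store_force_plan can do it. A sketch; the LIKE filter and the id values are hypothetical placeholders you would look up first:
-- Find the query and the plans the Query Store has recorded for it
SELECT q.query_id, p.plan_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
WHERE qt.query_sql_text LIKE '%YourQueryText%';   -- hypothetical filter

-- Force the plan you picked (placeholder ids)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;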

query hangs, but ok if I update statistics

Got an issue where I have a complicated SQL query that occasionally hangs and doesn't execute on MS SQL Server. However, when I run UPDATE STATISTICS on the tables involved in the query, the query executes normally.
Any ideas or pointers on the cause?
Thanks!
SQL Server creates an "execution plan" that uses the statistics info to determine an optimal order to filter the data/reduce access to the database tables.
This execution plan is stored in the plan cache and is reused as long as the database stays online, the statistics are not rebuilt, and the query is not modified.
When you rebuild the indexes, the statistics are updated as well.
As a result, the stored execution plan for your query is no longer considered optimal and will not be used any more.
I expect SQL Server also releases unused locks and transactions on the table before rebuilding the index, but that is undocumented behaviour.
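If you want to refresh the statistics yourself rather than wait for an index rebuild, a sketch (dbo.Orders is a hypothetical table name):
-- Refresh all statistics on one table with a full scan (more accurate, more IO)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh statistics for every table in the database using default sampling
EXEC sp_updatestats;
Updating statistics typically triggers a recompile of cached plans that reference those tables on their next use, which matches the behaviour you are seeing.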

SQL DMV Queries & Cached Plans

My understanding is that some of the DMVs in SQL Server depend on query plans being cached. My questions are these: Are all query plans cached? If not, when is a query plan not cached? For ones that are cached, how long do they stay in the cache?
Thanks very much
Some of the SQL Server DMVs that capture tokens relating directly to the query plan cache are at the mercy of the memory pressure placed on the plan cache (due to ad hoc queries, other memory usage and high activity, or recompilation). The query plan cache is subject to plan aging (e.g. a plan with a cost of 10 that has been referenced 5 times has an "age" value of 50):
If the following criteria are met, the plan is removed from memory:
- more memory is required by the system
- the "age" of the plan has reached zero
- the plan isn't currently being referenced by an existing connection
Ref.
Those DMVs not directly relating to the query plan cache are flushed under 'general' memory pressure (cached data pages) or if the SQL Server service is restarted.
The factors affecting query plan caching have changed slightly since SQL Server 2000. The up-to-date reference for SQL Server 2008 is here: Plan Caching in SQL Server 2008
I just want to add some geek minutiae: the query plan cache leverages the general caching mechanism of SQL Server. These caches use the clock algorithm for eviction; see Q and A: Clock Hands - what are they for. For query plan caches, the cost of an entry takes into consideration the time, IO, and memory needed to create the cache entry.
For ones that are cached, how long do they stay in the cache?
A valid object stays in the cache until the clock hand decrements its cost to 0. See sys.dm_os_memory_cache_clock_hands. There is no absolute time answer to this question: the clock hand could decrement an entry to 0 in a second, in an hour, in a week, or in a year. It all depends on the initial cost of the entry (query/schema complexity), on how frequently the plan is reused, and on the clock hand's speed (memory pressure).
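A sketch of how to look at the clock hands and the per-entry costs yourself:
-- How actively the clock hands are sweeping the SQL plan cache (ad hoc / prepared plans)
SELECT name, type, clock_hand, clock_status, rounds_count, removed_all_rounds_count
FROM sys.dm_os_memory_cache_clock_hands
WHERE type = 'CACHESTORE_SQLCP';

-- Current vs original cost of individual plan cache entries (current cost 0 means eligible for eviction)
SELECT TOP (20) ce.type, ce.original_cost, ce.current_cost, ce.disk_ios_count
FROM sys.dm_os_memory_cache_entries AS ce
WHERE ce.type = 'CACHESTORE_SQLCP'
ORDER BY ce.current_cost DESC;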
A cached object may also be invalidated. The various reasons why a query plan gets invalidated are explained in great detail in the white paper linked by Mitch: Plan Caching in SQL Server 2008.
