Stored Procedure is faster the 2nd time - sql-server

I am running an SP on SQL Server 2017.
The first time I executed the SP it took 43 seconds, but the second time it took only 1 second.
How can I make every execution of the SP behave the same as the first run, without the cache or any learning from the previous execution?
I am not asking why; that is already covered in
First run slowness in a sql server stored procedure
I am asking how to make every execution behave as if it were the first one.

If you want to remove cached plans, you can execute DBCC FREEPROCCACHE. Just beware that it wipes the slate clean for all stored procedures on the instance.
Remember that once a plan is compiled, it is kept for future calls until either the SP is altered or a dependent object is modified. So most of the time what you want to test is actually the performance of the already compiled plan, unless you are constantly clearing these caches, restarting the server, or triggering a recompile somehow.
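For example, a minimal sketch of both options (the database-scoped statement assumes SQL Server 2016 or later, which covers the 2017 instance in the question):
DBCC FREEPROCCACHE;        -- removes every cached plan on the instance
GO
-- Narrower alternative: clear only the current database's plan cache.
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
GO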

Run:
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
prior to each execution. This will clear the buffer pool so that the playing field is leveled and each iteration will incur roughly the same IO overhead.
Also see Performance testing with DBCC DROPCLEANBUFFERS for additional considerations for measuring performance with this method.
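If you want a repeatable timing harness around that, a minimal sketch might look like this (dbo.MyProc is a placeholder; run this only on a test server):
CHECKPOINT;                -- flush dirty pages so DROPCLEANBUFFERS can discard everything
DBCC DROPCLEANBUFFERS;     -- empty the buffer pool
DBCC FREEPROCCACHE;        -- optionally also discard cached plans
GO
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
EXEC dbo.MyProc;           -- placeholder for the procedure under test
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
GO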

Related

How to clear SQL Server cache to get the correct execution plan

I had a query that was running slow (2.5 minutes) on SQL Server.
I got the actual execution plan, and there was a suggestion for an index. I created the index and the execution time dropped to under 2 seconds.
Then we had to restart SQL Server.
The query went back to being slow (2.5 minutes), so I looked at the execution plan again. This time there was a suggestion for a different index!
It would appear that the first index suggestion was taking some sort of cached index into account, maybe?
How can I clear the cache (if this is the issue) before looking at the execution plan?
The symptoms suggest parameter sniffing, where the query plan was generated for the initially supplied parameter values but is suboptimal for subsequent queries with different values. You can invalidate the currently cached plan for a specific query by providing the plan handle to DBCC FREEPROCCACHE:
DBCC FREEPROCCACHE(plan_handle);
There are a number of ways to avoid parameter sniffing. If the query is not executed frequently, a recompile query hint will provide the optimal plan for the parameter values supplied. Otherwise, you could specify an optimize for unknown hint or use the Query Store (depending on your SQL Server version) to force a specific plan or have SQL Server automatically identify plan regression and select a plan.
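As a rough illustration of the first two hints (the procedure and table names here are hypothetical, invented purely for the example):
CREATE PROCEDURE dbo.SearchOrders    -- hypothetical procedure
    @CustomerId int
AS
BEGIN
    -- OPTION (RECOMPILE): build a fresh plan for the supplied value on every call.
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Alternative: OPTION (OPTIMIZE FOR UNKNOWN) builds one plan from average
    -- density statistics instead of the first sniffed value.
END;
GO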
Don't clear the cache in a PRODUCTION environment. It will lead to serious performance issues.
If you want to generate a new plan instead of reusing the existing one, you can use the RECOMPILE option when executing the stored procedure to see whether the new index is considered in the new plan.
EXEC dbo.Procedure WITH RECOMPILE;
Or you can mark the procedure for recompilation with the command below; the next time it runs, a newly generated plan will be used.
EXEC sp_recompile 'dbo.Procedure';
If you want to measure the performance improvement repeatedly in a test environment, you can use the following clearing approaches:
DBCC FREEPROCCACHE -- clears the plan cache completely
DBCC DROPCLEANBUFFERS -- removes clean (unchanged) pages that were read from disk into memory
A more elegant approach is to write the dirty pages to disk first and then drop the clean pages:
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DBCC FREEPROCCACHE;
GO

Strange behaviour in stored procedure execution, only sp_recompile helps

I noticed strange behavior in a stored procedure execution in SQL Server. Suddenly it takes longer and longer, with no end in sight. The SP is called from another server through an SSIS package. The SP has no input parameters, so we cannot suspect parameter sniffing here. But the SP declares a table variable, so it is possible that the missing statistics on the table variable cause a sudden change in the execution plan and make the SP run slowly.
But then why does only recompiling the SP help? Every day I have to recompile the SP before it runs, otherwise it shows the same behavior and runs longer and longer (with no end).
My question is: why is sp_recompile required every day to make the SP run quickly?
You can use the WITH RECOMPILE hint, but I would strongly recommend observing how the execution plan changes to find out why it becomes inefficient. Recompiling a stored procedure puts load on the server, and if the execution plan cache is full, every recompilation forces one of the cached plans to be dropped, which can evict good execution plans.
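If the table variable really is the culprit, a minimal sketch of two common workarounds (the names @Work, dbo.SourceTable and dbo.BigTable are hypothetical):
-- Table variables have no statistics, so the optimizer assumes a very low row
-- count when the plan is first compiled.
DECLARE @Work TABLE (Id int PRIMARY KEY, Amount money);

INSERT INTO @Work (Id, Amount)
SELECT Id, Amount FROM dbo.SourceTable;

-- Forcing a statement-level recompile lets the optimizer see the actual row count.
SELECT w.Id, w.Amount
FROM @Work AS w
JOIN dbo.BigTable AS b ON b.Id = w.Id
OPTION (RECOMPILE);

-- Alternative: a #temp table has real statistics and usually avoids the problem.
-- CREATE TABLE #Work (Id int PRIMARY KEY, Amount money);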

How to bypass SQL Server caching

When I am trying to examine the performance of a given query in my application, I'm generally uninterested in the effect my code is having. I want to be able to watch the time taken in SQL Server Management Studio.
Unfortunately, I'm finding that some sort of caching must be going on, because one query that returns 10,000 results from a table with 26 or so columns, many of them large varchars, takes 12 seconds the first time I run it in a while and takes 6 seconds the following times unless I don't re-run it for a few minutes.
Is there any way to instruct it to bypass the cache and pretend like it had never run it before? I'm using SQL Server 10.0.
You can clear the data cache and the procedure (plan) cache with
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
respectively.

SQL Server 2005 Caching

It's my understanding that SQL Server 2005 does some sort of result or index caching. I'm currently profiling complex select statements which take several seconds to several minutes to complete. My problem is that a second run of a query never takes more than a second to run even if I don't alter it. I'm currently using SQL Server Management Studio Express to execute the queries against a SQL Server 2005 server.
My question is: is there any way to avoid or clear the cache that is causing my queries to execute so quickly on a second run?
There are a few different things that could be at play here; the three that come to mind first (in roughly descending order of likelihood) are listed below. If you would like help interpreting the results, follow the instructions and paste the stat outputs into the question:
Your query/batch is taking a long time to compile an execution plan. Execution plans are determined and cached (see this post on serverfault for an overview of how long they are kept, when they are rebuilt, etc.)
To verify this, turn on statistics time output, which will provide you information on how long the engine is taking to generate a query plan. For the query/batch in question:
DBCC FREEPROCCACHE
SET STATISTICS TIME ON
Execute the batch, capture the stats output
Execute the batch again, capture the stats output
Compare the 2 stat outputs, paying particular attention to the parse/compile time differences between the 2 executions.
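A sketch of that measurement batch, with dbo.BigTable standing in for the slow query (run this on a test system, since it empties the plan cache):
DBCC FREEPROCCACHE;                    -- start with an empty plan cache
SET STATISTICS TIME ON;
GO
SELECT COUNT(*) FROM dbo.BigTable;     -- first run: "parse and compile time" shows the plan-build cost
GO
SELECT COUNT(*) FROM dbo.BigTable;     -- second run: plan is cached, compile time should drop to ~0 ms
GO
SET STATISTICS TIME OFF;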
If this is the problem, you can take a couple of approaches to resolving it, including specifying a plan guide, forcing a static plan with USE PLAN, or possibly other options like a scheduled job that simply compiles the plan every few minutes (not as good an option on SQL 2005 as the others).
Your query/batch is touching a lot of data - on the first execution the data may not be in the buffer pool (basically the cached pages of data the server needs) and the query is performing physical IO operations as opposed to logical IO operations (i.e. reads from disk vs. reads from cache).
To verify this, turn on statistics io output, which will provide you information on the types of IOs and how many of those the engine is performing for the batch. For the query/batch in question:
DBCC DROPCLEANBUFFERS
SET STATISTICS IO ON
Execute the batch, capture the stats output
Execute the batch again, capture the stats output
Compare the 2 stat outputs, paying particular attention to the physical/read-ahead and logical IO outputs between the 2 executions.
To resolve this, you've basically got only one option - optimize the query in question so it performs fewer IO operations. You could consider creating a scheduled job that runs the query every so often to keep the data in the buffer pool, but this wouldn't be as good an option.
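The corresponding sketch for the IO measurement, again with dbo.BigTable as a hypothetical stand-in for the query under test:
DBCC DROPCLEANBUFFERS;                 -- empty the buffer pool first (test systems only)
SET STATISTICS IO ON;
GO
SELECT COUNT(*) FROM dbo.BigTable;     -- first run: expect high physical/read-ahead reads
GO
SELECT COUNT(*) FROM dbo.BigTable;     -- second run: pages are cached, reads should be mostly logical
GO
SET STATISTICS IO OFF;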
Your query/batch is getting a poor execution plan and/or a poor plan choice for different variable values - is this a batch/query that uses a parameterized statement (i.e. are you using variables rather than static values in the where/join clauses)? If so, are you seeing the difference in execution times for the same values or for different values? If for the same values, the answer is likely #1 or #2; if for different values, this is potentially your problem. If you think this is the issue after researching #1 and #2, repost with the .sqlplan, the TSQL, and the different parameter values you are using.
I've found the only reliable metrics for performance tuning come from SQL Server's Profiler application. When looking at CPU Time, and Reads in particular, you become much more sheltered from 'other influences'.
For example, the OS being busy, or multiple users being active will reduce your share of CPU and so increase the time to execute. And you may or may not get parallelism over multiple CPUs. But either way, the total CPU Time (as opposed to execution time) will stay approximately the same.
Do this:
CHECKPOINT
DBCC DROPCLEANBUFFERS

Any downside to "WITH RECOMPILE" for monthly SQL Server stored proc processes?

I think the question says it all. I have several monthly processes in stored procedures which take anywhere from a minute to an hour. If I declare them WITH RECOMPILE, an execution plan will be generated each time.
If the underlying indexes or statistics or views are changed by the DBA, I don't want anyone to have to go in and force a recompile of the SPs with an ALTER or whatever.
Is there any downside to this?
Under the circumstances, it would be completely harmless, and probably a good idea.
As I understand it, an SP should be re-compiled if needed automatically. So your concern about underlying changes doesn't really matter.
However, the server tries to cache compiled SP plans. Using WITH RECOMPILE will free the memory that would have been used to cache the compiled procedures (at least until the next time the cache is cleared). Since they're only run monthly this seems like a good idea.
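As a rough sketch of what that looks like (dbo.MonthlyRollup and dbo.Orders are hypothetical names used only for illustration):
CREATE PROCEDURE dbo.MonthlyRollup     -- hypothetical monthly job
WITH RECOMPILE                         -- never keep a cached plan for this procedure
AS
BEGIN
    SET NOCOUNT ON;
    SELECT YEAR(OrderDate) AS OrderYear, SUM(Amount) AS Total
    FROM dbo.Orders
    GROUP BY YEAR(OrderDate);
END;
GO
-- Or leave the definition alone and recompile only a particular call:
-- EXEC dbo.MonthlyRollup WITH RECOMPILE;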
Also, you might want to look at this article for other reason to use that directive:
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5662581.html
If each stored procedure is only run once per month, it is highly unlikely that the compiled procedure will still be in the procedure cache. Effectively it will be recompiling anyway.
Even if you run the same stored procedure 100 times on your reporting day, it will only take 0-2 seconds to compile each time (depending on the complexity of the stored procedure), so it's not a massive overhead. I'd feel comfortable setting WITH RECOMPILE on those stored procedures.

Resources