DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS equivalents for a specific scope - sql-server

I would like to test options for improving my queries.
Sometimes I need to do the tests on a production server, so I can't use DBCC FREEPROCCACHE
and DBCC DROPCLEANBUFFERS to clear the entire server cache.
Could you please share a way to do a kind of "cache clean" only for my connection/scope?
Thanks.

DBCC FREEPROCCACHE (plan_handle | sql_handle | pool_name)
By passing a plan_handle, sql_handle, or pool name, you can clear the cached plan of a particular stored procedure or query.
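A minimal sketch of finding and freeing one plan; the text filter and the plan handle shown below are placeholders, not real values:
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%FROM dbo.Orders%';   -- placeholder filter; match it to your query
-- Pass the handle returned above to free only that plan (the value here is a placeholder)
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);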

Buffers are not held in a user-specific table or stored per user. There is no way for SQL Server to selectively clear them, since it does not track which items are being held for which queries; doing so would add unneeded overhead in almost every case (except the one you are attempting now, sorry).
There are, however, some ways to work around the issue, even if it can't be avoided entirely:
You can use OPTION (RECOMPILE) on the statement (or WITH RECOMPILE on a procedure) to force a new plan to be compiled, but that will not clear the buffer cache (see the sketch below).
You can run each query twice, to see how slowly/quickly it runs once the data is buffered.
You can repeatedly alternate the two methods, to see if they get faster, and what the speed difference converges towards.
You can run them a day or two apart, or after a server reset. (This is if the production server gets reset occasionally anyway.)
This article has additional ideas for testing in such a situation.
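To make the first suggestion concrete, here is a minimal sketch of the two recompile options; the table, columns and procedure name are placeholders:
DECLARE @CustomerID int = 42;
-- Statement-level hint: compile a fresh plan for this statement on every execution
SELECT OrderID, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);
-- Procedure-level: recompile the whole procedure for this one execution only
EXEC dbo.usp_GetOrders @CustomerID = 42 WITH RECOMPILE;
Neither form clears the buffer cache; the data pages stay in memory and only the plan is rebuilt.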

If downtime is acceptable, you can take the database offline and bring it back online immediately afterwards.
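If you go that route, a minimal sketch would look like this (the database name is a placeholder); taking the database offline evicts its pages from the buffer pool:
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDatabase SET ONLINE;
Note that WITH ROLLBACK IMMEDIATE kills in-flight transactions, so this really is a brief outage.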

Related

How to clear the SQL Server cache to get the correct execution plan

I had a query that was running slowly (2.5 minutes) on SQL Server.
I got the actual execution plan, and there was a suggestion for an index. I created the index and execution time dropped to under 2 seconds.
Then we had to restart SQL Server.
The query went back to being slow (2.5 minutes), so again I looked at the execution plan. This time there was a suggestion for a different index!
It would appear that the first execution plan's index suggestion was taking some sort of cached index into account, maybe?
How can I clear the cache (if that is the issue) before looking at the execution plan?
The symptoms suggest parameter sniffing, where the query plan was generated for the initially supplied parameter values but is suboptimal for subsequent executions with different values. You can invalidate the currently cached plan for a specific query by providing the plan handle to DBCC FREEPROCCACHE:
DBCC FREEPROCCACHE (plan_handle);
There are a number of ways to avoid parameter sniffing. If the query is not executed frequently, a RECOMPILE query hint will provide the optimal plan for the supplied parameter values. Otherwise, you could specify an OPTIMIZE FOR UNKNOWN hint, or use the Query Store (depending on your SQL Server version) to force a specific plan or have SQL Server automatically identify plan regression and select a plan.
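A rough sketch of those last options; the procedure, table and Query Store IDs below are placeholders:
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer   -- placeholder procedure
    @CustomerID int
AS
BEGIN
    SELECT OrderID, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR UNKNOWN);   -- plan built for "average" statistics, not the sniffed value
END;
GO
-- Or, with Query Store enabled (SQL Server 2016 and later), force a known-good plan
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 73;   -- placeholder IDs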
Don't clear the cache in a PRODUCTION environment. It will lead to serious performance issues.
If you want to generate a new plan instead of reusing the existing one, you can use the RECOMPILE option as part of the stored procedure execution to see whether the new index is considered in the new plan.
EXEC dbo.Procedure WITH RECOMPILE;
Or you can mark the procedure for recompilation with the command below; the next execution will use a newly generated plan.
EXEC sp_recompile 'dbo.Procedure';
If you want to measure performance improvements repeatedly in a test environment, you can use the clearing approaches below:
DBCC FREEPROCCACHE     -- clears the plan cache completely
DBCC DROPCLEANBUFFERS  -- removes the unchanged (clean) data pages that were brought from disk into memory
A more elegant approach is to write the dirty pages to disk first and then drop the unchanged data:
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DBCC FREEPROCCACHE;
GO

SQL query performance and dropcleanbuffers

There is a "best practice" that you have to run
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
before doing performance analysis on a SQL query.
Yet, take the latter one, DROPCLEANBUFFERS, for example:
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache
without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to
produce a cold buffer cache. This forces all dirty pages for the
current database to be written to disk and cleans the buffers. After
you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all
buffers from the buffer pool.
I guess this means that you will test your query as if it were the first query ever run on the server, so the actual "real-life" impact of the query will be lower.
Is it really advisable to run the three commands to measure the query cost, or does it give you rather empirical results that bear no close relation to the actual query time in a live environment?
I disagree that it is best practice, and I very rarely use it.
A query that I tune should be a popular, often-run one; this gives me the most bang for my buck. It should rarely be run "cold" for either plan or data.
I'm testing the query execution, not the disk-read system or the Query Optimiser's compilation.
This was asked on DBA.SE a while ago. See these please
https://dba.stackexchange.com/a/10820/630
https://dba.stackexchange.com/a/7870/630
Is it really advisable to run the three commands to measure the query cost, or does it give you rather empirical results that bear no close relation to the actual query time in a live environment?
It depends.
If you don't run DBCC DROPCLEANBUFFERS then there is a chance that you will end up with some odd results unless you are very careful about how you do your performance analysis. For example, generally speaking the second time you run a query it will be quicker because the required pages are probably cached in memory. Running DBCC DROPCLEANBUFFERS helps here because it ensures a consistent starting point for your testing and ensures that your query is not artificially fast just because it skips the expensive disk-access portions.
Like you say, however, in live environments it could be that this data is always cached, and so your test is not representative of production conditions. It depends on whether you are analysing the performance on the assumption that the data is frequently accessed and so will generally be cached, or infrequently accessed and so disk access is likely to be involved.
The short answer is that running those three statements can help ensure that you get consistent results while performance testing; however, you shouldn't necessarily always run them before testing. Instead, try to understand what each one does and what impact it will have on your query compared to a production environment.
As an aside, never run any of those three statements on a production server unless you know exactly what you are doing!
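One way to keep the measurements honest without clearing anything server-wide is to look at the session's I/O and timing statistics; a minimal sketch, with a placeholder query:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20240101';   -- placeholder query
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
On a cold cache the output shows physical reads; on a warm cache the same query shows mostly logical reads, which makes the difference between the two scenarios visible without touching other sessions.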
I agree with what @gbn states in his answer, and I don't think I've ever used the three commands for anything other than demonstrating a difference between possible approaches.
In addition it would be ill-advised in most cases to run these three DBCCs on a production environment just for testing. And performance tuning queries in a test environment, with test data and test load, will often lead you to draw the wrong conclusions regarding your query, anyway.
Usually, when I tune a query, I use the profiler to get actual execution stats from live, I use SSMS to get execution plans from live and I do a few test runs (on test data) to see what differs. For the more tricky problems, I also use the Windows Performance Monitor - and always in a situation that is as close to the real one as possible. Running DBCC would just remove the tuning effort from the real deal.

Prevent Caching in SQL Server

Having looked around the net using Uncle Google, I cannot find an answer to this question:
What is the best way to monitor the performance and responsiveness of production servers running IIS and MS SQL Server 2005?
I'm currently using Pingdom and would like it to point to a URL which basically mimics a 'real world query', but for obvious reasons I do not want the query to run from cache. The URL will be called every 5 minutes.
I cannot clear out the cache, buffers, etc., since this would negatively impact the production server. I have tried using a randomly generated number within the SELECT statement in order to generate unique queries, but the cached query is still used.
Is there any way to simulate MySQL's NO_CACHE?
Regards
To clear the SQL buffer and plan cache:
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO
A little info about these commands from MSDN:
Use DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. (source)
Use DBCC FREEPROCCACHE to clear the plan cache carefully. Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. (source)
SQL Server does not have a results cache like MySQL or Oracle, so I am a bit confused about your question. If you want the server to recompile the plan cache for a stored procedure, you can execute it WITH RECOMPILE. You can drop your buffer cache, but that would affect all queries as you know.
At my company, we test availability and performance separately. I would suggest you use this query just to make sure your system is working together from front end to database, then write other tests that check the individual components to judge performance. SQL Server comes with an amazing number of ways to check whether you are experiencing bottlenecks and where they are. I use PerfMon and DMVs extensively. Using PerfMon, I check CPU and page life expectancy, as well as how long my disk queue is. Using DMVs, I can find out if my queries are taking too long (sys.dm_exec_query_stats) or if wait times are long (sys.dm_os_wait_stats).
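For example, a commonly used sketch of a DMV query for the slowest statements by average elapsed time (aggregated since each plan entered the cache):
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time DESC;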
The two biggest bottlenecks with IIS tend to be CPU and memory, and IIS comes with its own suite of PerfMon objects to query, but I am not as familiar with those.

Query times out when executed from web, but super-fast when executed from SSMS

I'm trying to debug the source of a SQL timeout in a web application that I maintain. I have the source code of the C# code behind, so I know exactly what code is running. I have debugged the application right down to the line that executes the SQL code that times out, and I watch the query running in SQL profiler.
When this query executes from the web, it times out after 30 seconds. However, when I cut/paste the query exactly as presented in Profiler, put it into SSMS and run it, it returns almost instantly. I have traced the problem to ARITHABORT being set to OFF in the connection that the web is using (that is, if I turn ARITHABORT OFF in the SSMS session, it runs for a long time, and if I turn it back ON then it runs very quickly). However, reading the description of ARITHABORT, it doesn't seem to apply... I'm only doing a simple SELECT, and there is NO arithmetic being performed at all... just a single INNER JOIN with a WHERE condition:
Why would ARITHABORT OFF be causing this behavior in this context? Is there any way I can alter the ARITHABORT setting for that connection from SSMS? I'm using SQL Server 2008.
So your C# code is sending an ad hoc SQL query to SQL Server, using what method? Have you considered using a stored procedure? That would probably ensure the same performance (at least in the engine) regardless of who called it.
Why? The ARITHABORT setting is one of the things the optimizer looks at when it is determining how to execute your query (more specifically, for plan matching). It is possible that the plan in cache has the same setting as SSMS, so it uses the cached plan, but with the opposite setting your C# code is forcing a recompile (or perhaps you are hitting a really BAD plan in the cache), which can certainly hurt performance in a lot of cases.
If you are already calling a stored procedure (you didn't post your query, though I think you meant to), you can try adding OPTION (RECOMPILE) to the offending query (or queries) in the stored procedure. This will mean those statements will always recompile, but it could prevent the use of the bad plan you seem to be hitting. Another option is to make sure that when the stored procedure is compiled, the batch is executed with SET ARITHABORT ON.
Finally, you seem to be asking how you can change the ARITHABORT setting in SSMS. I think what you meant to ask is how you can force the ARITHABORT setting in your code. If you decide to continue sending ad hoc SQL from your C# app, then of course you can send a command as text that has multiple statements separated by semi-colons, e.g.:
SET ARITHABORT ON; SELECT ...
For more info on why this issue occurs, see Erland Sommarskog's great article:
Slow in the Application, Fast in SSMS? Understanding Performance Mysteries
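If you want to confirm the plan-matching explanation above, here is a small sketch for inspecting the SET options recorded against each cached plan (the text filter is a placeholder):
SELECT cp.plan_handle, pa.attribute, pa.value
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%YourQueryText%';   -- placeholder filter
Different set_options values for the same query text mean the application and SSMS are using separate cache entries, which is consistent with the ARITHABORT behavior described above.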
This answer includes a way to resolve this issue:
By running the following commands as an administrator on the database, all queries run as expected regardless of the ARITHABORT setting.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Update
It seems that most people end up having this problem occur very rarely, and the above technique is a decent one-time fix. But if a specific query exhibits this problem more than once, a longer-term solution would be to use query hints like OPTIMIZE FOR and OPTION (RECOMPILE), as described in this article.
Update 2
SQL Server has had some improvements made to its query execution plan algorithms, and I find problems like this are increasingly rare on newer versions. If you are experiencing this problem, you might want to check the Compatibility Level setting on the database that you're executing against (not necessarily the one you're querying, but rather the default database, or "InitialCatalog", for your connection). If you are stuck on an old compatibility level, you'll be using the old query execution plan generation techniques, which have a much higher chance of producing bad plans.
I've had this problem many times before. If you have a stored procedure with the same problem, dropping and recreating the stored proc will resolve the issue.
It's called parameter sniffing.
You should always localize the parameters in the stored proc to avoid this issue in the future.
I understand this might not be what the original poster wants but might help someone with the same issue.
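For anyone unfamiliar with "localizing" parameters, here is a minimal sketch (the procedure, table and columns are placeholders): copying each parameter into a local variable prevents the optimizer from sniffing the caller's value when the plan is compiled.
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerID int
AS
BEGIN
    DECLARE @LocalCustomerID int = @CustomerID;   -- copy the parameter into a local variable

    SELECT OrderID, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;          -- filter on the local variable, not the parameter
END;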
If you are using Entity Framework, be aware that query parameters for string values are sent to the database as nvarchar by default. If the database column being compared is typed varchar then, depending on your collation, the query execution plan may require an "IMPLICIT CONVERSION" step that forces a full scan. I was able to confirm this by looking at the expensive queries option in database monitoring, which displays the execution plan.
Finally, there is an explanation of this behavior in this article:
https://www.sqlskills.com/blogs/jonathan/implicit-conversions-that-cause-index-scans/
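A rough T-SQL illustration of the mismatch (the table, column and exact plan behavior are assumptions; with some collations the conversion forces a scan, with others a seek may still be possible):
DECLARE @Email nvarchar(100) = N'user@example.com';
-- Comparing a varchar column to an nvarchar parameter can add an implicit conversion and a scan
SELECT CustomerID FROM dbo.Customers WHERE Email = @Email;
DECLARE @EmailV varchar(100) = 'user@example.com';
-- Matching the parameter type to the column type allows a straightforward index seek
SELECT CustomerID FROM dbo.Customers WHERE Email = @EmailV;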
Just using ARITHABORT won't solve the problem, especially if you use parameterised stored procedures, because parameterised stored procedures can be subject to "parameter sniffing", which reuses a cached query plan.
So, before jumping to conclusions, please check the link below:
the-elephant-and-the-mouse-or-parameter-sniffing-in-sql-server
I had the same problem, and it was fixed by executing the procedure WITH RECOMPILE. You can also look into parameter sniffing. My issue was related to the SQL cache.
If you can change your code to fix the parameter sniffing, the OPTIMIZE FOR UNKNOWN hint is your best option. If you cannot change your code, the best option is EXEC sp_recompile 'name of proc', which will force only that one stored proc to get a new execution plan. Dropping and recreating a proc would have a similar effect but could cause errors if someone tries to execute the proc while it is dropped. DBCC FREEPROCCACHE drops all your cached plans, which can wreak havoc on your system, up to and including causing lots of timeouts in a heavy-transaction production environment. Setting ARITHABORT is not a solution to the problem, but it is a useful tool for discovering whether parameter sniffing is the issue.
I had the same problem: calling the SP from SSMS took 2 seconds, while from the web app (ASP.NET) it took about 3 minutes.
I tried all the suggested solutions (sp_recompile, DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS), but nothing fixed my problem. When I addressed the parameter sniffing, though, it did the trick and worked just fine.

How do you fix queries that only run slow until they're cached

I have some queries that are causing timeouts in our live environment. (>30 seconds)
If I run Profiler and grab the exact SQL being run and run it from Management Studio, then it takes a long time to run the first time and then drops to a few hundred milliseconds on each run after that.
This is obviously SQL Server caching the data and getting it all into memory.
I'm sure there are optimisations that can be made to the SQL that will make it run faster.
My question is, how can I "fix" these queries when, the second time I run them, the data has already been cached and they run fast?
May I suggest that you inspect the execution plans for the queries that are responsible for your poor performance.
You need to identify, within the execution plan, which steps have the highest cost and why. It could be that your queries are performing a table scan, or that an inappropriate index is being used for example.
There is a very detailed, free ebook available from the RedGate website that concentrates specifically on understanding the contents of execution plans.
https://www.red-gate.com/Dynamic/Downloads/DownloadForm.aspx?download=ebook1
You may find that there is a particular execution plan that you would like to be used for your query. You can force which execution plan is used for a query in SQL Server using query hints. This is quite an advanced concept however and should be used with discretion. See the following Microsoft White Paper for more details.
http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx
I would also not recommend that you clear the procedure cache in your production environment, as this will be detrimental to the performance of all other queries on the platform that are not currently experiencing performance issues.
If you are executing a stored procedure, for example, you can ensure that a new execution plan is calculated for each execution of the procedure by using the WITH RECOMPILE option.
For overall performance tuning information, there are some excellent resources over at Brent Ozar’s blog.
http://www.brentozar.com/sql-server-performance-tuning/
Hope this helps. Cheers.
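As a small illustration of the query-hint idea mentioned above (the table and index names are placeholders, and hints like these should be used sparingly):
-- Steer the optimizer toward a specific index
SELECT OrderID, TotalDue
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
WHERE CustomerID = 42;
-- Or pin an entire plan with USE PLAN, supplying plan XML captured from a good run (elided here)
-- SELECT ... OPTION (USE PLAN N'<ShowPlanXML ...>');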
According to http://morten.lyhr.dk/2007/10/how-to-clear-sql-server-query-cache.html, you can run the following to clear the cache:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
EDIT: I checked with the SQL Server documentation I have and this is at least true for SQL Server 2000.
You can use
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
But only use this in your development environment whilst tuning the queries for deployment to a live server.
I think people are running off in the wrong direction. If I understand correctly, you want the performance to be good all the time? Aren't the queries fast on the second (and subsequent) executions and slow only the first time?
The DBCC commands above clear out the cache, causing WORSE performance.
What you want, I think, is to prime the pump and cache the data. You can do this with some startup procedures that execute the queries and load data into memory.
Memory is a finite resource, so you likely can't load all of the data into memory, but you can find a balance. Brent has some good references above to help you learn what you can do here.
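A minimal sketch of such a warm-up procedure (all names are placeholders); it simply touches the hot tables and procedures once so their pages and plans are already in memory after a restart:
CREATE PROCEDURE dbo.usp_WarmCache
AS
BEGIN
    SELECT COUNT_BIG(*) FROM dbo.Orders;        -- reads the table's pages into the buffer pool
    SELECT COUNT_BIG(*) FROM dbo.OrderLines;
    EXEC dbo.usp_GetOrders @CustomerID = 42;    -- compiles and caches the hot procedure's plan
END;
If the instance restarts regularly, a procedure like this (created in master) can also be registered to run at startup with sp_procoption.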
Query optimisation is a large subject; there is no single answer to your question. The clues as to what to do are all in the query plan, which should be the same regardless of whether the results are cached or not.
Look for the usual things such as table scans, indexes not being used when you expect them to be, and so on. Ultimately you may have to review your data model and perhaps implement a denormalisation strategy.
From MSDN:
"Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server."
