There is a "best practice" that you have to run
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
before doing performance analysis on a SQL query.
Yet consider, for example, the last of them, DBCC DROPCLEANBUFFERS:
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache
without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to
produce a cold buffer cache. This forces all dirty pages for the
current database to be written to disk and cleans the buffers. After
you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all
buffers from the buffer pool.
I guess this means that you will be testing your query as if it were the very first query run on the server, so its actual "real-life" impact will likely be lower than what you measure.
Is it really advisable to run the three commands to determine the query cost, or does it give you rather artificial results that bear little relation to actual query times in a live environment?
I disagree that it is best practice, and I very rarely use it.
A query that I tune should be a popular, often-run one; that gives me the most bang for my buck. It should rarely run "cold" for either plan or data.
I'm testing the query execution, not the disk subsystem or the Query Optimiser's compilation.
This was asked on DBA.SE a while ago; please see these:
https://dba.stackexchange.com/a/10820/630
https://dba.stackexchange.com/a/7870/630
Is it really advisable to run the three commands to determine the query cost, or does it give you rather artificial results that bear little relation to actual query times in a live environment?
It depends.
If you don't run DBCC DROPCLEANBUFFERS, there is a chance that you will end up with some odd results unless you are very careful about how you do your performance analysis. For example, generally speaking, the second time you run a query it will be quicker because the required pages are probably cached in memory. Running DBCC DROPCLEANBUFFERS helps here: it ensures that you have a consistent starting point for your testing and that your query is not running artificially quickly just because it skips the expensive disk-access portion.
As you say, however, in a live environment that data may always be cached, in which case your test is not representative of production conditions. It depends on whether you are analysing performance on the assumption that the data is frequently accessed (and so will generally be cached) or infrequently accessed (and so disk access is likely to be involved).
The short answer is that running those three statements can help ensure you get consistent results while performance testing. However, you shouldn't necessarily always run them before testing; instead, try to understand what each one does and what impact it will have on your query compared to the production environment.
As an aside, never run any of those three statements on a production server unless you know exactly what you are doing!
I agree with what @gbn states in his answer, and I don't think I've ever used the three commands for anything other than demonstrating a difference between possible approaches.
In addition, it would be ill-advised in most cases to run these three DBCC commands in a production environment just for testing. And performance-tuning queries in a test environment, with test data and a test load, will often lead you to draw the wrong conclusions about your query anyway.
Usually, when I tune a query, I use Profiler to get actual execution stats from live, use SSMS to get execution plans from live, and do a few test runs (on test data) to see what differs. For the trickier problems I also use Windows Performance Monitor, always in a situation as close to the real one as possible. Running the DBCC commands would just remove the tuning effort from the real deal.
Related
I would like to check options to improve my queries.
Sometimes I want to do the tests on a production server, so I can't use DBCC FREEPROCCACHE
and DBCC DROPCLEANBUFFERS to clear the entire server cache.
Could you please share with me a way to do a kind of "cache clean" only for my connection/scope?
Thanks.
DBCC FREEPROCCACHE (plan_handle | sql_handle | pool_name)
By passing the plan_handle or one of the other options, you can clear the cache entry for a particular stored procedure or query.
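For example, a minimal sketch of targeting a single plan (the LIKE pattern and the hex handle shown are placeholders you would replace with your own values):
-- Find the plan handle for the statement you care about
-- (the LIKE pattern is a placeholder for text from your own query).
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%YourQueryText%';

-- Paste the handle returned above into FREEPROCCACHE to evict just that plan
-- (the hex value below is a made-up placeholder).
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);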
Buffers are not held in a user-specific structure or stored per user, so there is no way for SQL Server to selectively clear them: it does not track which items are being held for which queries, and doing so would produce unneeded overhead in almost every case (except the one you are trying to solve now, sorry). Despite this, there are options.
There are suggestions, however, to ameliorate the issue even if the problem itself can't be avoided:
You can use OPTION (RECOMPILE) on a query (or WITH RECOMPILE on a procedure) to force a new plan to be built, but that will not clear the buffer cache (see the sketch after this list).
You can run each query twice, to see how slowly/quickly it runs once the data is buffered.
You can repeatedly alternate the two methods, to see if they get faster, and what the speed difference converges towards.
You can run them a day or two apart, or after a server reset. (This is if the production server gets reset occasionally anyways.)
This article has additional ideas for testing in such a situation.
If downtime is OK you can take the database offline and take it online immediately afterwards.
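As a rough sketch of the recompile hint and the offline/online option mentioned above (object names and values are placeholders of my own, not anything from the question):
-- Force a fresh plan for one statement without clearing any server-wide cache.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
OPTION (RECOMPILE);

-- If a brief outage is acceptable, cycling the database offline and back online
-- drops its cached data pages (database name is a placeholder).
ALTER DATABASE YourDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE YourDatabase SET ONLINE;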
Having looked around the net using Uncle Google, I cannot find an answer to this question:
What is the best way to monitor the performance and responsiveness of production servers running IIS and MS SQL Server 2005?
I'm currently using Pingdom and would like it to point to a URL which basically mimics a 'real world query' but for obvious reasons do not want the query to run from cache. The URL will be called every 5 minutes.
I cannot clear out the cache, buffers, etc., since this would negatively impact the production server. I have tried using a randomly generated number within the SELECT statement in order to generate unique queries, but the cached query is still used.
Is there any way to simulate the NO_CACHE in MySQL?
Regards
To clear the SQL buffer and plan cache:
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO
A little info about these commands from MSDN:
Use DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. (source)
Use DBCC FREEPROCCACHE to clear the plan cache carefully. Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. (source)
SQL Server does not have a results cache like MySQL or Oracle, so I am a bit confused about your question. If you want the server to recompile the plan cache for a stored procedure, you can execute it WITH RECOMPILE. You can drop your buffer cache, but that would affect all queries as you know.
At my company, we test availability and performance separately. I would suggest you use this query just to make sure your system is working together from front-end to database, then write other tests that check the individual components to judge performance. SQL Server comes with an amazing number of ways to check whether you are experiencing bottlenecks and where they are. I use PerfMon and DMVs extensively. Using PerfMon, I check CPU and page life expectancy, as well as how long my disk queue is. Using DMVs, I can find out if my queries are taking too long (sys.dm_exec_query_stats) or if wait times are long (sys.dm_os_wait_stats).
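For instance, a couple of DMV checks along those lines might look like this (the TOP counts are arbitrary, and total_elapsed_time is reported in microseconds):
-- Top statements by average elapsed time since the plan was cached
SELECT TOP (10)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_microseconds DESC;

-- Highest cumulative waits since the last restart
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;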
The two biggest bottlenecks with IIS tend to be CPU and memory, and IIS comes with its own suite of PerfMon objects to query, but I am not as familiar with those.
It's my understanding that SQL Server 2005 does some sort of result or index caching. I'm currently profiling complex select statements which take several seconds to several minutes to complete. My problem is that a second run of a query never takes more than a second to run even if I don't alter it. I'm currently using SQL Server Management Studio Express to execute the queries against a SQL Server 2005 server.
My question is: is there any way to avoid or clear the cache that is causing my queries to execute so quickly on a second run?
There are a couple of different things that could be at play here; the three that come to mind initially (in rough order of likelihood) are as follows. If you would like some help interpreting the results, follow the instructions below and paste the stats output into the question:
Your query/batch is taking a long time to compile an execution plan. Execution plans are determined and cached (see this post on Server Fault for an overview of how long they are kept, when they are rebuilt, etc.).
To verify this, turn on statistics time output, which will provide you information on how long the engine is taking to generate a query plan. For the query/batch in question:
DBCC FREEPROCCACHE
SET STATISTICS TIME ON
Execute the batch, capture the stats output
Execute the batch again, capture the stats output
Compare the 2 stat outputs, paying particular attention to the parse/compile time differences between the 2 executions.
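Put together, the check for case 1 might look something like this (the SELECT is only a placeholder for the batch being analysed):
DBCC FREEPROCCACHE;            -- clear cached plans so the first run has to compile
SET STATISTICS TIME ON;
GO
SELECT COUNT(*) FROM dbo.YourLargeTable;   -- run 1: includes parse/compile time
GO
SELECT COUNT(*) FROM dbo.YourLargeTable;   -- run 2: plan should be reused from cache
GO
SET STATISTICS TIME OFF;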
If this is the problem, you can take a couple of approaches to resolving the issue, including specifying a plan guide, specifying a static plan with USE PLAN, or possibly other options like creating a scheduled job to simply compile the plan every few minutes (not as good an option on SQL 2005 as the others).
Your query/batch is touching a lot of data - on the first execution the data may not be in the buffer pool (basically the cached pages of data the server needs) and the query is performing physical IO operations as opposed to logical IO operations (i.e. reads from disk vs. reads from cache).
To verify this, turn on statistics io output, which will provide you information on the types of IOs and how many of those the engine is performing for the batch. For the query/batch in question:
DBCC DROPCLEANBUFFERS
SET STATISTICS IO ON
Execute the batch, capture the stats output
Execute the batch again, capture the stats output
Compare the 2 stat outputs, paying particular attention to the physical/read-ahead and logical IO outputs between the 2 executions.
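The equivalent batch for the IO check might look like this (again, the SELECT is just a placeholder):
CHECKPOINT;                    -- flush dirty pages so DROPCLEANBUFFERS can remove them
DBCC DROPCLEANBUFFERS;         -- empty the buffer pool so run 1 has to hit disk
SET STATISTICS IO ON;
GO
SELECT COUNT(*) FROM dbo.YourLargeTable;   -- run 1: expect physical/read-ahead reads
GO
SELECT COUNT(*) FROM dbo.YourLargeTable;   -- run 2: mostly logical reads from cache
GO
SET STATISTICS IO OFF;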
To resolve this, you've basically got only one option: optimize the query in question so it performs fewer IO operations. You could consider creating a scheduled job that runs the query every so often to keep the data in the buffer pool, but this wouldn't be as good an option.
Your query/batch is getting a poor execution plan and/or a poor plan choice for different variable values. Is this a batch/query that uses a parameterized statement (i.e. you are using variables rather than static values in the WHERE/JOIN clauses)? If so, are you seeing the difference in execution times for the same values or for different values? If for the same values, the answer is likely #1 or #2; if for different values, this is potentially your problem. If you think this is the issue after researching #1 and #2, repost with the .sqlplan, the TSQL, and the different parameter values you are using.
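If you suspect case 3, a quick sanity check is to run the same call with a "typical" and an "atypical" value and compare the stats and actual plans (the procedure name and values here are placeholders of my own, not from the question):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
EXEC dbo.usp_YourProcedure @SomeParameter = 1;       -- a value matching many rows
GO
EXEC dbo.usp_YourProcedure @SomeParameter = 99999;   -- a value matching few rows
GO
-- If the second call inherits the plan compiled for the first value and performs badly,
-- that points to the plan-choice problem described above.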
I've found the only reliable metrics for performance tuning come from SQL Server's Profiler application. When looking at CPU Time and Reads in particular, you become much more sheltered from 'other influences'.
For example, the OS being busy, or multiple users being active will reduce your share of CPU and so increase the time to execute. And you may or may not get parallelism over multiple CPUs. But either way, the total CPU Time (as opposed to execution time) will stay approximately the same.
Do this:
CHECKPOINT
DBCC DROPCLEANBUFFERS
I have some queries that are causing timeouts in our live environment. (>30 seconds)
If I run Profiler and grab the exact SQL being run, then run it from Management Studio, it takes a long time to run the first time and then drops to a few hundred milliseconds on each run after that.
This is obviously SQL Server caching the data and getting it all into memory.
I'm sure there are optimisations that can be made to the SQL that will make it run faster.
My question is, how can I "fix" these queries when the second time I run it the data has already been cached and is fast?
May I suggest that you inspect the execution plan for the queries that are responsible for your poor performance issues.
You need to identify, within the execution plan, which steps have the highest cost and why. It could be that your queries are performing a table scan, or that an inappropriate index is being used for example.
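One quick way to capture the plan in Management Studio, if it helps, is the textual STATISTICS PROFILE option (the query below is only a placeholder; the graphical "Include Actual Execution Plan" option shows the same information):
SET STATISTICS PROFILE ON;
GO
SELECT OrderID, OrderDate          -- placeholder: substitute the query that times out
FROM dbo.Orders
WHERE OrderDate >= '20080101';
GO
SET STATISTICS PROFILE OFF;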
There is a very detailed, free ebook available from the RedGate website that concentrates specifically on understanding the contents of execution plans.
https://www.red-gate.com/Dynamic/Downloads/DownloadForm.aspx?download=ebook1
You may find that there is a particular execution plan that you would like to be used for your query. You can force which execution plan is used for a query in SQL Server using query hints. This is quite an advanced concept however and should be used with discretion. See the following Microsoft White Paper for more details.
http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx
I would also not recommend that you clear the procedure cache on your production environment, as this will be detrimental to the performance of all other queries on the platform that are not currently experiencing performance issues.
If you are executing a stored procedure, for example, you can ensure that a new execution plan is calculated for each execution of the procedure by using the WITH RECOMPILE option.
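As a sketch, and with placeholder object names, that can be done either when the procedure is created or for a single call:
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerID int
WITH RECOMPILE                            -- recompile on every execution
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO

EXEC dbo.usp_GetOrders @CustomerID = 42 WITH RECOMPILE;   -- recompile just this call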
For overall performance tuning information, there are some excellent resources over at Brent Ozar’s blog.
http://www.brentozar.com/sql-server-performance-tuning/
Hope this helps. Cheers.
According to http://morten.lyhr.dk/2007/10/how-to-clear-sql-server-query-cache.html, you can run the following to clear the cache:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
EDIT: I checked with the SQL Server documentation I have and this is at least true for SQL Server 2000.
You can use
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
But only use this in your development environment whilst tuning the queries for deployment to a live server.
I think people are running off in the wrong direction. If I understand correctly, you want the performance to be good all the time? Aren't the queries fast on the second (and subsequent) executions and slow only the first time?
The DBCC commands above clear out the cache, causing WORSE performance.
What you want, I think, is to prime the pump and cache the data. You can do this with some startup procedures that execute the queries and load data into memory.
Memory is a finite resource, so you likely can't load all of your data into it, but you can find a balance. Brent has some good references above to help you learn what you can do here.
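A rough sketch of that idea, with placeholder database, table, and procedure names: a warm-up procedure in master that touches the hot tables, marked to run whenever the instance starts.
USE master;
GO
CREATE PROCEDURE dbo.usp_WarmCache
AS
    -- Scanning the hot tables pulls pages into the buffer pool.
    SELECT COUNT_BIG(*) FROM YourDatabase.dbo.Orders;
    SELECT COUNT_BIG(*) FROM YourDatabase.dbo.OrderDetails;
GO
-- Startup procedures must live in master; this flags the warm-up to run on instance start.
EXEC sp_procoption @ProcName = 'usp_WarmCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on';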
Query optimisation is a large subject; there is no single answer to your question. The clues as to what to do are all in the query plan, which should be the same regardless of whether the data is cached or not.
Look for the usual things such as table scans, indexes not being used when you expect them to be, and so on. Ultimately you may have to review your data model and perhaps implement a denormalisation strategy.
From MSDN:
"Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server."
What techniques do you use? How do you find out which jobs take the longest to run? Is there a way to find out the offending applications?
Step 1:
Install the SQL Server Performance Dashboard.
Step 2:
Profit.
Seriously, you do want to start with a look at that dashboard. More about installing and using it can be found here and/or here
To identify problematic queries, start the Profiler and select the following events:
TSQL:BatchCompleted
TSQL:StmtCompleted
SP:Completed
SP:StmtCompleted
Filter the output, for example by:
Duration > x ms (for example 100ms, depends mainly on your needs and type of system)
CPU > y ms
Reads > r
Writes > w
depending on what you want to optimize.
Be sure to filter the output enough that you don't have thousands of rows scrolling through your window, because capturing that much will impact your server's performance!
It's helpful to log the output to a database table so you can analyse it afterwards.
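For example, if the trace was saved to a table, a summary like the following can highlight the heaviest statements (the table name is a placeholder, and the column names assume you captured TextData, Duration, CPU, and Reads):
SELECT TOP (20)
    CAST(TextData AS nvarchar(4000)) AS query_text,
    COUNT(*)                         AS executions,
    AVG(Duration)                    AS avg_duration,
    AVG(CPU)                         AS avg_cpu,
    AVG(Reads)                       AS avg_reads
FROM dbo.ProfilerTrace
GROUP BY CAST(TextData AS nvarchar(4000))
ORDER BY avg_reads DESC;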
It's also helpful to run Windows System Monitor in parallel to view CPU load, disk IO, and some SQL Server performance counters. Configure sysmon to save the data to a file.
Then you need a production-typical query load and data volume on your database to see meaningful values in Profiler.
After getting some output from Profiler, you can stop profiling.
Then load the stored data from the profiling table back into Profiler and use the import menu to import the output from System Monitor; Profiler will correlate the sysmon data with your SQL Profiler data. That's a very nice feature.
In that view you can immediately identify bottlenecks in your memory, disk, or CPU subsystems.
When you have identified some queries you want to optimize, go to Query Analyzer, examine the execution plan, and try to optimize index usage and query design.
I have had good success with the database tuning tools provided inside SSMS and SQL Profiler when working with SQL Server 2000.
The key is to work with a GOOD sample set: track a portion of TRUE production workload for analysis, as that will get the best overall bang for the buck.
I use the SQL Profiler that comes with SQL Server. Most of the poorly performing queries I've found are not using a lot of CPU but are generating a ton of disk IO.
I tend to put in filters on disk reads and look for queries that tend to do more than 20,000 or so reads. Then I look at the execution plan for those queries which usually gives you the information you need to optimize either the query or the indexes on the tables involved.
I use a few different techniques.
If you're trying to optimize a specific query, use Query Analyzer. Use the tools in there like displaying the execution plan, etc.
For your situation where you're not sure WHICH query is running slowly, one of the most powerful tools you can use is SQL Profiler.
Just pick the database you want to profile, and let it do its thing.
You need to let it run for a decent amount of time (how long varies with the traffic to your application), and then you can dump the results into a table and start analyzing them.
You are going to want to look at queries that have a lot of reads, or take up a lot of CPU time, etc.
Optimization is a bear, but keep going at it, and most importantly, don't assume you know where the bottleneck is; find proof of where it is and fix it.