How to bypass SQL Server caching

When I'm examining the performance of a given query in my application, I'm generally uninterested in the effect my code is having; I want to watch the time taken in SQL Server Management Studio.
Unfortunately, some sort of caching must be going on, because one query that returns 10,000 results from a table with 26 or so columns, many of them large varchars, takes 12 seconds the first time I run it after a while, but only 6 seconds on subsequent runs unless I leave it alone for a few minutes.
Is there any way to instruct SQL Server to bypass the cache and behave as if the query had never been run before? I'm using SQL Server 10.0 (SQL Server 2008).

You can clear the data (buffer) cache and the procedure (plan) cache with
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
respectively.
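To get a repeatable cold-cache timing in Management Studio, here is a minimal sketch of the full sequence (run it on a dev server, never production; the query is a placeholder for your own):
-- Flush dirty pages to disk so DROPCLEANBUFFERS can empty the whole buffer pool
CHECKPOINT;
-- Remove all clean pages from the buffer pool (cold data cache)
DBCC DROPCLEANBUFFERS;
-- Remove all cached execution plans (forces recompilation)
DBCC FREEPROCCACHE;

-- Report parse/compile and execution times for the statement below
SET STATISTICS TIME ON;
SELECT * FROM dbo.YourWideTable;  -- placeholder for the query being measured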

Related

Stored Procedure is faster the 2nd time

I am running an SP on SQL Server 2017.
The first time I executed the SP it took 43 seconds, but the 2nd time it took only 1 second.
How can I make the SP execute every time the same as the first time, without caching or any learning from the previous run?
I am not asking why this happens; that is covered by this question:
First run slowness in a sql server stored procedure
I am asking how to make every execution behave as if it were the first.
If you want to remove cached plans, you can execute DBCC FREEPROCCACHE. Just beware that it wipes the slate clean for all stored procedures, not just yours.
Remember that once a plan is compiled, it is kept for future calls until the SP is altered or a dependent object is modified. So most of the time what you want to test is actually the performance of the already compiled plan, unless you are constantly clearing these caches, restarting the server, or triggering a recompile somehow.
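If you only need one procedure compiled fresh, there are narrower options than flushing the whole plan cache; a sketch, with the procedure name as a placeholder:
-- Mark a single procedure so its plan is rebuilt on the next execution
EXEC sp_recompile N'dbo.YourProc';

-- Or request a fresh plan for one call only, leaving the cache untouched
EXEC dbo.YourProc WITH RECOMPILE;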
Run:
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
prior to each execution. This will clear the buffer pool so that the playing field is leveled and each iteration will incur roughly the same IO overhead.
Also see Performance testing with DBCC DROPCLEANBUFFERS for additional considerations for measuring performance with this method.

DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS equivalents for a specific scope

I would like to check options to improve my queries.
Sometimes I want to run the tests on a production server, so I can't use DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS to clear the entire server cache.
Could you please share a way to do a kind of "cache clean" only for my connection/scope?
Thanks.
DBCC FREEPROCCACHE (plan_handle | sql_handle | pool_name)
By passing a plan_handle (or one of the other arguments) you can clear the cached plan of a particular SP or query.
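For example, a sketch of evicting a single procedure's plan: look up its plan_handle in the DMVs, then pass the handle to DBCC FREEPROCCACHE (the name filter and the handle below are placeholders):
-- Find the cached plan for the procedure or query of interest
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%YourProc%';  -- placeholder filter

-- Paste the plan_handle value returned above to evict only that plan
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);  -- example handle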
Buffers are not held in a user-specific table or stored per user, so there is no way for SQL Server to selectively clear them for one connection: it does not track which pages are held for which queries, and tracking that would add unneeded overhead in almost every case (except the one you are trying to handle now, sorry). There are suggestions, however, to ameliorate the issue, even if the problem can't be avoided:
You can use WITH RECOMPILE (or the OPTION (RECOMPILE) query hint, sketched after this list) to force the query to get a new plan, but that will not clear the buffer cache.
You can run each query twice, to see how slowly/quickly it runs once the data is buffered.
You can repeatedly alternate the two methods, to see if they get faster, and what the speed difference converges towards.
You can run them a day or two apart, or after a server reset. (This is if the production server gets reset occasionally anyways.)
This article has additional ideas for testing in such a situation.
If downtime is OK you can take the database offline and take it online immediately afterwards.
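A minimal sketch of the recompile hint mentioned above (table and variable names are placeholders); note that it forces a fresh plan but leaves the buffer pool untouched:
DECLARE @SomeValue int = 42;       -- placeholder parameter
SELECT *
FROM dbo.YourTable                 -- placeholder table
WHERE SomeColumn = @SomeValue
OPTION (RECOMPILE);                -- fresh plan each run; cached data pages still used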

Drastic difference in the time cost of the same stored procedure

I am using SQL Server 2008 R2.
The process is actually like this:
First, about 2 million records are pulled from a remote server,
then a join is done locally,
the final result is thousands of records.
The time cost varies from less than 1 minute to 30 minutes.
And after I experienced the 30-minute delay, the following runs all took only around 3 minutes.
It is the same data, same SP.
What could cause this drastic difference?
Update
I deleted the SP, restarted the SQL Server service, and re-created the SP. The execution took only 50 seconds!
What's wrong?
The behaviour you describe seems extreme - but (if you exclude the client), there are 3 logical places to look.
The first is the query execution on the database server. It's worth using the Query Analyzer tool to see if it's using any indices - by far the most common reason for variable performance of database queries is that the query is not using (the right) indices, and that therefore the impact of the query cache plays a big part. SQL Server will cache a lot of data, and the first run of your proc populates that cache; the second run is faster because it hits the cache. After a while, the cache goes stale, and running the proc slows down again.
The second possibility is that the database server is wobbly - it may just not be powerful enough to do all the work it's supposed to do. In that case, one moment you get lucky, have all the server resources to yourself; the next, someone else is running a query and yours slows down. That would make all queries slow, not just this one - so it doesn't sound likely.
Third possibility is networking weirdness - as Phil says, "thousands of records" is nothing too scary, but if they're big, and your network is saturated with pictures of kittens, it might have an impact. Again, that would manifest in general network slowness, and is unlikely to explain a delay of 30 minutes...
Fourth, is anything going on at the same time?
Fifth, does your SP use dynamically generated SQL statements? That would prevent the SP from being pre-compiled. If possible, separate such statements into child SPs.
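If the dynamic SQL can't be moved into child procedures, parameterizing it with sp_executesql at least lets its plan be cached and reused; a sketch with placeholder object names:
-- Parameterized dynamic SQL gets a reusable cached plan,
-- unlike a statement built by string concatenation and run with EXEC().
DECLARE @sql nvarchar(max) = N'
    SELECT *
    FROM dbo.YourTable
    WHERE SomeColumn = @val;';

EXEC sp_executesql @sql, N'@val int', @val = 42;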

Prevent Caching in SQL Server

Having looked around the net using Uncle Google, I cannot find an answer to this question:
What is the best way to monitor the performance and responsiveness of production servers running IIS and MS SQL Server 2005?
I'm currently using Pingdom and would like it to point to a URL which basically mimics a 'real world query' but for obvious reasons do not want the query to run from cache. The URL will be called every 5 minutes.
I cannot clear out the cache, buffers, etc. since this would negatively impact the production server. I have tried using a randomly generated number within the SELECT statement in order to generate unique queries, but the cached query is still used.
Is there any way to simulate MySQL's SQL_NO_CACHE hint?
Regards
To clear the SQL buffer and plan cache:
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO
A little info about these commands from MSDN:
Use DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. (source)
Use DBCC FREEPROCCACHE to clear the plan cache carefully. Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. (source)
SQL Server does not have a results cache like MySQL or Oracle, so I am a bit confused about your question. If you want the server to recompile the plan cache for a stored procedure, you can execute it WITH RECOMPILE. You can drop your buffer cache, but that would affect all queries as you know.
At my company, we test availability and performance separately. I would suggest you use this query just to make sure your system is working together from front-end to database, then write other tests that check the individual components to judge performance. SQL Server comes with an amazing number of ways to check whether you are experiencing bottlenecks and where they are. I use PerfMon and DMVs extensively. Using PerfMon, I check CPU and page life expectancy, as well as seeing how long my disk queue is. Using DMVs, I can find out if my queries are taking too long (sys.dm_exec_query_stats) or if wait times are long (sys.dm_os_wait_stats).
The two biggest bottlenecks with IIS tend to be CPU and memory, and IIS comes with its own suite of PerfMon objects to query, but I am not as familiar with those.
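For instance, a quick look at the most expensive cached statements through the DMV mentioned above (the TOP count is arbitrary):
-- Top 5 cached statements by average elapsed time (microseconds)
SELECT TOP (5)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;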

SQL Server 2005 Caching

It's my understanding that SQL Server 2005 does some sort of result or index caching. I'm currently profiling complex select statements which take several seconds to several minutes to complete. My problem is that a second run of a query never takes more than a second to run even if I don't alter it. I'm currently using SQL Server Management Studio Express to execute the queries against a SQL Server 2005 server.
My question is: is there any way to avoid or clear the cache that is causing my queries to execute so quickly on a second run?
There are a couple of different things that could be at play here; the 3 that come to mind initially (in probable-ish order) are listed below. If you would like help interpreting the results, follow the instructions and paste the stat outputs into the question:
Your query/batch is taking a long time to compile an execution plan. Execution plans are determined and cached (see this post on serverfault for an overview of how long they are kept, when they are rebuilt, etc.)
To verify this, turn on statistics time output, which will provide you information on how long the engine is taking to generate a query plan. For the query/batch in question:
DBCC FREEPROCCACHE
SET STATISTICS TIME ON
Execute the batch, capture the stats output
Execute the batch again, capture the stats output
Compare the 2 stat outputs, paying particular attention to the parse/compile time differences between the 2 executions.
If this is the problem, you can take a couple of approaches to resolving it, including specifying a plan guide, specifying a static plan with USE PLAN, or possibly other options like creating a scheduled job that simply compiles the plan every few minutes (not as good an option on SQL 2005 as the others).
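Putting those steps together, a sketch of the compile-time comparison (the query is a placeholder):
DBCC FREEPROCCACHE;                -- force a fresh compile on the next run
SET STATISTICS TIME ON;            -- session-scoped, persists across batches

SELECT * FROM dbo.YourWideTable;   -- run 1: includes parse/compile time
GO
SELECT * FROM dbo.YourWideTable;   -- run 2: plan served from cache
GO

-- Compare the "SQL Server parse and compile time" lines between the two runs.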
Your query/batch is touching a lot of data - on the first execution the data may not be in the buffer pool (basically the cached pages of data the server needs) and the query is performing physical IO operations as opposed to logical IO operations (i.e. reads from disk vs. reads from cache).
To verify this, turn on statistics io output, which will provide you information on the types of IOs and how many of those the engine is performing for the batch. For the query/batch in question:
DBCC DROPCLEANBUFFERS
SET STATISTICS IO ON
Execute the batch, capture the stats output
Execute the batch again, capture the stats output
Compare the 2 stat outputs, paying particular attention to the physical/read-ahead and logical IO outputs between the 2 executions.
To resolve this, you've basically got only 1 option - optimize the query in question so it performs fewer IO operations. You could consider creating a scheduled job that runs the query every so often to keep the data in the buffer pool, but this wouldn't be as good an option.
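A sketch of the corresponding cold-versus-warm IO comparison (placeholder query again):
CHECKPOINT;                        -- flush dirty pages first
DBCC DROPCLEANBUFFERS;             -- empty the buffer pool
SET STATISTICS IO ON;

SELECT * FROM dbo.YourWideTable;   -- run 1: expect physical and read-ahead reads
GO
SELECT * FROM dbo.YourWideTable;   -- run 2: expect logical reads only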
Your query/batch is getting a poor execution plan and/or a poor plan choice for different variable values. Is this a batch/query that uses a parameterized statement (i.e. variables rather than static values in the WHERE/JOIN clauses)? If so, are you seeing the difference in execution times for the same values or for different values? If for the same values, the answer is likely #1 or #2; if for different values, this is potentially your problem. If you think this is the issue after researching #1 and #2, repost with the .sqlplan, the TSQL, and the different parameter values you are using.
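If parameter sniffing turns out to be the culprit, one common mitigation is the OPTIMIZE FOR hint (available since SQL Server 2005); a sketch with a hypothetical procedure and parameter:
-- Compile the plan for a representative value instead of whatever value
-- happens to be sniffed on the first call
CREATE PROCEDURE dbo.YourProc      -- hypothetical procedure
    @SomeValue int
AS
SELECT *
FROM dbo.YourTable                 -- placeholder table
WHERE SomeColumn = @SomeValue
OPTION (OPTIMIZE FOR (@SomeValue = 42));
GO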
I've found the only reliable metrics for performance tuning come from SQL Server's Profiler application. When looking at CPU Time, and Reads in particular, you become much more sheltered from 'other influences'.
For example, the OS being busy, or multiple users being active will reduce your share of CPU and so increase the time to execute. And you may or may not get parallelism over multiple CPUs. But either way, the total CPU Time (as opposed to execution time) will stay approximately the same.
Do this:
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
