How to let SQL Server know not to use Cache in Queries? - sql-server

Just a general question:
Is there a query/command I can pass to SQL Server to tell it not to use the cache when executing a particular query?
I am looking for a query/command that I can set rather than a configuration setting. Or is there no need to do this?

DBCC FREEPROCCACHE
This will remove all cached procedure execution plans, causing all subsequent procedure calls to be recompiled.
Adding WITH RECOMPILE to a procedure definition will cause the procedure to be recompiled every time it is called.
In SQL Server 2005 and earlier there is no way to clear a single procedure's execution plan from the procedure cache; SQL Server 2008 added the ability to pass a plan handle to DBCC FREEPROCCACHE for exactly that purpose.
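To make the difference concrete, here is a minimal sketch of both options; the procedure and table names are hypothetical:
-- Clear every cached plan on the instance (affects all databases)
DBCC FREEPROCCACHE;
GO
-- A hypothetical procedure defined WITH RECOMPILE: its plan is never cached,
-- so it is recompiled on every call
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
WITH RECOMPILE
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END;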

If you want to force a query to not use the data cache, the best approach is to clear the cache before you run the query:
CHECKPOINT
DBCC DROPCLEANBUFFERS
Note that forcing a recompile will have no effect on the query's use of the data cache.
One reason you can't simply mark an individual query to bypass the cache is that cache use is an integral part of executing the query. If the data is already in cache, what would SQL Server do with it if it were read a second time? Not to mention synchronization issues, and so on.
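As an illustration only (the table and query are placeholders), a typical cold-cache timing test on a non-production server might look like this:
-- Flush dirty pages to disk, then drop the clean pages from the buffer pool
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
GO
-- Measure I/O and elapsed time for the query under test against a cold cache
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT ColumnName
FROM dbo.TableName
WHERE SomeColumn = 42;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;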

Another, more localized way to bypass the SQL Server plan cache is to add the OPTION (RECOMPILE) hint at the end of your statement.
E.g.
SELECT Columnname
FROM TableName
OPTION(RECOMPILE)
For more information about this and other similar query hints that can help identify problems with a query, Pinal Dave (no affiliation) has some helpful information.

use
WITH RECOMPILE

Related

Does updating statistics recompile stored procedures in sql server

Does updating statistics (auto or manual) cause stored procedures to be recompiled in SQL Server, or do the procedures keep running with the same execution plan they were first compiled with?
MSDN has a lengthy article on that. To sum it up:
Therefore, plan optimality-related reasons have close association with
the statistics.
It looks like it depends on how much the statistics changed. So updating statistics may lead to a recompile, but does not have to. To force removal of all cached query plans, you can run:
DBCC FREEPROCCACHE
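If you only need one object to pick up the new statistics, a lighter alternative (sketched here with hypothetical object names) is sp_recompile, which marks just that object for recompilation on its next execution:
-- Mark a single procedure for recompilation on its next run
EXEC sp_recompile N'dbo.usp_GetOrders';
-- Passing a table name instead marks every procedure and trigger that references it
EXEC sp_recompile N'dbo.Orders';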

DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS equivalents for a specific scope

I would like to check options to improve my queries.
Sometimes I want to do the tests on a production server, so I can't use DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS to clear the entire server cache.
Could you please share a way to do a kind of "cache clean" only for my connection/scope?
Thanks.
DBCC FREEPROCCACHE (plan_handle | sql_handle | pool_name)
By passing the plan_handle (or one of the other options) you can clear the cache for a particular stored procedure or query.
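For example, on SQL Server 2008 and later you can look up the plan handle of the statement you care about and evict just that entry; the search string below is a placeholder:
-- Step 1: find the plan handle for the cached statement of interest
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%usp_GetOrders%';
GO
-- Step 2: paste the handle returned above into FREEPROCCACHE to evict only that plan
-- DBCC FREEPROCCACHE (0x06000600...);   -- illustrative placeholder, not a real handle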
Buffers are not held in a user-specific table or stored per user, so there is no way for SQL Server to selectively clear them: it does not know which items are being held for which queries, and tracking that would produce unneeded overhead in almost every case (except the one you are trying to solve now, sorry). Despite this, there are options.
There are suggestions, however, to ameliorate the issue, even if the problem can't be avoided:
You can use WITH RECOMPILE to force the query to find a new plan, but that will not clear the cache.
You can run each query twice, to see how slowly/quickly it runs once the data is buffered.
You can repeatedly alternate the two methods, to see if they get faster, and what the speed difference converges towards.
You can run them a day or two apart, or after a server reset. (This is if the production server gets reset occasionally anyways.)
This article has additional ideas for testing in such a situation.
If downtime is OK, you can take the database offline and bring it back online immediately afterwards.
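If you go the offline/online route, the sketch below shows the commands involved; the database name is a placeholder, and this requires exclusive access, so only do it when a short outage for that one database is acceptable:
-- Kick out current users and take the database offline (drops its pages and plans)
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Bring it straight back online with a cold cache for this database only
ALTER DATABASE MyDatabase SET ONLINE;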

Query times out when executed from web, but super-fast when executed from SSMS

I'm trying to debug the source of a SQL timeout in a web application that I maintain. I have the source code of the C# code behind, so I know exactly what code is running. I have debugged the application right down to the line that executes the SQL code that times out, and I watch the query running in SQL profiler.
When this query executes from the web, it times out after 30 seconds. However, when I cut/paste the query exactly as presented in Profiler, and I put it into SSMS and run it, it returns almost instantly. I have traced the problem to ARITHABORT being set to OFF in the connection that the web is using (that is, if I turn ARITHABORT OFF in the SSMS session, it runs for a long time, and if I turn it back ON then it runs very quickly). However, reading the description of ARITHABORT, it doesn't seem to apply... I'm only doing a simple SELECT, and there is NO arithmetic being performed at all.. just a single INNER JOIN with a WHERE condition:
Why would ARITHABORT OFF be causing this behavior in this context? Is there any way I can alter the ARITHABORT setting for that connection from SSMS? I'm using SQL Server 2008.
So your C# code is sending an ad hoc SQL query to SQL Server, using what method? Have you considered using a stored procedure? That would probably ensure the same performance (at least in the engine) regardless of who called it.
Why? The ARITHABORT setting is one of the things the optimizer looks at when it is determining how to execute your query (more specifically, for plan matching). It is possible that the plan in cache has the same setting as SSMS, so it uses the cached plan, but with the opposite setting your C# code is forcing a recompile (or perhaps you are hitting a really BAD plan in the cache), which can certainly hurt performance in a lot of cases.
If you are already calling a stored procedure (you didn't post your query, though I think you meant to), you can try adding OPTION (RECOMPILE) to the offending query (or queries) in the stored procedure. This will mean those statements will always recompile, but it could prevent the use of the bad plan you seem to be hitting. Another option is to make sure that when the stored procedure is compiled, the batch is executed with SET ARITHABORT ON.
Finally, you seem to be asking how you can change the ARITHABORT setting in SSMS. I think what you meant to ask is how you can force the ARITHABORT setting in your code. If you decide to continue sending ad hoc SQL from your C# app, then of course you can send a command as text that has multiple statements separated by semi-colons, e.g.:
SET ARITHABORT ON; SELECT ...
For more info on why this issue occurs, see Erland Sommarskog's great article:
Slow in the Application, Fast in SSMS? Understanding Performance Mysteries
This answer includes a way to resolve this issue:
By running the following commands as an administrator on the database, all queries run as expected regardless of the ARITHABORT setting.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Update
It seems that most people hit this problem only rarely, and the technique above is a decent one-time fix. But if a specific query exhibits this problem more than once, a longer-term solution is to use query hints such as OPTIMIZE FOR and OPTION (RECOMPILE), as described in this article.
Update 2
SQL Server has had some improvements made to its query execution plan algorithms, and I find problems like this are increasingly rare on newer versions. If you are experiencing this problem, you might want to check the compatibility level setting on the database that you're executing against (not necessarily the one you're querying, but rather the default database, or "InitialCatalog", for your connection). If you are stuck on an old compatibility level, you'll be using the old query execution plan generation techniques, which have a much higher chance of producing bad plans.
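A quick way to check and, if appropriate, raise the compatibility level (the database name and level are placeholders; pick the level that matches your instance):
-- Check the current compatibility level of the connection's default database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyDatabase';
-- Raise it so the newer plan generation techniques are used
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 130;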
I've had this problem many times before. If you have a stored procedure with the same problem, dropping and recreating the stored proc will solve the issue.
It's called parameter sniffing.
You always need to localize the parameters in the stored proc to avoid this issue in the future (see the sketch below).
I understand this might not be what the original poster wants but might help someone with the same issue.
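For anyone unfamiliar with the pattern, "localizing" the parameters means copying them into local variables inside the procedure so the optimizer cannot sniff the first caller's values; a minimal sketch with hypothetical names:
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
AS
BEGIN
    -- Copy the parameter into a local variable; the plan is then built
    -- for an "unknown" value rather than for the first caller's value
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;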
If you are using Entity Framework, be aware that query parameters for string values are sent to the database as nvarchar by default. If the database column being compared is typed varchar then, depending on your collation, the query execution plan may require an "IMPLICIT CONVERSION" step, which forces a full scan. I could confirm this by looking at the expensive-queries view in database monitoring, which displays the execution plan.
Finally, an explanation of this behavior can be found in this article:
https://www.sqlskills.com/blogs/jonathan/implicit-conversions-that-cause-index-scans/
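A pure T-SQL illustration of the same effect, with hypothetical table and column names (whether the scan actually happens depends on the column's collation):
-- Comparing a varchar column with an nvarchar parameter (what EF sends by default)
-- may force an implicit conversion of the column and an index or table scan
DECLARE @p0 NVARCHAR(4000) = N'ABC123';
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = @p0;

-- Declaring the parameter with the column's own type avoids the conversion
DECLARE @p1 VARCHAR(20) = 'ABC123';
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = @p1;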
Just using ARITHABORT won't solve the problem, especially if you use parameterised stored procedures.
Parameterised stored procedures can cause "parameter sniffing", which uses a cached query plan.
So, before jumping to conclusions, please check the link below.
the-elephant-and-the-mouse-or-parameter-sniffing-in-sql-server
I had the same problem and it was fixed by executing the procedure WITH RECOMPILE. You can also look into parameter sniffing; my issue was related to the SQL plan cache.
If you can change your code to fix parameter sniffing, the OPTIMIZE FOR UNKNOWN hint is your best option. If you cannot change your code, the best option is EXEC sp_recompile 'name of proc', which forces only that one stored procedure to get a new execution plan. Dropping and recreating a proc would have a similar effect, but could cause errors if someone tries to execute the proc while it is dropped. DBCC FREEPROCCACHE drops all your cached plans, which can wreak havoc on your system, up to and including causing lots of timeouts in a heavily transactional production environment. Setting ARITHABORT is not a solution to the problem, but it is a useful tool for discovering whether parameter sniffing is the issue.
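Hedged sketches of both approaches, using hypothetical object names:
-- If you can change the code: build the plan for average statistics, not a sniffed value
ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);
GO
-- If you cannot change the code: force just this one procedure to get a new plan
EXEC sp_recompile N'dbo.usp_GetOrders';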
I had the same problem: calling the SP from SSMS took 2 seconds, while calling it from the web app (ASP.NET) took about 3 minutes.
I tried all the suggested solutions, sp_recompile, DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS, but nothing fixed my problem. Addressing the parameter sniffing did the trick, and it worked just fine.

A T-SQL query executes in 15s on sql 2005, but hangs in SSRS (no changes)?

When I execute a T-SQL query directly, it executes in 15 seconds on SQL 2005.
SSRS was working fine until yesterday. I had to kill it after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query in SSRS, then look at the Activity Monitor in Management Studio. See if the query is blocked, and if so, what it is blocked on.
Alternatively, you can use sys.dm_exec_requests and check the same thing without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, then SSRS is probably not at fault and something in your environment is blocking the query execution. If, on the other hand, the query is making progress (the wait_resource changes), then it simply executes slowly and it is time to check its execution plan.
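For example, something along these lines (the session_id is a placeholder for the session running the report's query):
SELECT session_id, status, blocking_session_id,
       wait_type, wait_time, wait_resource, command
FROM sys.dm_exec_requests
WHERE session_id = 53;   -- replace with the session running the report's query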
Have you tried making the query a stored procedure to see if that helps? This way execution plans are cached.
Updated: You could also make the query a view to achieve the same effect.
Also, SQL Profiler can help you determine what is being executed. This will allow you to see whether the SQL is the cause of the issue, or whether it is Reporting Services rendering the report (i.e. not fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
Does any of that apply?
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problems like this is to get an execution plan. You can either get this by running an SQL Profiler Trace with the ShowPlan Xml event, or if this isn't possible (you probably shouldn't do this on loaded production servers) you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
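If the trace route isn't available, a rough sketch of pulling the cached plan from the DMVs (the filter text is a placeholder; note that this plan carries no per-node runtime statistics):
SELECT st.text, qs.execution_count, qs.total_elapsed_time, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%YourTableName%';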

How do you fix queries that only run slow until they're cached

I have some queries that are causing timeouts in our live environment. (>30 seconds)
If I run Profiler, grab the exact SQL being run, and run it from Management Studio, the queries take a long time on the first execution and then drop to a few hundred milliseconds on each run after that.
This is obviously SQL caching the data and getting it all in memory.
I'm sure there are optimisations that can be made to the SQL that will make it run faster.
My question is, how can I "fix" these queries when the second time I run it the data has already been cached and is fast?
May I suggest that you inspect the execution plan for the queries that are responsible for your poor performance issues.
You need to identify, within the execution plan, which steps have the highest cost and why. It could be that your queries are performing a table scan, or that an inappropriate index is being used for example.
There is a very detailed, free ebook available from the RedGate website that concentrates specifically on understanding the contents of execution plans.
https://www.red-gate.com/Dynamic/Downloads/DownloadForm.aspx?download=ebook1
You may find that there is a particular execution plan that you would like to be used for your query. You can force which execution plan is used for a query in SQL Server using query hints. This is quite an advanced concept however and should be used with discretion. See the following Microsoft White Paper for more details.
http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx
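One simple form of hint is a table/index hint on the statement itself; the example below uses hypothetical table and index names and, as noted, should be used with discretion:
-- Force a particular (hypothetical) nonclustered index instead of letting the optimizer choose
SELECT OrderId, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerId))
WHERE CustomerId = 42;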
I would also not recommend clearing the procedure cache on your production environment, as this will be detrimental to the performance of all other queries on the platform that are not currently experiencing performance issues.
If you are executing a stored procedure, for example, you can ensure that a new execution plan is calculated for each execution of the procedure by using the WITH RECOMPILE option.
For overall performance tuning information, there are some excellent resources over at Brent Ozar’s blog.
http://www.brentozar.com/sql-server-performance-tuning/
Hope this helps. Cheers.
According to http://morten.lyhr.dk/2007/10/how-to-clear-sql-server-query-cache.html, you can run the following to clear the cache:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
EDIT: I checked with the SQL Server documentation I have and this is at least true for SQL Server 2000.
You can use
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
But only use this in your development environment whilst tuning the queries for deployment to a live server.
I think people are running off in the wrong direction. If I understand correctly, you want the performance to be good all the time? Aren't the queries running fast on the 2nd (and subsequent) executions and slow only the first time?
The DBCC commands above clear out the cache, causing WORSE performance.
What you want, I think, is to prime the pump and cache the data. You can do this with some startup procedures that execute the queries and load data into memory.
Memory is a finite resource, so you can't load all data, likely, into memory, but you can find a balance. Brent has some good references above to help learn what you can do here.
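A hedged sketch of such a warm-up procedure (the table and procedure names are hypothetical); it could be run from a SQL Agent job after a restart, or created in the master database and marked as a startup procedure with sp_procoption:
-- Touch the hot tables so (some of) their pages are read into the buffer pool
CREATE PROCEDURE dbo.usp_WarmCache
AS
BEGIN
    SELECT COUNT_BIG(*) FROM dbo.Orders;
    SELECT COUNT_BIG(*) FROM dbo.Customers;
END;
GO
-- Optional: only works for procedures that live in the master database
EXEC sp_procoption @ProcName = N'dbo.usp_WarmCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on';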
Query optimisation is a large subject, there is no single answer to your question. The clues as to what to do are all in the query plan which should be the same regardless of whether the results are cached or not.
Look for the usual things such as table scans, indexes not being used when you expect them to be used, etc. Ultimately you may have to review your data model and perhaps implement a denormalisation strategy.
From MSDN:
"Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server."
