SQL Server 2008 plan cache is almost always empty

In order to investigate query plan usage I'm trying to understand what kind of query plan is stored in the memory.
Using this query:
SELECT objtype AS 'Cached Object Type',
       COUNT(*) AS 'Number of Plans',
       SUM(CAST(size_in_bytes AS BIGINT)) / 1048576 AS 'Plan Cache Size (MB)',
       AVG(usecounts) AS 'Avg Use Counts'
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY objtype;
I get an almost empty plan cache.
There is 128 GB of RAM on the server and ~20% is free, so the SQL Server instance is not constrained by memory.
Yes, basically I have ad hoc queries (not parameterized, not stored procedures).
But why does SQL Server empty the query plan cache so frequently? What kind of issue do I have?

In the end, only an instance restart solved my problem. Now the plan cache looks much healthier.

If the server isn't under memory pressure, then some other possibilities from the plan caching white paper are listed below. Are any of these actions scheduled frequently? Do you have auto close enabled? (A quick check for auto close is sketched after the lists.)
The following operations flush the entire plan cache and therefore cause fresh compilations of batches that are submitted for the first time afterwards:
Detaching a database
Upgrading a database to SQL Server 2005
Upgrading a database to SQL Server 2008
Restoring a database
DBCC FREEPROCCACHE command
RECONFIGURE command
ALTER DATABASE … MODIFY FILEGROUP command
Modifying a collation using ALTER DATABASE … COLLATE command
The following operations flush the plan cache entries that refer to a particular database, and cause fresh compilations afterwards:
DBCC FLUSHPROCINDB command
ALTER DATABASE … MODIFY NAME = command
ALTER DATABASE … SET ONLINE command
ALTER DATABASE … SET OFFLINE command
ALTER DATABASE … SET EMERGENCY command
DROP DATABASE command
When a database auto-closes
When a view is created with CHECK OPTION, the plan cache entries of the database in which the view is created are flushed.
When DBCC CHECKDB is run, a replica of the specified database is created. As part of DBCC CHECKDB's execution, some queries are executed against the replica and their plans cached. At the end of DBCC CHECKDB's execution, the replica is deleted, and so are the query plans of the queries posed against the replica.
The following sp_configure/reconfigure operations also clear the procedure cache:
access check cache bucket count
access check cache quota
clr enabled
cost threshold for parallelism
cross db ownership chaining
index create memory
max degree of parallelism
max server memory
max text repl size
max worker threads
min memory per query
min server memory
query governor cost limit
query wait
remote query timeout
user options
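To answer the auto close question above, here is a quick check (a minimal sketch; sys.databases and its is_auto_close_on column exist in SQL Server 2005 and later):
-- Databases that will auto-close (and flush their plan cache entries)
-- whenever the last connection closes
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;
If any busy user database shows up here, every close/open cycle throws away its cached plans, which matches the "almost always empty" symptom.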

I had the same issue about a week ago and also posted several questions. Even though I have not actually found the answer to the problem, I've got some insight into the process. Silly as it sounds, a SQL Server service restart helped, but it raised another problem: the recovery process continued for 4 hours. Seems like a pretty large transaction was in place...
Almost empty plan cache

Related

How can I improve cardinality estimates on staging tables?

I support a process that runs every night and looks at various clients that have invoices with unpaid line items. The process starts by deleting all records from a staging table and then inserting a number of invoice line items into the staging table. The process runs on a per-client basis, so some clients may have 200 line items and some clients may have 50,000.
We are constantly having issues with the process running an exorbitant amount of time. The issue seems to stem from SQL Server's inability to estimate the correct number of rows in the staging table at the time, which leads it to generate a bad execution plan.
My question is: is there a way to manually set the estimated number of rows to improve cardinality estimates for the stored procedures involved? Perhaps this could be done through a SELECT COUNT(primaryKey) at the beginning of the run, right after the current run's staging table is populated?
You are executing big batch processes on this table. A good approach is to drop all indexes before your batch and create them again after the batch.
If you do this, your statistics will be updated and won't be the cause of your problem.
Pay heed also to more generic information about statistics: automatic statistics updates changed a lot between SQL Server 2014 and SQL Server 2016. If you are running SQL Server 2016, you need to check whether your database is using the newer cardinality estimator; just check whether your database is running at the SQL Server 2016 compatibility level.
If you are running SQL Server 2014, a good option is to enable trace flag 2371, which improves the criteria SQL Server uses to automatically update statistics. You can enable this trace flag as a startup parameter via SQL Server Configuration Manager.
However, if you follow the first suggestion, dropping and recreating the indexes, the other two suggestions will have little or no impact. A statistics-refresh alternative is sketched below.
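If dropping and recreating the indexes is not an option, the same idea can be applied more narrowly by refreshing statistics right after the staging table is loaded, so the optimizer sees the real row count (a minimal sketch; the table name dbo.InvoiceLineStaging is hypothetical):
-- After populating the staging table for the current client:
UPDATE STATISTICS dbo.InvoiceLineStaging WITH FULLSCAN;
-- Optionally mark plans that touch the table for recompilation,
-- so the next execution builds a plan from the fresh statistics:
EXEC sp_recompile N'dbo.InvoiceLineStaging';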

Slow performance when using fully qualified name in SELECT

I'm using SQL Server 2008 R2 for this issue.
In one of my apps, I need to refer to a table from another database. So I do a query:
USE Db1
SELECT * FROM Db2.dbo.Table1
It takes ~2 seconds for the query to complete, even for a table with just 300 records. The delay is consistent: I ran it in Management Studio, hit Execute, and got the same result. I did this around 10 times with consistent results.
Now when I run the query but this time running it in the context of the actual database:
USE Db2
SELECT * FROM Table1
There's virtually no wait time when the same results are returned.
Now the weird part is, when I go back to my first query, the delay no longer happens! And this behavior is reproduced every time I restart SQL Server.
Has anyone encountered this behavior before? Do you have any ideas on what I could be doing wrong?
Finally figured this one out. The Auto Close property for the database being referenced in the SELECT was set to True. I set this to False and the delay during the SELECT calls disappeared.
So what was happening was that the database was starting up for every SELECT statement! I checked the Event Viewer and, sure enough, there is a log entry of the database starting up for every call.
To set this to False, I used Management Studio: right-click the database, then go to Properties. In the Properties window, select Options, and under the Automatic group the first item is Auto Close. Set this to False.
See the link below for more information on the Auto Close property. It is set to True by default in some editions; set it to False and you should not encounter this problem.
Auto_Close
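The same change can also be made in T-SQL (a minimal sketch, assuming the referenced database is named Db2 as in the question):
-- Keep the database open instead of shutting it down
-- when the last connection closes
ALTER DATABASE Db2 SET AUTO_CLOSE OFF;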
There is no mystery here, and the fully qualified name plays no role at all.
The table data are cached in memory the first time you ask for them. Any subsequent calls will read the data from memory instead of reading them from disk. Additionally, SQL Server caches compiled execution plans and reuses them for new queries.
Each time you restart SQL Server you start with empty memory buffers and an empty execution plan cache, so the first query you execute will be significantly slower.
In order to get meaningful results, you need to clear the buffer and execution plan cache using these commands:
DBCC FREEPROCCACHE will clean the execution plan cache, forcing new queries to be recompiled
DBCC DROPCLEANBUFFERS will clean the memory buffers, forcing SQL Server to reload data from disk
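Put together, a minimal sketch for getting cold-cache timings (run this only on a test system, since every subsequent query is slower until the caches warm up again):
DBCC FREEPROCCACHE;     -- clear the execution plan cache
DBCC DROPCLEANBUFFERS;  -- clear the buffer pool so data is reread from disk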

query hangs, but ok if I update statistics

I've got an issue where I have a complicated SQL query that occasionally hangs and doesn't execute on MS SQL Server. However, when I run UPDATE STATISTICS on the tables involved in the query, the query executes normally.
Any ideas or pointers on the cause?
Thanks!
SQL Server creates an "execution plan" that uses the statistics to determine an optimal order in which to filter the data and reduce access to the database tables.
This execution plan is stored in the plan cache and is reused as long as the database stays online, the statistics are not rebuilt, and the query is not modified.
When you rebuild the indexes, the statistics are updated as well.
As a result, the stored execution plan for your query is no longer considered optimal and will not be used any more.
I expect SQL Server also closes unused locks and transactions for the table before rebuilding the index, but that is undocumented behavior.
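For reference, the statement the question refers to (a minimal sketch; the table name dbo.ProblemTable is hypothetical, and WITH FULLSCAN trades scan time for more accurate statistics):
-- Refresh statistics on one table involved in the query
UPDATE STATISTICS dbo.ProblemTable WITH FULLSCAN;
-- Or refresh out-of-date statistics across the current database
EXEC sp_updatestats;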

how to force a stored procedure be pre-cached and stay in memory?

Stored procedures are compiled on first use.
There are options to clear cache:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
-- To verify whether the cache is emptied:
-- DBCC PROCCACHE
There are also options to force a recompile or to reduce recompilations.
But is it possible to force frequently used stored procedures' execution plans to be pre-cached and to stay in memory?
I know how to do it from ADO.NET, i.e. from outside of SQL Server, but this question is about how to do it inside SQL Server - launched with the start of SQL Server itself.
(*) For example, I see in SSMS Activity Monitor a running process (Task State: RUNNING, Command: SELECT) that is continuously executing T-SQL (according to Profiler) in the context of the tempdb database, even though SQL Server Agent is disabled and SQL Server is not loaded by anything; see "Details of session 54" in "Where are all those SQL Server sessions from?".
How would I create a similar resident process (or rather, one auto-started when the SQL Server service or session starts) that periodically recycles the stored procedure?
Related question:
Stored procedure executes slowly on first run
Update:
Maybe I should have split this question in two, but my main curiosity is how to have a periodic/looping activity with SQL Server Agent disabled.
How was it done with the RUNNING SELECT session mentioned above (*)?
Update2:
Frequently I observe considerable delays while executing stored procedures that query very small amounts of data, delays which cannot be explained by the need to read huge amounts of data.
Can we consider this, considerable delays on insignificantly small data, as part of the context of this question?
Just execute it from a script. You could do this after any SQL Server restart. If the procedures are frequently used, it shouldn't be much of a problem after that.
Seems like this question eventually got answered in:
Can I get SQL Server to call a stored proc every n seconds?
Update: These tips will do the trick:
Keeping data available in the SQL Server data cache with PINTABLE
Automatically Running Stored Procedures at SQL Server Startup
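A minimal sketch of the second tip, the startup-procedure route (the procedure name dbo.WarmCache is hypothetical; a startup procedure must live in the master database and runs automatically each time the service starts):
USE master;
GO
-- Mark the procedure to run automatically at every SQL Server startup
EXEC sp_procoption @ProcName = N'dbo.WarmCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
The warm-up procedure itself can simply EXEC the frequently used stored procedures so their plans are compiled and cached before the first real caller arrives.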

How can I clear the SQL Server query cache?

I've got a simple query running against SQL Server 2005
SELECT *
FROM Table
WHERE Col = 'someval'
The first time I execute the query it can take > 15 seconds. Subsequent executions come back in < 1 second.
How can I get SQL Server 2005 to stop using cached results? I've tried running
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
But this seems to have no effect on the query speed (still < 1 sec).
Here is a good explanation; check it out:
http://www.mssqltips.com/tip.asp?tip=1360
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
From the linked article:
If all of the performance testing is conducted in SQL Server the best approach may be to issue a CHECKPOINT and then issue the DBCC DROPCLEANBUFFERS command. Although the CHECKPOINT process is an automatic internal system process in SQL Server and occurs on a regular basis, it is important to issue this command to write all of the dirty pages for the current database to disk and clean the buffers. Then the DBCC DROPCLEANBUFFERS command can be executed to remove all buffers from the buffer pool.
Eight different ways to clear the plan cache
1. Remove all elements from the plan cache for the entire instance
DBCC FREEPROCCACHE;
Use this to clear the plan cache carefully. Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. This can cause a sudden, temporary decrease in query performance.
2. Flush the plan cache for the entire instance and suppress the regular completion message
"DBCC execution completed. If DBCC printed error messages, contact your system administrator."
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
3. Flush the ad hoc and prepared plan cache for the entire instance
DBCC FREESYSTEMCACHE ('SQL Plans');
4. Flush the ad hoc and prepared plan cache for one resource pool
DBCC FREESYSTEMCACHE ('SQL Plans', 'LimitedIOPool');
5. Flush the entire plan cache for one resource pool
DBCC FREEPROCCACHE ('LimitedIOPool');
6. Remove all elements from the plan cache for one database (does not work in SQL Azure)
-- Get the DBID from the database name first
DECLARE @intDBID INT;
SET @intDBID = (SELECT [dbid]
                FROM master.dbo.sysdatabases
                WHERE name = N'AdventureWorks2014');
DBCC FLUSHPROCINDB (@intDBID);
7. Clear plan cache for the current database
USE AdventureWorks2014;
GO
-- New in SQL Server 2016 and SQL Azure
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
8. Remove one query plan from the cache
USE AdventureWorks2014;
GO
-- Run a stored procedure or query
EXEC dbo.uspGetEmployeeManagers 9;
-- Find the plan handle for that query
-- OPTION (RECOMPILE) keeps this query from going into the plan cache
SELECT cp.plan_handle, cp.objtype, cp.usecounts,
DB_NAME(st.dbid) AS [DatabaseName]
FROM sys.dm_exec_cached_plans AS cp CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE OBJECT_NAME (st.objectid)
LIKE N'%uspGetEmployeeManagers%' OPTION (RECOMPILE);
-- Remove the specific query plan from the cache using the plan handle from the above query
DBCC FREEPROCCACHE (0x050011007A2CC30E204991F30200000001000000000000000000000000000000000000000000000000000000);
Source 1 2 3
Note that neither DBCC DROPCLEANBUFFERS; nor DBCC FREEPROCCACHE; is supported in SQL Azure / SQL Data Warehouse.
However, if you need to reset the plan cache in SQL Azure, you can alter one of the tables in the query (for instance, just add and then remove a column); this has the side effect of removing the plan from the cache. A sketch follows below.
I personally do this as a way of testing query performance without having to deal with cached plans.
More details about SQL Azure Procedure Cache here
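A sketch of that alter-table trick (the table and column names are hypothetical; any schema change to a referenced table invalidates cached plans that use it):
-- Adding and then dropping a throwaway column forces plan invalidation
ALTER TABLE dbo.MyTable ADD PlanCacheBust INT NULL;
ALTER TABLE dbo.MyTable DROP COLUMN PlanCacheBust;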
While the question is a bit old, this might still help. I'm running into similar issues, and using the option below has helped me. I'm not sure if it is a permanent solution, but it fixes things for now.
OPTION (OPTIMIZE FOR UNKNOWN)
Your query will then look like this:
select * from Table where Col = 'someval' OPTION (OPTIMIZE FOR UNKNOWN)
EXEC sys.sp_configure N'max server memory (MB)', N'2147483646'
GO
RECONFIGURE WITH OVERRIDE
GO
What value you specify for the server memory is not important, as long as it differs from the current one.
Btw, the thing that causes the speedup is not the query cache, but the data cache.
