Query hangs, but runs fine if I update statistics - sql-server

Got an issue where I have a complicated SQL query that occasionally hangs and doesn't finish executing on MS SQL Server. However, when I run UPDATE STATISTICS on the tables involved in the query, the query executes normally.
Any ideas or pointers on the cause?
Thanks!

SQL Server creates an "execution plan" that uses the statistics to determine an optimal order in which to filter the data and reduce access to the database tables.
This execution plan is stored in the plan cache and is reused as long as the database stays online, the statistics are not rebuilt, and the query is not modified.
When you rebuild the indexes, the statistics are updated as well.
As a result, the stored execution plan for your query is no longer considered optimal and will not be used any more.
I expect SQL Server also releases unused locks and closes open transactions on the table before rebuilding the index, but that is undocumented behaviour.
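If you want to rule stale statistics in or out without rebuilding indexes, a minimal sketch (the table name is a placeholder) is to refresh the statistics by hand and mark the table's cached plans for recompilation:
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;  -- placeholder table name
EXEC sp_recompile N'dbo.YourTable';             -- plans referencing the table recompile on next use
If the query then runs normally, stale statistics and the stale plan built from them were very likely the cause.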

Related

Does the execution plan depend on data?

I ran my SQL statement twice with 'Include Actual Execution Plan' turned on. However, I got two different execution plans for the same query; the only difference is that I changed the ID.
First, I ran the SQL with an ID that matches only 5 records.
Second, I ran the SQL with an ID that matches 5000+ records.
Does the execution plan change depending on the data?
I'm using SQL Server 2008 R2
Does the execution plan change depending on the data?
Strictly speaking, no. The execution plan will change if the query text changes or if the statistics (which do depend on the data) change.
If the data changes but the statistics do not, then your execution plan will stay the same.
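If you want to see why the optimizer treats the two IDs differently, you can inspect the statistics histogram it bases its row estimates on (the table and index names below are placeholders):
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID) WITH HISTOGRAM;
An ID that falls in a histogram step estimated at around 5 rows will typically get a seek-plus-lookup plan, while one estimated at 5000+ rows may get a scan instead.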

SQL Server 2008 plan cache is almost always empty

In order to investigate query plan usage I'm trying to understand what kind of query plan is stored in the memory.
Using this query:
SELECT objtype AS 'Cached Object Type',
COUNT(*) AS 'Number of Plans',
SUM(CAST(size_in_bytes AS BIGINT))/1048576 AS 'Plan Cache Size (MB)',
AVG(usecounts) AS 'Avg Use Counts'
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY objtype
I got an almost empty plan cache.
There is 128 GB of RAM on the server and ~20% is free, so the SQL Server instance is not constrained by memory.
Basically, I have ad hoc queries (not parameterized, not stored procedures).
But why does SQL Server empty the query plan cache so frequently? What kind of issue do I have?
In the end, only an instance restart solved my problem. Now the plan cache looks much healthier.
If the server isn't under memory pressure, then some other possibilities from the plan caching white paper are listed below.
Are any of these actions scheduled frequently? Do you have auto close enabled? (There's a quick check for auto close in the sketch after the list.)
The following operations flush the entire plan cache, and therefore,
cause fresh compilations of batches that are submitted the first time
afterwards:
Detaching a database
Upgrading a database to SQL Server 2005
Upgrading a database to SQL Server 2008
Restoring a database
DBCC FREEPROCCACHE command
RECONFIGURE command
ALTER DATABASE … MODIFY FILEGROUP command
Modifying a collation using ALTER DATABASE … COLLATE command
The following operations flush the plan cache entries that refer to a
particular database, and cause fresh compilations afterwards.
DBCC FLUSHPROCINDB command
ALTER DATABASE … MODIFY NAME = command
ALTER DATABASE … SET ONLINE command
ALTER DATABASE … SET OFFLINE command
ALTER DATABASE … SET EMERGENCY command
DROP DATABASE command
When a database auto-closes
When a view is created with CHECK OPTION, the plan cache entries of the database in which the view is created are flushed.
When DBCC CHECKDB is run, a replica of the specified database is created. As part of DBCC CHECKDB's execution, some queries against the
replica are executed, and their plans cached. At the end of DBCC
CHECKDB's execution, the replica is deleted and so are the query plans
of the queries posed on the replica.
The following sp_configure/reconfigure operations also clear the procedure cache:
access check cache bucket count
access check cache quota
clr enabled
cost threshold for parallelism
cross db ownership chaining
index create memory
max degree of parallelism
max server memory
max text repl size
max worker threads
min memory per query
min server memory
query governor cost limit
query wait
remote query timeout
user options
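One of the cheapest things to rule out is the auto close option mentioned above, since a database that auto-closes flushes its plan cache entries every time it closes. A quick check, and the fix if it turns out to be on (the database name is a placeholder):
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;   -- lists databases that auto-close

ALTER DATABASE YourDatabase SET AUTO_CLOSE OFF;  -- placeholder database name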
I had the same issue just about a week ago and also posted several questions. Even though I have not actually found the answer to the problem, I've got some insight into the process. As silly as it sounds, a SQL Server service restart helped, but it raised another problem: the recovery process ran for 4 hours. It seems a pretty large transaction was in flight...

When are execution plans generated in SQL Server?

In a stored procedure in SQL Server, when are the actual SQL query plans generated? When the SQL is run for the first time, or when the stored procedure is compiled? Any idea how expensive the generation of query plans is in comparison to Oracle?
When a query is run, SQL Server checks whether an execution plan already exists for that query in the plan cache. If it finds one, it can reuse that plan. If it doesn't find one in the cache, it generates a plan, puts it in the cache ready for subsequent calls to reuse, and then executes the query. So it does this at the time the query is executed.
How long a plan stays in the cache for is down to a number of factors, including:
- how often that plan is used
- how much "value" that plan offers
- memory pressure on the server
So a given query could have an execution plan generated multiple times over the course of a given period, if its plan does not manage to stay in the cache. Also, when SQL Server restarts, the cache is cleared.
There's a good MSDN article on Execution Plan Caching and Reuse
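If you want to watch this behaviour yourself, one way (sketched here with a placeholder text fragment) is to look the query up in the plan cache DMVs; usecounts climbing on repeated executions shows the cached plan being reused:
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourQueryFragment%';  -- placeholder: part of your query's text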

A T-SQL query executes in 15s on SQL 2005, but hangs in SSRS (no changes)?

When I execute a T-SQL query, it executes in 15 seconds on SQL Server 2005.
SSRS was working fine until yesterday. I had to kill the report after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query from SSRS, then look in the Activity Monitor in Management Studio. See if the query is currently blocked, and if so, what it is blocked on.
Alternatively, you can use sys.dm_exec_requests and check the same thing without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, then SSRS is probably not at fault and something in your environment is blocking the query's execution. If on the other hand the query is making progress (the wait_resource changes), then it is simply executing slowly and it's time to check its execution plan.
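A minimal sketch of that check (the session id is a placeholder for whichever session is running the report's query):
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE session_id = 53;  -- placeholder: the SSRS session's id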
Have you tried making the query a stored procedure to see if that helps? That way execution plans are cached.
Updated: You could also make the query a view to achieve the same effect.
Also, SQL Profiler can help you determine what is being executed. This will let you see whether the SQL itself is the cause of the issue, or whether Reporting Services is slow rendering the report (i.e. not fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
Does any of that apply?
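One way to check is to compare the relevant SET options of your own Management Studio session with those of the SSRS connection via sys.dm_exec_sessions (the session ids are placeholders):
SELECT session_id, ansi_nulls, arithabort, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id IN (53, 54);  -- placeholders: your session and the SSRS session
If a setting such as arithabort differs between the two, that would fit the symptoms described above.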
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problem like this is to get an execution plan. You can either get this by running a SQL Profiler trace with the Showplan XML event, or, if that isn't possible (you probably shouldn't do this on loaded production servers), you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)

How often do you update statistics in SQL Server 2000?

I'm wondering whether updating statistics has helped you before, and how you knew it was time to update them.
exec sp_updatestats
Yes, updating statistics can be very helpful if you find that your queries are not performing as well as they should. This is evidenced by inspecting the query plan and noticing when, for example, table scans or index scans are being performed instead of index seeks. All of this assumes that you have set up your indexes correctly.
There is also the UPDATE STATISTICS command, but I've personally never used that.
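For reference, a sketch of the UPDATE STATISTICS syntax (table and index names are placeholders); unlike sp_updatestats, which sweeps the whole database, it can target a single table or even a single index and control the sampling rate:
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN                 -- one table, full scan for accuracy
UPDATE STATISTICS dbo.YourTable IX_YourIndex WITH SAMPLE 50 PERCENT  -- one index, sampled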
It's common to add your statistics update to a maintenance plan (as in an Enterprise Manager-defined Maintenance plan). That way it happens on a schedule - daily, weekly, whatever.
SQL Server 2000 uses statistics to make good decisions about query execution so they definitely help.
It's a good idea to rebuild your indexes at the same time (DBCC DBREINDEX and DBCC INDEXDEFRAG).
If you rebuild indexes, then the statistics for those indexes are automatically rebuilt.
If your timeframes allow, then running UPDATE STATISTICS as part of a maintenance plan is a good idea, as frequently as nightly (if your indexes are being rebuilt less frequently than that).
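On SQL Server 2000, the rebuild and defragment options mentioned above look roughly like this (the table name and index id are placeholders); note that DBCC DBREINDEX rebuilds the indexes and so refreshes their statistics as a side effect:
DBCC DBREINDEX ('dbo.YourTable')          -- rebuild all indexes on the table
DBCC INDEXDEFRAG (0, 'dbo.YourTable', 1)  -- defragment index id 1 in the current database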
SQL Server: to determine whether out-of-date statistics are the cause of a query performing poorly, turn on 'Query -> Display Estimated Execution Plan' (Ctrl+L) in Management Studio and run the query. Open another window, paste in the same query, turn on 'Query -> Include Actual Execution Plan' (Ctrl+M), and re-run the query. If the execution plans differ significantly, the statistics are most likely out of date.
Updating statistics becomes necessary after the following events:
- Records are inserted into your table
- Records are deleted from your table
- Records are updated in your table
If you have a large database with millions of records that gets lots of writes per day, you should probably find an off-peak time to schedule index updates.
Also, you need to consider your type of traffic. If you have a lot (millions) of records in tables with many foreign key dependencies and a larger proportion of writes than reads, you might want to consider turning off automatic statistics recomputation (note: this feature will be removed in a future version of SQL Server, but for SQL Server 2000 you should be OK). This tells the engine not to recompute statistics on every INSERT, DELETE, or UPDATE, and makes those operations much more performant.
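On SQL Server 2000 that switch can be made per table; a minimal sketch with a placeholder table name:
EXEC sp_autostats 'dbo.YourTable', 'OFF'  -- stop automatic statistics updates on this table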
Indexes are no laughing matter. They are the heart and soul of a performant database.
