I'm having a problem with an ad-hoc query that processes a fairly large amount of data. Upon executing the query, its status immediately goes into the suspended state. It stays suspended for around 25 minutes and then completes execution.
I have a mirror environment with SQL 2000 and the same query executes in around 2 minutes and never goes into the suspended state.
@@version =
Microsoft SQL Server 2005 - 9.00.3068.00 (Intel IA-64) Feb 26 2008 21:28:22 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)
Perhaps the statistics are out of date and need updating.
Update them, but it's better to rebuild the indexes at the same time (rebuilding an index also refreshes its statistics with a full scan).
Or, you don't have any. Are stats set to create and update automatically?
I've seen cases where they're switched off because someone does not understand what they are for or how updates happen.
Note: the sampling rate of a statistics update is based on the previous update, so if you last sampled at 100%, the next update may take some time.
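If you want to check the settings and refresh the stats yourself, something along these lines works (the database and table names are just placeholders):
-- Check whether auto create / auto update statistics are enabled (placeholder database name)
SELECT
    DATABASEPROPERTYEX('YourDatabase', 'IsAutoCreateStatistics') AS auto_create_stats,
    DATABASEPROPERTYEX('YourDatabase', 'IsAutoUpdateStatistics') AS auto_update_stats;

-- Refresh statistics on one table with a full scan (slow but thorough)
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;

-- Or refresh all statistics in the current database with the default sampling
EXEC sp_updatestats;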
What happens when you run the query twice? Is it quicker the second time?
It's hard to tell from the limited information, but I'd be curious to know what's happening from a performance perspective on the server while the query is running. You can capture performance metrics with Perfmon, and I've got a tutorial about it here:
http://www.brentozar.com/perfmon
While the query's running, what do the statistics for each of those counters look like? If you capture the statistics as described in that article, you can email 'em to me at brento#brentozar.com and I'll take a look at 'em to see what's going on.
Another thing that'd help is the execution plan of the query. Go into SQL Server Management Studio, put the query in, and click Query, Display Estimated Execution Plan. Right-click anywhere on the plan and save it as a file, and then other people can see what the plan looks like.
Then ideally, click Query, Include Actual Execution Plan, run the query, and then go to the Execution Plan tab. Save that one too. If you post the two plans (or email 'em to me) you'll get better answers about what's going on.
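If you'd rather script it than click through the menus, you can also get the actual plan (with runtime statistics) back as XML; the query here is just a placeholder for your slow one:
-- Returns the actual execution plan (with runtime statistics) as XML alongside the results
SET STATISTICS XML ON;

SELECT TOP (10) *       -- placeholder: substitute the slow query here
FROM dbo.YourBigTable;

SET STATISTICS XML OFF;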
I have a requirement to generate CPU usage reports for my SQL Server for the previous 7 days. I will use a graph to represent it.
Also, I have to keep track of the top 10 queries which consumed the most CPU each day.
I found the post below, but I have a few doubts.
CPU utilization by database?
Doubt:
How will I know what the overall CPU usage was yesterday? Do I have to add up the AvgCPU time for all the distinct queries that ran yesterday?
There is no reliable way of getting CPU usage per day or for the last 5 days. SQL Server exposes the columns below:
SELECT
    creation_time,
    last_worker_time,
    total_worker_time,
    execution_count,
    last_execution_time
FROM sys.dm_exec_query_stats;
Looking at what those columns report on my test instance, we can't reliably get a count of how many times a particular query was executed on a particular day. Moreover, all of this data is reset if you restart SQL Server.
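If a point-in-time top 10 by CPU (since the plans were cached, not per day) is good enough, a query like this against the same DMV will do; TOP 10 and the ordering are just my choices:
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    (qs.total_worker_time / qs.execution_count) / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;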
If you really want to show data on a daily basis, you could use Perfmon. Here are some tutorials which may help you:
1. Collecting Performance Data into a SQL Server Table
2. Using PerfMon for SQL Server Reporting Services Performance Management
You may also take a look at SQL Server data collection sets (the Data Collector). They are easy to deploy, make it easy to keep the necessary data (as long as it is stored in a dedicated database), and they fit your requirement for the top 10 CPU-expensive queries.
You can also slightly modify the T-SQL for the collector agent and the target tables on the collector server to obtain some extra CPU information if you need it.
In December 2015 I deployed a small Azure web app (Web API, 1 controller, 2 REST endpoints) along with an Azure SQL database (1 table, 1.7M rows, 3 stored procedures).
I could call my rest endpoints and get data back within a few seconds. Happy days.
Now I make the same call and my app throws a 500 error. Closer examination shows the SQL access timed out.
I can open the db (using Visual Studio data tools) and run the queries and call the stored procedures. For my main sproc, execution time is about 50 seconds, which is way too long for the app to wait.
The data in the table has not changed since deployment, and the app and db have been untouched for the last few months, so how come it ran OK back in December but fails miserably now?
All help greatly appreciated.
The Query Store is available in SQL Server 2016 and Azure SQL Database. It is a sort of "flight recorder" which records a history of query executions.
Its purpose is to identify what has gone wrong, when a query execution plan suddenly becomes slow. Unlike DMVs, the Query Store data is persisted in tables, so it isn't lost when SQL Server is restarted, and can be retained for months.
It has four reports in SSMS. In the Top Resource Consuming Queries report, the top left pane shows a bar graph where each bar represents a query, ordered by descending resource usage.
You can select a particular query of interest, then the top right pane shows a timeline with points for each execution. In this example, you can see that the query has got much worse, because the second dot is showing much higher resource usage. (Actually I forced this to happen by deliberately dropping a covering index.)
Then you can click on a particular dot and the graphical execution plan is displayed in the lower pane. So in this example, I can compare the two plans to see what has changed. The graphical execution plan is telling me there is a missing index (this feature in itself is not new), and if I clicked on the previous dot this message wouldn't appear. So that's a pretty good clue as to what's gone wrong!
The Regressed Queries report has the same format, but it shows only queries that have "regressed" or got worse. So it is ideal for troubleshooting.
I know this doesn't resolve your present situation, unless you happened to have Query Store enabled. However it could be very useful for the future and for other people reading this.
See MSDN > Monitoring Performance By Using the Query Store: https://msdn.microsoft.com/en-GB/library/dn817826.aspx
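If you don't have Query Store enabled yet, turning it on and pulling the top consumers yourself looks roughly like this (the database name is a placeholder; the views are the documented sys.query_store_* catalog views):
-- Turn Query Store on for the database (placeholder name)
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;

-- Top queries by average duration, straight from the Query Store views
SELECT TOP (10)
    qt.query_sql_text,
    p.plan_id,
    rs.avg_duration,
    rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;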
We have a site in development that when we deployed it to the client's production server, we started getting query timeouts after a couple of hours.
This was with a single user testing it, and on our server (which is identical in terms of SQL Server version number: 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics and the problem magically went away: the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and again the problem went away. We've checked the database properties and, sure enough, auto update statistics is TRUE.
As a temporary measure, we've set a task to update stats periodically, but clearly, this isn't a good solution.
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their db server and it's not what I would regard as normal. Although they have SQL 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (the last of which is where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Sometimes it guesses wrong and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: plan and performance changed dramatically.
A statistics update will invalidate the plan, so the query will be recompiled and cached the next time it is used.
The workarounds are one of the following (a minimal sketch is shown after the list):
parameter masking
use the OPTIMIZE FOR UNKNOWN hint
duplicate "default"
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and Google search on SO
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair:
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. What happens is that when you update statistics, all plans get invalidated and the bad cached plan gets evicted. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans to start with. We can get into lengthy technical and philosophical arguments about whether a query processor should create a bad plan in the first place, but the thing is that, when applications are written in a certain way, bad plans can happen. The typical example is having a WHERE clause like (@somevariable IS NULL) OR (somefield = @somevariable). Ultimately 99% of bad plans can be traced back to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you have identified the bad plan, you can take corrective measures, which vary from query to query.
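For example (ordering by total_elapsed_time here; swap in total_logical_reads as needed):
SELECT TOP (20)
    qs.total_elapsed_time,
    qs.total_logical_reads,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;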
When I execute a T-SQL query, it executes in 15 seconds on SQL Server 2005.
SSRS was working fine until yesterday. Now I have to kill it after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query from SSRS, then look at the Activity Monitor in Management Studio. See whether the query is blocked, and if so, what it is blocked on.
Alternatively, you can use sys.dm_exec_requests and check the same thing without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, then SSRS is probably not at fault and something in your environment is blocking the query execution. If, on the other hand, the query is making progress (the wait_resource changes), then it just executes slowly and it's time to check its execution plan.
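A sketch of that check; the session id is a placeholder for whatever Activity Monitor shows for the report's query:
-- Look at the request belonging to the session that is running the report query
SELECT
    r.session_id,
    r.status,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time,
    r.wait_resource,
    r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;    -- placeholder: the report query's session id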
Have you tried making the query a stored procedure to see if that helps? This way execution plans are cached.
Updated: You could also make the query a view to achieve the same effect.
Also, SQL Profiler can help you determine what is being executed. This will allow you to see whether the SQL is the cause of the issue, or whether it's Reporting Services rendering the report (i.e. not fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
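To see whether two connections really differ, you can compare their SET options; these columns exist on sys.dm_exec_sessions, but which combination matters for your persisted column is the part you'd need to verify:
SELECT
    session_id,
    program_name,
    ansi_nulls,
    ansi_padding,
    ansi_warnings,
    arithabort,
    concat_null_yields_null,
    quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;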
Does any of that apply?
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problem like this is to get an execution plan. You can either get this by running a SQL Profiler trace with the Showplan XML event, or, if this isn't possible (you probably shouldn't do this on loaded production servers), you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
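Pulling the cached plan from the DMVs looks roughly like this; the LIKE filter is a placeholder for whatever identifies your query:
SELECT
    st.text AS query_text,
    qp.query_plan,
    qs.execution_count,
    qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%YourQueryFragment%';   -- placeholder filter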
What techniques do you use? How do you find out which jobs take the longest to run? Is there a way to find out the offending applications?
Step 1:
Install the SQL Server Performance Dashboard.
Step 2:
Profit.
Seriously, you do want to start with a look at that dashboard. More about installing and using it can be found here and/or here
To identify problematic queries, start the Profiler and select the following events:
SQL:BatchCompleted
SQL:StmtCompleted
SP:Completed
SP:StmtCompleted
filter the output, for example by
Duration > x ms (for example 100 ms; depends mainly on your needs and the type of system)
CPU > y ms
Reads > r
Writes > w
Depending on what you want to optimize.
Be sure to filter the output enough that you don't have thousands of data rows scrolling through your window, because that will impact your server's performance!
It's helpful to log the output to a database table so you can analyse it afterwards.
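If you saved the trace to a file rather than directly to a table, you can load it into a table afterwards; the file path and table name below are placeholders:
-- Import a saved trace file into a table for later analysis
SELECT *
INTO dbo.ProfilerTraceData                              -- placeholder table name
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);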
It's also helpful to run the Windows System Monitor in parallel to view CPU load, disk I/O and some SQL Server performance counters. Configure System Monitor to save its data to a file.
Then you have to get a production-typical query load and data volume on your database to see meaningful values with Profiler.
After getting some output from Profiler, you can stop profiling.
Then load the stored data from the profiling table back into Profiler, and use the import menu to import the output from System Monitor; Profiler will correlate the System Monitor output with your SQL Profiler data. That's a very nice feature.
In that view you can immediately identify bottlenecks in your memory, disk or CPU subsystems.
When you have identified some queries you want to optimize, go to Query Analyzer, look at the execution plan, and try to optimize index usage and query design.
I have had good success with the database tuning tools provided inside SSMS or SQL Profiler when working on SQL Server 2000.
The key is to work with a GOOD sample set: trace a portion of the TRUE production workload for analysis; that will get the best overall bang for the buck.
I use the SQL Profiler that comes with SQL Server. Most of the poorly performing queries I've found are not using a lot of CPU but are generating a ton of disk IO.
I tend to put in filters on disk reads and look for queries that tend to do more than 20,000 or so reads. Then I look at the execution plan for those queries which usually gives you the information you need to optimize either the query or the indexes on the tables involved.
I use a few different techniques.
If you're trying to optimize a specific query, use Query Analyzer. Use the tools in there like displaying the execution plan, etc.
For your situation where you're not sure WHICH query is running slowly, one of the most powerful tools you can use is SQL Profiler.
Just pick the database you want to profile, and let it do its thing.
You need to let it run for a decent amount of time (this varies on traffic to your application) and then you can dump the results in a table and start analyzing them.
You are going to want to look at queries that have a lot of reads, or take up a lot of CPU time, etc.
Optimization is a bear, but keep going at it, and most importantly, don't assume you know where the bottleneck is, find proof of where it is and fix it.