I've been trying to diagnose a performance issue in my database and have googled a lot about MAXDOP. I have seen in many places that ActualNumberOfRows, ActualRebinds, etc. are shown in the Properties view, but the first thing I see is DefinedValues.
After running the actual execution plan, I right-click an operator (an Index Scan, for example) and expect to see these fields so I can determine how rows are distributed among threads.
I am using SQL Server 2005 Enterprise.
Include the Actual Execution Plan and, in the plan, click on (or hover over) one of the arrows between operators; there you can see the Actual Number of Rows.
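If the Properties window isn't cooperating, a script-based alternative (a sketch; the table name below is just a placeholder for your own query) is SET STATISTICS XML, which returns the actual plan as XML. In each operator's RunTimeInformation element, the RunTimeCountersPerThread entries break ActualRows down per thread, which is exactly the row distribution you're after:

    SET STATISTICS XML ON;
    GO
    -- Run the query you are investigating; dbo.BigTable is hypothetical.
    SELECT COUNT(*) FROM dbo.BigTable;
    GO
    SET STATISTICS XML OFF;
    GO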
I'm pretty new to Azure and cloud computing in general and would like your help in figuring out an issue.
The issue was first seen when a web page started timing out because of the SQL timeout, which is set to 30 seconds.
The first thing I did was connect to the production Azure database using SQL Server Management Studio 2014.
I ran the stored procedure used by the slow page, and it returned in under a second. This confused me: what could be causing the issue?
By accident I also tried running the same query in the Azure SQL query editor and was shocked that it took 29 seconds.
My main question is: why is there a difference between running the query in the Azure SQL query editor versus Management Studio? It is the exact same database.
DTU usage is at 98%, and I'm thinking there is a performance issue with the stored procedure, but I first want to know why the query editor runs it so much slower than Management Studio.
The current Azure database has 50 DTUs.
Two guesses (posting query plans will help get you an answer for situations like this):
SQL Server has various session-level settings. For example, there is one that determines whether ANSI_NULLS behavior is used (versus the legacy behavior from very old versions of SQL Server). There are others for how identifiers are quoted, and similar. For legacy reasons, some drivers have different default settings. These different settings can affect which query plan gets chosen. They won't always impact performance, but there is a chance that you get a scan instead of a seek on some query of interest to you.
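One way to check this (a sketch; run it from each client you are comparing) is to look at the session's effective settings in sys.dm_exec_sessions:

    -- Compare these values between the SSMS session and the query editor
    -- session; @@SPID is the current session's id.
    SELECT ansi_nulls, quoted_identifier, arithabort,
           ansi_padding, ansi_warnings
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID;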
The other main possible path for explaining this kind of issue is that you have a parameter sniffing difference. SQL's optimizer will peek into the parameter values used to pick a better plan (hoping that the value will represent the average use case for future parameter values). Oracle calls this bind peeking - SQL calls it parameter sniffing. Here's the post I did on this some time ago that goes through some examples:
https://blogs.msdn.microsoft.com/queryoptteam/2006/03/31/i-smell-a-parameter/
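To make that concrete, here is a minimal sketch of a sniffing-prone procedure (all object names are hypothetical); the plan compiled for the first parameter value seen gets reused for every later value, even when a very different plan would suit them better:

    CREATE PROCEDURE dbo.GetOrders
        @CustomerId int
    AS
    BEGIN
        -- If the first caller passes a rare @CustomerId, a seek-based plan
        -- is cached and then reused for customers with millions of rows.
        SELECT OrderId, OrderDate
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;
        -- One blunt workaround is OPTION (RECOMPILE) on the statement,
        -- trading extra compile time for a fresh plan per execution.
    END;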
I recommend you do your experiments and then look at the query store to see if there are different queries or different plans being picked. You can learn about the query store and the SSMS UI here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
For this specific case, please note that the query store exposes those different session-level settings using "context settings". Each unique combination of context settings will show up as a different context settings id, and this will inform how query texts are interpreted. In query store parlance, the same query text can be interpreted different ways under different context settings, so two different context settings for the same query text would imply two semantically different queries.
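A sketch of what that looks like in the query store catalog views; more than one context_settings_id for the same query_text_id suggests your two clients are producing semantically different queries:

    SELECT q.query_id, q.query_text_id, q.context_settings_id,
           cs.set_options
    FROM sys.query_store_query AS q
    JOIN sys.query_context_settings AS cs
        ON cs.context_settings_id = q.context_settings_id
    ORDER BY q.query_text_id, q.context_settings_id;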
Hope that helps - best of luck on your perf problem.
In December 2015 I deployed a small Azure web app (Web API, 1 controller, 2 REST endpoints) along with an Azure SQL database (1 table, 1.7M rows, 3 stored procedures).
I could call my rest endpoints and get data back within a few seconds. Happy days.
Now I make the same call and my app throws a 500 error. Closer examination shows the SQL access timed out.
I can open the db (using Visual Studio data tools), run the queries, and call the stored procedures. For my main sproc, execution time is about 50 seconds - way too long for the app to wait.
The data in the table has not changed since deployment, and the app and db have been untouched for the last few months, so how come it ran OK back in December but fails miserably now?
All help greatly appreciated.
The Query Store is available in SQL Server 2016 and Azure SQL Database. It is a sort of "flight recorder" which records a history of query executions.
Its purpose is to identify what has gone wrong, when a query execution plan suddenly becomes slow. Unlike DMVs, the Query Store data is persisted in tables, so it isn't lost when SQL Server is restarted, and can be retained for months.
It has four reports in SSMS. One of them, Top Resource Consuming Queries, shows in its top left pane a bar graph where each bar represents a query, ordered by descending resource usage.
You can select a particular query of interest, then the top right pane shows a timeline with points for each execution. In this example, you can see that the query has got much worse, because the second dot is showing much higher resource usage. (Actually I forced this to happen by deliberately dropping a covering index.)
Then you can click on a particular dot and the graphical execution plan is displayed in the lower pane. So in this example, I can compare the two plans to see what has changed. The graphical execution plan is telling me there is a missing index (this feature in itself is not new), and if I clicked on the previous dot this message wouldn't appear. So that's a pretty good clue as to what's gone wrong!
The Regressed Queries report has the same format, but it shows only queries that have "regressed" or got worse. So it is ideal for troubleshooting.
I know this doesn't resolve your present situation, unless you happened to have Query Store enabled. However it could be very useful for the future and for other people reading this.
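Enabling it takes only a couple of statements (the database name and retention value below are placeholders; adjust to your environment):

    ALTER DATABASE [YourDb] SET QUERY_STORE = ON;
    ALTER DATABASE [YourDb] SET QUERY_STORE
        (OPERATION_MODE = READ_WRITE,
         CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 90));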
See MSDN > Monitoring Performance By Using the Query Store: https://msdn.microsoft.com/en-GB/library/dn817826.aspx
I recently used a free SQL profiler product from AnjLab that was great and allowed me to sort the trace results even while the trace was running. The next time, I tried to do this in the SQL Profiler that actually comes with SQL Server, but I didn't see a way to sort the trace results. Am I missing something, or does the profiler that comes with SQL Server really not let you do that?
When the trace is stopped, you can go to File -> Properties -> Events Selection -> Organize Columns, set up "Grouping" by the desired sort column(s), and then select "Grouped View" rather than "Aggregated View" in the shortcut menu to get the results displayed sorted.
The grouping columns don't appear to be alterable in a running trace, however, as the buttons are greyed out.
I'm not aware of a way to sort SQL Profiler output while the trace is running.
You can set up "groups" before you start a trace that include some sorting, but they're a bit clunky.
What I usually do is to have SQL Profiler save the results in a table, and do my analysis from there, using T-SQL.
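For example (a sketch, assuming the trace was saved to a table called dbo.MyTrace), finding the slowest batches is then a simple query:

    SELECT TOP (50)
           TextData, Duration, CPU, Reads, Writes, StartTime
    FROM dbo.MyTrace
    WHERE EventClass = 12   -- SQL:BatchCompleted
    ORDER BY Duration DESC;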
When I execute a T-SQL query directly, it completes in 15 seconds on SQL Server 2005.
SSRS was working fine until yesterday. Now I have to kill the report after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query in SSRS, then look at the Activity Monitor in Management Studio. See whether the query is currently blocked and, if so, what it is blocked on.
Alternatively you can use sys.dm_exec_requests and check the same thing, without the user interface getting in the way. Look at the session executing the query from SSRS, and check its blocking_session_id, wait_type, wait_time, and wait_resource columns. If you find that the query is blocked, then SSRS is probably not at fault and something in your environment is blocking the query execution. If, on the other hand, the query is making progress (the wait_resource changes), then it just executes slowly and it's time to check its execution plan.
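A sketch of that check (the session id is a placeholder; use the one running your report query):

    DECLARE @spid int;
    SET @spid = 57;   -- hypothetical session id of the SSRS query
    -- Poll this while the report runs: a non-zero blocking_session_id means
    -- it is blocked; a changing wait_resource means it is making progress.
    SELECT session_id, status, command, blocking_session_id,
           wait_type, wait_time, wait_resource
    FROM sys.dm_exec_requests
    WHERE session_id = @spid;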
Have you tried making the query a stored procedure to see if that helps? This way execution plans are cached.
Update: You could also make the query a view to achieve the same effect.
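A minimal sketch of the stored-procedure route (all names here are hypothetical stand-ins for your report query):

    CREATE PROCEDURE dbo.GetReportData
        @FromDate datetime,
        @ToDate   datetime
    AS
    BEGIN
        -- The plan for this statement is compiled once and reused.
        SELECT OrderId, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE OrderDate >= @FromDate
          AND OrderDate <  @ToDate;
    END;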
Also, SQL Profiler can help you determine what is being executed. This will let you see whether the SQL is the cause of the issue or whether Reporting Services is slow rendering the report (i.e., the time is not spent fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
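An illustrative sketch of that situation (table and index names are made up):

    -- Creating an index on a computed column requires specific SET options
    -- (ANSI_NULLS, QUOTED_IDENTIFIER, ARITHABORT, etc. all ON).
    CREATE TABLE dbo.Demo
    (
        Id        int IDENTITY PRIMARY KEY,
        Price     decimal(10, 2) NOT NULL,
        Qty       int NOT NULL,
        LineTotal AS (Price * Qty) PERSISTED
    );
    CREATE INDEX IX_Demo_LineTotal ON dbo.Demo (LineTotal);
    -- A session whose SET options don't match (e.g. ARITHABORT OFF on
    -- SQL 2005) can't use the stored value or the index for LineTotal
    -- and may recompute the expression per row.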
Does any of that apply?
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problems like this is to get an execution plan. You can either get this by running an SQL Profiler Trace with the ShowPlan Xml event, or if this isn't possible (you probably shouldn't do this on loaded production servers) you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
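If you do go the DMV route, a sketch of pulling the cached plan looks like this (the LIKE filter is a placeholder for a fragment of your query text):

    SELECT TOP (10)
           st.text, qs.execution_count, qs.total_worker_time, qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE st.text LIKE '%YourQueryText%'
    ORDER BY qs.total_worker_time DESC;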
Are there SQL commands that I could use to extract performance monitoring data from MS SQL 2005, such as:
transactions per second
page reads/writes
connections (@@CONNECTIONS gives the total, but what about current?)
physical reads
locks and blocks
other counters that might be interesting?
You want to look at Dynamic Management Views (DMVs), introduced with SQL Server 2005.
This is a really great document from MS that gives you an overview of how to use DMVs to troubleshoot performance issues:
http://download.microsoft.com/download/1/3/4/134644fd-05ad-4ee8-8b5a-0aed1c18a31e/TShootPerfProbs.doc
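Most of the counters you list are also exposed directly in T-SQL through sys.dm_os_performance_counters (a sketch; note that the '/sec' counters are cumulative, so sample twice and take the difference):

    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Transactions/sec', 'Page reads/sec',
                           'Page writes/sec', 'User Connections',
                           'Lock Waits/sec');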
The best way of seeing what's going on under the hood in SQL Server is to use the Performance Monitor built into Windows: click Admin Tools -> Performance. If you haven't used it before, the trick is to start it, then click the + icon at the top centre of the window; a dialog opens with hundreds of different measures that you can then chart, watch, or log.
SQL Server has loads of counters you can check out; what all the data means is of course a different question. This solution doesn't integrate with T-SQL or Management Studio, but it is the best way of finding out what's going on.
A great place to learn how to performance tune SQL Server is Brent Ozar's website.
It includes details of how to use Performance Monitor and DMVs, and how to mine and interpret the resulting data.
http://www.brentozar.com/sql-server-performance-tuning/