Find out which query is increasing DTU in SQL Azure - sql-server

I am using SQL Azure for my app. The DB tier my app was on was working perfectly until recently, and the average DTU usage was under control. But of late, during peak hours, the DTU consistently spikes to 100%. Upgrading to the next tier is an option, but first I want to figure out which query is responsible. Is it possible to find, in Azure, which query made the DTU jump to 100%?

Simply put, DTU is a blend of the CPU, IO, and memory you get based on your service tier. It would be really helpful if there were a column showing how much DTU a query used out of the total DTU, but there isn't. I would deal with this issue in two steps.
Step 1:
Find out which metric is consistently above 90%.
-- This DMV captures resource usage every 15 seconds and retains roughly one
-- hour of history (for up to 14 days of history, use sys.resource_stats in master).
SELECT
AVG(avg_cpu_percent) AS 'Average CPU Utilization In Percent',
MAX(avg_cpu_percent) AS 'Maximum CPU Utilization In Percent',
AVG(avg_data_io_percent) AS 'Average Data IO In Percent',
MAX(avg_data_io_percent) AS 'Maximum Data IO In Percent',
AVG(avg_log_write_percent) AS 'Average Log Write Utilization In Percent',
MAX(avg_log_write_percent) AS 'Maximum Log Write Utilization In Percent',
AVG(avg_memory_usage_percent) AS 'Average Memory Usage In Percent',
MAX(avg_memory_usage_percent) AS 'Maximum Memory Usage In Percent'
FROM sys.dm_db_resource_stats;
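For history beyond that hour, a similar rollup can be run against sys.resource_stats in the master database (5-minute granularity, roughly 14 days of retention). A minimal sketch, assuming a hypothetical database named MyDatabase:
-- Run this while connected to the master database of the logical server;
-- sys.resource_stats aggregates usage in 5-minute windows for ~14 days.
SELECT
AVG(avg_cpu_percent) AS 'Average CPU Utilization In Percent',
MAX(avg_cpu_percent) AS 'Maximum CPU Utilization In Percent',
AVG(avg_data_io_percent) AS 'Average Data IO In Percent',
MAX(avg_data_io_percent) AS 'Maximum Data IO In Percent',
AVG(avg_log_write_percent) AS 'Average Log Write Utilization In Percent',
MAX(avg_log_write_percent) AS 'Maximum Log Write Utilization In Percent'
FROM sys.resource_stats
WHERE database_name = 'MyDatabase' -- hypothetical name; substitute your own
AND start_time > DATEADD(day, -7, GETUTCDATE());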
Step 2:
If any of the above metrics is consistently above 90%, tune the queries that drive it.
For example, if CPU is above 90%, start tracking the queries with the highest CPU usage and tune them, as in the sketch below.
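Here is a sketch of the kind of DMV query that surfaces the top CPU consumers, using the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text views (figures are per cached plan, so they reset when a plan leaves the cache):
-- Top 10 statements by total CPU time since their plans were cached.
SELECT TOP 10
qs.total_worker_time / 1000 AS total_cpu_ms, -- total_worker_time is in microseconds
qs.execution_count,
qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;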
Updated as of 2017-01-17:
SQL Azure introduced Query Performance Insight, which shows the DTU used by each query. You will have to enable Query Store for this to work (see below). Once it is enabled, the Query Performance Insight blade shows the exact DTU usage for each query.
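If Query Store is not already on, enabling it is a one-liner; a sketch, with MyDatabase as a hypothetical database name:
-- Enable Query Store so Query Performance Insight has data to show.
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;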

Related

How can I reduce my high page reads/sec in SQL Server?

I have an extremely high page reads/sec on my SQL Server instance. Memory looks fine (64 GB overall). Most blogs/articles online point to increasing RAM, but what else can I do to reduce these high page reads/sec?
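The counter in question can be read from T-SQL roughly like this (an illustrative sketch; the original post's exact query is not shown here):
-- 'Page reads/sec' here is a cumulative total since the instance started,
-- not a per-second rate.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page reads/sec';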
Per DanGuzman, a counter query like the one above returns a cumulative count since the instance started; use Performance Monitor (perfmon) to observe the actual per-second rate.

Troubleshooting SQL Azure DTUs at 100%

I am performing load testing of my app using Visual Studio load tests.
My tests get data from about 40 tables in a SQL Azure database, with a separate API request for each table.
The load testing simulates around 200 users over 10 minutes, at about 80-100 requests per second.
During the tests, I have seen my SQL Azure server's DTUs touching 100%, which is clearly not a good sign for production performance.
This stays the same even if I scale up my database with more DTUs.
How do I troubleshoot this? Is it a specific query I need to look at, or just a scaling issue to address?
Take a look at the Query Performance Insight blade on your Azure SQL Database. This will show you the top queries that are consuming resources and are good candidates for optimization.
Also check out the Performance Recommendations blade on your Azure SQL Database. This will show indexing recommendations after the Analyzer has enough history to see usage and determine if adding (or removing) indexes will deliver benefits.
If you're using an Azure SQL DB with a Web App, plug in Application Insights. This will give you a lot of visibility into what queries are long-running, and what queries are being executed a bunch of times that could benefit from minor optimization.
Take a look at the Query Store to optimize the top queries showing high resource costs and long running times; see the Query Store documentation to learn more.
Run the following query to see which resources show the most consumption, then identify the top queries using those resources, for example with the sketch after the query.
SELECT
AVG(avg_cpu_percent) AS 'Average CPU Utilization In Percent',
MAX(avg_cpu_percent) AS 'Maximum CPU Utilization In Percent',
AVG(avg_data_io_percent) AS 'Average Data IO In Percent',
MAX(avg_data_io_percent) AS 'Maximum Data IO In Percent',
AVG(avg_log_write_percent) AS 'Average Log Write Utilization In Percent',
MAX(avg_log_write_percent) AS 'Maximum Log Write Utilization In Percent',
AVG(avg_memory_usage_percent) AS 'Average Memory Usage In Percent',
MAX(avg_memory_usage_percent) AS 'Maximum Memory Usage In Percent'
FROM sys.dm_db_resource_stats;
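If data IO turns out to be the hot metric, a sketch along the same lines lists the heaviest readers via sys.dm_exec_query_stats:
-- Top 10 statements by total logical reads since their plans were cached.
SELECT TOP 10
qs.total_logical_reads,
qs.execution_count,
qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;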
Hope this helps.

Are all available DTUs used to execute a query?

I have a non-trivial query.
When I had 10 DTUs for my database, the query took about 17 seconds to execute.
I increased the level to 50 DTUs - now the execution takes 3-4 seconds.
This ratio matches the documentation - more DTUs = faster work.
But!
1. On my PC I can execute the query in 1 second.
2. In the portal statistics I see that I use only 12 DTUs (max DTU percentage = 25%).
In sys.dm_db_resource_stats I see that MAX(avg_cpu_percent) is about 25%, and the other metrics are lower.
So the question is: why does my query take 3-4 seconds to execute?
It can be executed in 1 second, and the server does not use all my DTUs.
How do I make the server use all available resources to execute queries faster?
DTU is a combined measurement of CPU, memory, data I/O and transaction log I/O.
This means that reaching a DTU bottleneck can mean any of those.
This question may help you to measure the different aspects: Azure SQL Database "DTU percentage" metric
And here's more info on DTU: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu
On my PC I can execute the query in 1 sec
We should not compare on-premises computing power with DTUs.
DTU is a combination of the CPU, IO, and memory you get based on your performance tier, so the comparison is not valid.
How to make server use all available resources to exec queries faster?
This is simply not possible: when SQL runs a query, memory is the only constraint that can prevent the query from even starting. The rest of the resources, like CPU and IO speed, scale up or down based on what the query does.
In summary, you have to ensure queries are not constrained by a resource crunch - that they can use all the resources they need and release them when no longer needed.
You will also have to look at wait types and further fine-tune the query; see the sketch below for one way to check waits.
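For the wait-type angle, Azure SQL Database exposes database-scoped waits in sys.dm_db_wait_stats; a minimal sketch:
-- Top waits for this database, cumulative since the stats were last reset.
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_db_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;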
As Bernard Vander Beken mentioned:
DTU is a combined measurement of CPU, memory, data I/O and transaction log I/O.
I'll also add that Microsoft does not share the formula used to calculate DTUs. You mentioned that you are not seeing DTU peg at 100% during query execution, but since we do not know the formula, you may very well be pegging individual components of the DTU without pegging the DTU metric itself.
Azure SQL is a shared environment, and each tenant is throttled to ensure that the minimum SLA is met for all tenants.
What a DTU actually is remains quite fuzzy.
We ran an experiment where we executed a set of benchmarks on machines with the same number of DTUs in different data centers:
http://dbwatch.com/azure-database-performance-measured
It turns out that actual performance varies by up to a factor of 5.
We have also seen instances where the performance of a repeated query on the same database varies drastically.
We provide our database performance benchmarks for free if you would like to compare the instance running on your PC with the instance in the Azure cloud.

Azure SQL monitoring high DTU

We're getting spikes from time to time but can't find what causes them.
How do we monitor Azure SQL DTU usage?
How can I find the high-DTU queries live?
The following will show the 100 most recent resource-usage log entries. As of now, a log entry is created every 15 seconds. (For what is running right now, see the sketch after the query.)
SELECT TOP 100 *
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC
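For the "live" part of the question, currently executing statements and their resource usage can be sampled from sys.dm_exec_requests; a sketch:
-- Currently executing requests, heaviest CPU first.
SELECT r.session_id, r.status, r.cpu_time, r.logical_reads,
r.wait_type, st.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id <> @@SPID -- exclude this monitoring query itself
ORDER BY r.cpu_time DESC;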
Please check the links below, which discuss the Azure SQL Database DTU.
http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/
Azure SQL Database "DTU percentage" metric
http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx
Regards,
Mekh.

Is there a SQL Server performance counter for average execution time?

I want to tune a production SQL server. After making adjustments (such as changing the degree of parallelism) I want to know if it helped or hurt query execution times.
This seems like an obvious performance counter, but for the last half hour I've been searching Google and the counter list in perfmon, and I have not been able to find a performance counter for SQL Server that gives me the average execution time for all queries hitting a server - the SQL Server equivalent of the ASP.NET Request Execution Time.
Does one exist that I'm missing? Is there another effective way of monitoring the average query times for a server?
I don't believe there is a PerfMon counter, but there is a report within SQL Server Management Studio:
Right-click on the database and select Reports > Standard Reports > Object Execution Statistics. This gives you several very good statistics about what's running within the database, how long it's taking, how much memory/IO processing it takes, etc.
You can also run this on the server level across all databases.
You can use Query Analyzer (one of the tools that ships with SQL Server) to see how queries are executed internally, so you can optimize indexing etc. That won't tell you the average, or the round-trip time back to the client. To do that you'd have to log it on the client and analyze the data yourself.
I managed to do it by saving the trace to SQL. With the trace open:
File > Save As > Trace Table
Select the SQL Server instance and table, and once it's imported run:
select avg(duration) from dbo.[YourTableImportName]
You can just as easily compute other stats - max, min, counts, etc. - which makes this a much better way of interrogating the trace results; see the sketch below.
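For example (a sketch against the same imported table):
-- Duration in a saved trace table is recorded in microseconds.
SELECT COUNT(*) AS executions,
AVG(duration) AS avg_duration,
MAX(duration) AS max_duration,
MIN(duration) AS min_duration
FROM dbo.[YourTableImportName];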
Another solution is to run the query multiple times and take the average execution time. Note that this example is PostgreSQL (PL/pgSQL) syntax, not T-SQL:
DO $proc$
DECLARE
    StartTime timestamptz;
    EndTime   timestamptz;
    Delta     double precision;
BEGIN
    StartTime := clock_timestamp();
    FOR i IN 1..100 LOOP
        PERFORM * FROM table_name;  -- runs the query, discards the result
    END LOOP;
    EndTime := clock_timestamp();
    Delta := 1000 * (extract(epoch FROM EndTime) - extract(epoch FROM StartTime)) / 100;
    RAISE NOTICE 'Average duration in ms = %', Delta;
END;
$proc$;
Here it runs the query 100 times:
PERFORM * FROM table_name;
In PL/pgSQL, just replace SELECT with PERFORM when you do not need the result set.
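Since this question is about SQL Server, a rough T-SQL equivalent of the same timing-loop idea might look like this (a sketch; table_name is a placeholder for the query under test):
-- Run the statement 100 times and report the average duration in ms.
DECLARE @start datetime2 = SYSDATETIME();
DECLARE @i int = 1, @dummy int;
WHILE @i <= 100
BEGIN
    -- Stand-in for the query under test; assigning to a variable
    -- forces execution without returning rows to the client.
    SELECT @dummy = COUNT(*) FROM table_name;
    SET @i += 1;
END;
PRINT CONCAT('Average duration in ms = ',
    DATEDIFF(millisecond, @start, SYSDATETIME()) / 100.0);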
Average over what time and for which queries? You need to further define what you mean by "average" or it has no meaning, which is probably why it's not a simple performance counter.
You could capture this information by running a trace, capturing that to a table, and then you could slice and dice the execution times in one of many ways.
It doesn't give exactly what you need, but I'd highly recommend trying the SQL Server 2005 Performance Dashboard Reports, which Microsoft provides as a free download. It includes a report of the top 20 queries and their average execution time, plus a lot of other useful reports (top queries by IO, wait stats, etc.). If you do install it, be sure to take note of where it installs and follow the instructions in the Additional Info section.
The profiler will give you statistics on query execution times and activities on the server. Overall query times may or may not mean very much without tying them to specific jobs and query plans.
Other indicators of performance bottlenecks are resource contention counters (general statistics, latches, locks). You can see these through performance counters. Also looking for large number of table-scan or other operations that do not make use of indexes can give you an indication that indexing may be necessary.
On a loaded server increasing parallelism is unlikely to materially affect performance as there are already many queries active at any given time. Where parallelism gets you a win is on large infrequently run batch jobs such as ETL processes. If you need to reduce the run-time of such a process then parallelism might be a good place to look. On a busy server doing a transactional workload with many users the system resources will be busy from the workload so parallelism is unlikely to be a big win.
You can use Activity Monitor. It's built into SSMS. It will give you real-time tracking of all current expensive queries on the server.
To open Activity Monitor:
In SQL Server Management Studio (SSMS), right-click on the server and select Activity Monitor.
Open Recent Expensive Queries to see CPU Usage, Average Query Time, etc.
Hope that helps.
There are counters in the 'SQL Server: Batch Resp Statistics' group that track SQL batch response times. The counters are bucketed by response-time interval - for example, 0 ms to 1 ms, ..., 10 ms to 20 ms, ..., 1000 ms to 2000 ms, and so on - so you can pick the counters for the interval you care about. They can also be read from T-SQL, as in the sketch below.
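A minimal sketch via sys.dm_os_performance_counters:
-- Each row is one response-time bucket; cntr_value counts batches
-- (or accumulates time) in that bucket since instance startup.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Batch Resp Statistics%'
ORDER BY counter_name, instance_name;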
Hope it helps.
