I have a large execution plan. Is there a way to quickly view the total sum of IO cost, CPU cost, etc. for the full plan without manually calculating each node?
SentryOne Plan Explorer is a great tool for analysing SQL Server execution plans. I don't work for them; I just use this tool very often.
It can show the costs of individual operators or cumulative costs.
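If you'd rather stay in T-SQL, here is a rough sketch that sums the per-operator estimates straight out of a cached plan's XML (EstimateCPU and EstimateIO are standard RelOp attributes in the showplan schema; the LIKE filter is just a hypothetical way to single out your query):
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    st.text,
    SUM(op.value('@EstimateCPU', 'float')) AS total_estimated_cpu,
    SUM(op.value('@EstimateIO',  'float')) AS total_estimated_io
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY qp.query_plan.nodes('//RelOp') AS n(op)
WHERE st.text LIKE '%your query text here%'  -- hypothetical filter
GROUP BY st.text;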
Is there a way to predict the cost of a query I'm going to execute on Snowflake before actually executing it? We would like to avoid high costs, so having the cost calculated after executing the query doesn't help much.
Other technologies such as BigQuery provide calculators to estimate the query's cost before executing, but I didn't find such an option for Snowflake.
Unfortunately, there isn't a tool within Snowflake that will do this for you automatically; you'd have to figure it out yourself.
For example, a naive way of estimating it: run the query on a 10% sample of the table(s), take the credit usage of that query, and multiply it by 10.
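As a hedged sketch of that idea (my_large_table and the COUNT(*) are placeholders for your real table and query; SAMPLE and the INFORMATION_SCHEMA.QUERY_HISTORY table function are standard Snowflake):
-- Run the query against a ~10% row sample
SELECT COUNT(*) FROM my_large_table SAMPLE (10);  -- stand-in for your real query

-- Then look up how long it took; warehouse credits accrue per second of runtime,
-- so scale the elapsed time (and hence the credits) by ~10x for the full table
SELECT query_id, query_text, total_elapsed_time / 1000 AS elapsed_seconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
ORDER BY start_time DESC
LIMIT 5;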
I have a fairly complex query.
When I had 10 DTUs for my database, it took about 17 seconds to execute the query.
I increased the level to 50 DTUs, and now the execution takes 3-4 seconds.
This ratio matches the documentation: more DTUs = faster execution.
But!
1. On my PC I can execute the query in 1 sec.
2. In the portal statistics I see that I use only 12 DTUs (max DTU percentage = 25%).
In sys.dm_db_resource_stats I see that MAX(avg_cpu_percent) is about 25% and the other metrics are lower.
So the question is: why does my query take 3-4 seconds to execute?
It can be executed in 1 second, and the server does not use all my DTUs.
How can I make the server use all available resources to execute queries faster?
DTU is a combined measurement of CPU, memory, data I/O and transaction log I/O.
This means that reaching a DTU bottleneck can mean any of those.
This question may help you to measure the different aspects: Azure SQL Database "DTU percentage" metric
And here's more info on DTU: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu
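For example, to see which component is actually closest to its cap, you can query the standard sys.dm_db_resource_stats DMV (it keeps roughly an hour of 15-second samples):
SELECT TOP (10)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;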
On my PC I can execute the query in 1 sec
We should not compare on-premises computing power with DTUs.
A DTU is a combination of the CPU, IO, and memory you get based on your performance tier, so the comparison is not valid.
How can I make the server use all available resources to execute queries faster?
This is simply not possible. When SQL Server runs a query, memory is the only constraint that can prevent the query from even starting; the other resources, like CPU and IO, are consumed as needed based on what the query does.
In summary, you have to ensure that queries are not constrained by a resource crunch, so they can use all the resources they need and release them when no longer needed.
You will also have to look at wait types and further fine-tune the query.
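For instance, a minimal look at the top cumulative waits (sys.dm_db_wait_stats is the database-scoped wait DMV in Azure SQL Database):
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;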
As Bernard Vander Beken mentioned:
DTU is a combined measurement of CPU, memory, data I/O and transaction log I/O.
I'll also add that Microsoft does not share the formula used to calculate DTUs. You mentioned that you are not seeing DTUs peg at 100% during query execution. But since we do not know the formula, you may very well be pegging components of DTU, but not pegging DTU itself.
Azure SQL is a shared environment, and each tenant will be throttled to ensure that the minimum SLA is met for all tenants.
What a DTU actually represents is quite fuzzy.
We have done an experiment where we ran a set of benchmarks on machines with the same number of DTUs in different data centers.
http://dbwatch.com/azure-database-performance-measured
It turns out that the actual performance varies by a factor of 5.
We have also seen instances where the performance of a repeated query on the same database varies drastically.
We provide our database performance benchmarks for free if you would like to compare the instance you run on your PC with the instance in the Azure cloud.
I am new to databases. I am trying to run a simple query on SQL Server 2014 and Oracle 12c.
This is the execution plan I get using SQL Server. It contains information about I/O cost and CPU cost in seconds.
However, I can't find the same information in Oracle. The CPU cost shown in the execution plan is not based on execution time.
I want to do some comparison between the two databases. How can I obtain the same information in Oracle as in SQL Server? Also, how can I find out the cache hit ratio?
Thank you.
The cost estimate is in fact based on time.
It is a non-dimensionalised measurement that expresses the estimated time for the query to complete in terms of the equivalent number of logical reads, so if a logical read is expected to take 0.001 seconds then a cost of 12 is 0.012 seconds.
Although it is commonly stated that costs cannot be compared between different queries, this was only definitively true in earlier versions. The difficulty in comparing query costs relates to how long single-block and multiblock reads, writes and CPU operations take. This depends on such a multitude of factors (other activity on the system, and activity immediately prior that affects the likelihood of blocks being cached by the instance or the I/O subsystem) that it is highly unlikely you could reliably derive a time from a cost.
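For reference, you can see both the cost and the time it translates to with DBMS_XPLAN (a minimal sketch; the employees table and filter are hypothetical):
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- The output includes a Cost (%CPU) column and a Time column, the latter being
-- the cost expressed back as an estimated elapsed time.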
Cache hit ratios have been discredited for quite some time as a measurement of system efficiency. It is possible to improve the cache hit ratio to an arbitrary number by simply running particular types of highly inefficient queries.
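For completeness, since the question asked: the classic (and, as noted above, dubious) buffer cache hit ratio comes from V$SYSSTAT:
SELECT 1 - phy.value / (db.value + con.value) AS buffer_cache_hit_ratio
FROM v$sysstat phy, v$sysstat db, v$sysstat con
WHERE phy.name = 'physical reads'
  AND db.name  = 'db block gets'
  AND con.name = 'consistent gets';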
Use the Oracle Database 12c: EM Express Performance Hub to get both estimates and actual values for queries and their operations. Regular explain plans are helpful, but they just show you what Oracle thinks will happen, not necessarily what will happen.
Specifically, use either the SQL Details (aggregate) or the SQL Monitor Details (last execution) information.
You're close, very close.
Run with AutoTrace.
I talk more about the feature here, or you can of course read up on the docs or the Help.
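If you are working in SQL*Plus rather than a GUI, the same idea looks like this (SET AUTOTRACE is standard SQL*Plus; the query itself is hypothetical):
SET AUTOTRACE ON STATISTICS
SELECT * FROM employees WHERE department_id = 10;
-- The statistics section reports consistent gets and physical reads,
-- i.e. the logical vs. physical I/O behind any hit-ratio calculation.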
On a powerful machine the SQL Server query is running too slowly.
In the execution plan I can see that most of the time is spent in a "Lazy Index Spool" operation. The query uses some aggregate functions to calculate values.
How can I speed up the query (machine resources are sufficient)?
Thanks in advance.
I want to tune a production SQL server. After making adjustments (such as changing the degree of parallelism) I want to know if it helped or hurt query execution times.
This seems like an obvious performance counter, but for the last half hour I've been searching Google and the counter list in perfmon, and I have not been able to find a performance counter for SQL Server that gives me the average execution time for all queries hitting a server (the SQL Server equivalent of ASP.NET's Request Execution Time).
Does one exist that I'm missing? Is there another effective way of monitoring the average query times for a server?
I don't believe there is a PerfMon counter, but there is a report within SQL Server Management Studio:
Right-click on the database and select Reports > Standard Reports > Object Execution Statistics. This will give you several very good statistics about what's running within the database, how long it's taking, how much memory/IO it takes, etc.
You can also run this on the server level across all databases.
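If you'd rather skip the report UI, a rough equivalent straight from the sys.dm_exec_query_stats DMV (times are stored in microseconds there; the TOP (20) is arbitrary):
SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / 1000.0 / qs.execution_count AS avg_elapsed_ms,
    qs.total_worker_time  / 1000.0 / qs.execution_count AS avg_cpu_ms,
    st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_elapsed_ms DESC;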
You can use Query Analyzer (one of the tools that ships with SQL Server) to see how queries are executed internally so you can optimize indexing, etc. That won't tell you about the average, or the round-trip back to the client; to get that you'd have to log it on the client and analyze the data yourself.
I managed to do it by saving the trace to SQL. When the trace is open:
File > Save As > Trace Table
Select the SQL Server destination, and once it's imported, run:
select avg(duration) from dbo.[YourTableImportName]
You can very easily compute other stats (max, min, counts, etc.) this way; it's a much better way of interrogating the trace results.
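For example, per-query aggregates (a sketch; from SQL Server 2005 on, the Duration column in a saved trace table is in microseconds):
SELECT CAST(TextData AS NVARCHAR(MAX)) AS query_text,
       COUNT(*)               AS executions,
       AVG(Duration) / 1000.0 AS avg_ms,
       MAX(Duration) / 1000.0 AS max_ms
FROM dbo.[YourTableImportName]
GROUP BY CAST(TextData AS NVARCHAR(MAX))
ORDER BY avg_ms DESC;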
Another solution is to run the query multiple times and compute the average query time (this example uses PostgreSQL's PL/pgSQL):
DO $proc$
DECLARE
    StartTime timestamptz;
    EndTime   timestamptz;
    Delta     double precision;
BEGIN
    StartTime := clock_timestamp();
    FOR i IN 1..100 LOOP
        PERFORM * FROM table_name;  -- the query under test
    END LOOP;
    EndTime := clock_timestamp();
    -- total elapsed milliseconds divided by the number of runs
    Delta := 1000 * (extract(epoch FROM EndTime) - extract(epoch FROM StartTime)) / 100;
    RAISE NOTICE 'Average duration in ms = %', Delta;
END;
$proc$;
Here it runs the query 100 times:
PERFORM * FROM table_name;
Inside the DO block, just replace SELECT with PERFORM.
Average over what time and for which queries? You need to further define what you mean by "average" or it has no meaning, which is probably why it's not a simple performance counter.
You could capture this information by running a trace, capturing that to a table, and then you could slice and dice the execution times in one of many ways.
It doesn't give exactly what you need, but I'd highly recommend trying the SQL Server 2005 Performance Dashboard Reports, which can be downloaded here. It includes a report of the top 20 queries and their average execution time and a lot of other useful ones as well (top queries by IO, wait stats etc). If you do install it be sure to take note of where it installs and follow the instructions in the Additional Info section.
The profiler will give you statistics on query execution times and activities on the server. Overall query times may or may not mean very much without tying them to specific jobs and query plans.
Other indicators of performance bottlenecks are resource contention counters (general statistics, latches, locks), which you can see through performance counters. Also, a large number of table scans or other operations that do not make use of indexes can indicate that indexing may be necessary.
On a loaded server increasing parallelism is unlikely to materially affect performance as there are already many queries active at any given time. Where parallelism gets you a win is on large infrequently run batch jobs such as ETL processes. If you need to reduce the run-time of such a process then parallelism might be a good place to look. On a busy server doing a transactional workload with many users the system resources will be busy from the workload so parallelism is unlikely to be a big win.
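For reference, the server-wide knob the question mentions (the value 4 is just a placeholder; pick one based on your core count and workload):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;  -- placeholder value
RECONFIGURE;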
You can use Activity Monitor. It's built into SSMS. It will give you real-time tracking of all current expensive queries on the server.
To open Activity Monitor:
In SQL Server Management Studio (SSMS), right-click on the server and select Activity Monitor.
Open Recent Expensive Queries to see CPU Usage, Average Query Time, etc.
Hope that helps.
There are counters in the 'SQL Server: Batch Resp Statistics' group that track SQL batch response times. The counters are divided by response-time interval (for example, from 0 ms to 1 ms, ..., from 10 ms to 20 ms, ..., from 1000 ms to 2000 ms, and so on), so you can pick the counters for the time interval you care about.
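The same counters are also exposed inside SQL Server through the sys.dm_os_performance_counters DMV, for example:
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Batch Resp Statistics%';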
Hope it helps.