On a powerful machine a SQL Server query is running too slowly.
In the execution plan I can see that most of the time is spent in a "Lazy Index Spool" operator. The query uses some aggregate functions to calculate values.
How can I speed up the query (machine resources are sufficient)?
Thanks in advance.
Hello, I'm looking at Performance Insights in AWS RDS (Postgres 10).
I slice by "Waits".
When I look at Top Databases, Top Applications, Top Session Types and Top Users, they are all actually higher than the SQL queries themselves.
From these metrics, how do you tell what is bottlenecking the CPU?
Top waits, Top SQL, etc. are all different dimensions that you can use to understand what's contributing to database load. Dimensions are not comparable with each other.
It sounds like you want to diagnose what's contributing to the PostgreSQL "CPU" wait event. You can find more information on this topic in the official RDS docs on tuning with wait events.
If the issue turns out to be suboptimal queries, then you can find the worst performers in the Top SQL tab (dimension) of Performance Insights.
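If you prefer to check from inside the database rather than the Performance Insights UI, the pg_stat_statements extension exposes similar per-query totals. A minimal sketch, assuming the extension is available and loaded via shared_preload_libraries (in Postgres 10 the timing columns are total_time and mean_time, in milliseconds):

-- one-time setup; requires pg_stat_statements in shared_preload_libraries
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- top 10 statements by cumulative time, a rough proxy for their load contribution
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;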
Here are some cases that can cause high CPU usage in Postgres.
Incorrect indexes are used in the query
Debug method: Check the query plan. Through EXPLAIN we can inspect the plan; if an index is used, an Index Scan node appears in the plan output.
Solution: add the corresponding index to reduce CPU usage
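As a sketch of what that looks like (the orders table and customer_id column here are hypothetical; substitute your own):

-- before: a filter on an unindexed column typically shows up as a Seq Scan
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- add the missing index
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- after: the plan should now show an Index Scan (or Bitmap Index Scan) instead
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;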
Query with sort operation
Debug method: Check EXPLAIN (ANALYZE, BUFFERS). If memory is insufficient for the sort, PostgreSQL falls back to temporary files on disk, and CPU usage climbs.
Note: do NOT run EXPLAIN (ANALYZE) casually on a busy production system: it actually executes the query behind the scenes in order to provide the more accurate runtime information, and its impact can be significant.
Solution: Tune work_mem and the sort operations.
Sample: Tune sorting operations in PostgreSQL with work_mem
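A minimal sketch of that tuning loop, reusing the hypothetical orders table (the 64MB value is only an illustration; remember work_mem is allocated per sort operation, per connection, and the EXPLAIN (ANALYZE) caution above applies here too):

-- check whether the sort spills to disk: look for "Sort Method: external merge"
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders ORDER BY created_at;

-- raise work_mem for this session only, then re-check;
-- an in-memory sort reports "Sort Method: quicksort" instead
SET work_mem = '64MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders ORDER BY created_at;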
Long-running transactions
Debug method: SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction' AND xact_start IS NOT NULL;
Solution: kill the long-running transaction with SELECT pg_terminate_backend(pid);
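Putting the two together, a sketch that lists sessions idle in a transaction for more than five minutes, then terminates one by pid (the five-minute threshold and the pid are placeholders; double-check before killing anything in production):

-- find transactions that have been idle for over 5 minutes
SELECT pid, usename, xact_start, state
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < now() - interval '5 minutes';

-- terminate a specific offender by pid
SELECT pg_terminate_backend(12345);  -- 12345 is a placeholder pid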
A synchronization program is syncing data between our SQL Server and an online database. Every 5 minutes the program runs queries on all tables, all in the format:
select max(ID) from table
After that, the program retrieves information from the online database, using the max(ID) to retrieve only newer records.
The query runs fast on small tables, but some tables have millions of records.
The performance could be boosted by adding a WHERE clause:
select max(ID) from table where date >= dateadd(dd,-30,getdate())
Unfortunately it's an old program, which cannot be changed anymore.
(no supplier and no source code)
I read something about plan guides, which are supposed to give performance boosts to queries.
Can I use a plan guide to alter these queries so they run much faster?
I would try to steer clear of plan guides unless you have tried everything else to help SQL Server choose a better execution plan. The reason: with a plan guide you are forcing SQL Server into what may not be the best plan, and if your statistics are right, SQL Server does a heck of a job providing a very good plan on its own.
Sorry if this is going to be long-winded, but SQL Server performance is based on statistics, and if your statistics are off, your query will not perform optimally. This is where I would start.
I would first run your query and generate both an estimated and an actual execution plan. If you have never worked with execution plans, here are a few sites to start you off on understanding operators and how to interpret the more common ones: Rusanu Consulting (Joins), Microsoft (Joins), Simple Talk (Common Operators), Microsoft (All Operators). Execution plans are generated from the statistics SQL Server stores on indexes and non-indexed columns. I am assuming your query is in a stored procedure, where the initial execution plan is cached. Unfortunately, changes to the query in the stored proc can cause performance issues because of an outdated plan, and the proc will need to be recompiled.
Based on the results of your execution plan, you may find that a simple clustered index will help performance, or that fine-tuning your WHERE clause will result in better cardinality estimates. Here are a few sites that do a really good job explaining statistics: Patrick Keisler, Itzik Ben-Gan, Microsoft.
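As a concrete sketch of both suggestions (dbo.MyTable and the index name are hypothetical stand-ins for your own objects):

-- refresh the table's statistics with a full scan rather than a sample
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- if the plan shows MAX(ID) being answered by a scan, an index on ID
-- lets SQL Server satisfy it with a single backward seek
CREATE NONCLUSTERED INDEX IX_MyTable_ID ON dbo.MyTable (ID);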
You may be wondering "what does this have to do with my question?", expecting a simple yes or no answer, but effectively correcting query performance starts with an understanding of statistics and execution plans.
Hope this helps!
I have a complex SQL query produced by LINQ To Entities.
It takes 8s when the execution plan is not cached in SQL Server.
It takes 2s when the execution plan is cached in SQL Server.
Is there a way in EF or in SQL Server to prewarm the execution plan cache?
Thanks
No.
You have a performance problem; address it as a performance problem: take measurements and investigate the bottlenecks. Follow the excellent Waits and Queues methodology. Read Understanding how SQL Server executes a query to understand what happens when your query executes.
You need to isolate some problems:
is it a cold plan cache, as you state, or a cold data cache (more likely)? (the sketch below shows one way to tell the two apart)
if it is a cold plan cache, does compilation really last 6 seconds? I don't buy this.
if it is a cold data cache, why is your query issuing 6 seconds' worth of IO?
even with a warm cache, your query burns 2 seconds of execution. Why? Does it scan tables end-to-end? Are you missing an index or more? (hint: yes, you are)
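One rough way to separate those cases on a test box (never in production: the DBCC commands below flush server-wide caches; dbo.MyTable is a stand-in for your LINQ-generated query):

SET STATISTICS IO ON;    -- reports logical vs physical reads per statement
SET STATISTICS TIME ON;  -- reports parse/compile time and execution time

CHECKPOINT;              -- flush dirty pages so the next command is effective
DBCC DROPCLEANBUFFERS;   -- cold data cache: forces physical reads
DBCC FREEPROCCACHE;      -- cold plan cache: forces recompilation

-- run the query; compare "parse and compile time" (plan compilation cost)
-- against "physical reads" (cold data cache cost) in the messages output
SELECT COUNT(*) FROM dbo.MyTable;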
Reading the Waits and Queues paper will teach you how to answer these questions.
Address the cause, not the symptom.
I'm running into a performance issue with the current schema. So I built an equivalent schema to solve the issue.
I ran some tests on both schemas and the results are hard to understand. For the record, the data is the same.
I get the following from the Profiler when executing equivalent requests on the two schemas.
Old schema:
1,300,000 reads
5,000 CPU (ms)
4 seconds execution time
New schema:
30,000 reads
3,000 CPU (ms)
6 seconds execution time
The difference seems to be in the query plan used. The old schema has parallelism in the query plan. The new schema isn't using parallelism.
Has anyone faced similar situations (less IO/CPU but more execution time)? How did you solve it?
Is there a way to force parallelism? I've played with query hints (http://msdn.microsoft.com/en-us/library/ms18171). I'm able to stop parallelism on the old schema, but I can't seem to get the query on the new schema to use parallelism.
Thanks in advance.
Louis,
Currently there is no way to force parallelism in SQL Server straight out of the box, but Adam Machanic did some work in that direction:
http://whoisactive.com
Coming to your first question: yes, we have seen cases like that too. Note that parallelism is CPU-bound, which is why you are seeing more CPU time but less overall execution time: you have multiple threads doing the work for you.
http://www.simple-talk.com/sql/learn-sql-server/understanding-and-using-parallelism-in-sql-server/
Make sure you have proper indexes in place and that statistics are updated with a full scan. In the long run it is best to let the Query Optimizer make the decisions by itself, but if you want to override the QO's plans, you will have to supply a lot more detail: schema, data, and a repro.
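If you do want to experiment, one commonly cited trick is trace flag 8649, which removes the cost penalty the optimizer applies to parallel plans. It is undocumented and unsupported, so treat it as a diagnostic tool rather than a production fix (OPTION (MAXDOP n), by contrast, only caps the degree of parallelism; it cannot force a parallel plan). The table name below is a placeholder:

-- undocumented: strongly prefer a parallel plan for this statement only
SELECT COUNT(*)
FROM dbo.MyNewSchemaTable
OPTION (QUERYTRACEON 8649);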
HTH
What is the use of execution plans in SQL Server? When can these plans help me?
When your queries all run fast, all is good in the world and execution plans don't really matter that much. However, when something is running slow, they are very important. They are primarily used to help tune (speed up) slow SQL. Without execution plans, you'd just be guessing at what to change to make your SQL go faster.
Here is the simplest way they can help you. Take a slow query and do the following in a SQL Server Management Studio query window:
1) run the command:
SET SHOWPLAN_ALL ON
2) run your slow query
3) your query will not run, but the execution plan will be returned.
4) look through the PhysicalOp column of the output for the word SCAN; this is usually the part of the query that is causing the slowdown. Analyze your joins and index usage with respect to that row of the output; if you can eliminate the scan, you will usually improve the query speed.
There are many useful columns in the output (TotalSubtreeCost, etc.); you will become familiar with them as you learn how to read execution plans and tune your slow queries.
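Put together, a typical session looks like this (the query is just a stand-in; note that SET SHOWPLAN_ALL must be the only statement in its batch, hence the GO separators):

SET SHOWPLAN_ALL ON;
GO
-- compiled but NOT executed; one row per plan operator is returned instead
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2020-01-01';
GO
SET SHOWPLAN_ALL OFF;
GO
-- now scan the PhysicalOp column for "Table Scan" or "Clustered Index Scan"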
When you need to perform performance profiling on a specific query.
Have a look at SQL Server Query Execution Plan Analysis
When it comes time to analyze the performance of a specific query, one of the best methods is to view the query execution plan. A query execution plan outlines how the SQL Server query optimizer actually ran (or will run) a specific query. This information is very valuable when it comes time to find out why a specific query is running slow.
It's helpful in identifying where bottlenecks are occurring in long-running queries. You can make some quite impressive performance improvements simply by knowing how the server executes your complex query.
If I remember correctly, it also identifies good candidates for indexing, which is another way to increase performance.
These plans describe how SQL Server goes about executing your query. The plan is the result of a cost-based algorithm in the SQL Server query optimiser, which comes up with what it expects to be the best possible way to reach the end result.
It's useful because it shows you where time is being spent in the query, whether indexes are being used or not, what type of operation is being performed on those indexes (scan, seek), and so on.
So if you have a poorly performing query, the execution plan will highlight the costliest parts and let you see what needs optimising (e.g. a missing index, or an inefficiently written query resulting in an index scan instead of a seek).