Can I use Plan Guides to optimize a slow-performing query? - sql-server

A synchronization program syncs data between our SQL Server and an online database. Every 5 minutes the program runs queries on all tables, all in the format:
select max(ID) from table
After that, the program retrieves information from the online database, using the max(ID) to retrieve only newer records.
The query runs fast on small tables, but some tables have millions of records.
The performance can be boosted by adding a WHERE clause:
select max(ID) from table where date >= dateadd(dd,-30,getdate())
Unfortunately it's an old program that cannot be changed anymore (no supplier and no source code).
I read something about plan guides, which are supposed to give performance boosts to queries.
Can I use a plan guide to alter these queries so they run much faster?

I would steer clear of plan guides unless you have tried everything else to help SQL Server choose a better execution plan. The reason: you are forcing SQL Server into a plan that may not be the best one, and if your statistics are right, SQL Server does a heck of a job coming up with a very good plan on its own.
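For context on what a plan guide actually does: it attaches query hints to a statement whose text matches exactly; it cannot rewrite the statement, so it could not inject the WHERE clause you want. A minimal sketch, with a hypothetical table name and a hint chosen purely for illustration:

-- Attach an OPTION hint to ad hoc batches whose text exactly matches @stmt.
-- The table name and the hint here are placeholders, not a recommendation.
EXEC sp_create_plan_guide
    @name            = N'Guide_MaxID_MyTable',
    @stmt            = N'select max(ID) from MyTable',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (MAXDOP 1)';

The statement text must match character for character for the guide to fire, which is one more reason plan guides are brittle. In your case an index on ID would likely help more, since MAX(ID) then becomes a single-row seek at the end of the index.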
Sorry if this is going to be long-winded, but SQL Server performance is based on statistics, and if your statistics are off, your query will not perform optimally. This is where I would start.
I would first run your query and generate an estimated and an actual execution plan. If you have never worked with execution plans, here are a few sites to start you off on understanding operators and how to interpret some of the more common ones: Rasanu Consulting (Joins), Microsoft (Joins), Simple Talk (Common Operators), Microsoft (All Operators). Execution plans are generated from the statistics SQL Server stores on indexes and non-indexed columns. I am assuming your query is in a stored procedure, where the initial execution plan is cached. Unfortunately, changes to the query in the stored proc can cause performance issues because of an outdated plan, and the proc will need to be recompiled.
Based on the results of your execution plan, you may find that a simple clustered index will help performance, or that fine-tuning your WHERE clause will result in better cardinality estimates. Here are a few sites that do a really good job explaining statistics: Patrick Keisler, Itzik Ben-Gan, Microsoft.
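If you want to see what the optimizer is basing those estimates on, DBCC SHOW_STATISTICS displays the statistics object itself; the table and statistics names below are hypothetical:

-- Show the header, density vector, and histogram for one statistics object.
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_ID');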
You may be wondering what this has to do with your question, which expects a simple yes or no answer; but effectively correcting query performance starts with an understanding of statistics and execution plans.
Hope this helps!

Related

SQL Server: force multithreading when executing an SSIS package

Here is the question:
I am using VisualCron to run an SSIS package on SQL Server 2008 R2. The SSIS package runs a query that gets millions of rows and outputs them to a flat file. Sometimes when I run this SSIS package, SQL Server doesn't use multi-threading (I can tell from Activity Monitor), which leads to a very long running time of about 20 hours. When it does use multi-threading, it finishes in about 8 minutes.
Is there a way to force SQL Server to use multi-threading whenever it runs this SSIS package?
There are a few options for optimizing your query to handle multiple simultaneous operations... or at least improving performance.
Apply the OPTION (MAXDOP) hint in your query to control the maximum number of processors (degree of parallelism) it may use; MAXDOP 0 lets SQL Server use all processors available to the operating system, while MAXDOP 1 forces a serial plan. Listed below is an example, and here is a link with more detail.
SELECT FirstName, LastName
FROM dbo.Customer
OPTION (MAXDOP 0) -- 0 = let SQL Server use all available processors
Apply NOLOCK in your query if there is no concern about data in the tables being updated during the SSIS package operation; that is, this works if there is no concern about "dirty reads." See the following link and example.
SELECT FirstName, LastName
FROM dbo.Customer WITH(NOLOCK)
Review this link for best practices for improving query performance. There may be additional steps you have not taken or tools you have not applied that can greatly assist in improving performance.
The problem is getting clearer.
It's related to how SQL Server decides to use a serial or a parallel execution plan; that's the optimizer's job. It turns out I have two tasks scheduled in VisualCron that both run the same big query, just with different input parameters.
The first one gets parameters that don't involve much data.
The second one gets parameters that involve a big amount of data.
I assume the SQL Server optimizer first sees the submitted query, determines that it will not touch much data, and so decides to use a serial plan.
I guess the plan for this query is cached, so when the second query is submitted, the optimizer checks the cache for a previously evaluated plan for this query and, if one exists, uses it.
That's why it still chooses a serial plan for the second query.
After I changed the order of the two tasks (executing the one that deals with more data first, then the one that deals with less data), it works; it now uses a parallel plan for both. (You may need to restart the instance to clear the cached execution plans.)
How the optimizer works is still my assumption.
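For what it's worth, the behavior described above matches classic parameter sniffing: the cached plan is shaped by whichever parameter set compiled it first. If the query could be modified, a common mitigation is to compile a fresh plan per execution, and the cache can also be cleared without restarting the instance. Both statements below are sketches; the table, column, and parameter names are hypothetical:

-- Recompile on every execution instead of reusing the cached plan
-- (@StartDate stands in for the package's input parameter):
SELECT SomeColumn
FROM dbo.BigTable
WHERE LoadDate >= @StartDate
OPTION (RECOMPILE);

-- Heavy-handed alternative to a restart: clear all cached plans instance-wide.
DBCC FREEPROCCACHE;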
Here is another post explaining how the optimizer plays an important role here:
http://web.archive.org/web/20180404164406/http://sqlblog.com/blogs/paul_white/archive/2011/12/23/forcing-a-parallel-query-execution-plan.aspx

SQL Server 2008 R2 - optimization issue

I'm running into a performance issue with the current schema, so I built an equivalent schema to solve the issue.
I ran some tests on both schemas and the results are hard to understand. For the record, the data is the same.
I get the following from the Profiler when executing equivalent requests on the two schemas.
Old schema:
1,300,000 reads
5,000 CPU
4 seconds execution time
New schema:
30,000 reads
3,000 CPU
6 seconds execution time
The difference seems to be in the query plan used. The old schema has parallelism in the query plan. The new schema isn't using parallelism.
Has anyone faced similar situations (less I/O and CPU but longer execution time)? How did you solve it?
Is there a way to force parallelism? I've played with query hints (http://msdn.microsoft.com/en-us/library/ms18171). I'm able to stop parallelism on the old schema but can't seem to get the query on the new schema to use parallelism.
Thanks in advance.
Louis,
Currently there is no way to force parallelism in SQL Server straight out of the box, but Adam Machanic has done some work in that direction.
http://whoisactive.com
Coming to your first question: yes, we have seen cases like that too. Note that parallelism is CPU-bound, and that's why you are seeing more CPU time but less overall execution time, as you have multiple threads doing the work for you.
http://www.simple-talk.com/sql/learn-sql-server/understanding-and-using-parallelism-in-sql-server/
Make sure you have proper indexes in place and that statistics are updated with a full scan. In the long run it is best to let the Query Optimizer make the decisions by itself, but if you want to override the QO's plans you will have to provide a lot more detail: schema, data, and a repro.
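As a sketch of the full-scan statistics update mentioned above (the table name is a placeholder):

-- Refresh all statistics on the table by reading every row instead of
-- sampling; more accurate estimates, but slower on large tables.
UPDATE STATISTICS dbo.MyLargeTable WITH FULLSCAN;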
HTH

What is the usage of Execution Plan in SQL Server?

What is the usage of Execution Plan in SQL Server? When can these plans help me?
When your queries all run fast, all is good in the world and execution plans don't really matter that much. However, when something is running slow, they are very important. They are primarily used to help tune (speed up) slow SQL. Without execution plans, you'd just be guessing at what to change to make your SQL go faster.
Here is the simplest way they can help you. Take a slow query and do the following in a SQL Server Management Studio query window:
1) run the command:
SET SHOWPLAN_ALL ON
2) run your slow query
3) your query will not run, but the execution plan will be returned.
4) look through the PhysicalOp column of the output for the word SCAN anywhere in the text of that column; this is usually the part of the query that is causing the slowdown. Analyze your joins and index usage with regard to that row of the output, and if you can eliminate the scan, you will usually improve the query speed.
There are many useful columns (TotalSubtreeCost, etc.) in the output; you will become familiar with them as you learn how to read execution plans and tune your slow queries.
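Putting those steps together, a session might look like this (the slow query shown is just a placeholder):

SET SHOWPLAN_ALL ON;
GO
-- The query below is parsed and planned, but not executed.
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE Status = 'C';
GO
SET SHOWPLAN_ALL OFF;
GO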
When you need to perform performance profiling on a specific query.
Have a look at SQL Server Query Execution Plan Analysis
When it comes time to analyze the performance of a specific query, one of the best methods is to view the query execution plan. A query execution plan outlines how the SQL Server query optimizer actually ran (or will run) a specific query. This information is very valuable when it comes time to find out why a specific query is running slow.
It's helpful in identifying where bottlenecks are occurring in long running queries. You can make some quite impressive performance improvements simply by knowing how the server executes your complex query.
If I remember correctly it also identifies good candidates for indexing which is another way to increase performance.
These plans describe how SQL Server goes about executing your query. It's the result of a cost-based algorithm by the SQL Server query optimiser, which comes up with a plan for how to get to the end result in the expected best possible way.
It's useful because it will show you where time is being spent in the query, whether indexes are being used or not, what type of process is being done on those indexes (scan, seek) etc.
So if you have a poorly performing query, the execution plan will highlight what the costliest parts are and allow you to see what needs optimising (e.g. may be a missing index, may be an inefficiently written query resulting in an index scan instead of a seek).

What steps should be necessary to optimize a poorly performing query?

I know this is a broad question, but I've inherited several poor performers and badly need to optimize them. I was wondering what the most common steps involved in optimizing are. So, what steps do some of you take when faced with the same situation?
Related Question:
What generic techniques can be applied to optimize SQL queries?
Look at the execution plan in query analyzer
See what step costs the most
Optimize the step!
Return to step 1 [thx to Vinko]
In SQL Server you can look at the Query Plan in Query Analyzer or Management Studio. This will tell you the rough percentage of time spent in each batch of statements. You'll want to look for the following:
Table scans; this means you are completely missing indexes
Index scans; your query may not be using the correct indexes
The thickness of the arrows between each step in a query tells you how many rows are being produced by that step; very thick arrows mean you are processing a lot of rows, and can indicate that some joins need to be optimized.
Some other general tips:
A large number of conditional statements, such as multiple if-else statements, can cause SQL Server to constantly rebuild the query plan. You can check for this using Profiler.
Make sure that different queries aren't blocking each other, such as an update statement blocking a select statement. This can be avoided by specifying the (nolock) hint in SQL Server select statements.
As others have mentioned, try out the Performance Tuning wizard in Management Studio.
Finally, I would highly recommend creating a set of load tests (using Visual Studio 2008 Test Edition), which you can use to simulate your application's behavior when dealing with a large amount of requests. Some SQL performance bottlenecks only manifest themselves under these circumstances, and being able to reproduce them makes it a lot easier to fix.
Indexes may be a good place to start...
The low hanging fruit can be knocked down with the SQL Server Index Tuning Wizard.
I'm not sure about other databases, but for SQL Server I recommend the Execution Plan. It very clearly (albeit with lots of vertical and horizontal scrolling, unless you've got a 400" monitor!) shows what steps of your query are sucking up the time.
If you've got one step that takes a crazy 80%, then maybe an index could be added, then after tweaking the index, re-run the Execution Plan to find your next biggest step.
After a couple tweaks you may find that there really are no steps that stand out from the others i.e. they're all 1-2% each. If that is the case, then you might then need to see if there is a way you can cut down the amount of data included in your query, do those four million closed sales orders need to be included in the "Active Sales Orders" query? No, so exclude all those with STATUS='C' ... or something like that.
Another improvement you'll see from the Execution Plan is bookmark lookups: basically, SQL Server finds a match in the index but then has to go back to the table to fetch the record you want. This operation might at times take longer than just scanning the table in the first place would have; if that is the case, do you really need that index?
With indexes, and especially with SQL Server 2005, you should look at the INCLUDE clause. This basically allows you to carry a column in an index without it really being part of the index key, so if all the data you need for your query is in your index or is an included column, then SQL Server doesn't have to even look at the table: a big performance pickup.
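A sketch of such a covering index, with hypothetical table and column names:

-- OrderDate is the index key; CustomerID and TotalDue are carried at the
-- leaf level, so a query selecting only these columns and filtering on
-- OrderDate never has to touch the base table.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, TotalDue);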
There are a couple of things you can look at to optimize your query performance.
Ensure that you just have the minimum of data. Make sure you select only the columns you need. Reduce field sizes to a minimum.
Consider de-normalising your database to reduce joins
Avoid loops (i.e. fetch cursors), stick to set operations.
Implement the query as a stored procedure as this is pre-compiled and will execute faster.
Make sure that you have the correct indexes set up. If your database is used mostly for searching then consider more indexes.
Use the execution plan to see how the processing is done. What you want to avoid is a table scan as this is costly.
Make sure that Auto Statistics is set to on; SQL Server needs this to help decide the optimal execution plan. See Mike Gunderloy's great post for more info: Basics of Statistics in SQL Server 2005.
Make sure your indexes are not fragmented: Reducing SQL Server Index Fragmentation.
Make sure your tables are not fragmented: How to Detect Table Fragmentation in SQL Server 2000 and 2005.
The execution plan is a great start and will help you figure out what part of your query you need to tackle.
Once you figure out the where, it is time to tackle the how and why. Take a look at the types of queries you are trying to perform. Avoid loops and cursors at all costs; they are slow. Stick to set-based queries whenever possible.
There are ways to give SQL Server hints about the type of joins to use. Be careful here, though: while one hint may speed up your query once, it may slow it down tenfold the next time through, depending on the data and parameters.
Finally, make sure your database is well indexed. A good place to start: any field contained in a WHERE clause should probably have an index on it.
Look at the indexes on the tables involved in the query. An index may be needed on particular fields that participate in the WHERE clause. Also look at the fields used in the joins in the query (if joins exist). If indexes already exist, look at the type of index.
Failing that (because there are negatives to using locking hints), look at locking hints and at explicitly naming the index to use in the join. Using NOLOCK is the more obvious choice if you're getting a lot of deadlocked transactions.
Do what roman and Andy S mentioned first though.

How often do you update statistics in SQL Server 2000?

I'm wondering if updating statistics has helped you before and how did you know to update them?
exec sp_updatestats
Yes, updating statistics can be very helpful if you find that your queries are not performing as well as they should. This is evidenced by inspecting the query plan and noticing when, for example, table scans or index scans are being performed instead of index seeks. All of this assumes that you have set up your indexes correctly.
There is also the UPDATE STATISTICS command, but I've personally never used that.
It's common to add your statistics update to a maintenance plan (as in an Enterprise Manager-defined Maintenance plan). That way it happens on a schedule - daily, weekly, whatever.
SQL Server 2000 uses statistics to make good decisions about query execution so they definitely help.
It's a good idea to rebuild your indexes at the same time (DBCC DBREINDEX and DBCC INDEXDEFRAG).
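For reference, here is roughly what those commands look like in SQL Server 2000 (the table and index names are placeholders):

-- Rebuild all indexes on a table (takes locks; schedule off-peak):
DBCC DBREINDEX ('dbo.Orders');

-- Or defragment one index in the current database (0 = current db):
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_OrderDate');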
If you rebuild indexes, then the statistics for those indexes are automatically rebuilt.
If your timeframes allow, then running UPDATE STATISTICS as part of a maintenance plan is a good idea, as frequently as nightly (if your indexes are being rebuilt less frequently than that).
SQL Server: to determine whether out-of-date statistics are the cause of a query performing poorly, turn on 'Query -> Display Estimated Execution Plan' (Ctrl+L) in Management Studio and run the query. Open another window, paste in the same query, turn on 'Query -> Include Actual Execution Plan' (Ctrl+M), and re-run the query. If the execution plans are different, then the statistics are most likely out of date.
Updating statistics becomes necessary after the following events:
- Records are inserted into your table
- Records are deleted from your table
- Records are updated in your table
If you have a large database with millions of records that gets lots of writes per day, you should probably find an off-peak time to schedule statistics and index updates.
Also, you need to consider your type of traffic. If you have a lot (millions) of records in tables with many foreign key dependencies, and a larger proportion of writes to reads, you might want to consider turning off automatic statistics recomputation (note: this feature will be removed in a future version of SQL Server, but for SQL Server 2000 you should be OK). This tells the engine not to recompute statistics on every INSERT, DELETE, or UPDATE, and makes those actions much more performant.
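One way to turn that off in SQL Server 2000 is sp_autostats, either per table or database-wide; the table and database names below are hypothetical:

-- Disable automatic statistics updates for one table:
EXEC sp_autostats 'dbo.Orders', 'OFF';

-- Or for the entire database:
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF;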
Indexes are no laughing matter. They are the heart and soul of a performant database.
