I've been hearing a lot lately that I ought to take a look at the execution plan of my SQL to make a judgment on how well it will perform. However, I'm not really sure where to begin with this feature or what exactly it means.
I'm looking for either a good explanation of what the execution plan does, what its limitations are, and how I can utilize it or direction to a resource that does.
It describes actual algorithms which the server uses to retrieve your data.
An SQL query like this:
SELECT *
FROM mytable1
JOIN mytable2
ON …
GROUP BY
…
ORDER BY
…
describes what should be done, but not how it should be done.
The execution plan shows how: which indexes are used, which join methods are chosen (nested loops or hash join or merge join), how the results are grouped (using sorting or hashing), how they are ordered etc.
Unfortunately, even modern SQL engines cannot automatically find optimal plans for moderately complex queries; it still takes an SQL developer to reformulate queries so that they perform well (while still doing what the original query does).
A classical example would be these two queries:
SELECT (
SELECT COUNT(*)
FROM mytable mi
WHERE mi.id <= mo.id
)
FROM mytable mo
ORDER BY
id
and
SELECT RANK() OVER (ORDER BY id)
FROM mytable
These two queries do the same thing and in theory should be executed using the same algorithm.
However, no actual engine will optimize the former query down to the latter's algorithm, i.e. keep a running counter in a variable and increment it.
It will do what it's told to do: count the rows over and over and over again.
To optimize the queries you need to actually see what's happening behind the scenes, and that's what the execution plans show you.
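For example, one quick way to see the difference behind the scenes (a sketch only, reusing the placeholder mytable from above) is to ask the server for I/O and timing statistics while running both versions:
-- Report logical reads and CPU time for each statement that follows.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Correlated-subquery version: expect repeated scans of mytable, one per outer row.
SELECT (SELECT COUNT(*) FROM mytable mi WHERE mi.id <= mo.id)
FROM mytable mo
ORDER BY id;

-- Window-function version: expect a single scan plus a sort.
SELECT RANK() OVER (ORDER BY id)
FROM mytable;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;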
You may want to read this article in my blog:
Double-thinking in SQL
Here and here are some articles worth checking out. Execution plans let you identify the time-consuming parts of a query and therefore show you where to improve it.
An execution plan shows exactly how SQL Server processes a query
It is produced as part of the query optimisation process that SQL Server performs; it is not something you create directly.
It shows which indexes SQL Server has decided are best to use, and is essentially a plan for how SQL Server will process the query.
The query optimiser takes a query, analyses it and potentially comes up with a number of different execution plans. It is a cost-based optimisation process, and it chooses the one it estimates to be the best.
Once an execution plan has been generated, it goes into the plan cache so that subsequent calls for the same query can reuse it, saving the work of coming up with a plan again.
Execution plans are automatically dropped from the cache depending on their value (low-value plans get removed before high-value plans, in order to provide the maximum performance gain).
Execution plans help you spot performance issues, such as where indexes are missing.
A way to ease into this is simply to press Ctrl+L (Query | Display Estimated Execution Plan) for some of your queries in SQL Server Management Studio.
This shows a graphical view of the execution plan, which at first is easier to "decode" than the text version.
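If you prefer a script to the GUI, the same information can be requested with SET options (a minimal sketch; the query between them is just an example, substitute your own):
-- Estimated plan: the batch is compiled but NOT executed.
SET SHOWPLAN_XML ON;
GO
SELECT * FROM sys.objects;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: the query runs, and the plan including runtime row counts is returned as well.
SET STATISTICS XML ON;
SELECT * FROM sys.objects;
SET STATISTICS XML OFF;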
Query plans in a tiny nutshell:
Essentially, the query plan shows the approach SQL Server intends to use to resolve a query.
There are indeed many options, even with simple queries.
For example, when dealing with a JOIN, SQL Server needs to decide whether to loop through the [filtered] rows of "table A" and look up the matching rows of "table B", or to loop through "table B" first instead (this is a simplified example, as there are many other tricks that can be used in dealing with JOINs). Typically, SQL Server estimates the number of [filtered] rows each table will produce and picks the one with the smallest count for the outer loop (as this reduces the number of lookups in the other table).
Another example is deciding which indexes to use (or not to use).
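To watch these choices being made, one experiment (a sketch only; TableA/TableB and the join column are placeholders, and join hints like these are for investigation rather than production code) is to force different join strategies and compare the resulting plans:
-- Baseline: let the optimizer pick the join strategy and the outer table.
SELECT a.Id, b.TableAId
FROM TableA a
JOIN TableB b ON b.TableAId = a.Id;

-- Force a nested-loops join, then a hash join, and compare plan shapes and costs.
SELECT a.Id, b.TableAId
FROM TableA a
INNER LOOP JOIN TableB b ON b.TableAId = a.Id;

SELECT a.Id, b.TableAId
FROM TableA a
INNER HASH JOIN TableB b ON b.TableAId = a.Id;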
There are many online resources as well as books which describe the query plans in more detail, the difficulty is that SQL performance optimization is a very broad and complex problem, and many such resources tend to go into too much detail for the novice; One first needs to understand the fundamental principles and structures which underlie SQL Server (the way indexes work, the way the data is stored, the difference between clustered indexes and heaps...) before diving into many of the [important] details of query optimization. It is a bit like baseball: first you need to know the rules before understanding all the subtle [and important] concepts related to the game strategy.
See this related SO Question for additional pointers.
Here's a great resource to help you understand them
http://downloads.red-gate.com/ebooks/HighPerformanceSQL_ebook.zip
This is from Red Gate, a company that makes great SQL Server tools; it's free and well worth the time to download and read.
This is a serious area of knowledge, and I highly recommend a dedicated training course on it. In my case, after spending a week on such a course I sped up some of my queries by roughly 1000 times (nostalgia).
The Execution Plan shows you how the database is fetching, sorting and filtering the data required for your query.
For example:
SELECT
*
FROM
TableA
INNER JOIN
TableB
ON
TableA.Id = TableB.TableAId
WHERE
TableB.TypeId = 2
ORDER BY
TableB.Date ASC
Would result in an execution plan showing the database getting records from TableA and TableB, matching them to satisfy the JOIN, filtering to satisfy the WHERE and sorting to satisfy the ORDER BY.
From this, you can work out what is slowing down the query, whether it would be beneficial to review your indexes or if you can speed things up in another way.
Scenario
Quick background on this one: I am attempting to optimize the use of an inline table-valued function uf_GetVisibleCustomers(@cUserId). The iTVF wraps a view CustomerView and filters out all rows containing data for customers whom the requesting user is not permitted to see. This way, should selection criteria ever change in the future for certain user types, we won't have to implement that new condition a hundred times (hyperbole) all over the SQL codebase.
Performance is not great, however, so I want to fix that before encouraging use of the iTVF. Changed database object names here just so it's easier to demonstrate (hopefully).
Queries
In attempting to optimize our iTVF uf_GetVisibleCustomers, I've noticed that the following SQL …
CREATE TABLE #tC ( idCustomer INT )
INSERT #tC
SELECT idCustomer
FROM [dbo].[uf_GetVisibleCustomers]('requester')
SELECT T.fAmount
FROM [Transactions] T
JOIN #tC C ON C.idCustomer = T.idCustomer
… is orders of magnitude faster than my original (IMO more readable, likely to be used) SQL here…
SELECT T.fAmount
FROM [Transactions] T
JOIN [dbo].[uf_GetVisibleCustomers]('requester') C ON C.idCustomer = T.idCustomer
I don't get why this is. The former (top block of SQL) returns ~700k rows in 17 seconds on a fairly modest development server. The latter (second block of SQL) returns the same number of rows in about ten minutes when there is no other user activity on the server. Maybe worth noting that there is a WHERE clause, however I have omitted it here for simplicity; it is the same for both queries.
Execution Plan
Below is the execution plan for the first query. It goes parallel automatically, while the latter query isn't worth displaying here because it's just massive (it expands the entire iTVF, the underlying view and its subqueries). Anyway, the latter also does not execute in parallel (AFAIK) to any extent.
My Questions
Is it possible to achieve performance comparable to the first block without a temp table?
That is, with the relative simplicity and human-readability of the slower SQL.
Why is a join to a temp table faster than a join to iTVF?
Why is it faster to use a temp table than an in-memory table populated the same way?
Beyond those explicit questions, if someone can point me in the right direction toward understanding this better in general then I would be very grateful.
Without seeing the DDL for your inline function - it's hard to say what the issue is. It would also help to see the actual execution plans for both queries (perhaps you could try: https://www.brentozar.com/pastetheplan/). That said, I can offer some food for thought.
As you mentioned, the iTVF accesses the underlying tables, views and associated indexes. If your statistics are not up-to-date you can get a bad plan, that won't happen with your temp table. On that note, too, how long does it take to populate that temp table?
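Two quick checks along those lines (a sketch; the base-table names under CustomerView are guesses, substitute the real ones):
-- Refresh statistics on the tables the view / iTVF reads, then re-test both queries.
UPDATE STATISTICS dbo.Customers WITH FULLSCAN;      -- assumed base table of CustomerView
UPDATE STATISTICS dbo.Transactions WITH FULLSCAN;

-- Time the temp-table population on its own, separately from the join.
SET STATISTICS TIME ON;
SELECT idCustomer
INTO #tC_timing                                     -- hypothetical scratch table
FROM [dbo].[uf_GetVisibleCustomers]('requester');
SET STATISTICS TIME OFF;
DROP TABLE #tC_timing;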
Another thing to look at (again, this is why DDL is helpful) is: are the data types the same for Transactions.idCustomer and #tC.idCustomer? I see a hash match in the plan you posted, which seems bad for a join between two IDs (a nested loops or merge join would be better). This could be slowing both queries down, but would appear to have a more dramatic impact on the query that leverages your iTVF.
Again, this ^^^ is speculation based on my experience. A couple of quick things to try (not as a permanent fix, but for troubleshooting):
1. Check whether recompiling your query when using the iTVF speeds things up (this would be a sign of bad stats or a bad execution plan being cached and reused).
2. Try forcing a parallel plan for the iTVF query. You can do this by adding OPTION (QUERYTRACEON 8649) to the end of your query or by using make_parallel() by Adam Machanic.
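Concretely, the two experiments might look like this, using the objects from the question (a sketch; QUERYTRACEON 8649 is an undocumented trace flag, so treat it as diagnostic only):
-- 1. Recompile on each execution to rule out a bad cached plan / stale statistics.
SELECT T.fAmount
FROM [Transactions] T
JOIN [dbo].[uf_GetVisibleCustomers]('requester') C ON C.idCustomer = T.idCustomer
OPTION (RECOMPILE);

-- 2. Nudge the optimizer toward a parallel plan (diagnostic only).
SELECT T.fAmount
FROM [Transactions] T
JOIN [dbo].[uf_GetVisibleCustomers]('requester') C ON C.idCustomer = T.idCustomer
OPTION (QUERYTRACEON 8649);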
We are trying to optimize some of our queries.
One query is doing the following:
SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
INTO [#Gadget]
FROM task t
SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
FROM [#Gadget]
order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC
DROP TABLE [#Gadget]
(I have removed the complex subquery. I don't think it's relevant other than to explain why this query has been done as a two stage process.)
I thought it would be far more efficient to merge this down into a single query using subqueries as:
SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
FROM
(
SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
FROM task t
) as sub
order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC
This would give the optimizer better information to work out what was going on and avoid any temporary tables. I assumed it should be faster.
But it turns out it is a lot slower. 8 seconds vs. under 5 seconds.
I can't work out why this would be the case, as all my knowledge of databases implies that subqueries would always be faster than using temporary tables.
What am I missing?
Edit --
From what I have been able to see from the query plans, both are largely identical, except for the temporary table which has an extra "Table Insert" operation with a cost of 18%.
Obviously as it has two queries the cost of the Sort Top N is a lot higher in the second query than the cost of the Sort in the Subquery method, so it is difficult to make a direct comparison of the costs.
Everything I can see from the plans would indicate that the subquery method would be faster.
"should be" is a hazardous thing to say of database performance. I have often found that temp tables speed things up, sometimes dramatically. The simple explanation is that it makes it easier for the optimiser to avoid repeating work.
Of course, I've also seen temp tables make things slower, sometimes much slower.
There is no substitute for profiling and studying query plans (read their estimates with a grain of salt, though).
Obviously, SQL Server is choosing the wrong query plan. Yes, that can happen, I've had exactly the same scenario as you a few times.
The problem is that optimizing a query (you mention a "complex subquery") is a non-trivial task: If you have n tables, there are roughly n! possible join orders -- and that's just the beginning. So, it's quite plausible that doing (a) first your inner query and (b) then your outer query is a good way to go, but SQL Server cannot deduce this information in reasonable time.
What you can do is to help SQL Server. As Dan Tow writes in his great book "SQL Tuning", the key is usually the join order, going from the most selective to the least selective table. Using common sense (or the method described in his book, which is a lot better), you could determine which join order would be most appropriate and then use the FORCE ORDER query hint.
Anyway, every query is unique, there is no "magic button" to make SQL Server faster. If you really want to find out what is going on, you need to look at (or show us) the query plans of your queries. Other interesting data is shown by SET STATISTICS IO, which will tell you how much (costly) HDD access your query produces.
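As a hedged sketch of both suggestions against the query from the question (the <complex subquery> is stubbed out with a NULL here, and FORCE ORDER only becomes meaningful once the real joins inside it are back):
-- Show logical/physical reads per table for the statement that follows.
SET STATISTICS IO ON;

SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) AS Client
FROM
(
    SELECT t.TaskID, t.Name AS Task, '' AS Tracker, t.ClientID,
           CAST(NULL AS datetime) AS Date          -- stand-in for the <complex subquery>
    FROM task t
) AS sub
ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC
OPTION (FORCE ORDER);                              -- join tables in the order written

SET STATISTICS IO OFF;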
I have re-iterated this question here: How can I force a subquery to perform as well as a #temp table?
The nub of it is: yes, I get that sometimes the optimiser is right to meddle with your subqueries as if they weren't fully self-contained, but sometimes it takes a wrong turn when it tries to be clever in a way we're all familiar with. I'm saying there must be a way of switching off that "cleverness" where necessary, instead of wrecking a view-led approach with temp tables.
I have a relation between two tables with 600K rows and my first question is, is that a lot of data? It doesn't seem like a lot (in terms of rows, not bytes)
I can write a query like this
SELECT EntityID, COUNT(*)
FROM QueryMembership
GROUP BY EntityID
And it completes in no time at all. But when I do this:
SELECT EntityID, COUNT(*)
FROM QueryMembership
WHERE PersonID IN (SELECT PersonID FROM GetAcess(1))
GROUP BY EntityID
The thing takes 3-4 seconds to complete, despite just returning about 183 rows. SELECT * FROM QueryMembership takes about 12-13 seconds.
What I don't understand is how a filter like this can take so long as soon as I introduce this table-valued function. The function itself doesn't take any time at all to return its result, and no matter whether I write it as a CTE or some bizarre subquery, the result is the same.
However, if I defer the filter by inserting the result of the first select into a temporary table #temp and then using the GetAccess UDF, the entire thing goes about three times as fast.
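A sketch of the kind of deferred-filter rewrite I mean (this particular shape, parking the UDF output in a temp table first, is just one variant):
-- Materialise the function result, then filter against the temp table.
SELECT PersonID
INTO #access
FROM GetAccess(1);

SELECT qm.EntityID, COUNT(*) AS cnt
FROM QueryMembership qm
WHERE qm.PersonID IN (SELECT PersonID FROM #access)
GROUP BY qm.EntityID;

DROP TABLE #access;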
I would really like some in-depth technical help on this matter. Where I should start look, and how I can analyze the execution plan to figure out what's going on.
There's an excellent series of posts on execution plans and how to read and interpret them - and a totally free e-book on the topic as well! - on the excellent Simple-Talk site.
Check them out - well worth the time!
Execution Plan Basics
SQL Server Execution Plans
Understanding More Complex Query Plans
Graphical Execution Plans for Simple SQL Queries
SQL Server Execution Plans - free e-book download
600k rows is not a particularly large amount. However, you are getting to the point where server configuration (disks, non-SQL load, etc) matters, so if your server wasn't carefully put together you should look at that now rather than later.
Analyzing execution plans is one of those things that you tend to pick up over time. The book "Inside SQL Server" is (was?) pretty nice for learning how things work internally, which helps guide you a bit as you're optimizing.
I would personally try rewriting the above query as a join, IN often doesn't perform as well as you might hope. Something like:
SELECT
EntityID,
COUNT(*)
FROM
QueryMembership q
join GetAccess(1) a on a.PersonID = q.PersonID
GROUP BY
EntityID
SELECT EntityID, COUNT(*)
FROM QueryMembership
WHERE PersonID IN (SELECT PersonID FROM GetAcess(1))
GROUP BY EntityID
The embedded subquery is expensive. As you said, using a temporary table is a perfect alternative solution.
I suspect that the reasons for your slowdown may be similar to those in this question:
how to structure an index for group by in Sql Server
An execution plan will answer the question as to why the second query is slower, however I suspect it will be because SQL server can use indexes to look up aggregate functions (such as COUNT and MAX) using relatively inexpensive operations on some index.
If you combine a filter and a group however, SQL server can no longer use this trick and is forced to evaluate the value of COUNT or MAX based on the filtered result set, leading to expensive lookups.
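For instance, a hypothetical covering index along these lines (name and column order are assumptions based only on the query shown) would let SQL Server resolve both the PersonID filter and the EntityID grouping from the index alone:
-- Hypothetical index: seek on PersonID, with EntityID available for the GROUP BY,
-- so the aggregate can be computed without base-table lookups.
CREATE NONCLUSTERED INDEX IX_QueryMembership_PersonID_EntityID
ON QueryMembership (PersonID, EntityID);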
600k rows is a fairly reasonable / small table size; however, it's big enough that things like table scans or RID lookups against large portions of the table start becoming expensive.
I'd be interested to see the execution plan to understand what's going on.
I'm in a situation where I have to improve the performance of about 75 stored procedures (created by someone else) used for reporting. The first part of my solution was creating about 6 denormalized tables that will be used for the bulk of the reporting. Now that I've created the tables I have the somewhat daunting task of determining what Indexes I should create to best improve the performance of these stored procs.
I'm curious to see if anyone has any suggestions for finding which columns would make sense to include in the indexes. I've contemplated using Profiler/DTA, or possibly fashioning some sort of query like the one below to figure out the popular columns.
SELECT so.name, COUNT(so.name) AS hits, so.xtype
FROM syscomments AS sc
INNER JOIN sysobjects so ON sc.id = so.id
WHERE sc.text LIKE '%ColumnName%'
AND so.xtype = 'P'
GROUP BY so.name, so.xtype
ORDER BY hits DESC
Let me know if you have any ideas that would help me not have to dig through these 75 procs by hand.
Also, inserts are only performed on this DB once per day so insert performance is not a huge concern for me.
Any suggestions for identifying what indexes need to be created?
Yes! Ask Sql Server to tell you.
Sql Server automatically keeps statistics for what indexes it can use to improve performance. This is already going on in the background for you. See this link:
http://msdn.microsoft.com/en-us/library/ms345417.aspx
Try running a query like this (taken right from msdn):
SELECT mig.*, statement AS table_name,
column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id;
Just be careful. I've seen people take the missing index views as Gospel, and use them to push out a bunch of indexes they don't really need. Indexes have costs, in terms of upkeep at insert, update, and delete time, as well as disk space and memory use. To make real, accurate use of this information you want to profile actual execution times of your key procedures both before and after any changes, to make sure the benefits of an index (singly or cumulative) aren't outweighed by the costs.
If you know all of the activity is coming from the 75 stored procedures then I would use profiler to track which stored procedures take the longest and are called the most. Once you know which ones are then look at those procs and see what columns are being used most often in the Where clause and JOIN ON sections. Most likely, those are the columns you will want to put non-clustered indexes on. If a set of columns are often times used together then there is a good chance you will want to make 1 non-clustered index for the group. You can have many non-clustered indexes on a table (250) but you probably don't want to put more than a handful on it. I think you will find the data is being searched and joined on the same columns over and over. Remember the 80/20 rule. You will probably get 80% of your speed increases in the first 20% of the work you do. There will be a point where you get very little speed increase for the added indexes, that is when you want to stop.
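To make that concrete, the kind of index that tends to fall out of this exercise looks something like the following (table, column and index names here are invented purely for illustration):
-- Composite non-clustered index on the columns the reporting procs repeatedly
-- filter and join on, with INCLUDE columns to cover the SELECT list.
CREATE NONCLUSTERED INDEX IX_ReportSales_RegionID_SaleDate
ON dbo.ReportSales (RegionID, SaleDate)
INCLUDE (CustomerID, Amount);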
I concur with bechbd - use a good sample of your database traffic (by running a server trace on a production system during real office hours, to get the best snapshot), and let the Database Tuning Advisor analyze that sampling.
I agree with you - don't blindly rely on everything the Database Tuning Advisor tells you to do - it's just a recommendation, but the DTA can't take everything into account. Sure - by adding indices you can speed up querying - but you'll slow down inserts and updates at the same time.
Also - to really find out if something helps, you need to implement it, measure again, and compare - that's really the only reliable way. There are just too many variables and unknowns involved.
And of course, you can use the DTA to fine-tune a single query to perform outrageously well, but that might neglect the fact that this query is only ever called once per week, or that by tuning this one query and adding an index, you hurt other queries.
Index tuning is always a balance, a tradeoff, and a trial-and-error kind of game - it's not an exact science with a formula and a recipe book to strictly determine what you need.
You can use SQL Server Profiler in SSMS to see what and how your tables are being called, and then use the Database Engine Tuning Advisor on that trace to at least start you down the correct path. I know most DBAs will probably scream at me for recommending this, but for us non-DBA types such as myself it at least gives us a starting point.
If this is strictly a reporting database and you need performance, consider moving to a data warehouse design. A star or snowflake schema will outperform even a denormalized relational design when it comes to reporting.
I have two insert statements, almost exactly the same, which run in two different schemas on the same Oracle instance. What the insert statement looks like doesn't matter - I'm looking for a troubleshooting strategy here.
Both schemas have 99% the same structure. A few columns have slightly different names, other than that they're the same. The insert statements are almost exactly the same. The explain plan on one gives a cost of 6, the explain plan on the other gives a cost of 7. The tables involved in both sets of insert statements have exactly the same indexes. Statistics have been gathered for both schemas.
One insert statement inserts 12,000 records in 5 seconds.
The other insert statement inserts 25,000 records in 4 minutes 19 seconds.
The number of records being inserted is correct. It's the vast disparity in execution times that confuses me. Given that nothing stands out in the explain plan, how would you go about determining what's causing this disparity in runtimes?
(I am using Oracle 10.2.0.4 on a Windows box).
Edit: The problem ended up being an inefficient query plan, involving a Cartesian merge join which didn't need to be done. Judicious use of index hints and a hash join hint solved the problem. It now takes 10 seconds. SQL Trace / TKPROF gave me the direction, as it showed me how many seconds each step in the plan took and how many rows were being generated. TKPROF showed me:
Rows Row Source Operation
------- ---------------------------------------------------
23690 NESTED LOOPS OUTER (cr=3310466 pr=17 pw=0 time=174881374 us)
23690 NESTED LOOPS (cr=3310464 pr=17 pw=0 time=174478629 us)
2160900 MERGE JOIN CARTESIAN (cr=102 pr=0 pw=0 time=6491451 us)
1470 TABLE ACCESS BY INDEX ROWID TBL1 (cr=57 pr=0 pw=0 time=23978 us)
8820 INDEX RANGE SCAN XIF5TBL1 (cr=16 pr=0 pw=0 time=8859 us)(object id 272041)
2160900 BUFFER SORT (cr=45 pr=0 pw=0 time=4334777 us)
1470 TABLE ACCESS BY INDEX ROWID TBL1 (cr=45 pr=0 pw=0 time=2956 us)
8820 INDEX RANGE SCAN XIF5TBL1 (cr=10 pr=0 pw=0 time=8830 us)(object id 272041)
23690 MAT_VIEW ACCESS BY INDEX ROWID TBL2 (cr=3310362 pr=17 pw=0 time=235116546 us)
96565 INDEX RANGE SCAN XPK_TBL2 (cr=3219374 pr=3 pw=0 time=217869652 us)(object id 272084)
0 TABLE ACCESS BY INDEX ROWID TBL3 (cr=2 pr=0 pw=0 time=293390 us)
0 INDEX RANGE SCAN XIF1TBL3 (cr=2 pr=0 pw=0 time=180345 us)(object id 271983)
Notice the rows where the operations are MERGE JOIN CARTESIAN and BUFFER SORT. Things that keyed me into looking at this were the number of rows generated (over 2 million!), and the amount of time spent on each operation (compare to other operations).
Use the SQL Trace facility and TKPROF.
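For your own session on 10g that boils down to something like this (a sketch; the trace file name and directory depend on your instance settings, and EXEC is SQL*Plus shorthand for a one-line PL/SQL block):
-- Tag the trace file so it is easy to find, then switch on extended SQL trace.
ALTER SESSION SET tracefile_identifier = 'slow_insert';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);

-- ... run the slow INSERT statement here ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

-- Then format the raw trace file on the server with tkprof, for example:
-- tkprof <instance>_ora_<spid>_slow_insert.trc slow_insert.txt sys=no sort=exeela,fchela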
The main culprits in insert slowdowns are indexes, constraints, and on-insert triggers. Do a test with as many of these removed as you can and see if it's fast. Then introduce them back in and see which one is causing the problem.
I have seen systems where they drop indexes before bulk inserts and rebuild at the end -- and it's faster.
The first thing to realize is that, as the documentation says, the cost you see displayed is relative to one of the query plans. The costs for 2 different explains are not comparable. Secondly the costs are based on an internal estimate. As hard as Oracle tries, those estimates are not accurate. Particularly not when the optimizer misbehaves. Your situation suggests that there are two query plans which, according to Oracle, are very close in performance. But which, in fact, perform very differently.
The actual information that you want to look at is the actual explain plan itself. That tells you exactly how Oracle executes that query. It has a lot of technical gobbledygook, but what you really care about is knowing that it works from the most indented part out, and at each step it merges according to one of a small number of rules. That will tell you what Oracle is doing differently in your two instances.
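A convenient way to see that plan as text is EXPLAIN PLAN plus DBMS_XPLAN (a sketch; the INSERT below is a placeholder for your real statement):
-- Ask Oracle which plan it would use for the statement, then pretty-print it.
EXPLAIN PLAN FOR
  INSERT INTO target_table (col1, col2)        -- placeholder statement
  SELECT s.col1, s.col2
  FROM   source_table s
  JOIN   lookup_table l ON l.id = s.lookup_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);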
What next? Well there are a variety of strategies to tune bad statements. The first option that I would suggest, if you're in Oracle 10g, is to try their SQL tuning advisor to see if a more detailed analysis will tell Oracle the error of its ways. It can then store that plan, and you will use the more efficient plan.
If you can't do that, or if that doesn't work, then you need to get into things like providing query hints, manual stored query outlines, and the like. That is a complex topic. This is where it helps to have a real DBA. If you don't, then you'll want to start reading the documentation, but be aware that there is a lot to learn. (Oracle also has a SQL tuning class that is, or at least used to be, very good. It isn't cheap though.)
I've put up my general list of things to check to improve performance as an answer to another question:
Favourite performance tuning tricks
... It might be helpful as a checklist, even though it's not Oracle-specific.
I agree with a previous poster that SQL Trace and tkprof are a good place to start. I also highly recommend the book Optimizing Oracle Performance, which discusses similar tools for tracing execution and analyzing the output.
SQL Trace and tkprof are only good if you have access to these tools. Most of the large companies that I do work for do not allow developers to access anything under the Oracle unix IDs.
I believe you should be able to determine the problem by first understanding the question that is being asked and by reading the explain plans for each of the queries. Many times I find that the big difference is that there are some tables and indexes that have not been analyzed.
Another good reference that presents a general technique for query tuning is the book SQL Tuning by Dan Tow.
When the performance of a sql statement isn't as expected / desired, one of the first things I do is to check the execution plan.
The trick is to check for things that aren't as expected. For example you might find table scans where you think an index scan should be faster or vice versa.
One point where the Oracle optimizer sometimes takes a wrong turn is its estimate of how many rows a step will return. If the execution plan expects 2 rows but you know it will be more like 2,000 rows, the execution plan is bound to be less than optimal.
With two statements to compare you can obviously compare the two execution plans to see where they differ.
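On 10gR2 you can also compare estimated and actual row counts directly, which makes those bad estimates jump out (a sketch; the query is a placeholder):
-- Collect runtime row-source statistics for this execution ...
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   your_table t
WHERE  t.your_column = 'some value';

-- ... then show the last plan for this session with estimated (E-Rows)
-- and actual (A-Rows) counts side by side.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));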
From this analysis, I come up with an execution plan that I think should be better suited. This is not an exact execution plan, but just some crucial changes to the one I found, such as: it should use index X, or a hash join instead of a nested loop.
The next thing is to figure out a way to make Oracle use that execution plan, often by using hints, creating additional indexes, or sometimes changing the SQL statement. Then, of course, test that the changed statement
a) still does what it is supposed to do
b) is actually faster
With b) it is very important to make sure you are testing the correct use case. A typical pitfall is the difference between returning the first row versus returning the last row. Most tools show you the first results as soon as they are available, with no direct indication that there is more work to be done. But if your actual program has to process all rows before it continues to the next processing step, it is almost irrelevant when the first row appears; it only matters when the last row is available.
If you find a better execution plan, the final step is to make your database actually use it in the actual program. If you added an index, this will often work out of the box. Hints are an option, but can be problematic if a library creates your SQL statements, since those often don't support hints. As a last resort you can save and fix execution plans for specific SQL statements. I'd avoid this approach, because it's easy for it to be forgotten, and in a year or so some poor developer will scratch her head over why the statement performs in a way that might have been appropriate with the data of a year ago, but not with the current data ...