Best way to execute a huge query - database

Let me share a very simple question. Imagine this situation.
Table 1: pk1, field11, field12, field13, field14
Table 2: pk2, field21, field22, field23, field24
Table 3: pk3, field31, field32, field33, field34
and these relationships:
field11 = pk2
field24 = pk3
And we need: field11, field12, field23, field24, field31, field34
Each of these 3 tables can hold around 1 million rows.
I was thinking of solving it with 2 inner joins, but is there maybe some other solution for this kind of huge query?
Regards,
Jose

Even if you torture your query with contorted code, it will produce the same execution plan, because the query is translated into an algebraic formula and the mathematical simplification process ends up at the same expression (the optimizer does that job for you).
So do not torture your mind, simplify your life, and write simple queries without worrying about which form is expected to be the best!
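For what it's worth, a straightforward version of Jose's query would look something like this (table and column names are taken from the question; treat it as a sketch rather than tested code):
SELECT t1.field11, t1.field12,
       t2.field23, t2.field24,
       t3.field31, t3.field34
FROM Table1 t1
INNER JOIN Table2 t2 ON t2.pk2 = t1.field11
INNER JOIN Table3 t3 ON t3.pk3 = t2.field24
Since pk2 and pk3 are primary keys (and therefore indexed), the optimizer is free to choose hash, merge, or nested-loop joins as it sees fit, even at a million rows per table.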

Related

Breaking a join in the case of large tables

I just need a common approach.
Suppose we are joining 7 tables, 5 of which are huge and 2 of which are small.
If we execute the query in one go it will take an enormous amount of time.
So, what is the basic approach we should follow to break up the query?
All the tables have only one joining condition to the next table.
Is there any approach that can help us optimize the joins and get a better execution plan?

What is the reason for performance difference?

I am creating a report, and for it I have written 2 different types of query. But I am seeing a huge performance difference between these 2 methods. What may be the reason? My main table (suppose table A) contains a date column, and I am filtering the data based on that date. I have to join around 10 tables with this table.
First method:
select A.id, A1.name, ...
from table A
join A1
join A2 ... A10
where A.Cdate >= #date
and A.Cdate <= #date
Second method:
With CTE as (select A.id from A where A.Cdate >= #date and A.Cdate <= #date)
select CTE.id, A1.name, ... from CTE join A1 join A2 ... A10
Here the second method is faster. What is the reason? In the first method, only the filtered data of A will be joined with the other tables' data, right?
Execution plans will tell us for sure, but likely if the CTE is able to filter out a lot of rows before the join, a more optimal join approach may have been chosen (for example merge instead of hash). SQL isn't supposed to work that way - in theory those two should execute the same way. But in practice we find that SQL Server's optimizer isn't perfect and can be swayed in different ways based on a variety of factors, including statistics, selectivity, parallelism, a pre-existing version of that plan in the cache, etc.
One suggestion: you should be able to answer this question yourself, as you have both plans. Did you compare the two plans? Are they similar? Also, when you say performance is bad, what do you mean: elapsed time, CPU time, I/O, or something else? What did you compare?
Before you post a question like this you should check those counters; in most cases they will give you some kind of answer.
A CTE is for managing code; it won't improve query performance automatically. CTEs are expanded by the optimizer, so in your case both versions should become the same query after transformation or expansion, and thus produce similar plans.
A CTE is basically for handling more complex code (be it recursive or containing a subquery), but you need to check the execution plans to see where the improvement between the 2 different queries comes from.
You can read up on CTEs at: http://msdn.microsoft.com/en-us/library/ms190766(v=sql.105).aspx
Just a note: in many scenarios temp tables give better performance than CTEs, so you should give temp tables a try as well.
Reference: http://social.msdn.microsoft.com/Forums/en/transactsql/thread/d040d19d-016e-4a21-bf44-a0359fb3c7fb
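To illustrate the temp-table note above, a sketch of the second method rewritten with a temp table instead of a CTE (the A1...A10 join conditions are still omitted exactly as in the question, and the date parameters are placeholders):
select A.id
into #FilteredA
from A
where A.Cdate >= #date and A.Cdate <= #date

select f.id, A1.name, ... from #FilteredA f join A1 join A2 ... A10

drop table #FilteredA
Unlike a CTE, the temp table is actually materialized and gets its own statistics, which can lead the optimizer to a different (sometimes better) plan for the remaining joins.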

Small table has very high cost in query plan

I am having an issue with a query where the query plan says that 15% of the execution cost is for one table. However, this table is very small (only 9 rows).
Clearly there is a problem if the smallest table involved in the query has the highest cost.
My guess is that the query keeps on looping over the same table again and again, rather than caching the results.
What can I do about this?
Sorry, I can't paste the exact code (which is quite complex), but here is something similar:
SELECT Foo.Id
FROM Foo
-- Various other joins have been removed for the example
LEFT OUTER JOIN SmallTable as st_1 ON st_1.Id = Foo.SmallTableId1
LEFT OUTER JOIN SmallTable as st_2 ON st_2.Id = Foo.SmallTableId2
WHERE (
-- various where clauses removed for the example
)
AND (st_1.Id is null OR st_1.Code = 7)
AND (st_2.Id is null OR st_2.Code = 4)
Take these execution-plan statistics with a wee grain of salt. If this table is "disproportionately small," relative to all the others, then those cost-statistics probably don't actually mean a hill o' beans.
I mean... think about it ... :-) ... if it's a tiny table, what actually is it? Probably, "it's one lousy 4K storage-page in a file somewhere." We read it in once, and we've got it, period. End of story. Nothing (actually...) there to index; no (actual...) need to index it; and, at the end of the day, the DBMS will understand this just as well as we do. Don't worry about it.
Now, having said that ... one more thing: make sure that the "cost" which seems to be attributed to "the tiny table" is not actually being incurred by very-expensive access to the tables to which it is joined. If those tables don't have decent indexes, or if the query as-written isn't able to make effective use of them, then there's your actual problem; that's what the query optimizer is actually trying to tell you. ("It's just a computer ... backwards things says it sometimes.")
Without the query plan it's difficult to solve your problem here, but there is one glaring clue in your example:
AND (st_1.Id is null OR st_1.Code = 7)
AND (st_2.Id is null OR st_2.Code = 4)
This is going to be incredibly difficult for SQL Server to optimize because it's nearly impossible to accurately estimate the cardinality. Hover over the elements of your query plan and look at EstimatedRows vs. ActualRows and EstimatedExecutions vs. ActualExecutions. My guess is these are way off.
Not sure what the whole query looks like, but you might want to see if you can rewrite it as two queries with a UNION operator rather than using the OR logic.
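A sketch of that rewrite, splitting just the first OR into two branches (the joins and WHERE clauses removed from the example would have to be repeated in each branch, and note that UNION also removes duplicate Ids, which the original query would keep):
SELECT Foo.Id
FROM Foo
LEFT OUTER JOIN SmallTable as st_1 ON st_1.Id = Foo.SmallTableId1
LEFT OUTER JOIN SmallTable as st_2 ON st_2.Id = Foo.SmallTableId2
WHERE st_1.Id is null
AND (st_2.Id is null OR st_2.Code = 4)
UNION
SELECT Foo.Id
FROM Foo
LEFT OUTER JOIN SmallTable as st_1 ON st_1.Id = Foo.SmallTableId1
LEFT OUTER JOIN SmallTable as st_2 ON st_2.Id = Foo.SmallTableId2
WHERE st_1.Code = 7
AND (st_2.Id is null OR st_2.Code = 4)
Each branch gives SQL Server a simpler predicate to estimate; the second OR could be decomposed the same way if the estimates are still far off.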
Well, with the limited information available, all I can suggest is that you ensure all columns being used for comparisons are properly indexed.
In addition, you haven't stated if you have an actual performance problem. Even if those table accesses took up 90% of the query time, it's most likely not a problem if the query only takes (for example) a tenth of a second.

Why would using a temp table be faster than a nested query?

We are trying to optimize some of our queries.
One query is doing the following:
SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
INTO [#Gadget]
FROM task t
SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
FROM [#Gadget]
order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC
DROP TABLE [#Gadget]
(I have removed the complex subquery. I don't think it's relevant other than to explain why this query has been done as a two stage process.)
I thought it would be far more efficient to merge this down into a single query using subqueries as:
SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
FROM
(
SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
FROM task t
) as sub
order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC
This would give the optimizer better information to work out what was going on and avoid any temporary tables. I assumed it should be faster.
But it turns out it is a lot slower. 8 seconds vs. under 5 seconds.
I can't work out why this would be the case, as all my knowledge of databases imply that subqueries would always be faster than using temporary tables.
What am I missing?
Edit --
From what I have been able to see from the query plans, both are largely identical, except for the temporary table which has an extra "Table Insert" operation with a cost of 18%.
Obviously as it has two queries the cost of the Sort Top N is a lot higher in the second query than the cost of the Sort in the Subquery method, so it is difficult to make a direct comparison of the costs.
Everything I can see from the plans would indicate that the subquery method would be faster.
"should be" is a hazardous thing to say of database performance. I have often found that temp tables speed things up, sometimes dramatically. The simple explanation is that it makes it easier for the optimiser to avoid repeating work.
Of course, I've also seen temp tables make things slower, sometimes much slower.
There is no substitute for profiling and studying query plans (read their estimates with a grain of salt, though).
Obviously, SQL Server is choosing the wrong query plan. Yes, that can happen, I've had exactly the same scenario as you a few times.
The problem is that optimizing a query (you mention a "complex subquery") is a non-trivial task: If you have n tables, there are roughly n! possible join orders -- and that's just the beginning. So, it's quite plausible that doing (a) first your inner query and (b) then your outer query is a good way to go, but SQL Server cannot deduce this information in reasonable time.
What you can do is to help SQL Server. As Dan Tow writes in his great book "SQL Tuning", the key is usually the join order, going from the most selective to the least selective table. Using common sense (or the method described in his book, which is a lot better), you could determine which join order would be most appropriate and then use the FORCE ORDER query hint.
Anyway, every query is unique, there is no "magic button" to make SQL Server faster. If you really want to find out what is going on, you need to look at (or show us) the query plans of your queries. Other interesting data is shown by SET STATISTICS IO, which will tell you how much (costly) HDD access your query produces.
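Applied to the merged query from the question, those two suggestions might look roughly like this (the complex subquery is still the placeholder from the question, and FORCE ORDER only helps if the order the tables are written in really is the better one):
SET STATISTICS IO ON   -- reports logical/physical reads per table

SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
FROM
(
    SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
    FROM task t
) as sub
order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC
OPTION (FORCE ORDER)   -- join in the order written instead of letting the optimizer reorder

SET STATISTICS IO OFF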
I have re-iterated this question here: How can I force a subquery to perform as well as a #temp table?
The nub of it is: yes, I get that the optimiser is sometimes right to meddle with your subqueries as if they weren't fully self-contained, but sometimes it takes a bad wrong turn when it tries to be clever in a way we're all familiar with. I'm saying there must be a way of switching off that "cleverness" where necessary, instead of wrecking a view-led approach with temp tables.

How do I troubleshoot performance problems with an Oracle SQL statement

I have two insert statements, almost exactly the same, which run in two different schemas on the same Oracle instance. What the insert statement looks like doesn't matter - I'm looking for a troubleshooting strategy here.
Both schemas have 99% the same structure. A few columns have slightly different names, other than that they're the same. The insert statements are almost exactly the same. The explain plan on one gives a cost of 6, the explain plan on the other gives a cost of 7. The tables involved in both sets of insert statements have exactly the same indexes. Statistics have been gathered for both schemas.
One insert statement inserts 12,000 records in 5 seconds.
The other insert statement inserts 25,000 records in 4 minutes 19 seconds.
The number of records being inserted is correct. It's the vast disparity in execution times that confuses me. Given that nothing stands out in the explain plan, how would you go about determining what's causing this disparity in runtimes?
(I am using Oracle 10.2.0.4 on a Windows box).
Edit: The problem ended up being an inefficient query plan, involving a Cartesian merge join which didn't need to be done. Judicious use of index hints and a hash join hint solved the problem. It now takes 10 seconds. SQL Trace / TKPROF gave me the direction, as it showed me how many seconds each step in the plan took, and how many rows were being generated. TKPROF showed me:
   Rows  Row Source Operation
-------  ---------------------------------------------------
  23690  NESTED LOOPS OUTER (cr=3310466 pr=17 pw=0 time=174881374 us)
  23690    NESTED LOOPS (cr=3310464 pr=17 pw=0 time=174478629 us)
2160900      MERGE JOIN CARTESIAN (cr=102 pr=0 pw=0 time=6491451 us)
   1470        TABLE ACCESS BY INDEX ROWID TBL1 (cr=57 pr=0 pw=0 time=23978 us)
   8820          INDEX RANGE SCAN XIF5TBL1 (cr=16 pr=0 pw=0 time=8859 us)(object id 272041)
2160900        BUFFER SORT (cr=45 pr=0 pw=0 time=4334777 us)
   1470          TABLE ACCESS BY INDEX ROWID TBL1 (cr=45 pr=0 pw=0 time=2956 us)
   8820            INDEX RANGE SCAN XIF5TBL1 (cr=10 pr=0 pw=0 time=8830 us)(object id 272041)
  23690      MAT_VIEW ACCESS BY INDEX ROWID TBL2 (cr=3310362 pr=17 pw=0 time=235116546 us)
  96565        INDEX RANGE SCAN XPK_TBL2 (cr=3219374 pr=3 pw=0 time=217869652 us)(object id 272084)
      0    TABLE ACCESS BY INDEX ROWID TBL3 (cr=2 pr=0 pw=0 time=293390 us)
      0      INDEX RANGE SCAN XIF1TBL3 (cr=2 pr=0 pw=0 time=180345 us)(object id 271983)
Notice the rows where the operations are MERGE JOIN CARTESIAN and BUFFER SORT. Things that keyed me into looking at this were the number of rows generated (over 2 million!), and the amount of time spent on each operation (compare to other operations).
Use the SQL Trace facility and TKPROF.
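For reference, a minimal way to do that on 10g (the trace-file identifier and file names are just examples; the raw trace file ends up in the directory pointed to by the user_dump_dest parameter):
-- in the session that runs the slow insert:
ALTER SESSION SET tracefile_identifier = 'slow_insert';
ALTER SESSION SET events '10046 trace name context forever, level 8';  -- SQL trace incl. wait events

-- ... run the INSERT statement here ...

ALTER SESSION SET events '10046 trace name context off';

-- then format the trace file on the server with TKPROF, e.g.:
-- tkprof orcl_ora_1234_slow_insert.trc slow_insert.txt sys=no sort=exeela,fchela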
The main culprits in insert slowdowns are indexes, constraints, and on-insert triggers. Do a test without as many of these as you can remove and see if it's fast. Then introduce them back in and see which one is causing the problem.
I have seen systems where they drop indexes before bulk inserts and rebuild at the end -- and it's faster.
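On Oracle that pattern usually means marking the indexes unusable rather than dropping them outright; a sketch with a hypothetical index name (note that an unusable unique index will still block DML, so this only works for non-unique indexes):
ALTER SESSION SET skip_unusable_indexes = TRUE;
ALTER INDEX big_table_ix1 UNUSABLE;

-- ... run the bulk insert ...

ALTER INDEX big_table_ix1 REBUILD;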
The first thing to realize is that, as the documentation says, the cost you see displayed is relative to a single query plan; the costs from 2 different explain plans are not comparable. Secondly, the costs are based on internal estimates. As hard as Oracle tries, those estimates are not accurate, particularly not when the optimizer misbehaves. Your situation suggests that there are two query plans which, according to Oracle, are very close in performance, but which in fact perform very differently.
What you actually want to look at is the explain plan itself. That tells you exactly how Oracle executes the query. It contains a lot of technical gobbledygook, but what you really care about is knowing that it works from the most indented part outwards, and that at each step it merges according to one of a small number of rules. That will tell you what Oracle is doing differently in your two instances.
What next? Well there are a variety of strategies to tune bad statements. The first option that I would suggest, if you're in Oracle 10g, is to try their SQL tuning advisor to see if a more detailed analysis will tell Oracle the error of its ways. It can then store that plan, and you will use the more efficient plan.
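A minimal sketch of driving the SQL Tuning Advisor from SQL*Plus (the task name is arbitrary and the SQL text is a placeholder; you also need the ADVISOR privilege and a Tuning Pack license):
DECLARE
  l_task VARCHAR2(30);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text   => 'INSERT INTO ... SELECT ...',  -- the slow statement
              task_name  => 'slow_insert_tuning',
              scope      => 'COMPREHENSIVE',
              time_limit => 60);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'slow_insert_tuning');
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('slow_insert_tuning') FROM dual;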
If you can't do that, or if that doesn't work, then you need to get into things like providing query hints, manual stored query outlines, and the like. That is a complex topic. This is where it helps to have a real DBA. If you don't, then you'll want to start reading the documentation, but be aware that there is a lot to learn. (Oracle also has a SQL tuning class that is, or at least used to be, very good. It isn't cheap though.)
I've put up my general list of things to check to improve performance as an answer to another question:
Favourite performance tuning tricks
... It might be helpful as a checklist, even though it's not Oracle-specific.
I agree with a previous poster that SQL Trace and tkprof are a good place to start. I also highly recommend the book Optimizing Oracle Performance, which discusses similar tools for tracing execution and analyzing the output.
SQL Trace and TKPROF are only good if you have access to these tools. Most of the large companies that I do work for do not allow developers to access anything under the Oracle unix IDs.
I believe you should be able to determine the problem by first understanding the question that is being asked and by reading the explain plans for each of the queries. Many times I find that the big difference is that there are some tables and indexes that have not been analyzed.
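Checking for and fixing missing statistics is straightforward (the schema name is an example, and TBL2 is borrowed from the trace output above purely for illustration):
-- see when (or whether) the tables were last analyzed
SELECT table_name, last_analyzed, num_rows
FROM   all_tables
WHERE  owner = 'APP_SCHEMA';

-- gather statistics for a table and its indexes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_SCHEMA',
                                tabname => 'TBL2',
                                cascade => TRUE);   -- include indexes
END;
/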
Another good reference that presents a general technique for query tuning is the book SQL Tuning by Dan Tow.
When the performance of a sql statement isn't as expected / desired, one of the first things I do is to check the execution plan.
The trick is to check for things that aren't as expected. For example you might find table scans where you think an index scan should be faster or vice versa.
A point where the Oracle optimizer sometimes takes a wrong turn is its estimate of how many rows a step will return. If the execution plan expects 2 rows, but you know it will be more like 2000 rows, the execution plan is bound to be less than optimal.
With two statements to compare you can obviously compare the two execution plans to see where they differ.
From this analysis, I come up with an execution plan that I think should be better suited. This is not an exact execution plan, just some crucial changes to the one I found, like: it should use index X, or a hash join instead of a nested loop.
The next thing is to figure out a way to make Oracle use that execution plan, often by using hints or creating additional indexes, sometimes by changing the SQL statement. Then of course test that the changed statement
a) still does what it is supposed to do
b) is actually faster
With b) it is very important to make sure you are testing the correct use case. A typical pitfall is the difference between returning the first row versus returning the last row. Most tools show you the first results as soon as they are available, with no direct indication that there is more work to be done. But if your actual program has to process all rows before it continues to the next processing step, it is almost irrelevant when the first row appears; it only matters when the last row is available.
If you find a better execution plan, the final step is to make your database actually use it in the actual program. If you added an index, this will often work out of the box. Hints are an option, but can be problematic if a library creates your SQL statement; those often don't support hints. As a last resort you can save and fix execution plans for specific SQL statements. I'd avoid this approach, because it is easily forgotten, and in a year or so some poor developer will scratch her head over why the statement performs in a way that might have been appropriate with the data one year ago, but not with the current data ...
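As a small illustration of the hint route (all table, column, and index names here are made up; the hints simply say "use this index" and "hash-join these two row sources", which mirrors the index plus hash-join hints the question's edit describes):
INSERT INTO order_summary (order_id, customer_name)
SELECT /*+ INDEX(o orders_date_ix) USE_HASH(o c) */
       o.order_id, c.customer_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.order_date >= DATE '2024-01-01';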

Resources