Joining multiple tables performance - sql-server

In my current project, I have to left join multiple tables (about 10-20) together. Among these tables, 1-3 are large tables with millions of rows (at most 80 million); the other tables only have thousands of rows at most.
Currently, my query is like:
SELECT *
FROM table1
LEFT JOIN table2 ON table1.A = table2.A
LEFT JOIN table3 ON table1.B = table3.B
LEFT JOIN table4 ON table1.C = table4.C
LEFT JOIN table5 ON table1.D = table5.D
....
LEFT JOIN table15 ON table1.Z = table15.Z
table1 and table2 are large tables; the others are small.
I have a clustered index on each of these tables, but performance is still low. So I want to know if there is anything I can try to improve it.
P.S.: I have tried creating nonclustered indexes on these tables, but performance became worse than before.

Well, the fastest query would be if you denormalized table1 so that the split-out normalized values were actually part of the table.
Another solution you might try is building a temp table that is one big collection of the 20 or so small tables, and then joining that temp table back to table1, as sketched below.
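A hedged sketch of that idea, assuming table1 has a single-column primary key (called ID here) and that each small table contributes one useful column; all of these names are illustrative:
-- Pre-join the small lookup tables once, keyed by table1's primary key
SELECT t1.ID,
       t3.SomeCol AS Table3Col,
       t4.SomeCol AS Table4Col,
       t5.SomeCol AS Table5Col
INTO #SmallLookups
FROM table1 t1
LEFT JOIN table3 t3 ON t1.B = t3.B
LEFT JOIN table4 t4 ON t1.C = t4.C
LEFT JOIN table5 t5 ON t1.D = t5.D;
-- ...add the remaining small tables the same way

CREATE CLUSTERED INDEX IX_SmallLookups ON #SmallLookups (ID);

-- The final query then only touches the two large tables plus one temp table
SELECT t1.*, t2.*, sl.Table3Col, sl.Table4Col, sl.Table5Col
FROM table1 t1
LEFT JOIN table2 t2 ON t1.A = t2.A
LEFT JOIN #SmallLookups sl ON sl.ID = t1.ID;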

First of all, do you really need all that joined data? In most situations you probably don't. If you do, you probably need to review your requirements and architecture.
So the trick is to fetch only the data you want instead of all of it, and to filter the data as early as possible (even before joining the next table; but don't worry, SQL Server will do some of that optimization for you).
I would start by checking the execution plan (Ctrl+L in SSMS for the estimated plan). Look for "Index Scan" nodes and build indexes for them. I can't go any further without seeing your execution plan.
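For example, instead of SELECT * across every join, something along these lines keeps the column list narrow and filters the big table up front; the column names and WHERE predicates here are placeholders, not from the original query:
SELECT t1.A, t1.B, t2.SomeCol, t3.OtherCol
FROM table1 t1
LEFT JOIN table2 t2 ON t1.A = t2.A
LEFT JOIN table3 t3 ON t1.B = t3.B
WHERE t1.CreatedDate >= '20240101'  -- hypothetical filter on the large table
  AND t1.Status = 'Active';         -- adjust to whatever subset you actually need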

Related

Netezza left outer join query performance

I have a question related to Netezza query performance. I have two tables, Table A and Table B, where Table B is a subset of Table A with altered data. I need to update those new values in Table A from Table B.
We can take two approaches here:
1) Left outer join, select the relevant columns, and insert into the target table
2) Insert Table A's data into the target table and then update those values from Table B using a join
I tried both, and logically both are the same, but the explain plan gives different costs.
For the normal select:
a) Sub-query Scan table "TM2" (cost=0.1..1480374.0 rows=8 width=4864 conf=100)
For the update:
b) Hash Join (cost=356.5..424.5 rows=2158 width=27308 conf=21)
For the left outer join:
Sub-query Scan table "TM2" (cost=51.0..101474.8 rows=10000000 width=4864 conf=100)
From this I feel the left outer join is better. Can anyone put some thought into this and offer guidance?
Thanks
The reason that the cost of insert into table_c select ... from table_a; update table_c set ... from table_b; is higher is that you're inserting, deleting, then inserting again. Updates in Netezza mark the records to be updated as deleted, then insert new rows with the updated values. Once the data is written to an extent, it's never (to my knowledge) altered.
With insert into table_c select ... from table_a join table_b using (...); you're only inserting once, thereby only updating all the zone maps once. The cost will be noticeably lower.
Netezza does an excellent job of keeping you away from the disk on reads, but it will write to the disk as often as you tell it to. In the case of updates, seemingly more so. Try to only write as often as is necessary to gain benefits of new distributions and co-located joins. Any more than that, and you're just using excess commit actions.
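For reference, a sketch of the two approaches being compared, using the table names from above (the column names id and col1 are illustrative):
-- Approach 2: insert, then update -- every updated row is logically deleted and re-inserted
INSERT INTO table_c SELECT a.id, a.col1 FROM table_a a;
UPDATE table_c SET col1 = b.col1 FROM table_b b WHERE table_c.id = b.id;

-- Approach 1: a single insert via left outer join -- each row is written only once
INSERT INTO table_c
SELECT a.id, COALESCE(b.col1, a.col1) AS col1
FROM table_a a
LEFT OUTER JOIN table_b b ON a.id = b.id;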

How can I speed up this SQL view?

I'm a beginner at this, so I hope you can help. I'm working in SQL Server 2008 R2 and have a view that is composed of four tables all joined together:
SELECT DISTINCT ad.award_id,
bl.funding_id,
bl.budget_line,
dd4.monthnumberofyear AS month,
dd4.yearcalendar AS year,
CASE
WHEN frb.full_value IS NULL THEN '0'
ELSE frb.full_value
END AS Expenditure_value,
bl.budget_id,
frb.accode,
'Actual' AS Type
FROM dw.dbo.dimdate5 AS dd4
LEFT OUTER JOIN dbo.award_data AS ad
ON dd4.fulldate BETWEEN ad.usethisstartdate AND
ad.usethisenddate
LEFT OUTER JOIN dbo.budget_line AS bl
ON bl.award_id = ad.award_id
LEFT OUTER JOIN dw.dbo.fctresearchbalances AS frb
ON frb.el3 = bl.award_id
AND frb.element4groupidnew = bl.budget_line
AND dd4.yearfiscal = frb.yr
AND dd4.monthnumberfiscal = frb.period
The view has 9 columns and 1.5 million rows and growing. A SELECT * from this view was taking 20 minutes for all the rows. I added indexes on the columns that the tables are joined on, and that improved it to 10 minutes. My question is: what else could I do to make the select run faster?
Many thanks, Violet.
Try getting rid of the CASE statement.
If you have 1.5 million rows and you're interested in an aggregation of those rows rather than the whole set, you might want to sum the rows in fctResearchBalances first and then do the joins, as sketched below.
(It's a bit difficult to determine what else you might benefit from, without seeing the access plan.)
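One way to sketch that, assuming full_value is numeric and SUM is the aggregation you're after (adjust the grouping columns and select list to your needs):
WITH frb_summed AS (
    SELECT el3, element4groupidnew, yr, period,
           SUM(full_value) AS full_value
    FROM dw.dbo.fctresearchbalances
    GROUP BY el3, element4groupidnew, yr, period
)
SELECT dd4.monthnumberofyear AS month,
       dd4.yearcalendar AS year,
       bl.budget_id,
       ISNULL(frb.full_value, 0) AS Expenditure_value
FROM dw.dbo.dimdate5 AS dd4
LEFT OUTER JOIN dbo.award_data AS ad
    ON dd4.fulldate BETWEEN ad.usethisstartdate AND ad.usethisenddate
LEFT OUTER JOIN dbo.budget_line AS bl
    ON bl.award_id = ad.award_id
LEFT OUTER JOIN frb_summed AS frb
    ON frb.el3 = bl.award_id
    AND frb.element4groupidnew = bl.budget_line
    AND dd4.yearfiscal = frb.yr
    AND dd4.monthnumberfiscal = frb.period;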
1. You can use a stored procedure so that its execution plan is cached and reused.
2. You can use an indexed view, which means creating an index on a schema-bound view.
3. You can use query hints on the joins to direct the query optimizer to use a particular kind of join (see the sketch below).
4. You can use table partitioning.
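As an illustration of point 3, a join hint forces a particular physical join; it should only be used after the execution plan shows the optimizer is choosing poorly. The column choice here is just an example against the view's fact-table join:
SELECT bl.funding_id, frb.accode
FROM dbo.budget_line AS bl
LEFT OUTER HASH JOIN dw.dbo.fctresearchbalances AS frb
    ON frb.el3 = bl.award_id;
-- or, at the query level: OPTION (HASH JOIN);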
SELECT DISTINCT --#1 - potential bottleneck
ad.award_id
, bl.funding_id
, bl.budget_line
, [month] = dd4.monthnumberofyear
, [year] = dd4.yearcalendar
, Expenditure_value = ISNULL(frb.full_value, '0')
, bl.budget_id
, frb.accode
, [type] = 'Actual'
FROM dbo.dimdate5 dd4
LEFT JOIN dbo.award_data ad ON dd4.fulldate BETWEEN ad.usethisstartdate AND ad.usethisenddate
LEFT JOIN dbo.budget_line bl ON bl.award_id = ad.award_id
LEFT JOIN dbo.fctresearchbalances frb ON frb.el3 = bl.award_id --#2 - join by multiple columns
AND frb.element4groupidnew = bl.budget_line
AND dd4.yearfiscal = frb.yr
AND dd4.monthnumberfiscal = frb.period
The CASE statement can be replaced by:
COALESCE(frb.full_value,'0') AS Expenditure_value
Without more info it's not possible to tell exactly what is wrong, but here are some pointers.
When you have so many LEFT JOINs, the order of the joins can make a difference.
Do you have standard indexes or covering indexes with included columns?
If you don't have covering indexes, then primary keys matter in the joins. Including all the primary key columns in the join condition will speed up the query.
Then look at your data: do you need all those LEFT JOINs based on the foreign keys between those tables? Depending on your keys, a LEFT JOIN may be equivalent to an INNER JOIN.
And with all those LEFT JOINs, is having a DISTINCT really useful?
How much RAM do you have? If you have 8GB+, then 1.5 million rows is nothing for SQL Server. You need to optimise those joins.
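Picking up the covering-index point above, here is a sketch for the fact table; the key columns come from the view's join conditions and the INCLUDE list from its select list, but verify both against your actual workload:
CREATE NONCLUSTERED INDEX IX_fctresearchbalances_covering
ON dw.dbo.fctresearchbalances (el3, element4groupidnew, yr, period)
INCLUDE (full_value, accode);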

Proper way to filter a table using values in another table in MS Access?

I have a table of transactions with some transaction IDs and Employee Numbers. I have two other tables which are basically just a column full of transactions or employees that need to be filtered out from the first.
I have been running my query like this:
SELECT * FROM TransactionMaster
WHERE TransactionMaster.TransID
NOT IN (SELECT TransID FROM BadTransactions)
AND etc...(repeat for employee numbers)
I have noticed slow performance when running these types of queries. I am wondering if there is a better way to build this query?
If you want all TransactionMaster rows which don't include a TransID match in BadTransactions, use a LEFT JOIN and ask for only those rows where BadTransactions.TransID Is Null (unmatched).
SELECT tm.*
FROM
TransactionMaster AS tm
LEFT JOIN
BadTransactions AS bt
ON tm.TransID = bt.TransID
WHERE bt.TransID Is Null;
That query should be relatively fast with TransID indexed.
If you have Access available, create a new query using the "unmatched query wizard". It will guide you through the steps to create a similar query.
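The same pattern extends to the employee exclusion with a second LEFT JOIN. Note that Access requires the extra parentheses around the first join; the BadEmployees and EmployeeNumber names below are assumptions about your schema:
SELECT tm.*
FROM (TransactionMaster AS tm
LEFT JOIN BadTransactions AS bt
ON tm.TransID = bt.TransID)
LEFT JOIN BadEmployees AS be
ON tm.EmployeeNumber = be.EmployeeNumber
WHERE bt.TransID Is Null
AND be.EmployeeNumber Is Null;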

SQL Server CTE referenced in self joins is slow

I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table.
There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows.
The CTE has a WHERE clause that restricts the rows to a date range, in order to return only the rows needed.
I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria.
The query is quite complex, but here is a simplified pseudo-version of it:
WITH DataCTE as
(
SELECT [columns] FROM table
INNER JOIN table2
ON [...]
INNER JOIN table3
ON [...]
LEFT JOIN table3
ON [...]
)
SELECT [aggregates_columns of each subset] FROM DataCTE Main
LEFT JOIN DataCTE BananasSubset
ON [...]
AND Product = 'Bananas'
AND Quality = 100
LEFT JOIN DataCTE DamagedBananasSubset
ON [...]
AND Product = 'Bananas'
AND Quality < 20
LEFT JOIN DataCTE MangosSubset
ON [...]
GROUP BY [...]
I have the feeling that SQL Server gets confused and re-evaluates the CTE for each self join, which seems to be confirmed by looking at the execution plan, although I confess I'm not an expert at reading those.
I would have assumed SQL Server to be smart enough to perform the data retrieval for the CTE only once, rather than doing it several times.
I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same SELECT query as in the CTE and made it output to a temp table instead.
The version referring the CTE version takes 40 seconds. The version referring the temp table takes between 1 and 2 seconds.
Why isn't SQL Server smart enough to keep the CTE results in memory?
I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement.
To use a temp table, I would need to write a multi-statement table valued UDF, which I find a slightly less elegant solution.
Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted?
Thanks,
Kharlos
I believe that CTE results are retrieved every time. With a temp table the results are stored until it is dropped. This would seem to explain the performance gains you saw when you switched to a temp table.
Another benefit is that you can create indexes on a temporary table, which you can't do with a CTE. I'm not sure whether there would be a benefit in your situation, but it's good to know.
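A minimal sketch of the temp-table variant, with entirely hypothetical table and column names; inside a table-valued function you would need a table variable or a multi-statement TVF instead, since temp tables aren't allowed there:
DECLARE @From date = '20240101', @To date = '20240201';  -- hypothetical date range

-- Materialise the CTE's result once
SELECT s.SaleID, s.Product, s.Quality, s.Amount
INTO #Data
FROM dbo.Sales AS s
WHERE s.SaleDate >= @From AND s.SaleDate < @To;

CREATE CLUSTERED INDEX IX_Data ON #Data (Product, Quality);

-- The self left joins now read the indexed temp table instead of
-- re-evaluating the underlying query for every reference
SELECT SUM(b.Amount)  AS BananaTotal,
       SUM(db.Amount) AS DamagedBananaTotal
FROM #Data AS main
LEFT JOIN #Data AS b
    ON b.SaleID = main.SaleID AND b.Product = 'Bananas' AND b.Quality = 100
LEFT JOIN #Data AS db
    ON db.SaleID = main.SaleID AND db.Product = 'Bananas' AND db.Quality < 20;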
Related reading:
Which are more performant, CTE or temporary tables?
SQL 2005 CTE vs TEMP table Performance when used in joins of other tables
http://msdn.microsoft.com/en-us/magazine/cc163346.aspx#S3
Quote from the last link:
"The CTE's underlying query will be called each time it is referenced in the immediately following query."
I'd say go with the temp table. Unfortunately elegant isn't always the best solution.
UPDATE:
Hmmm, that makes things more difficult. It's hard for me to say without looking at your whole environment.
Some thoughts:
Can you use a stored procedure instead of a UDF (instead of it, not from within it)?
This may not be possible, but if you can remove the left join from your CTE, you could move that into an indexed view. If you are able to do this, you may see performance gains over even the temp table.

Does the order of JOINs make a difference?

Say I have a query like the one below:
SELECT t1.id, t1.Name
FROM Table1 as t1 --800,000 records
INNER JOIN Table2 as t2 --500,000 records
ON t1.fkID = t2.id
INNER JOIN Table3 as t3 -- 1,000 records
ON t1.OtherId = t3.id
Would i see a performance improvement if I changed the order of my joins on Table2 and Table3. See below:
SELECT t1.id, t1.Name
FROM Table1 as t1 --800,000 records
INNER JOIN Table3 as t3 -- 1,000 records
ON t1.OtherId = t3.id
INNER JOIN Table2 as t2 --500,000 records
ON t1.fkID = t2.id
I've heard that the Query Optimizer will try to determine the best order but doesn't always succeed. Does the version of SQL Server you are using make a difference?
The order of joins makes no difference.
What does make a difference is ensuring your statistics are up to date.
One way to check your statistics is to run a query in SSMS and include the Actual execution plan. If the Estimated number of rows is very different to the Actual number of rows used by any part of the execution plan, then your statistics are out of date.
Statistics are rebuilt when the related indexes are rebuilt. If your production maintenance window allows, I would update statistics every night.
This will update statistics for all tables in a database:
exec sp_MSforeachtable "UPDATE STATISTICS ?"
The order of joins makes a difference only if you specify OPTION (FORCE ORDER). Otherwise, the optimizer will rearrange your query in whichever way it deems most efficient.
There actually are certain instances where I find that I need to use FORCE ORDER, but of course they are few and far between. If you aren't sure, just SET STATISTICS [TIME|IO] ON and see for yourself. You'll probably find that your version runs slower than the optimized version in most if not all cases.
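For reference, the hint goes at the end of the statement; with the example tables from the question it would look like this (only do this once the plan shows the optimizer's own order is worse):
SELECT t1.id, t1.Name
FROM Table1 AS t1
INNER JOIN Table3 AS t3 ON t1.OtherId = t3.id
INNER JOIN Table2 AS t2 ON t1.fkID = t2.id
OPTION (FORCE ORDER);  -- joins are performed in the order written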
The Query Optimizer should easily handle these as exactly the same query, and work out the best way of doing it.
A lot of it is more about the statistics than the number of records. For example, if the vast majority of values in t1.fkID are identical, this information can influence the QO a lot.
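If you suspect that kind of skew, you can look at the histogram SQL Server keeps for the column; the statistics name below is an assumption (it usually matches an index on fkID):
DBCC SHOW_STATISTICS ('dbo.Table1', 'IX_Table1_fkID');
-- the histogram section shows how fkID values are distributed,
-- which is what drives the optimizer's row estimates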
