I ran the Performance – Top Queries by Total IO report (I am trying to improve this process).
The #1 query is this code:
DECLARE @LeadsVS3 AS TT_LEADSMERGE
DECLARE @LastUpdateDate DATETIME

SELECT @LastUpdateDate = MAX(updatedate)
FROM [BUDatamartsource].[dbo].[salesforce_lead]

INSERT INTO @LeadsVS3
SELECT
    Lead_id,
    (more columns…)
    OrderID__c,
    City__c
FROM
    [ReplicatedVS3].[dbo].[Lead]
WHERE
    UpdateDate > @LastUpdateDate
(this code is a piece of a larger stored procedure)
This is in a job that runs every 15 minutes. Other than running the job less frequently, is there any other improvement I could make?
Try a local temp ('hash') table like #LeadsVS3 instead of the UDTT table variable; it is faster in most cases.
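For illustration, a minimal sketch of that swap, reusing the question's object names; the data types are placeholders, since the TT_LEADSMERGE definition isn't posted:

CREATE TABLE #LeadsVS3
(
    Lead_id    INT,             -- data types are assumptions; copy them from TT_LEADSMERGE
    OrderID__c NVARCHAR(255),
    City__c    NVARCHAR(255)
    -- ...plus the remaining columns from the original SELECT
);

DECLARE @LastUpdateDate DATETIME;

SELECT @LastUpdateDate = MAX(updatedate)
FROM [BUDatamartsource].[dbo].[salesforce_lead];

INSERT INTO #LeadsVS3 (Lead_id, OrderID__c, City__c)
SELECT Lead_id, OrderID__c, City__c
FROM [ReplicatedVS3].[dbo].[Lead]
WHERE UpdateDate > @LastUpdateDate;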
There is another trick you can use.
In cases where you always pull all 'recent' rows, you can end up blocked on a single row, the latest one, while you wait for it to commit. You can sacrifice a small window, say one minute, by ignoring records from the last minute (current datetime minus one minute). Those rows are picked up by the next run, and you save yourself any transaction (or replication) lock waits.
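A sketch of that cutoff, building on the same query (the one-minute window is only an example value):

DECLARE @Cutoff DATETIME = DATEADD(MINUTE, -1, GETDATE());   -- skip the most recent minute

INSERT INTO #LeadsVS3 (Lead_id, OrderID__c, City__c)
SELECT Lead_id, OrderID__c, City__c
FROM [ReplicatedVS3].[dbo].[Lead]
WHERE UpdateDate > @LastUpdateDate
  AND UpdateDate <= @Cutoff;   -- anything newer is picked up by the next 15-minute run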
The execution plan that you posted appears to be the estimated execution plan (the actual execution plan includes the actual number of rows). Without the actual plan, it's impossible to tell what's really going on.
The obvious improvement would be to add a covering nonclustered index on Lead.UpdateDate that includes the other columns in your SELECT statement. Right now you're scanning the widest possible index (your clustered index) to retrieve a presumably small percentage of records. Turning that clustered index scan into a nonclustered index seek will be a huge win.
On the same note, you could make that index a filtered index that only includes records with dates greater than your last UpdateDate, then set up a regular SQL Agent job that periodically rebuilds it to filter on a more current date.
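For example, something along these lines; the index names are made up, and the INCLUDE list should contain whatever your SELECT actually returns:

-- Covering index keyed on the filter column
CREATE NONCLUSTERED INDEX IX_Lead_UpdateDate
ON [ReplicatedVS3].[dbo].[Lead] (UpdateDate)
INCLUDE (Lead_id, OrderID__c, City__c);

-- Filtered variant; a scheduled job would drop and recreate it with a newer literal date
CREATE NONCLUSTERED INDEX IX_Lead_UpdateDate_Recent
ON [ReplicatedVS3].[dbo].[Lead] (UpdateDate)
INCLUDE (Lead_id, OrderID__c, City__c)
WHERE UpdateDate > '20240101';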
Other things you can do to increase insert performance:
Drop any constraints and/or indexes before the insert, then rebuild them after (sketched below).
Use smaller data types.
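A minimal sketch of the first bullet against a hypothetical target table, constraint, and index (all names are placeholders):

-- Disable a foreign key (or check) constraint and a nonclustered index before the load
ALTER TABLE dbo.TargetTable NOCHECK CONSTRAINT FK_TargetTable_Parent;
ALTER INDEX IX_TargetTable_SomeColumn ON dbo.TargetTable DISABLE;

-- ...run the large INSERT here...

-- Rebuild the index and re-validate the constraint afterwards
ALTER INDEX IX_TargetTable_SomeColumn ON dbo.TargetTable REBUILD;
ALTER TABLE dbo.TargetTable WITH CHECK CHECK CONSTRAINT FK_TargetTable_Parent;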
Related
I have a long-running stored procedure with a lot of statements. After analyzing it, I identified a few statements that take most of the time. They are all UPDATE statements.
Looking at the execution plan, the query scans the source table in parallel in a few seconds and then feeds a Gather Streams operator, after which the update itself runs serially.
This is somewhat similar to the article below, and we see the same behavior with index creation statements causing slowness:
https://brentozar.com/archive/2019/01/why-do-some-indexes-create-faster-than-others/
The table has 60 million records and is a heap, as we do a lot of data loads, updates, and deletes.
Reading the source is not a problem, as it completes in a few seconds, but the actual update, which happens serially, takes most of the time.
A few suggestions to try:
If you have indexes on the target table, dropping them before the load and recreating them after should improve insert performance.
Add a WITH (TABLOCK) hint to the table you are inserting into; this lets SQL Server take an exclusive lock on the table and allows the insert itself to run in parallel (see the sketch after this list).
Alternatively, if that doesn't yield an improvement, try adding an OPTION (MAXDOP 1) hint to the query.
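Hedged sketches of the last two suggestions; the table and column names are placeholders, not the question's actual objects:

-- TABLOCK takes an exclusive table lock up front and,
-- against a heap, also enables a parallel, minimally logged insert
INSERT INTO dbo.TargetHeap WITH (TABLOCK) (Id, Col1)
SELECT Id, Col1
FROM dbo.SourceTable;

-- Or force a serial plan if the parallel-to-serial handoff itself is the problem
UPDATE t
SET    t.Col1 = s.Col1
FROM   dbo.TargetHeap AS t
JOIN   dbo.SourceTable AS s ON s.Id = t.Id
OPTION (MAXDOP 1);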
How often do you UPDATE the rows in this heap?
Because, unlike a clustered index, a heap uses a RID to find specific rows. The catch is that (unless you specifically rebuild the heap) when you update a row so that it no longer fits in place, the old location remains behind as a forwarding pointer to the new location, increasing the number of lookups needed every time that row is touched.
I don't really think that is the problem here, but could you possibly test what happens if you add a clustered index to the table and see how the update times are affected?
Also, I assume you don't have some heavy trigger on the table doing a bunch of extra work as well, right?
Additionally, since you are referring to an article by Brent Ozar: he advocates breaking updates into batches of no more than 4,000 rows at a time, both because that tends to be fastest and because it stays under the roughly 5,000-lock threshold at which lock escalation to a table-level exclusive lock can occur during updates.
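A hedged sketch of that batching pattern; the table, join column, and batch size are placeholders:

DECLARE @BatchSize INT = 4000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) t
    SET    t.SomeColumn = s.SomeColumn
    FROM   dbo.BigHeap AS t
    JOIN   dbo.SourceTable AS s ON s.Id = t.Id
    WHERE  t.SomeColumn <> s.SomeColumn;   -- only touch rows that still need the change

    IF @@ROWCOUNT = 0 BREAK;               -- stop once nothing is left to update
END;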
I have a table t1 with a primary key on the identity column of this table. Within the insert trigger of t1, an UPDATE statement is being issued to update some columns of the newly inserted rows, the join being on t1.primarykeycolumn and inserted.primarykeycolumn.
When the number of inserted rows starts to creep up, I have noticed 'suboptimal' execution plans. I guess the optimizer refers to the statistics on t1 to arrive at the execution plan. But for newly inserted rows, the statistics page is always going to be stale; after all, the IDENTITY column is always monotonically increasing when SQL Server supplies the values.
To prove that statistics are the 'issue', I issued an UPDATE STATISTICS command as the first statement in the trigger, and the optimizer was able to come up with a very good plan for a wide range of row counts. But I certainly cannot issue UPDATE STATISTICS in production code for a mostly OLTP system.
Most of the time, the number of inserted rows is in the few tens and only occasionally in the couple of thousands. When the number of rows is in the tens, the execution plan shows only nested loop joins, while it switches to a series of merge joins and stream aggregates at some point as the row count creeps up.
I want to avoid writing convoluted code within the trigger, one part for handling a large number of rows and another for a smaller number. After all, this is what the server is best at doing. Is there a way to tell the optimizer, 'even though you do not see the inserted values in the statistics, the distribution is going to be exactly like what has been inserted before; please come up with the plan based on this assumption'? Any pointers appreciated.
After experimenting a bit, I have the following observation:
In the absence of statistics, the optimizer comes up with good plans across a very wide range of row counts. It is only when statistics are updated before issuing the inserts (i.e. when statistics are available) that the optimizer comes up with 'bad' plans for the join between inserted and the base table inside the trigger as the number of rows goes up.
Is there a way to tell the optimizer, 'ignore whatever is on the statistics page and do whatever you were doing in the absence of statistics'?
In this specific case, INSTEAD OF INSERT triggers are a viable option. See http://www.sqlservercentral.com/Forums/Topic1826794-3387-1.aspx
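A minimal sketch of that idea; the column names and the computed expression are placeholders for whatever the AFTER trigger's UPDATE was doing:

CREATE TRIGGER trg_t1_InsteadOfInsert
ON dbo.t1
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Compute the derived value during the insert itself,
    -- so there is no post-insert UPDATE joining back on the identity key.
    INSERT INTO dbo.t1 (SomeColumn, DerivedColumn)
    SELECT i.SomeColumn,
           UPPER(i.SomeColumn)   -- placeholder for the real calculation
    FROM   inserted AS i;
END;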
I have a stored procedure running 10 times slower in production than in staging. I took a look at the execution plan, and the first thing I noticed was that the cost of the Table Insert (into a table variable, @temp) was 100% in production and 2% in staging.
The estimated number of rows in production was almost 200 million, but in staging it was only about 33.
The production DB is running on SQL Server 2008 R2 while staging is on SQL Server 2012, but I don't think this difference could cause such a problem.
What could be the cause of such a huge difference?
UPDATED
Added the execution plan. As you can see, the large number of estimated rows shows up at the Nested Loops (Inner Join) operator, but all it does is a clustered index seek against another table.
UPDATED2
Link for the plan XML included
plan.xml
And SQL Sentry Plan Explorer view (with estimated counts shown)
This looks like a bug to me.
There are an estimated 90,991.1 rows going into the nested loops.
The table cardinality of the table being seeked on is 24,826.
If there are no statistics for a column and the equality operator is used, SQL Server can't know the density of the column, so it uses a fixed 10 percent estimate.
90,991.1 * 24,826 * 10% = 225,894,504.86 which is pretty close to your estimated rows of 225,894,000
But the execution plan shows that only 1 row is estimated per seek. Not the 24,826 from above.
So these figures don't add up. I would assume that it starts from an original 10% ballpark estimate and later adjusts it to 1 because of the presence of a unique constraint, without making a compensating adjustment to the other branches.
I see that the seek is calling a scalar UDF, [dbo].[TryConvertGuid]. I was able to reproduce similar behavior on SQL Server 2005, where seeking on a unique index on the inside of a nested loops join with a UDF in the predicate produced an estimate coming out of the join that was much larger than expected from multiplying the estimated seeked rows by the estimated number of executions.
But in your case, the operators to the left of the problematic part of the plan are pretty simple and not sensitive to the number of rows (neither the row-count Top operator nor the Insert operator will change), so I don't think this quirk is responsible for the performance issues you noticed.
Regarding the point in the comments on another answer that switching to a temp table helped insert performance: this may be because it allows the read part of the plan to operate in parallel (inserting into a table variable would block that).
Run EXEC sp_updatestats; on the production database. This updates statistics on all tables. It might produce more sane execution plans if your statistics are screwed up.
Please don't run EXEC sp_updatestats; on a large system it could take hours, or days, to complete. What you may want to do instead is look at the query plan being used in production. Try to see whether there is an index that could be used but is not being used. Try rebuilding that index (as a side effect, rebuilding updates the statistics on the index). After rebuilding, look at the query plan and note whether it is using the index. Perhaps you may need to add an index to the table. Does the table have a clustered index?
As a general rule, since 2005, SQL Server manages statistics on its own rather well. The only time you need to explicitly update statistics is when you know the query would run much faster if SQL Server used a particular index, but it is not doing so. You may want to run (on a nightly or weekly basis) scripts that automatically check every table and every index to see whether the index needs to be reorganized or rebuilt (depending on how fragmented it is). These kinds of scripts may take a long time to run on a large, active OLTP system, and you should consider carefully when you have a window to run them. There are quite a few versions of this script floating around, but I have used this one often:
https://msdn.microsoft.com/en-us/library/ms189858.aspx
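As a rough illustration of what such a maintenance script checks (object names are placeholders; the 5%/30% thresholds are the commonly cited guidelines, not hard rules):

-- Check fragmentation for one table's indexes
SELECT i.name, s.avg_fragmentation_in_percent, s.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS s
JOIN   sys.indexes AS i
  ON   i.object_id = s.object_id AND i.index_id = s.index_id;

-- Reorganize for moderate fragmentation, rebuild for heavy fragmentation
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REORGANIZE;   -- roughly 5-30% fragmented
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REBUILD;      -- above ~30% (also refreshes its statistics)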
Sorry this is probably too late to help you.
Table variables are impossible for SQL Server to predict: it always estimates one row, and exactly one row, coming back.
To get accurate estimates so that a better plan can be created, you need to switch your table variable to a temp table or a CTE.
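A minimal sketch of that switch; the column definitions are placeholders:

-- Before: table variable, which the optimizer assumes returns a single row
DECLARE @temp TABLE (Id INT, Value NVARCHAR(100));

-- After: temp table, which gets statistics and therefore realistic row estimates
CREATE TABLE #temp (Id INT, Value NVARCHAR(100));

INSERT INTO #temp (Id, Value)
SELECT Id, Value
FROM dbo.SourceTable;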
I have two tables T_A and T_B.
Both are empty.
Both have a clustered index on them.
Recovery model is set to SIMPLE.
The INSERT...SELECT meets the requirements for minimal logging. See
http://msdn.microsoft.com/en-us/library/ms191244.aspx
I need to import data into them from two staging tables (SRC_A and SRC_B), both of which contain a large amount of data.
If I perform the following T-SQL blocks individually, each takes 2 to 3 minutes to finish. The total time is about 5 to 6 minutes.
BEGIN TRAN
INSERT INTO T_A WITH (TABLOCK) SELECT * FROM SRC_A WITH (NOLOCK);
COMMIT TRAN

BEGIN TRAN
INSERT INTO T_B WITH (TABLOCK) SELECT * FROM SRC_B WITH (NOLOCK);
COMMIT TRAN
To make it faster, I opened two sessions in SSMS and executed the two blocks in parallel. To my surprise, each session took about 10 to 12 minutes to finish, so the total time more than doubled. The wait type shown is PAGEIOLATCH_SH, which points to a disk I/O bottleneck. What I don't understand is that even if the two sessions have to wait on each other for I/O, they should not wait that long. Can anyone help explain this?
My story does not end there. I then removed the clustered index on both tables and ran the two blocks in parallel, each in a different session. This time each took about 1 minute to finish, so the total time was about 1 minute since they ran in parallel. Great! But the nightmare came when I tried to create the clustered indexes again.
If I create the clustered indexes one at a time, each takes 4 minutes to finish; the total time is about 8 minutes. This defeats my purpose of improving performance.
I then tried to create the clustered indexes on the two tables in parallel, each in a different session. This time it was the worst: one took 12 minutes to finish and the other 25 minutes.
From my test results, my best choice is back to square one: execute the two transactions sequentially with the clustered indexes already on the tables.
Has anyone experienced a similar situation, and what is the best practice to make this faster?
When you create the clustered index after inserting the records, SQL Server has to rebuild the table in the background anyway, so it is faster to insert the records directly into a table that already has its clustered index.
Also, disable any nonclustered indexes while inserting and rebuild them afterwards; creating indexes on a filled table is faster than maintaining them for every insert. Remember to set the MAXDOP option to 0 when creating indexes so the build can use all available processors.
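For example, assuming a hypothetical nonclustered index on T_A:

ALTER INDEX IX_T_A_SomeColumn ON dbo.T_A DISABLE;

-- ...load the data while only the clustered index is in place...

ALTER INDEX IX_T_A_SomeColumn ON dbo.T_A REBUILD WITH (MAXDOP = 0);   -- 0 = use all available processors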
Bulk inserts are also a lot faster than a plain INSERT INTO statement.
I use the 'SQL Server Import and Export Wizard' for copying large amounts of data, and it seems to be way faster (the wizard uses bulk statements). If necessary, you can try to find the statement the wizard uses and run it yourself.
I have a very big table with a lot of rows and a lot of columns (I know it's bad but let's leave this aside).
Specifically, I have two columns: FinishTime and JobId. The first is the finish time of the row and the second is its ID (not unique, but almost unique; only a few records exist with the same ID).
I have an index on JobId and an index on FinishTime.
We insert rows all the time, mostly ordered by finish time. We also update the statistics of each index periodically.
Now to the problem:
When I run a query with the filter JobId = <some id> AND FinishTime > <now minus 1 hour>, it takes a lot of time, and the estimated execution plan shows that the plan uses the FinishTime index, even though using the JobId index should be a lot better. Looking at the index statistics, I see that the server 'thinks' the number of rows in the last hour is 1, because we haven't updated the statistics of that index.
When I run the query with the filter JobId = <some id> AND FinishTime > <now minus 100 days>, it works great, because SQL Server knows to use the correct index, the JobId index.
So basically my question is: if we don't update index statistics all the time (which is time consuming), why does the server assume that the number of records past the last histogram bucket is 1?
Thanks very much
You can see the histogram that the statistics contain for an index using DBCC SHOW_STATISTICS, e.g.
DBCC SHOW_STATISTICS ( mytablename , myindexname )
For date-based records, queries will always be prone to incorrect statistics like this. Running the command above should show that the last bucket in the histogram covers barely any of the rows added since the statistics were last updated. However, all else being equal, SQL Server should still prefer the JobId index to the FinishTime index if both are single-column indexes with no included columns, since the JobId (int) key is cheaper to seek than the FinishTime (datetime) key.
Note: if your FinishTime index is covering for the query, that will heavily influence the query optimizer toward selecting it, since it eliminates a bookmark (key) lookup operation.
To combat this, you can:
update statistics regularly,
create multiple filtered indexes (a 2008+ feature) on the data, with the one covering the tail end of recent rows rebuilt far more frequently, or
use index hints on sensitive queries (see the sketches after this list).
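Hedged sketches of the first and last options; the table, index, and variable names are placeholders for the question's actual objects:

-- Refresh the volatile index's statistics more often than auto-update would
UPDATE STATISTICS dbo.Jobs IX_Jobs_FinishTime WITH FULLSCAN;

-- Or force the JobId index on a sensitive query
DECLARE @JobId INT = 12345;

SELECT *
FROM   dbo.Jobs WITH (INDEX (IX_Jobs_JobId))
WHERE  JobId = @JobId
  AND  FinishTime > DATEADD(HOUR, -1, GETDATE());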