I am running into an issue where SQL Server is taking a significant number of locks (95 to 150) on our main table. They are typically short-duration locks, lasting under 3 seconds, but I would like to eliminate them if I possibly can. We have also noticed that typically there are no blocks, but occasionally the blocks seem to "cascade" and then the entire system slows down considerably.
Background
We have up to 600 virtual machines processing data, and we loaded a table in SQL Server so we could monitor any records that get stalled and records that are marked complete. We typically have between 200,000 and 1,000,000 records in this table during processing.
What we are trying to accomplish
We are attempting to get the next available record (Status = 0). However, since there can be multiple hits on the stored proc simultaneously, we are trying to make sure each VM gets a unique record. This is important because processing takes between 1.5 and 2.5 minutes per record and we want to make this as clean as possible.
Our thought process to this point
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
VMID = @VMID,
ReadCount = ReadCount + 1,
ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
WHERE Status = 0
This update was causing us a few issues with locks, so we re-worked the process a little and changed the WHERE to a sub-query that returns the top 1 RowID (primary key) from the table. This seemed to help things run a little smoother, but then we occasionally get overloaded in the database again.
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
VMID = @VMID,
ReadCount = ReadCount + 1,
ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
-- WHERE Status = 0
WHERE RowID IN (SELECT TOP 1 RowID FROM dbo.Test WHERE Status = 0 ORDER BY RowID)
We discovered that having a significant number of Status 1 and 2 records in the table causes slowdowns. We figured it was from a table scan on the Status column. We added the following index, but it did not help solve the locks.
CREATE NONCLUSTERED INDEX IX_Test_Status_RowID
ON [dbo].[Test] ([Status])
INCLUDE ([RowID])
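A variation we have been considering but have not actually tried yet is a filtered index that only covers the rows still waiting to be picked up (this is just a sketch; the index name is made up):
CREATE NONCLUSTERED INDEX IX_Test_Status0_RowID
ON [dbo].[Test] ([RowID])
WHERE [Status] = 0  -- filtered index: only the Status = 0 rows are maintained in it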
As the final step after the UPDATE, we use the returned RowID to select out the details:
SELECT 'Test' as FileName, *, @Nick as [Nickname]
FROM Test WITH (NOLOCK)
WHERE RowID IN (SELECT id from #retValue)
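One idea we have not tested yet is skipping that second SELECT entirely and OUTPUT-ting the columns we need straight from the UPDATE. Roughly like this (the table variable and its column list are only illustrative; @VMID and @Nick are the proc parameters):
DECLARE @retValue TABLE (RowID INT, ProcessDT DATETIME);

UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID, INSERTED.ProcessDT INTO @retValue (RowID, ProcessDT)
WHERE Status = 0;

-- No second read of dbo.Test needed, so no NOLOCK hint either
SELECT 'Test' AS FileName, r.*, @Nick AS [Nickname]
FROM @retValue AS r;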
Types of locks
The majority of the locks are LCK_M_U and LCK_M_S, which I would expect with that UPDATE and SELECT. We also occasionally saw one or two LCK_M_X locks, which made me think we may still be getting collisions on our "unique" record code.
Questions
Are these locks, and the number of locks, just normal SQL Server behavior for this type of load?
Is the sub-query causing more issues than the plain TOP (1) UPDATE we started with? I am trying to get confirmation that I can remove the ORDER BY and drop that extra step of processing.
Would a different index help? I wondered if the index updating was a possible cause of the locks initially, but now I am not sure.
Is there a better or more efficient way to get a unique RowID? (See the sketch after this list for the pattern we keep running into.)
Is WITH (ROWLOCK) causing more locks than leaving it off would? The idea is that ROWLOCK locks only the one specific record, allowing another call to update a different record and select without locking the table or page.
Does anyone have any tools they recommend to stress test and run 100 queries simultaneously in order to test any potential solutions?
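For reference, the pattern that keeps coming up when we search on this (we have not tried it ourselves yet) is to add a READPAST hint so each caller simply skips rows that another caller already has locked, instead of waiting on them:
UPDATE TOP (1) dbo.Test WITH (ROWLOCK, READPAST) -- READPAST: skip rows locked by other sessions
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
WHERE Status = 0;
If that works as advertised, two VMs hitting the proc at the same moment should each claim a different row rather than colliding on the same one.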
Sorry for all the questions, just trying to make sure I am as clear as possible on our process and the questions we have.
Thanks in advance for any insight as this is a really frustrating issue for us.
Hardware
We are running SQL Server 2008 R2 on a dual-Xeon server with 24 GB of RAM, so we should have plenty of horsepower for this process.
It looks like the best solution to the issue was to create a separate table with an identity column and use @@IDENTITY from the insert to determine the next row to process. That has solved all my lock issues so far in my stress testing. Thanks to all who pointed me in the right direction!
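For anyone finding this later, here is roughly the shape of what we ended up with (simplified; the table and column names are illustrative, and the direct mapping from the ticket number to RowID assumes the rows are numbered contiguously):
-- One identity column acts as a "ticket dispenser"; each VM inserts a row
-- and the identity value it gets back is unique to that VM.
CREATE TABLE dbo.TestTicket (TicketID INT IDENTITY(1,1) PRIMARY KEY);

DECLARE @ticket INT;

INSERT INTO dbo.TestTicket DEFAULT VALUES;
SET @ticket = SCOPE_IDENTITY();  -- SCOPE_IDENTITY() is safer than @@IDENTITY if triggers are involved

-- The ticket number tells this VM which row to claim and process.
SELECT 'Test' AS FileName, t.*, @Nick AS [Nickname]
FROM dbo.Test AS t
WHERE t.RowID = @ticket;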
Related
I've already seen a dozen such questions, but most of them get answers that don't apply to my case.
First off: the database I am trying to get the data from is on a very slow network and is connected to over VPN.
I am accessing it through a database link.
I have full read/write access on my schema tables, but I don't have DBA rights, so I can't create dumps and I don't have grants for creating new tables, etc.
I've been trying to get the database locally and all is well except for one table.
It has 6.5 million records and 16 columns.
There was no problem getting 14 of them, but the remaining two are CLOBs with huge XML in them.
The data transfer is so slow it is painful.
I tried
insert based on select
insert all 14 then update the other 2
create table as
insert based on a conditional select so I only get so many records, then manually commit
The issue is mainly that the connection is lost before the transaction finishes (or there is a power loss, the VPN drops, a random error, etc.) and all the GBs that have been downloaded are discarded.
As I said I tried putting conditionals so I get a few records but even this is a bit random and requires focus from me.
Something like :
Insert into TableA
Select * from TableA@DB_RemoteDB1
WHERE CREATION_DATE BETWEEN to_date('01-Jan-2016') AND to_date('31-DEC-2016')
Sometimes it works, sometimes it doesn't. After a few GBs Toad is stuck running, but when I look at its throughput it is 0 KB/s or a few bytes/s.
What I am looking for is a loop or a cursor that can be used to get maybe 100,000 or 1,000,000 records at a time, commit them, then go for the rest until it is done.
This is a one time operation that I am doing as we need the data locally for testing - so I don't care if it is inefficient as long as the data is brought in in chunks and a commit saves me from retrieving it again.
I can count already about 15GBs of failed downloads I've done over the last 3 days and my local table still has 0 records as all my attempts have failed.
Server: Oracle 11g
Local: Oracle 11g
Attempted Clients: Toad/Sql Dev/dbForge Studio
Thanks.
You could do something like:
begin
loop
insert into tablea
select * from tablea@DB_RemoteDB1 a_remote
where not exists (select null from tablea where id = a_remote.id)
and rownum <= 100000; -- or whatever number makes sense for you
exit when sql%rowcount = 0;
commit;
end loop;
end;
/
This assumes that there is a primary/unique key you can use to check whether a row in the remote table already exists in the local one - in this example I've used a generic ID column, but replace that with your actual key column(s).
For each iteration of the loop it will identify rows in the remote table which do not exist in the local table - which may be slow, but you've said performance isn't a priority here - and then, via rownum, limit the number of rows being inserted to a manageable subset.
The loop then terminates when no rows are inserted, which means there are no rows left in the remote table that don't exist locally.
This should be restartable, due to the commit and where not exists check. This isn't usually a good approach - as it kind of breaks normal transaction handling - but as a one off and with your network issues/constraints it may be necessary.
Toad is right, using bulk collect would be (probably significantly) faster in general as the query isn't repeated each time around the loop:
declare
cursor l_cur is
select * from tablea@DB_RemoteDB1 a_remote
where not exists (select null from tablea where id = a_remote.id);
type t_tab is table of l_cur%rowtype;
l_tab t_tab;
begin
open l_cur;
loop
fetch l_cur bulk collect into l_tab limit 100000;
forall i in 1..l_tab.count
insert into tablea values l_tab(i);
commit;
exit when l_cur%notfound;
end loop;
close l_cur;
end;
/
This time you would change the limit 100000 to whatever number you think sensible. There is a trade-off here though, as the PL/SQL table will consume memory, so you may need to experiment a bit to pick that value - you could get errors or affect other users if it's too high. Lower is less of a problem here, except the bulk inserts become slightly less efficient.
But because you have a CLOB column (holding your XML) this won't work for you, as @BobC pointed out; the insert ... select is supported over a DB link, but the collection version will get an error from the fetch:
ORA-22992: cannot use LOB locators selected from remote tables
ORA-06512: at line 10
22992. 00000 - "cannot use LOB locators selected from remote tables"
*Cause: A remote LOB column cannot be referenced.
*Action: Remove references to LOBs in remote tables.
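If the not exists probe over the link turns out to be too slow, a variation on the same idea is to drive the loop off a date range instead, using the CREATION_DATE column from your own attempt (sketch only; adjust the dates and chunk size, and if the connection drops mid-chunk, restart from the last committed date rather than from the beginning):
declare
  l_from date := date '2016-01-01';  -- start of the range you want to copy
  l_to   date := date '2017-01-01';  -- end of the range (exclusive)
  l_step number := 7;                -- days per chunk; tune to your network
begin
  while l_from < l_to loop
    insert into tablea
    select * from tablea@DB_RemoteDB1
    where creation_date >= l_from
    and   creation_date <  least(l_from + l_step, l_to);

    commit;  -- keep each chunk as soon as it arrives
    l_from := l_from + l_step;
  end loop;
end;
/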
Yesterday I had this strange (never seen before) situation when applying a patch to several databases from within SQL Server Management Studio. I needed to update a large table (5 million+ records) over several databases. The statement I issued per database was something like:
set rowcount 100000
select 1
while @@rowcount > 0
update table
set column = newval
where column is null
Those statements ran for several seconds (anywhere between 2 and 30 seconds). On a second tab I ran this query:
select count(*)
from table
where column is null
Just to check how many rows were already processed. Impatient as I am, I pressed F5 for the count(*) statement every 20 seconds or so. And the expected behavior was that I had to wait anywhere from 0 to 30 seconds before the count(*) was calculated. I expected this, since the update statement was running, so the count(*) was next in line.
But then, about ten databases in, this strange thing happened. I opened the next database (switched via USE) and after pressing F5 the count(*) responded immediately. And after pressing F5 again, an immediate result but without any progress, and again and again, the count(*) did not change. However, the update was running.
And then, after the n-th press, the count(*) dropped by exactly 100K records. I pressed F5 again: immediate result... and so on and so on. Why wasn't the count(*) waiting? I had not allowed dirty reads, had I?
Only this database gave me this behavior, and I really have no clue what is causing it.
Between switching databases I did not close or open any other tab, so the only thing I can imagine is that the connection parameters for that one database are different. However, I've looked those over... and I can't find a difference.
As per what I understood from your description above, you were not getting the result of your count(*) as expected on one specific database. If that is the case, then please use the script below instead of your count(*):
SELECT
COUNT(1)
FROM Table WITH(NOLOCK)
WHERE Column IS NULL
Because count(*) takes more time to execute, and as your requirement is just to know the count, use count(1) instead.
Also, there is a possibility of locking in the database if you are using transactions, so use WITH(NOLOCK) to avoid such blocking and get the result faster.
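It is also worth checking whether that one database has READ_COMMITTED_SNAPSHOT (or snapshot isolation) enabled; that would explain why a plain count(*) there returns without blocking while the update is running. For example (the database name is a placeholder):
SELECT name,
       is_read_committed_snapshot_on,   -- 1 = readers use row versions under READ COMMITTED
       snapshot_isolation_state_desc
FROM sys.databases
WHERE name = N'YourDatabase';           -- placeholder for the database in question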
I need to insert 1.3 million of records from one table into another, and it takes really long time (over 13 min). After some research I found that it is better to do this operation in batches, so I put together something like this (actual query is more complicated, it is simplified here for briefness):
DECLARE @key INT; SET @key = 0;
CREATE TABLE #CURRENT_KEYS([KEY] INT)
WHILE 1=1
BEGIN
    -- Getting subset of keys
    INSERT INTO #CURRENT_KEYS([KEY])
    SELECT TOP 100000 [KEY] FROM #ALL_KEYS WHERE [KEY] > @key
    IF @@ROWCOUNT = 0 BREAK
    -- Main Insert
    INSERT INTO #RESULT([KEY], VALUE)
    SELECT MAIN_TABLE.[KEY], MAIN_TABLE.VALUE
    FROM MAIN_TABLE INNER JOIN #CURRENT_KEYS
        ON MAIN_TABLE.[KEY] = #CURRENT_KEYS.[KEY]
    SELECT @key = MAX([KEY]) FROM #CURRENT_KEYS
    TRUNCATE TABLE #CURRENT_KEYS
END
I already have an indexed list of 1.3 million keys in the #ALL_KEYS table, so the idea here is to create a smaller subset of keys in a loop for the JOIN and INSERT. The above loop executes 13 times (1,300,000 records / 100,000 records per batch). If I put a break after just one iteration, execution time is 9 seconds. I assumed total execution time would be 9*13 seconds, but it's the same 13 minutes!
Any idea why?
NOTE: Instead of temp table #CURRENT_KEYS, I tried to use CTE, but with the same result.
UPDATE: Some wait stats.
For this process I am seeing PAGEIOLATCH_SH, and sometimes PREEMPTIVE_OS_WRITEFILEGATHER, in the wait stats, occasionally over 500 ms but often under 100 ms. Also, sp_who shows the user as suspended for the duration of the query.
I'm pretty sure you're experiencing disk pressure. PREEMPTIVE_OS_WRITEFILEGATHER is an autogrowth event (the database file getting larger), and PAGEIOLATCH_SH means the process is waiting for a latch on a buffer that is part of an I/O request (probably your file growth event).
http://blog.sqlauthority.com/2011/02/19/sql-server-preemptive-and-non-preemptive-wait-type-day-19-of-28/
http://blog.sqlauthority.com/2011/02/09/sql-server-pageiolatch_dt-pageiolatch_ex-pageiolatch_kp-pageiolatch_sh-pageiolatch_up-wait-type-day-9-of-28/
What I would recommend is pre-growing both tempdb (for your temp table) and the database that's going to hold the batch insert.
http://support.microsoft.com/kb/2091024
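As an illustration, pre-growing could look like this (the database name, logical file names and sizes are placeholders; check sys.master_files for the real logical names):
-- Grow the data file of the target database up front
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_Data', SIZE = 20GB);

-- Grow tempdb's primary data file as well, since #CURRENT_KEYS lives there
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev', SIZE = 5GB);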
June 29, 2010 - I had an uncommitted action from a previous delete statement. I committed the action and I got another error about conflicting primary IDs. I can fix that. So the moral of the story: commit your actions.
Original Question -
I'm trying to run this query:
with spd_data as (
select *
from openquery(IRPROD,'select * from budget_user.spd_data where fiscal_year = 2010')
)
insert into [IRPROD]..[BUDGET_USER].[SPD_DATA_BUD]
(REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, BUD_ORIG_AMT, BUD_APPR_AMT)
SELECT REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, lngOrig_amt, lngAppr_amt
from spd_data
left join Budgets.dbo.tblAllPosDep on project_id = projid
and job_code = jcc and position_nbr = psno
and emplid = empid
where OrgProjTest = 'EQUAL';
Basically I'm selecting a table from IRPROD (an oracle db), joining it with a local table, and inserting the results back on IRPROD.
The problem I'm having is that the query runs but never finishes. I've let it run for an hour and it keeps going until I cancel it. I can see data going in and out on a bandwidth monitor on the SQL Server. Also, if I just run the SELECT part of the query, it returns the results in 4 seconds.
Any ideas why it's not finishing? I've got other queries set up in a similar manner and they do not have any problems (granted, those insert from local tables and not from a remote table).
You didn't include any volume metrics, but I would recommend using a temporary table to gather the results.
Then you should try to insert the first couple of rows. If this succeeds you'll have a strong indicator that everything is fine.
Try to break down each insert task by project_id or emplid to avoid large transaction logs.
You should also think about crafting a bulk batch process.
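As a rough sketch of what I mean (table and column names are taken from your query; the counts are just there to verify the staged data and the join before you push anything back):
-- 1. Stage the remote rows locally so the join runs against local data
SELECT *
INTO #spd_data
FROM openquery(IRPROD, 'select * from budget_user.spd_data where fiscal_year = 2010');

-- 2. Sanity-check the staged rows and the join
SELECT COUNT(*) AS staged_rows FROM #spd_data;

SELECT COUNT(*) AS joined_rows
FROM #spd_data AS spd_data
LEFT JOIN Budgets.dbo.tblAllPosDep
  ON project_id = projid
 AND job_code = jcc
 AND position_nbr = psno
 AND emplid = empid
WHERE OrgProjTest = 'EQUAL';

-- 3. Then run your original INSERT, selecting FROM #spd_data instead of the CTE,
--    ideally one project_id (or emplid) at a time to keep each transaction small.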
If you run just the select without the insert, how many records are returned? Does the data look right or are there multiple records due to the join?
Are there triggers on the table you are inserting into? If you are returning many records and there are triggers on the table that are designed to run row-by-row, this could be slowing things down. You are also sending to another server, so the network pipeline could be what is slowing you down. Maybe it would be better to send the budget data to the Oracle server and do the insert from there rather than from SQL Server.
Why is it that, with default settings on SQL Server (so transaction isolation level = read committed), this test:
CREATE TABLE test2 (
ID bigint,
name varchar(20)
)
then run this in one SSMS tab:
begin transaction SH
insert into test2(ID,name) values(1,'11')
waitfor delay '00:00:30'
commit transaction SH
and this one simultaneously in another tab:
select * from test2
requires the 2nd select to wait for the first to complete before returning??
We also tried these for the 2nd query:
select * from test2 NOLOCK WHERE ID = 1
and tried inserting one ID in the first query and selecting a different ID in the second.
Is this the result of page locking? While running the two queries, I've also run this:
select object_name(P.object_id) as TableName, resource_type, resource_description
from
sys.dm_tran_locks L join sys.partitions P on L.resource_associated_entity_id = p.hobt_id
and gotten this result set:
test2 RID 1:12186:5
test2 RID 1:12186:5
test2 PAGE 1:12186
test2 PAGE 1:12186
requires the 2nd select to wait for
the first to complete before
returning??
Read committed prevents dirty reads, and by blocking you get a consistent result. Snapshot isolation gets around this, but you will get slightly worse performance because SQL Server now holds the old values for the duration of the transaction (better have your tempdb on a good drive).
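If you want to try that route, enabling read committed snapshot on the database looks like this (the database name is a placeholder; the ALTER needs exclusive access, hence the ROLLBACK IMMEDIATE option):
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- kicks out open connections; otherwise the ALTER waits for exclusive access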
BTW, try changing the query from
select * from test2
to
select * from test2 where id <> 1
assuming you have more than one row in the table and the data spans more than a page; insert a couple of thousand rows to make sure it does.
List traversal with node locking is done by 'crabbing':
you hold a lock on the current node
you grab a lock on the next node
you make the next node the current one
you release the lock on the previous node (the former current)
This technique is common to all list traversal algorithms and is meant to maintain stability while traversing: you never make a 'leap' without having yourself anchored in a lock. It is often compared to the technique used by rock climbers.
A statement like SELECT ... FROM table; is a scan over the entire table. As such, it can be compared with a list traversal, and the thread doing the table scan will 'crab' over the rows just as one doing a list traversal crabs over the nodes. Such a list traversal is guaranteed to attempt to lock, eventually, every single node in the list, and a table scan will similarly attempt to lock, at one time or another, every single row in the table. So any conflicting lock held by another transaction on a row will block the scan, 100% guaranteed. Everything else you observe (page locks, intent locks, etc.) is implementation detail, irrelevant to the fundamental issue.
The proper solution to this problem is to optimize the queries so that they don't scan tables end-to-end. Only after that is achieved can you turn your focus to eliminating whatever contention is left: deploy snapshot-isolation-based row-level versioning. In other words, enable read-committed snapshot on the database.