Explain locking behavior in SQL Server - sql-server

Why is it that, with default settings on SQL Server (so transaction isolation level = READ COMMITTED), this test:
CREATE TABLE test2 (
ID bigint,
name varchar(20)
)
then run this in one SSMS tab:
begin transaction SH
insert into test2(ID,name) values(1,'11')
waitfor delay '00:00:30'
commit transaction SH
and this one simultaneously in another tab:
select * from test2
requires the 2nd select to wait for the first to complete before returning??
We also tried these for the 2nd query:
select * from test2 NOLOCK WHERE ID = 1
and tried inserting one ID in the first query and selecting a different ID in the second.
Is this the result of page locking? While running the 2 queries, I also ran this:
select object_name(P.object_id) as TableName, resource_type, resource_description
from
sys.dm_tran_locks L join sys.partitions P on L.resource_associated_entity_id = p.hobt_id
and gotten this result set:
test2 RID 1:12186:5
test2 RID 1:12186:5
test2 PAGE 1:12186
test2 PAGE 1:12186

requires the 2nd select to wait for the first to complete before returning??
READ COMMITTED prevents dirty reads, and by blocking you get a consistent result. Snapshot isolation gets around this, but you take a slight performance hit because SQL Server now has to keep the old row versions for the duration of the transaction (better have your tempdb on a good drive).
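If you want to try that, a minimal sketch of the explicit snapshot isolation flavour (MyDb is a placeholder database name):
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in the tab running the reader:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
select * from test2;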
BTW, try changing the query from
select * from test2
to
select * from test2 where id <> 1
assuming you have more than 1 row in the table and it spans more than one page - insert a couple of thousand rows first (a quick way to do that is sketched below).
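A throwaway padding script, assuming the test2 table from the question (the 5,000-row count is arbitrary):
-- generate filler rows by cross-joining a system catalog view
INSERT INTO test2 (ID, name)
SELECT TOP (5000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) + 1, 'filler'
FROM sys.all_objects a CROSS JOIN sys.all_objects b;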

List traversal with node locking is done by 'crabbing':
you hold a lock on the current node
you grab a lock on the next node
you make the next node current
you release the lock on the previous node (former current)
This technique is common to all list traversal algorithms and is meant to keep stability while traversing: you never make a 'leap' without having yourself anchored in a lock. It is often compared to the technique used by rock climbers.
A statement like SELECT ... FROM table; is a scan over the entire table. As such, it can be compared with a list traversal, and the thread doing the table scan will 'crab' over the rows just as one doing a list traversal crabs over the nodes. Such a list traversal is guaranteed to attempt to lock, eventually, every single node in the list, and a table scan will similarly attempt to lock, at one time or another, every single row in the table. So any conflicting lock held by another transaction on any row will block the scan, 100% guaranteed. Everything else you observe (page locks, intent locks etc.) is implementation detail, irrelevant to the fundamental issue.
The proper solution to this problem is to optimize the queries so that they don't scan tables end-to-end. Only after that is achieved can you turn your focus to eliminating whatever contention is left: deploy snapshot isolation based row-level versioning. In other words, enable read-committed snapshot on the database.
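A one-statement sketch of that last step (MyDb is a placeholder database name; the option only takes effect when no other connections are active in the database):
-- switch the default READ COMMITTED behaviour to row versioning
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;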

Insert from select or update from select with commit every 1M records

I've already seen a dozen such questions but most of them get answers that don't apply to my case.
First off - the database I am trying to get the data from sits on a very slow network and is connected to over VPN.
I am accessing it through a database link.
I have full read/write access on my schema tables but I don't have DBA rights, so I can't create dumps and I don't have grants for creating new tables etc.
I've been trying to get the database locally and all is well except for one table.
It has 6.5 million records and 16 columns.
There was no problem getting 14 of them but the remaining two are CLOBs with huge XML in them.
The data transfer is so slow it is painful.
I tried
insert based on select
insert all 14 then update the other 2
create table as
insert based on select conditional so I get only so many records and manually commit
The issue is mainly that the connection is lost before the transaction finishes (or power loss or VPN drops or random error etc) and all the GBs that have been downloaded are discarded.
As I said I tried putting conditionals so I get a few records but even this is a bit random and requires focus from me.
Something like :
Insert into TableA
Select * from TableA@DB_RemoteDB1
WHERE CREATION_DATE BETWEEN to_date('01-Jan-2016','DD-Mon-YYYY') AND to_date('31-DEC-2016','DD-Mon-YYYY')
Sometimes it works, sometimes it doesn't. After just a few GBs Toad gets stuck running, but when I look at its throughput it is 0 KB/s or a few bytes/s.
What I am looking for is a loop or a cursor that can be used to get maybe 100,000 or 1,000,000 records at a time - commit, then go for the rest until it is done.
This is a one time operation that I am doing as we need the data locally for testing - so I don't care if it is inefficient as long as the data is brought in in chunks and a commit saves me from retrieving it again.
I can count already about 15GBs of failed downloads I've done over the last 3 days and my local table still has 0 records as all my attempts have failed.
Server: Oracle 11g
Local: Oracle 11g
Attempted Clients: Toad/Sql Dev/dbForge Studio
Thanks.
You could do something like:
begin
  loop
    insert into tablea
    select * from tablea@DB_RemoteDB1 a_remote
    where not exists (select null from tablea where id = a_remote.id)
    and rownum <= 100000; -- or whatever number makes sense for you

    exit when sql%rowcount = 0;
    commit;
  end loop;
end;
/
This assumes that there is a primary/unique key you can use to check whether a row in the remote table already exists in the local one - in this example I've used a vague ID column, but replace that with your actual key column(s).
For each iteration of the loop it will identify rows in the remote table which do not exist in the local table - which may be slow, but you've said performance isn't a priority here - and then, via rownum, limit the number of rows being inserted to a manageable subset.
The loop then terminates when no rows are inserted, which means there are no rows left in the remote table that don't exist locally.
This should be restartable, due to the commit and where not exists check. This isn't usually a good approach - as it kind of breaks normal transaction handling - but as a one off and with your network issues/constraints it may be necessary.
Toad is right, using bulk collect would be (probably significantly) faster in general as the query isn't repeated each time around the loop:
declare
  cursor l_cur is
    select * from tablea@dblink3 a_remote
    where not exists (select null from tablea where id = a_remote.id);
  type t_tab is table of l_cur%rowtype;
  l_tab t_tab;
begin
  open l_cur;
  loop
    fetch l_cur bulk collect into l_tab limit 100000;
    forall i in 1..l_tab.count
      insert into tablea values l_tab(i);
    commit;
    exit when l_cur%notfound;
  end loop;
  close l_cur;
end;
/
This time you would change the limit 100000 to whatever number you think sensible. There is a trade-off here though, as the PL/SQL table will consume memory, so you may need to experiment a bit to pick that value - you could get errors or affect other users if it's too high. Lower is less of a problem here, except the bulk inserts become slightly less efficient.
But because you have a CLOB column (holding your XML) this won't work for you, as @BobC pointed out; the insert ... select is supported over a DB link, but the collection version will get an error from the fetch:
ORA-22992: cannot use LOB locators selected from remote tables
ORA-06512: at line 10
22992. 00000 - "cannot use LOB locators selected from remote tables"
*Cause: A remote LOB column cannot be referenced.
*Action: Remove references to LOBs in remote tables.

Trouble with SQL Server locks

I am running into an issue where SQL Server is causing a significant number of locks (95 to 150) on our main table. They are typically short duration locks, lasting under 3 seconds, but I would like to eliminate those if I possibly can. We have also noticed that typically there are no blocks, but occasionally we have a situation where the blocks seem to "cascade" and then the entire system slows down considerably.
Background
We have up to 600 virtual machines processing data and we loaded a table in SQL so we could monitor any records that got stalled and records that were marked complete. We typically have between 200,000 and 1,000,000 records in this table during our processing.
What we are trying to accomplish
We are attempting to get the next available record (Status = 0). However, since there can be multiple hits on the stored proc simultaneously, we are trying to make sure each VM gets a unique record. This is important because processing takes between 1.5 and 2.5 minutes per record and we want to make this as clean as possible.
Our thought process to this point
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
VMID = @VMID,
ReadCount = ReadCount + 1,
ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
WHERE Status = 0
This update was causing us a few issues with locks, so we re-worked the process a little bit and changed the WHERE clause to a sub-query returning the top 1 RowID (primary key) from the table. This seemed to help things run a little smoother, but then we occasionally get overloaded in the database again.
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
VMID = @VMID,
ReadCount = ReadCount + 1,
ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
-- WHERE Status = 0
WHERE RowID IN (SELECT TOP 1 RowID FROM dbo.Test WHERE Status = 0 ORDER BY RowID)
We discovered that having a significant number of Status 1 and 2 records in the table causes slowdowns. We figured it was from a table scan on the Status column. We added the following index but it did not help solve the locks.
CREATE NONCLUSTERED INDEX IX_Test_Status_RowID
ON [dbo].[Test] ([Status])
INCLUDE ([RowID])
As the final step after the UPDATE, we use the returned RowID to select out the details:
SELECT 'Test' as FileName, *, @Nick as [Nickname]
FROM Test WITH (NOLOCK)
WHERE RowID IN (SELECT id from #retValue)
Types of locks
The majority of the blocks are LCK_M_U and LCK_M_S, which I would expect with that UPDATE and SELECT query. We did have 1 or 2 LCK_M_X locks as well occasionally. That made me think we may still be getting collisions on our "unique" record code.
Questions
Are these locks and the number of locks just normal SQL operations for this type of load?
Is the sub-query causing more issues than a TOP(1) in the UPDATE we started with? I am trying to get confirmation I can remove the ORDER BY statement and remove that extra step of processing.
Would a different index help? I wondered if the index updating was a possible cause of the locks initially, but now I am not sure.
Is there a better or more efficient way to get a unique RowID?
Is the WITH (ROWLOCK) causing more locks than leaving it off would cause? The idea is ROWLOCK would only lock the 1 specific record and allow another proc to update another record and select without locking the table or page.
Does anyone have any tools they recommend to stress test and run 100 queries simultaneously in order to test any potential solutions?
Sorry for all the questions, just trying to make sure I am as clear as possible on our process and the questions we have.
Thanks in advance for any insight as this is a really frustrating issue for us.
Hardware
We are running SQL Server 2008 R2 on a Dual Xeon CPU with 24 GB of RAM. So we should have plenty of horsepower for this process.
It looks like the best solution to the issue was to create a separate table with an identity and use the @@IDENTITY from the insert to determine the next row to process. That has solved all my lock issues so far in my stress testing. Thanks to all who pointed me in the right direction!
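A rough sketch of that approach (all names are hypothetical, and it assumes RowID values in dbo.Test are dense and start at 1 so the identity value maps straight to a row; SCOPE_IDENTITY() is used rather than @@IDENTITY to avoid picking up identity values generated by triggers):
-- one row per claim; the generated identity is the RowID the VM should process
CREATE TABLE dbo.TestClaim
(
    ClaimID   bigint IDENTITY(1,1) PRIMARY KEY,
    VMID      int NOT NULL,
    ClaimedAt datetime NOT NULL DEFAULT GETUTCDATE()
);

DECLARE @VMID int = 42, @Nick varchar(50) = 'vm-42';  -- parameters in the real proc
DECLARE @RowID bigint;

INSERT INTO dbo.TestClaim (VMID) VALUES (@VMID);
SET @RowID = SCOPE_IDENTITY();      -- the claimed row number, unique per caller

SELECT 'Test' as FileName, *, @Nick as [Nickname]
FROM dbo.Test
WHERE RowID = @RowID;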

TSQL Batch insert - math doesn't work

I need to insert 1.3 million records from one table into another, and it takes a really long time (over 13 min). After some research I found that it is better to do this operation in batches, so I put together something like this (the actual query is more complicated; it is simplified here for brevity):
DECLARE @key INT; SET @key = 0;
CREATE TABLE #CURRENT_KEYS([KEY] INT)
WHILE 1=1
BEGIN
    -- Getting subset of keys
    INSERT INTO #CURRENT_KEYS([KEY])
    SELECT TOP 100000 [KEY] FROM #ALL_KEYS WHERE [KEY] > @key ORDER BY [KEY]
    IF @@ROWCOUNT = 0 BREAK
    -- Main Insert
    INSERT INTO #RESULT([KEY], VALUE)
    SELECT MAIN_TABLE.[KEY], MAIN_TABLE.VALUE
    FROM MAIN_TABLE INNER JOIN #CURRENT_KEYS
        ON MAIN_TABLE.[KEY] = #CURRENT_KEYS.[KEY]
    SELECT @key = MAX([KEY]) FROM #CURRENT_KEYS
    TRUNCATE TABLE #CURRENT_KEYS
END
I already have an indexed list of 1.3 million keys in the #ALL_KEYS table, so the idea here is to build a smaller subset of keys in a loop for the JOIN and INSERT. The above loop executes 13 times (1,300,000 records / 100,000 records per batch). If I put a break after just one iteration, execution time is 9 seconds. I assumed total execution time would be about 9*13 seconds, but it's the same 13 minutes!
Any idea why?
NOTE: Instead of the temp table #CURRENT_KEYS, I tried to use a CTE, but with the same result.
UPDATE: Some wait stats.
For this process I am seeing PAGEIOLATCH_SH and sometimes PREEMPTIVE_OS_WRITEFILEGATHER in the wait stats, occasionally over 500 ms but often < 100 ms. Also, SP_WHO shows the user as suspended for the duration of the query.
I'm pretty sure you're experiencing disk pressure. PREEMPTIVE_OS_WRITEFILEGATHER is an autogrowth event (database getting larger), and PAGEIOLATCH_SH means that the process is waiting for a latch on a buffer that's an IO request (probably your file growth event).
http://blog.sqlauthority.com/2011/02/19/sql-server-preemptive-and-non-preemptive-wait-type-day-19-of-28/
http://blog.sqlauthority.com/2011/02/09/sql-server-pageiolatch_dt-pageiolatch_ex-pageiolatch_kp-pageiolatch_sh-pageiolatch_up-wait-type-day-9-of-28/
What I would recommend is pre-growing both tempdb (for your temp table) and the database that's going to hold the batch insert.
http://support.microsoft.com/kb/2091024
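A hedged sketch of what pre-growing might look like (database name, logical file names, and sizes are placeholders - check sys.master_files for the real logical names and pick sizes that fit your data volume):
-- grow the target database's data file up front so the batch insert doesn't trigger autogrowth
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Data, SIZE = 20GB);
-- same idea for tempdb, which hosts #ALL_KEYS and #CURRENT_KEYS
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10GB);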

How SQL server handles updates to a temp table (in regards to disk access)

Let's say we have an MS SQL Server database with a table A in it. Then we perform something like this:
select A.a1
into #temp1
from A
This link says: "If memory is available, both table variables and temporary tables are created and processed while in memory (data cache)."
Let's say that we have 100 rows in #temp1, which easily fits into memory...so whole #temp1 is in memory now. But then, we execute this statement:
UPDATE #temp1 SET a1 = a1 + 1
Does this involve some IO operations? For example, is something written into the tempdb log (which is, I think, not in RAM)? Or perhaps, since we are doing an update now, the whole #temp1 is moved to disk...?
My understanding is that no log is kept for temp tables, as tempdb is cleared on every restart.
Update:
Logging occurs in tempdb, but it is reduced: http://msdn.microsoft.com/en-ca/library/ms190768.aspx;
Perhaps a table-valued variable is suitable instead of a table in tempdb:
http://msdn.microsoft.com/en-us/library/ms175010.aspx
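For comparison, a minimal sketch of the table-variable version of the same example (the int type for a1 is an assumption; adjust it to match column A.a1):
DECLARE @temp1 TABLE (a1 int);   -- table variable instead of the #temp1 temp table
INSERT INTO @temp1 (a1)
SELECT A.a1 FROM A;

UPDATE @temp1 SET a1 = a1 + 1;   -- the update from the question, now against the table variable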

Sql Server Ignore rowlock hint

This is a general question about how to lock a range of values (and nothing else!) when they do not exist in the table yet. The trigger for the question was that I want to do an "insert if not exists"; I don't want to use MERGE because I need to support SQL Server 2005.
In the first connection I:
begin transaction
select data from a table using (SERIALIZABLE, ROWLOCK) + a where clause to restrict the range
wait...
In the second connection, I insert data to the table with values that do not match the where clause in the first connection
I would expect that the second connection won't be affected by the first one, but it finishes only after I commit (or rollback) the first connection's transaction.
What am I missing?
Here is my test code:
First create this table:
CREATE TABLE test
(
VALUE nvarchar(100)
)
Second, open a new query window in SQL Server Management Studio and execute the following:
BEGIN TRANSACTION;
SELECT *
FROM test WITH (SERIALIZABLE,ROWLOCK)
WHERE value = N'a';
Third, open another new query window and execute the following:
INSERT INTO test VALUES (N'b');
Notice that the second query doesn't end until the transaction in the first window ends.
You are missing an index on VALUE.
Without that SQL Server has nothing to take a key range lock on and will lock the whole table in order to lock the range.
Even when the index is added, however, you will still encounter blocking with the scenario in your question. The RangeS-S lock doesn't lock only the specific range given in your query. Instead it locks the range between the keys on either side of the selected range.
When there are no such keys on either side, the range lock extends to infinity. You would need to add a value between a and b (for example aa) to prevent this from happening in your test and the insert of b being blocked.
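A sketch of those two changes against the test table from the question (the index name and the extra N'aa' row are purely illustrative):
-- an index on VALUE gives SQL Server keys to take RangeS-S range locks on
CREATE NONCLUSTERED INDEX IX_test_VALUE ON test (VALUE);
-- a key between N'a' and N'b' stops the range lock from extending past N'a' to infinity
INSERT INTO test VALUES (N'aa');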
See Bonus Appendix: Range Locks in this article for more about this.
