Can one table have an exclusive lock while updating a second table? - sql-server

I'm working in SQL Server 2016. I have two tables, one is a queue of work items (TQueue), and the second is the work items (TWork) that are being processed.
I have a script that grabs the top 100 items from TQueue that do not have a record in TWork, and then inserts those items into TWork to be processed.
For performance reasons, I want to run multiple instances of the script simultaneously. The challenge is that Script 1 grabs 100 items, and before the transaction to insert these items into TWork is committed, Script 2 grabs the same set of items and inserts them as well.
Question
I would like to block the reading of TQueue until the insert transaction into TWork has completed. Is this possible?

You may use table hints to achieve this goal.
For example:
Create Table Val (ID Int)
Insert Into Val (ID)
Values (0),(1),(2),(3),(4),(5)
First session:
Set Transaction Isolation level Read Committed
Begin Transaction
Select Top 2 * From Val With (ReadPast, XLock, RowLock)
-- Return 0,1
-- Commit has been commented for illustrative purposes.
-- Don't forget to commit the transaction later.
-- Commit
Second session:
Set Transaction Isolation level Read Committed
Begin Transaction
Select * From Val With (ReadPast, XLock, RowLock)
-- Return 2,3,4,5
Commit
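Applied to your tables, the same hint combination lets each script claim rows from TQueue and insert them into TWork in a single statement, with concurrent scripts skipping past rows that are already claimed. A minimal sketch, assuming a hypothetical ItemID key column shared by both tables:

BEGIN TRANSACTION

-- Claim up to 100 unclaimed rows; READPAST skips rows that other
-- sessions hold X-locked, so two scripts never grab the same items
INSERT INTO TWork (ItemID)
SELECT TOP (100) q.ItemID
FROM TQueue q WITH (READPAST, XLOCK, ROWLOCK)
WHERE NOT EXISTS (SELECT 1 FROM TWork w WHERE w.ItemID = q.ItemID)

COMMIT TRANSACTION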

Related

When does UPDLOCK get released in SQL Server?

Recently I have been reading about hints and locks in SQL Server. While googling this topic I read one blog with a query that I am not able to understand. Here it is:
BOL states: Use update locks instead of shared locks while reading a table, and hold locks until the end of the statement or transaction. I have some trouble translating this. Does this mean that the update locks are released after the execution of the SELECT statement, unless the SELECT statement is within a transaction?
In other words, are my assumptions in the following two scenarios correct?
Scenario 1: no transaction
SELECT something FROM table WITH (UPDLOCK)
/* update locks released */
Scenario 2: with transaction
BEGIN TRANSACTION
SELECT something FROM table WITH (UPDLOCK)
/* some code, including an UPDATE */
COMMIT TRANSACTION
/* update locks released */
Example for scenario 2 (taken from a Stack Overflow post):
BEGIN TRAN
SELECT Id FROM Table1 WITH (UPDLOCK)
WHERE AlertDate IS NULL;
UPDATE Table1 SET AlertDate = getutcdate()
WHERE AlertDate IS NULL;
COMMIT TRAN
Please help me understand the above query.
My second question is: is the UPDLOCK released as soon as the SELECT statement completes?
Your assumption in scenario 2 is correct.
To answer your second question, no. The Update locks are held on the selected row(s) until the transaction ends, or until converted to exclusive locks when the update statement modifies those row(s). Step through each statement one at a time using SSMS to verify.
BEGIN TRAN
-- execute sp_lock in second session - no locks yet
SELECT Id FROM Table1 WITH (UPDLOCK) WHERE AlertDate IS NULL;
-- execute sp_lock in second session - update locks present
UPDATE Table1 SET AlertDate = getutcdate() WHERE AlertDate IS NULL;
-- execute sp_lock in second session - update (U) locks are replaced by exclusive (X) locks for all row(s) returned by the SELECT and modified by the UPDATE (lock conversion)
-- Update (U) locks continue to be held for any row(s) returned by the SELECT but not modified by the UPDATE
-- Exclusive (X) locks are also held on all rows not returned by the SELECT but modified by the UPDATE. Internally, lock conversion still occurs, because UPDATE statements must read before they write.
COMMIT TRAN
-- sp_lock in second session - all locks gone.
As for what is going on in scenario 1, all T-SQL statements exist in either an implicit or an explicit transaction. Scenario 1 is implicitly:
BEGIN TRAN
SELECT something FROM table WITH (UPDLOCK)
-- execute sp_lock in second session - update locks (U) will be present
COMMIT TRAN;
-- execute sp_lock in second session - update locks are gone.
Does this mean that the update locks are released after the execution of the SELECT statement, unless the SELECT statement is within a transaction?
The locks will be released as soon as the row is read, but the lock held will be a U lock, so any parallel transaction trying to modify the row will have to wait.
If you wrap the above SELECT in a transaction, the locks will be released only when the transaction is committed, so any parallel transaction acquiring locks incompatible with the U lock will have to wait:
begin tran
select * from t1 with (updlock)
For the second scenario below:
BEGIN TRANSACTION
SELECT something FROM table WITH (UPDLOCK)
/* some code, including an UPDATE */
COMMIT TRANSACTION
Imagine your SELECT query returned 100 rows; all of them will hold U locks. Now imagine the UPDATE in the same transaction affects 2 of those rows; those two locks will be converted to X locks. So your transaction will hold 98 U locks and 2 X locks until it is committed.
I like to think of UPDLOCK as a repeatable read: new rows can still be added, but no parallel transaction can delete or update the existing rows.
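If you want to watch this lock conversion yourself, sys.dm_tran_locks (the modern replacement for sp_lock) shows the mode of each held lock. A sketch to run in a second session while stepping through the transaction above:

SELECT request_session_id,
       resource_type,
       request_mode,    -- U = update lock, X = exclusive lock
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
  AND resource_type IN ('KEY', 'RID')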

Query from multiple threads on a database table

I have a database table with thousands of entries. I have multiple worker threads, each of which picks up one row at a time and does some work (taking roughly one second per row). While picking up a row, each thread updates a flag on the database row (like a timestamp) so that the other threads do not pick it up. But the problem is that I end up in a scenario where multiple threads are picking up the same row.
My general question is: what design approach should I follow here to ensure that each thread picks up a unique row and does its task independently?
Note: multiple threads run in parallel to speed up processing of the database rows, so I would like the critical section or exclusive lock to be as small as possible.
Just to give some context, below is the stored proc which picks up the rows from the table after it has updated the flag on the row. Please note that the stored proc is not compilable as I have removed unnecessary portions from it. But generally that's the structure of it.
The problem happens when multiple threads execute the stored proc in parallel. The change made by the UPDATE statement (note that the update is done after taking a lock) in one thread is not visible to the other threads until the transaction is committed. And as there is a SELECT statement (which takes around 50 ms) between the UPDATE and the COMMIT, in about 20% of cases the UPDATE statement in one thread picks up a row which has already been processed.
I hope I am clear enough here.
USE [mydatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GetRequest]
AS
BEGIN
-- some variable declaration here
BEGIN TRANSACTION
-- check if there are blocking rows in the request table
-- FM: Remove records that don't qualify for operation.
-- delete operation on the table to remove rows we don't want to process
delete FROM request where somecondition = 1
-- Identify the requests to process
DECLARE @TmpTableVar table(TmpRequestId int NULL);
UPDATE TOP(1) tur
SET Lock = DateAdd(mi, 5, GETDATE())
OUTPUT INSERTED.ID INTO @TmpTableVar
FROM request tur WITH (ROWLOCK)
WHERE (Lock IS NULL OR GETDATE() > Lock) -- not locked or lock expired
AND GETDATE() > NextRetry -- next in the queue
IF(@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION
RETURN
END
select @RequestID = TmpRequestId from @TmpTableVar
-- Get details about the request that has been just updated
SELECT somerows
FROM request
WHERE somecondition = 1
COMMIT TRANSACTION
END
The analog of a critical section in SQL Server is sp_getapplock, which is simple to use. Alternatively you can SELECT the row to update with (UPDLOCK,READPAST,ROWLOCK) table hints. Both of these require a multi-statement transaction to control the duration of the exclusive locking.
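For the sp_getapplock route, a minimal sketch: the resource name 'GetRequest' is arbitrary; all callers that must exclude one another just have to agree on it.

BEGIN TRANSACTION

DECLARE @rc int
EXEC @rc = sp_getapplock @Resource = 'GetRequest',
                         @LockMode = 'Exclusive',
                         @LockOwner = 'Transaction',
                         @LockTimeout = 5000   -- ms; -1 waits indefinitely
IF (@rc < 0)
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

-- claim and read the next request row here; no other session holding
-- the same app lock can enter this section until we commit

COMMIT TRANSACTION   -- a Transaction-owned app lock is released here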
You need to set a transaction isolation level in SQL to isolate your row, but this can impact your performance.
Look at this sample:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
GO
SELECT ID, NAME, FLAG FROM SAMPLE_TABLE WHERE FLAG=0
GO
UPDATE SAMPLE_TABLE SET FLAG=1 WHERE ID=1
GO
COMMIT TRANSACTION
To conclude, there is no single best isolation level. You need to analyze the positive and negative points of each isolation level and test your system's performance.
More information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql
http://www.besttechtools.com/articles/article/sql-server-isolation-levels-by-example
https://en.wikipedia.org/wiki/Isolation_(database_systems)

Is a single SQL Server statement atomic and consistent?

Is a statement in SQL Server ACID?
What I mean by that
Given a single T-SQL statement, not wrapped in a BEGIN TRANSACTION / COMMIT TRANSACTION, are the actions of that statement:
Atomic: either all of its data modifications are performed, or none of them is performed.
Consistent: When completed, a transaction must leave all data in a consistent state.
Isolated: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions.
Durable: After a transaction has completed, its effects are permanently in place in the system.
The reason I ask
I have a single statement in a live system that appears to be violating these rules.
In effect my T-SQL statement is:
--If there are any slots available,
--then find the earliest unbooked transaction and mark it booked
UPDATE Transactions
SET Booked = 1
WHERE TransactionID = (
SELECT TOP 1 TransactionID
FROM Slots
INNER JOIN Transactions t2
ON Slots.SlotDate = t2.TransactionDate
WHERE t2.Booked = 0 --only book it if it's currently unbooked
AND Slots.Available > 0 --only book it if there's empty slots
ORDER BY t2.CreatedDate)
Note: a simpler conceptual variant might be:
--Give away one gift, as long as we haven't given away five
UPDATE Gifts
SET GivenAway = 1
WHERE GiftID = (
SELECT TOP 1 GiftID
FROM Gifts g2
WHERE g2.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g3 WHERE g3.GivenAway = 1) < 5
ORDER BY g2.GiftValue DESC
)
In both of these statements, notice that they are single statements (UPDATE...SET...WHERE).
There are cases where the wrong transaction is being "booked"; it's actually picking a later transaction. After staring at this for 16 hours, I'm stumped. It's as though SQL Server is simply violating the rules.
I wondered: what if the results of the Slots view change before the update happens? What if SQL Server is not holding shared locks on the transactions for that date? Is it possible that a single statement can be inconsistent?
So I decided to test it
I decided to check if the results of sub-queries, or inner operations, are inconsistent. I created a simple table with a single int column:
CREATE TABLE CountingNumbers (
Value int PRIMARY KEY NOT NULL
)
From multiple connections, in a tight loop, I call the single T-SQL statement:
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
In other words the pseudo-code is:
while (true)
{
ADOConnection.Execute(sql);
}
And within a few seconds I get:
Violation of PRIMARY KEY constraint 'PK__Counting__07D9BBC343D61337'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (1332).
Are statements atomic?
The fact that a single statement produced this violation makes me wonder whether single statements really are atomic.
Or is there a more subtle definition of statement that differs from (for example) what SQL Server considers a statement?
Does this fundamentally mean that, within the confines of a single T-SQL statement, SQL Server statements are not atomic?
And if a single statement is atomic, what accounts for the key violation?
From within a stored procedure
Rather than a remote client opening n connections, I tried it with a stored procedure:
CREATE procedure [dbo].[DoCountNumbers] AS
SET NOCOUNT ON;
DECLARE @bumpedCount int
SET @bumpedCount = 0
WHILE (@bumpedCount < 500) --safety valve
BEGIN
SET @bumpedCount = @bumpedCount+1;
PRINT 'Running bump '+CAST(@bumpedCount AS varchar(50))
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
IF (@bumpedCount >= 500)
BEGIN
PRINT 'WARNING: Bumping safety limit of 500 bumps reached'
END
END
PRINT 'Done bumping process'
and opened 5 tabs in SSMS, pressed F5 in each, and watched as they too violated ACID:
Running bump 414
Msg 2627, Level 14, State 1, Procedure DoCountNumbers, Line 14
Violation of PRIMARY KEY constraint 'PK_CountingNumbers'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (4414).
The statement has been terminated.
So the failure is independent of ADO, ADO.NET, or any other client; it happens entirely inside SQL Server.
For 15 years I've been operating under the assumption that a single statement in SQL Server is consistent.
What about TRANSACTION ISOLATION LEVEL xxx?
Here are the results for different variants of the SQL batch:
default (read committed): key violation
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
default (read committed), explicit transaction: key violation
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
serializable: deadlock
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
snapshot (after altering database to enable snapshot isolation): key violation
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
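For comparison (not one of the variants tested above), taking update locks during the read avoids both the duplicates and the SERIALIZABLE deadlock, because U locks conflict with each other and serialize the readers. A sketch against the same CountingNumbers table:

-- U locks block other readers using the same hint until the statement
-- (or enclosing transaction) completes, so MAX is never read twice
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1
FROM CountingNumbers WITH (UPDLOCK, HOLDLOCK)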
Bonus
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Default transaction isolation level (READ COMMITTED)
Turns out every query I've ever written is broken
This certainly changes things. Every update statement I've ever written is fundamentally broken. E.g.:
--Update the user with their last invoice date
UPDATE Users
SET LastInvoiceDate = (SELECT MAX(InvoiceDate) FROM Invoices WHERE Invoices.uid = Users.uid)
This sets the wrong value, because another invoice could be inserted after the MAX is read and before the UPDATE is applied. Or an example from BOL:
UPDATE Sales.SalesPerson
SET SalesYTD = SalesYTD +
(SELECT SUM(so.SubTotal)
FROM Sales.SalesOrderHeader AS so
WHERE so.OrderDate = (SELECT MAX(OrderDate)
FROM Sales.SalesOrderHeader AS so2
WHERE so2.SalesPersonID = so.SalesPersonID)
AND Sales.SalesPerson.BusinessEntityID = so.SalesPersonID
GROUP BY so.SalesPersonID);
Without exclusive locks held to the end of the statement (HOLDLOCK), the computed SalesYTD is wrong.
How have I been able to do anything all these years?
I've been operating under the assumption that a single statement in SQL Server is consistent
That assumption is wrong. The following two transactions have identical locking semantics:
STATEMENT
BEGIN TRAN; STATEMENT; COMMIT
No difference at all. Single statements and auto-commits do not change anything.
So merging all logic into one statement does not help (if it does, it was by accident because the plan changed).
Let's fix the problem at hand. SERIALIZABLE will fix the inconsistency you are seeing because it guarantees that your transactions behave as if they executed serially. Equivalently, they behave as if they executed instantaneously.
You will be getting deadlocks, though. If you are OK with a retry loop, you're done at this point.
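A deadlock retry loop might look like the following sketch (note that THROW requires SQL Server 2012+; on older versions, re-raise with RAISERROR instead):

DECLARE @retries int = 0
WHILE (@retries < 5)
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
        BEGIN TRANSACTION

        UPDATE Gifts
        SET GivenAway = 1
        WHERE GiftID = (
            SELECT TOP 1 GiftID
            FROM Gifts g2
            WHERE g2.GivenAway = 0
            AND (SELECT COUNT(*) FROM Gifts g3 WHERE g3.GivenAway = 1) < 5
            ORDER BY g2.GiftValue DESC)

        COMMIT TRANSACTION
        BREAK   -- success
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION
        IF ERROR_NUMBER() <> 1205 THROW   -- 1205 = deadlock victim; only retry those
        SET @retries = @retries + 1
    END CATCH
END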
If you want to invest more time, apply locking hints to force exclusive access to the relevant data:
UPDATE Gifts -- U-locked anyway
SET GivenAway = 1
WHERE GiftID = (
SELECT TOP 1 GiftID
FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) --this normally just S-locks.
WHERE g2.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g3 WITH (UPDLOCK, HOLDLOCK) WHERE g3.GivenAway = 1) < 5
ORDER BY g2.GiftValue DESC
)
You will now see reduced concurrency. That might be totally fine depending on your load.
The very nature of your problem makes achieving concurrency hard. If you require a solution for that we'd need to apply more invasive techniques.
You can simplify the UPDATE a bit:
WITH g AS (
SELECT TOP 1 Gifts.*
FROM Gifts WITH (UPDLOCK, HOLDLOCK)
WHERE Gifts.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
ORDER BY Gifts.GiftValue DESC
)
UPDATE g -- U-locked anyway
SET GivenAway = 1
This gets rid of one unnecessary join.
Below is an example of an UPDATE statement that does increment a counter value atomically
-- Do this once for test setup
CREATE TABLE CountingNumbers (Value int PRIMARY KEY NOT NULL)
INSERT INTO CountingNumbers VALUES(1)
-- Run this in parallel: start it in two tabs on SQL Server Management Studio
-- You will see each connection generating new numbers without duplicates and without timeouts
while (1=1)
BEGIN
declare @nextNumber int
-- Taking the Update lock is only relevant in case this statement is part of a larger transaction
-- to prevent deadlock
-- When executing without a transaction, the statement will itself be atomic
UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK) SET @nextNumber=Value=Value+1
print @nextNumber
END
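(The SET @nextNumber=Value=Value+1 form is T-SQL's compound assignment inside UPDATE: the variable receives the new column value within the same statement, so the read-increment-return happens under a single row lock with no window for another session to interleave.)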
A SELECT does not lock exclusively; even SERIALIZABLE locks only for the time the SELECT is executing. Once the SELECT is over, its lock is gone. Then the update locks take over, since the engine now knows what to lock because the SELECT has returned its results. Meanwhile, anyone else can SELECT again!
The only sure way to safely read and lock a row is:
begin transaction
--lock what I need to read
update mytable set col1=col1 where mykey=@key
--now read what I need
select @d1=col1,@d2=col2 from mytable where mykey=@key
--now do calculations, checks, whatever I need from the row I read to decide my update
if @d1<@d2 set @d1=@d2 else set @d1=@d2 * 2 --just an example calc
--now do the actual update based on what I read and the logic
update mytable set col1=@d1,col2=@d2 where mykey=@key
commit transaction
This way, any other connection running the same statements for the same data will wait at the first (fake) UPDATE statement until the previous transaction is done. This ensures that when the lock is released, only one connection is granted the lock for its 'update', and that connection will read committed, finalized data when making its calculations and deciding if and what to actually update in the second, 'real' UPDATE.
In other words, when you need to select information to decide if/how to update, you need a begin/commit transaction block, and you need to start with a fake update of what you are about to select, before you select it (an UPDATE with an OUTPUT clause will also do, as sketched below).
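Here is a sketch of that OUTPUT variant, which locks and reads the row in one statement (mytable, mykey, col1, and col2 are the same hypothetical names as above):

declare @key int = 1, @d1 int, @d2 int
declare @row table (col1 int, col2 int)

begin transaction

--the no-op update takes the X lock and returns the row in one statement
update mytable set col1=col1
output inserted.col1, inserted.col2 into @row
where mykey=@key

select @d1=col1, @d2=col2 from @row

--calculations/checks as before, then the real update
if @d1<@d2 set @d1=@d2 else set @d1=@d2 * 2
update mytable set col1=@d1, col2=@d2 where mykey=@key

commit transaction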

Why does my SQL Server UPSERT code sometimes not block?

I have a table ImportSourceMetadata which I use to control an import batch process. It contains a PK column SourceId and a data column LastCheckpoint. The import batch process reads the LastCheckpoint for a given SourceId, performs some logic (on other tables), then updates the LastCheckpoint for that SourceId or inserts it if it doesn't exist yet.
Multiple instances of the process run at the same time, usually with disjoint SourceIds, and I need high parallelism for those cases. However, it can happen that two processes are started for the same SourceId; in that case, I need the instances to block each other.
Therefore, my code looks as follows:
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint FROM ImportSourceMetadata WITH (UPDLOCK) WHERE SourceId = 'Source'
-- Perform some processing
-- UPSERT: if the SELECT above yielded no value, then
INSERT INTO ImportSourceMetadata(SourceId, LastCheckpoint) VALUES ('Source', '2013-12-21')
-- otherwise, we'd do this: UPDATE ImportSourceMetadata SET LastCheckpoint = '2013-12-21' WHERE SourceId = 'Source'
COMMIT TRAN
I'm using the transaction to achieve atomicity, but I can only use the READ COMMITTED isolation level (because of the parallelism requirements in the "Perform some processing" block). Therefore (and to avoid deadlocks), I'm including an UPDLOCK hint with the SELECT statement to achieve a "critical section" parameterized on the SourceId value.
Now, this works quite well most of the time, but I've managed to trigger primary key violation errors on the INSERT statement when starting a lot of parallel processes for the same SourceId with an empty database. I cannot reliably reproduce this, however, and I don't understand why it doesn't work.
I've found hints on the internet (e.g., here and here, in a comment) that I need to specify WITH (UPDLOCK,HOLDLOCK) (resp. WITH (UPDLOCK,SERIALIZABLE)) rather than just taking an UPDLOCK on the SELECT, but I don't really understand why that is. MSDN docs say,
UPDLOCK
Specifies that update locks are to be taken and held until the transaction completes.
An update lock that is taken and held until the transaction completes should be enough to block a subsequent INSERT, and in fact, when I try it out in SQL Server Management Studio, it does indeed block my insert. However, in some rare cases, it seems to suddenly not work any more.
So, why exactly is it that UPDLOCK is not enough, and why is it enough in 99% of my test runs (and when simulating it in SQL Server Management Studio)?
Update: I've now found I can reproduce the non-blocking behavior reliably by executing the code above in two different windows of SQL Server Management Studio simultaneously up to just before the INSERT, but only the first time after creating the database. After that (even though I deleted the contents of the ImportSourceMetadata table), the SELECT WITH (UPDLOCK) will indeed block and the code no longer fails. Indeed, in sys.dm_tran_locks, I can see a U-lock taken even though the row does not exist on subsequent test runs, but not on the first run after creating the table.
This is a complete sample to show the difference in locks between a "newly created table" and an "old table":
DROP TABLE ImportSourceMetadata
CREATE TABLE ImportSourceMetadata(SourceId nvarchar(50) PRIMARY KEY, LastCheckpoint datetime)
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint FROM ImportSourceMetadata WITH (UPDLOCK) WHERE SourceId='Source'
SELECT *
FROM sys.dm_tran_locks l
JOIN sys.partitions p
ON l.resource_associated_entity_id = p.hobt_id JOIN sys.objects o
ON p.object_id = o.object_id
INSERT INTO ImportSourceMetadata VALUES('Source', '2013-12-21')
ROLLBACK TRAN
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint FROM ImportSourceMetadata WITH (UPDLOCK) WHERE SourceId='Source'
SELECT *
FROM sys.dm_tran_locks l
JOIN sys.partitions p
ON l.resource_associated_entity_id = p.hobt_id JOIN sys.objects o
ON p.object_id = o.object_id
ROLLBACK TRAN
On my system (with SQL Server 2012), the first query shows no locks on ImportSourceMetadata, but the second query shows a KEY lock on ImportSourceMetadata.
In other words, HOLDLOCK is indeed required, but only if the table was freshly created. Why's that?
You also need HOLDLOCK.
If the row does exist then your SELECT statement will take out a U lock on at least that row and retain it until the end of the transaction.
If the row doesn't exist, there is no row on which to take and hold a U lock, so you aren't locking anything. HOLDLOCK will lock at least the range into which the row would fit.
Without HOLDLOCK, two concurrent transactions can both do the SELECT for a non-existent row, retain no conflicting locks, and both move on to the INSERT.
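So a version of the upsert that blocks reliably might look like this sketch (the row count is captured into a variable, since @@ROWCOUNT is reset by every statement):

DECLARE @existing int

BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED

-- UPDLOCK + HOLDLOCK key-range-locks the spot where the row would be,
-- even when the row does not exist yet
SELECT @existing = COUNT(*)
FROM ImportSourceMetadata WITH (UPDLOCK, HOLDLOCK)
WHERE SourceId = 'Source'

-- Perform some processing, then:
IF (@existing = 0)
    INSERT INTO ImportSourceMetadata(SourceId, LastCheckpoint)
    VALUES ('Source', '2013-12-21')
ELSE
    UPDATE ImportSourceMetadata
    SET LastCheckpoint = '2013-12-21'
    WHERE SourceId = 'Source'

COMMIT TRAN

The range lock serializes concurrent callers for the same key range, which is exactly the blocking you wanted for duplicate SourceIds.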
Regarding the repro in your question it seems the "row doesn't exist" issue is a bit more complex than I first thought.
If the row previously did exist but has since been logically deleted but still physically exists on the page as a "ghost" record then the U lock can still be taken out on the ghost explaining the blocking that you are seeing.
You can use DBCC PAGE to see ghost records as in this slight amend to your code.
SET NOCOUNT ON;
DROP TABLE ImportSourceMetadata
CREATE TABLE ImportSourceMetadata
(
SourceId NVARCHAR(50),
LastCheckpoint DATETIME,
PRIMARY KEY(SourceId)
)
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint
FROM ImportSourceMetadata WITH (UPDLOCK)
WHERE SourceId = 'Source'
INSERT INTO ImportSourceMetadata
VALUES ('Source', '2013-12-21')
DECLARE @DBCCPAGE NVARCHAR(100)
SELECT TOP 1 @DBCCPAGE = 'DBCC PAGE(0,' + CAST(file_id AS VARCHAR) + ',' + CAST(page_id AS VARCHAR) + ',3) WITH NO_INFOMSGS'
FROM ImportSourceMetadata
CROSS APPLY sys.fn_physloccracker(%%physloc%%)
ROLLBACK TRAN
DBCC TRACEON(3604)
EXEC (@DBCCPAGE)
DBCC TRACEOFF(3604)
The SSMS messages tab shows
Slot 0 Offset 0x60 Length 31
Record Type = GHOST_DATA_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 31
Memory Dump #0x000000001215A060
0000000000000000: 3c000c00 00000000 9ba20000 02000001 †<.......¢......
0000000000000010: 001f0053 006f0075 00720063 006500††††...S.o.u.r.c.e.
Slot 0 Column 1 Offset 0x13 Length 12 Length (physical) 12

What sql server isolation level should I choose to prevent concurrent reads?

I have the following transaction:
SQL inserts 1 new record into a table called tbl_document
SQL deletes all records matching a criteria in another table called tbl_attachment
SQL inserts multiple records into the tbl_attachment
Until this transaction finishes, I don't want others users to be aware of the (1) new records in tbl_document, (2) deleted records in tbl_attachment, and (3) modified records in tbl_attachment.
Would Read Committed Isolation be the correct isolation level?
It doesn't matter what the transaction isolation level of your writes is. What is important is the isolation level of your reads. Normally, reads will not see your insert/update/delete until it is committed. The only isolation level that can see uncommitted changes is READ UNCOMMITTED. If the concurrent thread uses dirty reads, there is nothing you can do in the write to prevent it.
READ UNCOMMITTED can be either set as an isolation level, or requested explicitly with a (NOLOCK) table hint. Dirty reads can see inconsistent transaction data (eg. the debit does not balance the credit of an operation) and also can cause duplicate reads (read the same row multiple times from a table) and trigger mysterious key violations.
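For illustration, this is roughly what such a dirty reader looks like (the column names are hypothetical); nothing in your write transaction can stop it:

-- Sees uncommitted rows from tbl_document/tbl_attachment mid-transaction
SELECT d.DocumentId, a.AttachmentId
FROM tbl_document d WITH (NOLOCK)
JOIN tbl_attachment a WITH (NOLOCK)
    ON a.DocumentId = d.DocumentId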
yes, like this:
BEGIN TRANSACTION
insert into tbl_document ...
delete tbl_attachment where ...
inserts into tbl_attachment ...
COMMIT
you may block/lock users until you are finished and commit/rollback the transaction. Also, someone could SELECT your rows from tbl_attachment after your insert into tbl_document but before your delete. If you need to prevent that, do this:
BEGIN TRANSACTION
select ... from tbl_attachment with (UPDLOCK,HOLDLOCK) where ...
insert into tbl_document ...
delete tbl_attachment where ...
inserts into tbl_attachment ...
COMMIT
or just delete tbl_attachment before the insert into tbl_document and forget the select with the locking hints.
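Fleshed out, the locking variant might look like this sketch, with hypothetical column names standing in for your actual schema:

DECLARE @DocumentId int = 42   -- hypothetical key

BEGIN TRANSACTION

-- lock the attachment rows (and the key range) the delete will touch,
-- so lock-respecting readers wait until COMMIT
SELECT AttachmentId
FROM tbl_attachment WITH (UPDLOCK, HOLDLOCK)
WHERE DocumentId = @DocumentId

INSERT INTO tbl_document (DocumentId, Title)
VALUES (@DocumentId, N'New document')

DELETE tbl_attachment WHERE DocumentId = @DocumentId

INSERT INTO tbl_attachment (DocumentId, FileName)
VALUES (@DocumentId, N'file.pdf')

COMMIT TRANSACTION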
