I have the following TRANSACTION structure:
BEGIN TRY
    BEGIN TRAN sometransaction

    INSERT INTO local_table_1 (columns...)
    SELECT (columns...)
    FROM remote_table
    WHERE (conditions, predicates)

    UPDATE local_table_1
    SET column_A = value
    WHERE....

    UPDATE local_table_1
    SET column_B = value
    WHERE....

    UPDATE local_table_1
    SET column_C = value
    WHERE....

    COMMIT TRAN sometransaction
END TRY
BEGIN CATCH
    ROLLBACK TRAN sometransaction
END CATCH
I want to make sure that no one is allowed to read the contents of local_table_1 until all statements within this transaction have completed and been committed.
Is there a way to set WITH (TABLOCKX, HOLDLOCK) on the whole transaction? I understand that tables are locked automatically during transaction execution, but I could not find any explanation of whether that extends to external concurrent read processes.
Thank you.
An external process will not read uncommitted data unless it uses a NOLOCK hint or the READ UNCOMMITTED isolation level. This is all related to Isolation, the "I" in the ACID properties: http://en.wikipedia.org/wiki/ACID
Set the transaction isolation level to READ COMMITTED or REPEATABLE READ before you begin your transaction. The following link has a description and example:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
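As a rough two-session sketch of that behavior, reusing local_table_1 from the question (the single column_A insert and the reader's queries are made up for illustration):
-- Session 1 (the writer): exclusive locks are held until COMMIT.
BEGIN TRAN sometransaction
    INSERT INTO local_table_1 (column_A) VALUES ('pending')
    -- ... remaining INSERT/UPDATE statements ...

-- Session 2, default READ COMMITTED: this SELECT waits on session 1's
-- exclusive locks and returns only after session 1 commits or rolls back.
SELECT column_A FROM local_table_1

-- Session 2 with a dirty-read hint: returns the uncommitted row immediately.
SELECT column_A FROM local_table_1 WITH (NOLOCK)

-- Back in session 1:
COMMIT TRAN sometransaction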
Related
I created the following stored procedures in order to examine isolation level behavior inside a transaction:
CREATE PROCEDURE ReadCommittedIsolationLevel
AS
BEGIN
BEGIN TRANSACTION t1
BEGIN TRY
EXEC SnapShotIsolationLevel
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT ERROR_MESSAGE()
ROLLBACK TRANSACTION t1
END CATCH
END
GO

CREATE PROCEDURE SnapShotIsolationLevel
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION t2
BEGIN TRY
SELECT TOP 20 *
FROM Orders
ORDER BY 1 DESC
COMMIT
END TRY
BEGIN CATCH
PRINT ERROR_MESSAGE()
ROLLBACK TRAN t2
END CATCH
END
And then I run this:
EXEC ReadCommittedIsolationLevel
I get this error:
Transaction failed in database 'MyDataBase' because the statement was run under snapshot isolation but the transaction did not start in snapshot isolation. You cannot change the isolation level of the transaction to snapshot after the transaction has started unless the transaction was originally started under snapshot isolation level.
Cannot roll back t2. No transaction or savepoint of that name was found.
If I remove the transactions and run them as plain stored procedures, it works fine.
Why is that?
The error gave you the correct explanation:
You cannot change the isolation level of the transaction to snapshot
after the transaction has started.
SQL Server does not have nested transactions; the only real transaction you have is the outer one, started under READ COMMITTED. The inner BEGIN TRAN does nothing except increment @@TRANCOUNT. You can read more on it here: A SQL Server DBA myth a day: (26/30) nested transactions are real, by Paul Randal.
Snapshot isolation means consistent data at the transaction level, not at the statement level (that is RCSI), so you must set the isolation level to SNAPSHOT before the transaction starts; you cannot switch to it once the transaction has begun.
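A minimal sketch of the working order of operations, assuming snapshot isolation has been enabled on the database (the ALTER DATABASE step is a one-time prerequisite; MyDataBase and Orders are the names from the question):
-- One-time prerequisite: allow snapshot isolation in the database.
ALTER DATABASE MyDataBase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Set the isolation level BEFORE the transaction starts, not inside it.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION t2;
    SELECT TOP 20 *
    FROM Orders
    ORDER BY 1 DESC;
COMMIT TRANSACTION t2;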
I've been reading the MSDN about transaction isolation levels and table hints in an effort to find out what I need to do to exclusively lock a table while I perform a 2 step insert in SQL Server. I've come up with 2 ways to do it and would like to know what the difference is between the approaches.
This answer shows how to do it with hints (https://stackoverflow.com/a/23759307/545430):
--Enclose steps in transaction to define an atomic operation
BEGIN TRAN
-- Perform an insert that locks tblMoo
INSERT INTO tblMoo WITH (TABLOCKX, SERIALIZABLE) (fldCow) VALUES ('valPie')
UPDATE tblMoo SET fldCowIndex = (SELECT MAX(fldCowIndex) + 1 FROM tblMoo)
COMMIT TRAN
I think I could also achieve my objective by setting the isolation level:
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
INSERT blah blah blah
UPDATE hoo dee dah
COMMIT TRAN
It is important that I lock the entire table from any updates while my transaction is inserting this new row. Would both approaches yield the same result: the table being locked for both the INSERT and UPDATE commands?
I need to lock a row in a table so that no one can read it while I'm running a procedure. I am using BEGIN TRAN in this procedure, so the record I'm trying to block is uncommitted during the process.
Is it possible?
It depends on the purpose of your stored procedure:
- If it modifies the row in question, you can rely on transaction isolation levels:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
--UPDATE/INSERT/DELETE your row here
...
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
- If it only reads the row, use lock hints; to actually keep other sessions from reading it you need an exclusive lock, since a shared ROWLOCK alone will not block readers:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT column1, column2
FROM yourTable WITH (XLOCK, ROWLOCK)
WHERE ID = YourRecordId
...
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
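A rough two-session sketch of the second approach, reusing yourTable and YourRecordId from above (the reading session is hypothetical):
-- Session 1: take and hold an exclusive row lock until COMMIT.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT column1, column2
FROM yourTable WITH (XLOCK, ROWLOCK)
WHERE ID = YourRecordId
-- ... the rest of the procedure's work ...

-- Session 2: this SELECT waits until session 1 commits or rolls back
-- (unless it uses NOLOCK or READ UNCOMMITTED).
SELECT column1, column2
FROM yourTable
WHERE ID = YourRecordId

-- Back in session 1:
COMMIT TRANSACTION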
I have often seen people use a SELECT statement within a transaction, whereas I only ever use INSERT/UPDATE/DELETE in transactions. I do not understand what the utility of putting a SELECT statement inside a transaction is.
One answer I got was that a SELECT inside the transaction can see changes made by previous INSERT/UPDATE/DELETE statements in that transaction, while a SELECT statement outside the transaction cannot.
Is the above statement true or not?
Is this the only reason people put a SELECT statement inside a transaction? Please discuss all the reasons in detail if possible. Thanks.
Try doing this and you will understand:
Open two new query windows in SSMS (let's call them A and B from now on) and in A, create a simple table like this:
create table transTest(id int)
insert into transTest values(1)
Now, do the following:
Run select * from transTest in both of them. You will see the value 1.
On A run:
set transaction isolation level read committed
On B run:
begin transaction
insert into transTest values(2)
On A run:
select * from transTest
You will see that the query won't finish, because it is blocked by the open transaction on B.
On B run:
commit transaction
Go back to A and you will see that the query has finished.
Repeat the test with
set transaction isolation level read uncommitted on A
You will see that the query is not blocked by the transaction; it returns immediately, including the uncommitted value 2.
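The same walkthrough condensed into two scripts, as a sketch of the steps above (run each part in the indicated window):
-- Window A: setup, then the reads.
create table transTest(id int)
insert into transTest values(1)

set transaction isolation level read committed
select * from transTest   -- blocks while B's transaction is still open

-- Window B: start a transaction and leave it open while A runs its select.
begin transaction
insert into transTest values(2)
-- ... later:
commit transaction         -- A's blocked select now completes

-- Window A again, repeating the test with dirty reads allowed:
set transaction isolation level read uncommitted
select * from transTest   -- returns immediately, including the uncommitted row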
One of the main reasons I can think of (the only reason, in fact) is if you want to set a different isolation level, e.g.:
USE AdventureWorks2008R2;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM HumanResources.EmployeePayHistory;
SELECT * FROM HumanResources.Department;
COMMIT TRANSACTION;
For single SELECT statements though, I'm not so sure, unless you had a reason to go the other way and set READ UNCOMMITTED in cases where response time/maximising concurrency is more important than accurate or valid data.
If the single SELECT statement is inside an explicit transaction without altering the isolation levels, I'm pretty sure that will have no effect at all. Individual statements are, by themselves, transactions that are auto-committed or rolled back on error.
I have a SQL statement that does an update and then, if @@ROWCOUNT is 0, does an insert; this is basically a MERGE in SQL Server 2008. We are running into situations where two threads both fail on the update simultaneously and then both attempt to insert the same key into the table. We are using the default transaction isolation level, READ COMMITTED. Will changing the level to REPEATABLE READ fix this, or do I have to go all the way to SERIALIZABLE to make this work? Here is some code:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
UPDATE TableA
SET Duration = @duration
WHERE keyA = @ID
AND keyB = @IDB;
IF @@ROWCOUNT = 0
BEGIN
INSERT INTO TableA (keyA, keyB, Duration)
VALUES (@ID, @IDB, @duration);
END
COMMIT TRAN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
You would need to go all the way up to SERIALIZABLE.
Under REPEATABLE READ, if the row does not exist, both UPDATE statements can run concurrently without blocking each other and then both proceed to do the insert. Under SERIALIZABLE, the range where the row would have been is locked.
But you should also consider leaving the isolation level at the default READ COMMITTED and putting a unique constraint on (keyA, keyB), so that any attempt to insert a duplicate fails with an error.
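A minimal sketch of that alternative, reusing TableA, keyA, keyB, Duration and the @ID/@IDB/@duration parameters from the question (the constraint name and the catch-and-retry handling are additions, not part of the original code):
-- One-time schema change: make duplicate (keyA, keyB) pairs impossible.
ALTER TABLE TableA ADD CONSTRAINT UQ_TableA_keyA_keyB UNIQUE (keyA, keyB);

-- Upsert at the default READ COMMITTED level.
BEGIN TRAN;
UPDATE TableA
SET Duration = @duration
WHERE keyA = @ID
AND keyB = @IDB;
IF @@ROWCOUNT = 0
BEGIN
    BEGIN TRY
        INSERT INTO TableA (keyA, keyB, Duration)
        VALUES (@ID, @IDB, @duration);
    END TRY
    BEGIN CATCH
        -- 2601/2627 = duplicate key: another thread inserted the row first,
        -- so fall back to updating the row it created.
        IF ERROR_NUMBER() IN (2601, 2627)
            UPDATE TableA
            SET Duration = @duration
            WHERE keyA = @ID
            AND keyB = @IDB;
        -- (a fuller version would re-raise any other error)
    END CATCH
END
COMMIT TRAN;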