I thought that REPEATABLE READ should not pick up on changed data but should pick up on new data.
However I have the following script:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
create table testLocking(a int);
BEGIN TRANSACTION
insert into testLocking values (1);
select * from testLocking;
WAITFOR DELAY '000:00:30';
select * from testLocking;
COMMIT TRANSACTION;
BEGIN TRANSACTION
insert into testLocking values (2);
UPDATE testLocking SET A=4 WHERE a=1;
select * from testLocking;
WAITFOR DELAY '000:00:40';
COMMIT TRANSACTION;
drop table testLocking;
I get the results:
1
1
4
2
I was expecting:
1
1
**2**
2
4
Can anyone see what I have done wrong?
UPDATED
I want to be able to see the effects of transaction isolation levels by running queries concurrently.
Repeatable Read means that changes made by other transactions are not reflected in your queries. Your code makes its change in the same transaction.
UPDATED
A single script is not able to show the effects of two transactions interacting with a database. The best way to do this in SSMS is to separate the code into two windows. Open them side by side using Window -> New Vertical Tab Group.
By highlighting lines one at a time you can run the transactions concurrently and you will see the effects of the queries in each window.
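For example, a two-window version of that script could look like the sketch below (assuming the testLocking table already exists and contains the single value 1; start window 1 first, then run window 2 while window 1 is inside the WAITFOR):
-- Window 1: read the same data twice under REPEATABLE READ
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM testLocking;   -- first read: returns 1
WAITFOR DELAY '00:00:30';    -- switch to window 2 during this pause
SELECT * FROM testLocking;   -- row 1 is unchanged; the new row 2 may appear as a phantom
COMMIT TRANSACTION;
-- Window 2: run while window 1 is waiting
INSERT INTO testLocking VALUES (2);        -- allowed: REPEATABLE READ does not prevent phantoms
UPDATE testLocking SET a = 4 WHERE a = 1;  -- blocks until window 1 commits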
Related
I have a problem with the PostgreSQL REPEATABLE READ isolation level.
I ran an experiment on the REPEATABLE READ isolation level's behaviour when a phantom read occurs.
PostgreSQL's manual says "The table also shows that PostgreSQL's Repeatable Read implementation does not allow phantom reads."
But a phantom read occurred:
CREATE TABLE public.testmodel
(
id bigint NOT NULL
);
--Session 1 --
BEGIN TRANSACTION ISOLATION LEVEL Repeatable Read;
INSERT INTO TestModel(ID)
VALUES (10);
Select sum(ID)
From TestModel
where ID between 1 and 100;
--COMMIT;
--Session 2--
BEGIN TRANSACTION ISOLATION LEVEL Repeatable Read;
INSERT INTO TestModel(ID)
VALUES (10);
Select sum(ID)
From TestModel
where ID between 1 and 100;
COMMIT;
Steps I followed:
1. Create the table.
2. Run session 1 (I commented out the COMMIT statement).
3. Run session 2.
4. Run the COMMIT statement in session 1.
To my surprise, both of them (session 1 and session 2) worked without any exceptions.
As far as I understand from the documentation, that shouldn't happen.
I was expecting session 1 to throw an exception when committing it after session 2.
What is the reason for this? I am confused.
The docs you referenced define a "phantom read" as a situation where:
A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
In other words, a phantom read has occurred if you run the same query twice (or two queries seeking the same data), and you get different results. The REPEATABLE READ isolation level prevents this from happening, i.e. if you repeat the same read, you will get the same answer. It does not guarantee that either of those results reflects the current state of the database.
Since you are only reading data once in each transaction, this cannot be an example of a phantom read. It falls under the more general category of a "serialization anomaly", i.e. behaviour which could not occur if the transactions were executed serially. This type of anomaly is only avoided at the SERIALIZABLE isolation level.
There is an excellent set of examples on the Postgres wiki, describing anomalies which are allowed under REPEATABLE READ, but prevented under SERIALIZABLE isolation:
https://wiki.postgresql.org/wiki/SSI
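For example, re-running the same two sessions at the SERIALIZABLE level should surface the anomaly as an error (a sketch of the expected behaviour; which statement fails can depend on timing, but typically it is the later COMMIT):
-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO TestModel(ID) VALUES (10);
SELECT sum(ID) FROM TestModel WHERE ID BETWEEN 1 AND 100;
-- leave this transaction open and run session 2
-- Session 2
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO TestModel(ID) VALUES (10);
SELECT sum(ID) FROM TestModel WHERE ID BETWEEN 1 AND 100;
COMMIT;
-- back in session 1
COMMIT;
-- ERROR:  could not serialize access due to read/write dependencies among transactions
-- (SQLSTATE 40001) -- the failed transaction must be rolled back and retried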
A phantom read doesn't occur under REPEATABLE READ in PostgreSQL, as the documentation says. (I explain more about phantom reads in my answer on Stack Overflow.)
I tested whether a phantom read occurs under REPEATABLE READ in PostgreSQL using two command prompts.
First, to set REPEATABLE READ, I ran the query below, then logged out and logged back in:
ALTER DATABASE postgres SET DEFAULT_TRANSACTION_ISOLATION TO 'repeatable read';
And I created a person table with id and name columns as shown below.
person table:
id | name
---|------
 1 | John
 2 | David
Then, I did these steps below with PostgreSQL queries:
| Flow   | Transaction 1 (T1)                      | Transaction 2 (T2)                     | Explanation |
|--------|-----------------------------------------|----------------------------------------|-------------|
| Step 1 | BEGIN;                                  |                                        | T1 starts. |
| Step 2 |                                         | BEGIN;                                 | T2 starts. |
| Step 3 | SELECT * FROM person; → 1 John, 2 David |                                        | T1 reads 2 rows. |
| Step 4 |                                         | INSERT INTO person VALUES (3, 'Tom');  | T2 inserts the row (3, 'Tom') into the person table. |
| Step 5 |                                         | COMMIT;                                | T2 commits. |
| Step 6 | SELECT * FROM person; → 1 John, 2 David |                                        | T1 still reads 2 rows instead of 3 after T2 commits. Phantom read doesn't occur! |
| Step 7 | COMMIT;                                 |                                        | T1 commits. |
You are misunderstanding what "does not allow phantom reads" means.
This simply means that phantom reads cannot happen, not that there will be an error.
Session 2 will not see any changes committed to the table by other transactions until its own transaction is committed as well.
REPEATABLE READ guarantees a consistent state of the database for the duration of the transaction, in which only changes made by that transaction itself are visible, but no other changes. There is no need to throw an error.
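The situation where REPEATABLE READ does raise an error is a write-write conflict, i.e. when a transaction tries to update a row that another transaction has updated and committed in the meantime. A rough sketch, reusing the person table from the previous answer:
-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE person SET name = 'Johnny' WHERE id = 1;
-- Session 2 (while session 1 is still open)
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE person SET name = 'Jon' WHERE id = 1;  -- blocks, waiting for session 1
-- Session 1
COMMIT;
-- Session 2 now fails with:
-- ERROR:  could not serialize access due to concurrent update
-- and has to ROLLBACK and retry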
I have a database table with thousands of entries and multiple worker threads, each of which picks up one row at a time and does some work on it (taking roughly one second per row). While picking up a row, each thread updates a flag on the database row (like a timestamp) so that the other threads do not pick it up. But the problem is that I end up in a scenario where multiple threads pick up the same row.
My general question is: what design approach should I follow here to ensure that each thread picks up a unique row and does its task independently?
Note: multiple threads run in parallel to speed up the processing of the database rows, so I would like to keep the critical section or exclusive lock as small as possible.
Just to give some context, below is the stored proc that picks up a row from the table after updating the flag on it. Please note that the stored proc is not compilable, as I have removed unnecessary portions from it, but that is generally its structure.
The problem happens when multiple threads execute the stored proc in parallel. The change made by the UPDATE statement (note that the update is done after taking a lock) in one thread is not visible to another thread until the transaction is committed. And because there is a SELECT statement (which takes around 50 ms) between the UPDATE and the COMMIT TRANSACTION, in about 20% of cases the UPDATE statement in a thread picks up a row that has already been processed.
I hope I am clear enough here.
USE [mydatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GetRequest]
AS
BEGIN
-- some variable declaration here
BEGIN TRANSACTION
-- check if there are blocking rows in the request table
-- FM: Remove records that don't qualify for operation.
-- delete operation on the table to remove rows we don't want to process
delete FROM request where somecondition = 1
-- Identify the requests to process
DECLARE @TmpTableVar table(TmpRequestId int NULL);
UPDATE TOP(1) request
WITH (ROWLOCK)
SET Lock = DateAdd(mi, 5, GETDATE())
OUTPUT INSERTED.ID INTO @TmpTableVar
FROM request tur
WHERE (Lock IS NULL OR GETDATE() > Lock) -- not locked or lock expired
AND GETDATE() > NextRetry -- next in the queue
IF(@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION
RETURN
END
select @RequestID = TmpRequestId from @TmpTableVar
-- Get details about the request that has been just updated
SELECT somerows
FROM request
WHERE somecondition = 1
COMMIT TRANSACTION
END
The analog of a critical section in SQL Server is sp_getapplock, which is simple to use. Alternatively you can SELECT the row to update with (UPDLOCK,READPAST,ROWLOCK) table hints. Both of these require a multi-statement transaction to control the duration of the exclusive locking.
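A rough sketch of the second option, folded into the same dequeue shape as the question's procedure (illustrative only; it reuses the request table and the Lock/NextRetry columns from the question):
BEGIN TRANSACTION
DECLARE @TmpTableVar table (TmpRequestId int NULL);
-- READPAST skips rows that other sessions have already locked,
-- so concurrent callers each claim a different row.
UPDATE TOP (1) tur
SET Lock = DATEADD(mi, 5, GETDATE())
OUTPUT INSERTED.ID INTO @TmpTableVar
FROM request tur WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE (Lock IS NULL OR GETDATE() > Lock)
AND GETDATE() > NextRetry
-- ... read the details of the claimed row here, still inside the transaction ...
COMMIT TRANSACTION
The UPDLOCK/READPAST combination means a row claimed (but not yet committed) by one thread is simply skipped by the others instead of being handed out a second time.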
You need to set a transaction isolation level in SQL to isolate your row, but this can impact your performance.
Look at this sample:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
GO
SELECT ID, NAME, FLAG FROM SAMPLE_TABLE WHERE FLAG=0
GO
UPDATE SAMPLE_TABLE SET FLAG=1 WHERE ID=1
GO
COMMIT TRANSACTION
To conclude, there is no single best isolation level. You need to analyze the positive and negative points of each isolation level and test your system's performance.
More information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql
http://www.besttechtools.com/articles/article/sql-server-isolation-levels-by-example
https://en.wikipedia.org/wiki/Isolation_(database_systems)
Two SPs are executed one after another, and the second one is blocked by the first. They are both trying to update the same table. The two SPs are as follows:
CREATE PROCEDURE [dbo].[SP1]
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION ImpSchd
    UPDATE Table t1 .......... -- updating a set of [n1, n2, ..., n100] records
    COMMIT TRANSACTION ImpSchd
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
2.
CREATE PROCEDURE [dbo].[SP2]
AS
BEGIN
    UPDATE Table t1 .......... -- updating a set of [n101, n102, ..., n200] records
END
My question is: when SP1 is running under snapshot isolation, why is it blocking SP2 (when both are updating different sets of records)?
If I run the first SP for two different sets of records simultaneously, it works perfectly.
How can I overcome this situation?
If snapshot isolation has to be set in every SP that updates the same table, that would be a larger change.
If two SPs have to update the same records in a table, how should I handle that (both SPs will update different columns)?
Isolation levels are only about not blocking SELECTs, so DML is not affected by isolation levels. In this case the UPDATE takes an IX lock on the table and page, followed by an X lock on the row being updated. Since you are updating in bulk, the table itself might have been locked due to lock escalation. Hope this helps.
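One way to confirm whether escalation is the cause is to look at the granted locks from a separate session while the bulk UPDATE is still running inside an open transaction (a sketch):
-- If lock escalation has occurred you will see an X lock at the OBJECT level
-- for the table instead of many KEY/PAGE locks.
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY resource_type, request_mode, request_status
ORDER BY resource_type;
If escalation is indeed the problem, updating in smaller batches (or, with care, ALTER TABLE ... SET (LOCK_ESCALATION = DISABLE)) keeps the locks at row/page level so the two procedures can update disjoint rows concurrently.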
I often see people use a SELECT statement within a transaction, whereas I use only INSERT/UPDATE/DELETE in transactions. I just do not understand the point of putting a SELECT statement inside a transaction.
I got one answer: a SELECT inside the transaction can see changes made by previous INSERT/UPDATE/DELETE statements in that transaction, while a SELECT statement outside the transaction cannot.
Is the above statement true or not?
Is this the only reason that people put a SELECT statement inside a transaction? Please discuss all the reasons in detail if possible. Thanks.
Try doing this and you will understand:
Open two new query windows in SSMS (let's call them A and B from now on) and, in A, create a simple table like this:
create table transTest(id int)
insert into transTest values(1)
Now do the following:
Run select * from transTest in both of them. You will see the value 1.
On A run:
set transaction isolation level read committed
On B run:
begin transaction
insert into transTest values(2)
On A run:
select * from transTest
you will see that the query won't finish, because it is blocked by the open transaction in B
On B run:
commit transaction
Go back to A and you will see that the query has finished.
Repeat the test with
set transaction isolation level read uncommitted on A
you will see that the query won't be blocked by the transaction
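One caveat: this demo assumes the database is using the default locking READ COMMITTED behaviour. If READ_COMMITTED_SNAPSHOT is enabled for the database, the SELECT in window A will read the last committed version instead of blocking. You can check the setting with:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = DB_NAME();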
One of the main reasons I can think of (the only reason, in fact) is if you want to set a different isolation level, eg:
USE AdventureWorks2008R2;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM HumanResources.EmployeePayHistory;
SELECT * FROM HumanResources.Department;
COMMIT TRANSACTION;
For single SELECT statements though, I'm not so sure, unless you had a reason to go the other way and set READ UNCOMMITTED in cases where response time/maximising concurrency is more important than accurate or valid data.
If the single SELECT statement is inside an explicit transaction without altering the isolation levels, I'm pretty sure that will have no effect at all. Individual statements are, by themselves, transactions that are auto-committed or rolled back on error.
When I perform a SELECT/INSERT query, does SQL Server automatically create an implicit transaction and thus treat it as one atomic operation?
Take the following query that inserts a value into a table if it isn't already there:
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
Is there any possibility of 'newvalue' being inserted into the table by another user between the evaluation of the WHERE clause and the execution of the INSERT if it isn't explicitly wrapped in a transaction?
You are confusing transactions with locking. A transaction reverts your data back to its original state if there is an error; if not, it moves the data to the new state. You will never have your data in an intermediate state when the operations are transacted. Locking, on the other hand, is what allows or prevents multiple users from accessing the data simultaneously. To answer your question, the SELECT...INSERT is atomic, and as long as no granular locks are explicitly requested, no other user will be able to insert while the SELECT...INSERT is in progress.
John, the answer to this depends on your current isolation level. If you're set to READ UNCOMMITTED you could be looking for trouble, but with a higher isolation level, you should not get additional records in the table between the select and insert. With a READ COMMITTED (the default), REPEATABLE READ, or SERIALIZABLE isolation level, you should be covered.
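If you do wrap it in a transaction, one common sketch for making the check-then-insert safe regardless of the session's isolation level is to lock the range being checked (assuming FieldA is indexed so the HOLDLOCK range lock stays narrow; a unique constraint on FieldA is the simpler alternative):
BEGIN TRANSACTION;
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (SELECT * FROM Table1 WITH (UPDLOCK, HOLDLOCK) -- range lock held until COMMIT
                  WHERE FieldA = 'newvalue');
COMMIT TRANSACTION;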
Using SSMS 2016, it can be verified that the Select/Insert statement requests a lock (and so most likely operates atomically):
1. Open a new query/connection for the following transaction and set a break-point on ROLLBACK TRANSACTION before starting the debugger:
BEGIN TRANSACTION
INSERT INTO Table1 (FieldA) VALUES ('newvalue');
ROLLBACK TRANSACTION --[break-point]
2. While at the above break-point, execute the following from a separate query window to show any locks (it may take a few seconds to register any output):
SELECT * FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
AND resource_associated_entity_id = OBJECT_ID(N'dbo.Table1');
There should be a single lock associated with the BEGIN TRANSACTION/INSERT above (since by default it runs at an isolation level of READ COMMITTED):
OBJECT ** ********** * IX LOCK GRANT 1
3. From another instance of SSMS, open a new query and run the following (while still stopped at the above break-point):
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
This should hang, with the string "(Executing)..." displayed in the tab title of the query window (since @@LOCK_TIMEOUT is -1 by default).
4. Re-run the query from step 2.
Another lock corresponding to the SELECT/INSERT should now show:
OBJECT ** ********** 0 IX LOCK GRANT 1
OBJECT ** ********** 0 IX LOCK GRANT 1
ref: How to check which locks are held on a table
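If you also want to see which connection holds each of those locks, the same DMV can be joined to sys.dm_exec_sessions (a small extension of the query above):
SELECT l.resource_type, l.request_mode, l.request_status,
       l.request_session_id, s.login_name, s.program_name
FROM sys.dm_tran_locks AS l
JOIN sys.dm_exec_sessions AS s ON s.session_id = l.request_session_id
WHERE l.resource_database_id = DB_ID()
  AND l.resource_associated_entity_id = OBJECT_ID(N'dbo.Table1');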