Read Committed Snapshot vs. Snapshot Isolation Level - sql-server

Could someone please help me understand when to use the SNAPSHOT isolation level over READ COMMITTED SNAPSHOT in SQL Server?
I understand that READ COMMITTED SNAPSHOT works in most cases, but I'm not sure when to go for SNAPSHOT isolation.
Thanks

READ COMMITTED SNAPSHOT does optimistic reads and pessimistic writes. In contrast, SNAPSHOT does optimistic reads and optimistic writes.
Microsoft recommends READ COMMITTED SNAPSHOT for most apps that need row versioning.
Read this excellent Microsoft article: Choosing Row Versioning-based Isolation Levels. It explains the benefits and costs of both isolation levels.
And here's a more thorough one:
http://msdn.microsoft.com/en-us/library/ms345124(SQL.90).aspx

(Image: a table comparing the isolation levels.)
See the example below:
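The walkthrough assumes a small DemoTable whose schema isn't shown; inferred from the values (1, ONE) and (2, TWO) in the results, a setup along these lines would work:
CREATE TABLE DemoTable (i INT, t VARCHAR(10))
GO
INSERT INTO DemoTable VALUES (1, 'ONE'), (2, 'TWO')
GO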
Read Committed Snapshot
Change the database property as below
ALTER DATABASE SQLAuthority
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
GO
Session 1
USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 4
WHERE i = 1
Session 2
USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 1
Result – The query in Session 2 shows the old value (1, ONE) because the transaction in Session 1 is not yet committed. This is how we avoid blocking while still reading only committed data.
Session 1
COMMIT
Session 2
USE SQLAuthority
GO
SELECT *
FROM DemoTable
WHERE i = 1
Result – The query in Session 2 returns no rows because the row was updated and committed in Session 1. So again, we are seeing committed data.
Snapshot Isolation Level
This isolation level has been available since SQL Server 2005. It requires a change in the application, which has to explicitly request the new isolation level.
Change the database setting using the statement below. We need to make sure there is no open transaction in the database.
ALTER DATABASE SQLAuthority SET ALLOW_SNAPSHOT_ISOLATION ON
Now we also need to change the isolation level of the connection, as shown below.
Session 1
USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 10
WHERE i = 2
Session 2
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO
USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 2
Result – Even though we have changed the value to 10, we still see the old record (2, TWO) in Session 2.
Now, let's commit the transaction in Session 1.
Session 1
COMMIT
Let's come back to Session 2 and run the select again.
Session 2
SELECT *
FROM DemoTable
WHERE i = 2
We will still see the old record because Session 2 started its transaction with SNAPSHOT isolation. Until we complete that transaction, we will not see the latest record.
Session 2
COMMIT
SELECT *
FROM DemoTable
WHERE i = 2
Now we should not see the row, as it has already been updated.
See: SQL Authority, Safari Books Online

No comparison of SNAPSHOT and READ COMMITTED SNAPSHOT is complete without a discussion of the dreaded "snapshot update conflict" exception that can happen in SNAPSHOT, but not in READ COMMITTED SNAPSHOT.
In a nutshell, Snapshot isolation retrieves a snapshot of committed data at the start of a transaction, and then uses optimistic locking for both reads and writes. If, when attempting to commit the transaction, it turns out that something else changed some of that same data, the database rolls back the entire transaction and raises an error, causing a snapshot update conflict exception in the calling code. This is because the version of the data affected by the transaction is not the same at the end of the transaction as it was at the start.
READ COMMITTED SNAPSHOT does not suffer from this problem because it uses locking on writes (pessimistic writes) and it obtains snapshot version information of all committed data at the start of each statement.
The possibility of snapshot update conflicts happening in SNAPSHOT and not in READ COMMITTED SNAPSHOT is an extremely significant difference between the two.
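To make the conflict concrete, here is a minimal two-session sketch, reusing the SQLAuthority/DemoTable example from above (the interleaving shown is one assumed way to trigger the error):
-- Session 1
USE SQLAuthority
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO
BEGIN TRAN
SELECT * FROM DemoTable WHERE i = 2 -- the transaction's snapshot is established here

-- Session 2, while Session 1 is still open (autocommits immediately)
UPDATE DemoTable SET i = 20 WHERE i = 2

-- Session 1, continued: it tries to write a row changed since its snapshot
UPDATE DemoTable SET i = 30 WHERE i = 2
-- fails with Msg 3960 (Snapshot isolation transaction aborted due to update conflict)
-- and the whole transaction is rolled back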

Still relevant. Starting with Bill's comments, I read more and made notes that might be useful to someone else.
By default, single statements (including SELECTs) work on "committed" data (READ COMMITTED). The question is: do they wait for data to be "idle", and do they stop others from working while reading?
Settings via right-clicking the DB, "Properties -> Options -> Miscellaneous":
Concurrency/Blocking: Is Read Committed Snapshot On [defaults off, should be on]:
Use SNAPSHOT for select (read), do not wait for others, nor block them.
Affects operation without a code change
ALTER DATABASE <dbName> SET READ_COMMITTED_SNAPSHOT [ON|OFF]
SELECT name, is_read_committed_snapshot_on FROM sys.databases
Consistency: Allow Snapshot Isolation [defaults off, debatable – OK off]:
Allow client to request SNAPSHOT across SQL statements (transactions).
Code must request "transaction" snapshots (like SET TRANSACTION ...)
ALTER DATABASE <dbName> SET ALLOW_SNAPSHOT_ISOLATION [ON|OFF]
SELECT name, snapshot_isolation_state FROM sys.databases
To the question: it is not one or the other between Read Committed Snapshot and Allow Snapshot Isolation. They are two cases of snapshot use, and either can be on or off independently, with Allow Snapshot Isolation a bit more of an advanced topic. Allow Snapshot Isolation lets code go a step further in controlling snapshot land.
The issue seems clear if you think about one row: by default the system keeps no copy, so a reader has to wait if anyone else is writing, and a writer also has to wait if anyone else is reading – the row must be locked all the time. Enabling "Is Read Committed Snapshot On" activates the DB's support for "snapshot copies" (row versions), avoiding these locks.
Rambling on...
In my opinion "Is Read Committed Snapshot On" should be TRUE for any normal MS SQLServer databases, and that it is a premature optimization that it ships FALSE by default.
However, I'm told the one row lock gets worse not only because you may be addressing multiple rows across tables, but because in SQL Server row locks are implemented using "block" level locks (locking random rows associated by storage proximity) and that there is a threshold where multiple locks trigger table locking - presumably more "optimistic" performance optimizations at the risk of blocking issues in busy databases.
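If you want to see these locks for yourself, the sys.dm_tran_locks DMV lists what a session currently holds; a minimal sketch (DemoTable stands in for any table you have at hand):
BEGIN TRAN
UPDATE DemoTable SET i = i WHERE i = 1 -- takes exclusive key locks
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID -- shows the KEY/PAGE/OBJECT locks held
ROLLBACK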

Let me describe 2 points that have not been mentioned.
Firstly, let's make it clear how to use both, because it's not intuitive.
SNAPSHOT and READ_COMMITTED_SNAPSHOT are two different isolation levels.
SNAPSHOT is an isolation level you can use in your transaction explicitly, as usual:
set transaction isolation level snapshot;
begin transaction
-- ...
commit
READ_COMMITTED_SNAPSHOT can't be used like this. READ_COMMITTED_SNAPSHOT is both a database-level option and an implicit/automatic isolation level. To use it, you need to enable it for the whole database:
alter database ... set read_committed_snapshot on;
What the above database setting does: every time you run a transaction like this:
begin transaction
set transaction isolation level read committed;
-- ...
commit
With this option ON, all READ COMMITTED transactions will run under the READ_COMMITTED_SNAPSHOT isolation level instead. This happens automatically, affecting all READ COMMITTED transactions issued against a database with this setting ON. It becomes impossible to run a transaction under plain (locking) READ COMMITTED, because every transaction at that level is automatically converted to READ_COMMITTED_SNAPSHOT.
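A quick way to check which flavor a session actually gets is DBCC USEROPTIONS; as far as I know, it reports the isolation level as "read committed snapshot" when the database option is ON and the session is at the default level:
DBCC USEROPTIONS;
-- inspect the "isolation level" row:
--   "read committed"          -> locking read committed
--   "read committed snapshot" -> RCSI is in effect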
Secondly, you shouldn't blindly enable the READ_COMMITTED_SNAPSHOT option.
To illustrate the kind of problems it can create, imagine you have a simple events table like this:
create table Events (
id int not null identity(1, 1) primary key,
name nvarchar(450) not null
-- ...
)
And you poll it periodically with a query like this:
begin transaction
set transaction isolation level read committed; -- automatically set to read committed snapshot when this setting is ON on database level
select top 100 * from Events where id > ${lastId} order by id asc;
commit
The above query doesn't need to be wrapped in a transaction with an explicit isolation level. READ COMMITTED is the default isolation level, and a query invoked without a surrounding transaction block implicitly runs in a READ COMMITTED transaction.
You'll find that under the READ_COMMITTED_SNAPSHOT isolation level, auto-increment identity values may show gaps that are filled in later – rows with higher ids can become visible before rows with lower ids.
You can easily simulate it with insert like this:
begin transaction
insert into Events (name) values ('test 1');
waitfor delay '00:00:10'
commit
...followed by normal insert:
insert into Events (name) values ('test 2');
Your polling query, invoked within those 10 seconds, will return a single row with id 2.
The following poll, after updating lastId, will return nothing; the row with id 1 will only appear once the 10 seconds are up.
The event with id 1 is effectively skipped.
This will not happen if you use READ_COMMITTED without the READ_COMMITTED_SNAPSHOT auto-promotion option.
It's worth understanding this scenario. It's not caused by the fact that an IDENTITY column doesn't guarantee uniqueness, nor by the fact that it doesn't guarantee strict monotonicity. Even when neither uniqueness nor strict monotonicity is violated, you still end up with gaps – the possibility of seeing commits with higher ids before seeing commits with lower ids.
Under READ_COMMITTED this problem doesn't exist.
Under READ_COMMITTED you can also see gaps, e.g. from transactions that rolled back. But those gaps are permanent, i.e. you are not skipping events, because they will never reappear: you won't see lower ids reappearing later after you've seen higher ids.
Please understand the above issue and its implications before turning READ_COMMITTED_SNAPSHOT on.
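If the database option is ON but one particular query (like the poller above) needs the old locking semantics, there is a per-table escape hatch: the READCOMMITTEDLOCK table hint. A sketch, with @lastId standing in for the ${lastId} parameter:
select top 100 *
from Events with (readcommittedlock) -- locking read committed for this table only
where id > @lastId
order by id asc;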
Control of this option lies in the gray area between developer and DBA responsibility. If you're an admin, you should not blindly enable it: developers may have relied on READ_COMMITTED locking semantics when developing the application, and turning on READ_COMMITTED_SNAPSHOT may violate those assumptions in a very implicit, hard-to-find-bug way.

Related

in-flight collision of DB transaction with different Isolation Level

I have some doubts with respect to transactions and isolation levels:
1) If the DB transaction level is set to Serializable / Repeatable Read and two concurrent transactions try to modify the same data, then one of the transactions will fail.
In such cases, why doesn't the DB retry the failed operation? Is it good practice to retry the transaction at the application level (hoping the other transaction will be finished in the meantime)?
2) If the DB transaction level is set to READ COMMITTED / READ UNCOMMITTED (dirty read) and two concurrent transactions try to modify the same data, why don't the transactions fail?
Ideally we are only controlling the read behaviour, and concurrent writes should still not be allowed.
3) My application has 2 parts: it uses a Spring-managed datasource in one part and an application-created datasource in the other (this part doesn't use Spring; the datasource is created explicitly by passing the properties).
My assumption is that the isolation level doesn't depend on which datasource the connection comes from: two concurrent transactions, even if coming from different datasources, will behave according to the isolation level just as if they came from the same datasource.
Do you see any issue with this setup? Should we strive for a single datasource across the application?
I'll wait for others to give their feedback too, but for now I'd like to give my 2 cents on this post.
As you explained, the isolation levels each work differently.
I'll use the following sample data set:
IF OBJECT_ID('Employees') IS NOT NULL DROP TABLE Employees
GO
CREATE TABLE Employees (
Emp_id INT IDENTITY,
Emp_name VARCHAR(20),
Emp_Telephone VARCHAR(15)
)
ALTER TABLE Employees
ADD CONSTRAINT PK_Employees PRIMARY KEY (emp_id)
INSERT INTO Employees (Emp_name, Emp_Telephone)
SELECT 'Satsara', '07436743439'
INSERT INTO Employees (Emp_name, Emp_Telephone)
SELECT 'Udhara', '045672903'
INSERT INTO Employees (Emp_name, Emp_Telephone)
SELECT 'Sithara', '58745874859'
REPEATABLE READ and SERIALIZABLE are very close to each other, but SERIALIZABLE is the highest isolation level. Both options are provided to avoid dirty reads, and both need to be managed very carefully, because the way they hold onto data frequently causes deadlocks. If there's a deadlock, the server will definitely wipe one transaction out of the picture, rolling it back as the deadlock victim. The server will never re-run it on its own, since it keeps no record of the removed transaction beyond the log; retrying is up to the application.
REPEATABLE READ - Does not allow another process (another query) to modify (it locks) any records that the current transaction has already read. But it does allow new records to be inserted (without a lock), which can impact your results while querying (phantom reads).
SERIALIZABLE - The difference with Serializable is that it does not allow inserts into the range read under SET TRANSACTION ISOLATION LEVEL SERIALIZABLE either. So INSERT processes wait until the previous transaction commits.
REPEATABLE READ and SERIALIZABLE usually keep data locks longer than the other two options.
Example [REPEATABLE READ and SERIALIZABLE]:
In the Employees table you have 3 records.
Open a query window and run (QUERY 1):
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM Employees;
Now try to run an insert query in a different window (QUERY 2):
INSERT INTO Employees(Emp_name, Emp_Telephone)
SELECT 'JANAKA', '3333333'
The system allows QUERY 2 to insert the new record. Now rerun the SELECT from QUERY 1 and you can see 4 records – the insert was not blocked (a phantom read).
Now replace Query 1 with the following code and try the same process to test SERIALIZABLE:
SET TRANSACTION ISOLATION LEVEL Serializable
BEGIN TRAN
SELECT * FROM Employees;
This time you can see that the INSERT command in Query 2 is not allowed to execute and waits until Query 1 commits.
Only once Query 1 is committed is Query 2 allowed to execute its INSERT command.
Comparing READ COMMITTED and READ UNCOMMITTED:
READ COMMITTED - Changes to the data are not visible to other processes until the transaction commits the records. READ COMMITTED puts shared locks on all the records it reads; if another process holds an exclusive lock, it waits until that lock is released.
READ UNCOMMITTED - Not recommended, as garbage data can be read by the system (in SQL Server, the NOLOCK hint gives the same behaviour). This will return uncommitted data.
SELECT * FROM Employees WITH (NOLOCK)
DEADLOCKS - Whether it's REPEATABLE READ, SERIALIZABLE, READ COMMITTED or READ UNCOMMITTED, any of them can create deadlocks. The only thing is, as we discussed, REPEATABLE READ and SERIALIZABLE are more prone to deadlocks than the other two options.
Note: If you need samples for READ COMMITTED and READ UNCOMMITTED, please let me know in the comment section and we can discuss them.
This is really a very large topic that needs discussing with lots of samples. I don't know whether this explanation is enough or not, but I gave it a small try – it's hard to know where to start and when to stop.
At the same time, you asked: "Is it a good practice to retry the transaction on application level?"
In my opinion that's fine. Personally, I also retry in certain situations; see the sketch after the list below.
Different techniques are used:
Keeping a flag field to identify whether the row was updated or not, and retrying
Using an event-driven solution such as RabbitMQ or Kafka
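As a minimal sketch of that retry idea in T-SQL (the table, values and retry count are placeholders; 1205 is the deadlock-victim error number and 3960 the snapshot update conflict):
DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE Employees SET Emp_Telephone = '000' WHERE Emp_id = 1;
        COMMIT;
        BREAK; -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        -- retry only deadlock victims (1205) and snapshot update conflicts (3960)
        IF ERROR_NUMBER() NOT IN (1205, 3960) THROW;
        SET @retries -= 1;
        IF @retries = 0 THROW;
        WAITFOR DELAY '00:00:01'; -- brief back-off before retrying
    END CATCH
END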

Row locking behaviour while updating

In Oracle databases I can start a transaction and update a row without committing. Selecting this row in another session still returns the current ("old") value.
How can I get this behaviour in SQL Server? Currently, the row is locked until the transaction ends. WITH (NOLOCK) inside the select statement gives the new value from the uncommitted transaction, which is potentially dangerous.
Starting the transaction without committing:
BEGIN TRAN;
UPDATE test SET val = 'Updated' WHERE id = 1;
This works:
SELECT * FROM test WHERE id = 2;
This waits for the transaction to be committed:
SELECT * FROM test WHERE id = 1;
With Read Committed Snapshot Isolation (RCSI), versions of rows are stored in a version store, so readers can read a version of a row that existed at the time the statement started and before any changes have been made; while a transaction is open; without taking shared locks on rows or pages; and without blocking writers or other readers. From this post by Paul White:
To summarize, locking read committed sees each row as it was at the time it was briefly locked and physically read; RCSI sees all rows as they were at the time the statement began. Both implementations are guaranteed to never see uncommitted data.
One cost, of course, is that if you read a prior version of the row, it can change (even many times) before you're done doing whatever it is you plan to do with it. If you're making important decisions based on some past version of the row, it may be the case that you actually want an isolation level that forces you to wait until all changes have been committed.
Another cost is that version store is not free... it requires space and I/O in tempdb, so if tempdb is already a bottleneck on your system, this is something worth testing.
(In SQL Server 2019, with Accelerated Database Recovery, the version store shifts to the user database, which increases database size but mitigates some of the tempdb contention.)
Paul's post goes on to explain some other risks and caveats.
In almost all cases, this is still way better than NOLOCK, IMHO. Lots of links about the dangers there (and why RCSI is better) here:
I'm using NOLOCK; is that bad?
And finally, from the documentation (adding one clarification from the comments):
When the READ_COMMITTED_SNAPSHOT database option is set ON, read committed isolation uses row versioning to provide statement-level read consistency. Read operations require only SCH-S table level locks and no page or row locks. That is, the SQL Server Database Engine uses row versioning to present each statement with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used to protect the data from updates by other transactions. A user-defined function can return data that was committed after the time the statement containing the UDF began.
When the READ_COMMITTED_SNAPSHOT database option is set OFF, which is the default setting *on-prem but not in Azure SQL Database*, read committed isolation uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation. The shared locks also block the statement from reading rows modified by other transactions until the other transaction is completed. Both implementations meet the ISO definition of read committed isolation.
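To get the Oracle-like behaviour the question asks about, it should therefore be enough to flip the database option and re-run the scenario (YourDatabase is a placeholder; WITH ROLLBACK IMMEDIATE kicks out open transactions so the ALTER can proceed):
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- with the UPDATE transaction open in the other session:
SELECT * FROM test WHERE id = 1; -- returns the old value instead of waiting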

Read Committed Snapshot Isolation: Does Update Conflict Rollback appear as Deadlock?

I have READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION ON for my database. I'm still receiving a deadlock error. I'm pretty sure I know what is happening...
The first transaction gets a sequence number at the beginning of its transaction.
The second one gets a later sequence number at the beginning of its transaction, after the first transaction has already gotten its own (the second sequence number is more recent than the first).
The second transaction reaches the update statement first. When it checks the row versioning it sees the record version that precedes both transactions, since the first one hasn't reached its update yet. It finds that the row's sequence number is in a committed state and goes on its merry way.
The first transaction then takes its turn and, like the second transaction, finds the same committed sequence number, because it won't see the second transaction's version, which is newer than itself. When it tries to commit, it finds that another transaction has already updated the records it is trying to commit, and it has to roll itself back.
Here is my question: Will this rollback appear as a deadlock in a trace?
In a comment attached to the original question you said: "I'm just wondering if an update conflict will appear as a deadlock or if it will appear as something different." I actually had exactly these types of concerns when I started looking into using snapshot isolation. Eventually I realized that there is significant difference between READ_COMMITTED_SNAPSHOT and isolation level SNAPSHOT.
The former uses row versioning for reads, but continues to use exclusive locking for writes. So READ_COMMITTED_SNAPSHOT is actually something in between pure pessimistic and pure optimistic concurrency control. Because it uses locks for writing, update conflicts are not possible, but deadlocks are. At least in SQL Server, those deadlocks will be reported as deadlocks, just as they are with "normal" pessimistic locking.
The latter (isolation level SNAPSHOT) is pure optimistic concurrency control. Row versioning is used for both reads and writes. Deadlocks are not possible, but update conflicts are. The latter are reported as update conflicts and not as deadlocks.
The snapshot transaction is rolled back, and it receives the following error message:
Msg 3960, Level 16, State 4, Line 1
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot
isolation to access table 'Test.TestTran' directly or indirectly in database 'TestDatabase' to
update, delete, or insert the row that has been modified or deleted by another transaction.
Retry the transaction or change the isolation level for the update/delete statement.
To prevent the deadlock, enable both ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT:
ALTER DATABASE [BD] SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE [BD] SET ALLOW_SNAPSHOT_ISOLATION ON;
This article explains the differences:
http://technet.microsoft.com/en-us/sqlserver/gg545007.aspx

What is the result if I read after an uncommitted update within the same transaction

I'm using Google App Engine with Java JPA.
The isolation level is Serializable inside a transaction and Repeatable Read outside a transaction.
I've searched a lot of articles, and everybody talks about behaviour between transactions, but nobody mentions reads within the same transaction.
Example :
/* data in User table with {ID=1,NAME='HANK'} */
BEGIN;
UPDATE User SET name = 'MING' WHERE ID=1;
SELECT name FROM User WHERE ID = 1;
COMMIT;
Result : Still {ID=1, NAME='HANK'}
My Questions:
Does Isolation level setting affect queries within the same transaction?
What is the rule with the same transaction?
Any changes made within a transaction are immediately visible to queries within that same transaction. In your example, if you read the row with ID 1 after the update, you will see that it is updated. The difference is in how other users are affected by your transaction. Depending on your isolation level, another user may:
Get blocked, the other user will wait until you commit / rollback
Read the data as it was before the transaction (snapshot isolation)
Read the data that is most up-to-date even without you committing (read uncommitted)
I'm just scratching the surface of isolation levels, there have been a lot of books written on the subject.
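A minimal sketch of that read-your-own-writes behaviour, using the table from the question (brackets because User clashes with reserved words in some tools):
BEGIN TRAN;
UPDATE [User] SET name = 'MING' WHERE ID = 1;
SELECT name FROM [User] WHERE ID = 1; -- returns 'MING' inside this transaction
ROLLBACK; -- other sessions never saw 'MING'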

Minimum transaction isolation level to avoid "Lost Updates"

With SQL Server's transaction isolation levels, you can avoid certain unwanted concurrency issues, like dirty reads and so forth.
The one I'm interested in right now is lost updates – the fact that two transactions can overwrite one another's updates without anyone noticing. I see and hear conflicting statements as to which isolation level, at minimum, I have to choose to avoid this.
Kalen Delaney in her "SQL Server Internals" book says (Chapter 10 - Transactions and Concurrency - Page 592):
In Read Uncommitted isolation, all the behaviors described previously, except lost updates, are possible.
On the other hand, an independent SQL Server trainer giving us a class told us that we need at least "Repeatable Read" to avoid lost updates.
So who's right?? And why??
I don't know if it is too late to answer, but I am just learning about transaction isolation levels in college, and as part of my research I came across this link:
Microsoft Technet
Specifically the paragraph in question is:
Lost Update
A lost update can be interpreted in one of two ways. In the first scenario, a lost update is considered to have taken place when data that has been updated by one transaction is overwritten by another transaction, before the first transaction is either committed or rolled back. This type of lost update cannot occur in SQL Server 2005 because it is not allowed under any transaction isolation level.
The other interpretation of a lost update is when one transaction (Transaction #1) reads data into its local memory, and then another transaction (Transaction #2) changes this data and commits its change. After this, Transaction #1 updates the same data based on what it read into memory before Transaction #2 was executed. In this case, the update performed by Transaction #2 can be considered a lost update.
So in essence both people are right.
Personally (and I am open to being wrong, so please correct me as I am just learning this) I take from this the following two points:
The whole point of a transactional environment is to prevent lost updates as described in the first paragraph. So if even the most basic transaction level can't do that, why bother using it?
When people talk about lost updates, they know the first paragraph applies, so generally speaking they mean the second type of lost update.
Again, please correct me if anything here is wrong as I would like to understand this too.
The example in the book is of Clerk A and Clerk B receiving shipments of Widgets.
They both check the current inventory and see 25 in stock. Clerk A has received 50 widgets and updates the stock to 75; Clerk B has received 20 widgets and so updates it to 45, overwriting the previous update.
I assume she meant this phenomenon can be avoided at all isolation levels by Clerk A doing
UPDATE Widgets
SET StockLevel = StockLevel + 50
WHERE ...
and Clerk B doing
UPDATE Widgets
SET StockLevel = StockLevel + 20
WHERE ...
Certainly, if the SELECT and UPDATE are done as separate operations, you would need REPEATABLE READ to avoid this, so that the S lock on the row is held for the duration of the transaction (which would lead to a deadlock in this scenario).
Lost updates may occur even when reads and writes are in separate transactions, such as when users read data into Web pages and then update it. In such cases no isolation level can protect you, especially when connections are reused from a connection pool. We should use other approaches, such as rowversion. Here is my canned answer.
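A minimal sketch of the rowversion approach (the Widgets/WidgetID/RowVer names are hypothetical; it assumes the table has a column declared as RowVer rowversion): read the version along with the data, then make the UPDATE conditional on it being unchanged:
DECLARE @OriginalRowVer binary(8), @StockLevel int;

-- read (possibly in an earlier transaction, or a web request)
SELECT @StockLevel = StockLevel, @OriginalRowVer = RowVer
FROM Widgets WHERE WidgetID = 1;

-- write: succeeds only if nobody changed the row in between
UPDATE Widgets
SET StockLevel = @StockLevel + 50
WHERE WidgetID = 1 AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by another user; reload and retry.', 16, 1);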
My experience is that with Read Uncommitted you no longer get "lost updates"; you can, however, still get "lost rollbacks". The SQL trainer was probably referring to that concurrency issue, so the answer you're likely looking for is Repeatable Read.
That said, I would be very interested if anyone has experience that goes against this.
As noted by Francis Rodgers, what you can rely on in the SQL Server implementation is that once a transaction has updated some data, every isolation level always issues update locks over that data, denying updates and writes from any other transaction, whatever its isolation level. You can be sure this kind of lost update is covered.
However, if the situation is that a transaction reads some data (with an isolation level other than Repeatable Read), then another transaction is able to change that data and commit its change, and if the first transaction then updates the same data, but this time based on the internal copy it made, the management system cannot do anything to save it.
Your answer in that scenario is either to use Repeatable Read in the first transaction, or maybe to take some read lock from the first transaction over the data (I don't know that area confidently; I just know these locks exist and that you can use them. Maybe this will help anyone interested in this approach: Microsoft Designing Transactions and Optimizing Locking).
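One common form of that "read lock" idea, sketched here under the same hypothetical Widgets schema, is to read with the UPDLOCK and HOLDLOCK table hints so the row stays reserved for your transaction between the read and the write:
BEGIN TRAN;
-- the update lock taken here is held to the end of the transaction,
-- so no other transaction can sneak in a change between our read and write
SELECT StockLevel
FROM Widgets WITH (UPDLOCK, HOLDLOCK)
WHERE WidgetID = 1;

UPDATE Widgets SET StockLevel = StockLevel + 50 WHERE WidgetID = 1;
COMMIT;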
The following is a quote from 70-762 Developing SQL Databases (p. 212):
Another potential problem can occur when two processes read the same row and then update that data with different values. This might happen if a transaction first reads a value into a variable and then uses the variable in an update statement in a later step. When this update executes, another transaction updates the same data. Whichever of these transactions is committed first becomes a lost update because it was replaced by the update in the other transaction. You cannot use isolation levels to change this behavior, but you can write an application that specifically allows lost updates.
So, it seems that none of the isolation levels can help you in such cases and you need to solve the issue in the code itself. For example:
DROP TABLE IF EXISTS [dbo].[Balance];
CREATE TABLE [dbo].[Balance]
(
[BalanceID] TINYINT IDENTITY(1,1)
,[Balance] MONEY
,CONSTRAINT [PK_Balance] PRIMARY KEY
(
[BalanceID]
)
);
INSERT INTO [dbo].[Balance] ([Balance])
VALUES (100);
-- query window 1
BEGIN TRANSACTION;
DECLARE @CurrentBalance MONEY;
SELECT @CurrentBalance = [Balance]
FROM [dbo].[Balance]
WHERE [BalanceID] = 1;
WAITFOR DELAY '00:00:05'
UPDATE [dbo].[Balance]
SET [Balance] = @CurrentBalance + 20
WHERE [BalanceID] = 1;
COMMIT TRANSACTION;
-- query window 2
BEGIN TRANSACTION;
DECLARE @CurrentBalance MONEY;
SELECT @CurrentBalance = [Balance]
FROM [dbo].[Balance]
WHERE [BalanceID] = 1;
UPDATE [dbo].[Balance]
SET [Balance] = @CurrentBalance + 50
WHERE [BalanceID] = 1;
COMMIT TRANSACTION;
Create the table, then execute each part of the code in separate query windows. Changing the isolation level does not prevent the lost update. Under READ COMMITTED, the second window simply overwrites the value committed by the first; under REPEATABLE READ, the held shared locks make the two updates block each other, which ends in a deadlock rather than a silent overwrite (as noted earlier about this scenario). Either way, the read-then-write pattern itself is what's unsafe.
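The code-level fix for this example is the same one suggested for the widgets case above: collapse the read and the write into a single atomic statement, leaving no window for another transaction to slip in:
-- query window 2, rewritten: no variable, no gap between read and write
UPDATE [dbo].[Balance]
SET [Balance] = [Balance] + 50
WHERE [BalanceID] = 1;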
