I have often seen people use a SELECT statement within a transaction, whereas I only ever put INSERT/UPDATE/DELETE statements in transactions. I do not understand what the utility is of putting a SELECT statement inside a transaction.
One answer I got is that a SELECT inside the transaction can see changes made by previous INSERT/UPDATE/DELETE statements in that same transaction, while a SELECT statement outside the transaction cannot.
Is the above statement true or not?
Is this the only reason people put SELECT statements inside transactions? Please discuss all the reasons in detail if possible. Thanks.
Try doing this and you will understand:
Open two new query windows in SSMS (let's call them A and B from now on) and, on A, create a simple table like this:
create table transTest(id int)
insert into transTest values(1)
Now do the following:
Run select * from transTest in both of them. You will see the value 1.
On A run:
set transaction isolation level read committed
On B run:
begin transaction
insert into transTest values(2)
On A run:
select * from transTest
You will see that the query won't finish, because it is blocked by the open transaction on B.
On B run:
commit transaction
Go back to A and you will see that the query has finished.
Now repeat the test, but this time run
set transaction isolation level read uncommitted
on A. You will see that the query is not blocked by the transaction; it returns immediately, including the uncommitted value 2 (a dirty read).
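For reference, here is the whole exercise condensed into the two windows; nothing new, just the same steps gathered in one place:
-- Window B: start a transaction and leave it open
begin transaction
insert into transTest values(2)
-- Window A: under read committed this SELECT blocks until B commits
set transaction isolation level read committed
select * from transTest
-- Window A: under read uncommitted it returns immediately,
-- including the uncommitted value 2 (a dirty read)
set transaction isolation level read uncommitted
select * from transTest
-- Window B: commit to release the locks
commit transaction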
One of the main reasons I can think of (the only reason, in fact) is if you want to set a different isolation level, e.g.:
USE AdventureWorks2008R2;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM HumanResources.EmployeePayHistory;
SELECT * FROM HumanResources.Department;
COMMIT TRANSACTION;
For single SELECT statements though, I'm not so sure, unless you had a reason to go the other way and set READ UNCOMMITTED in cases where response time/maximising concurrency is more important than accurate or valid data.
If the single SELECT statement is inside an explicit transaction without altering the isolation level, I'm pretty sure that will have no effect at all. Individual statements are, by themselves, transactions that are auto-committed or rolled back on error.
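If you did want the READ UNCOMMITTED behaviour for just one statement, a table hint is the usual shorthand; mytable is only a placeholder name here:
-- accept dirty reads on this one statement in exchange for not blocking
select * from mytable with (nolock)
-- or set it for the whole session/transaction
set transaction isolation level read uncommitted
select * from mytable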
Related
I am debugging some stored procedures at work, and I have discovered that whoever wrote the code used a transaction around a single update statement, like this:
begin transaction
update table whatever with whatever   -- (the single update statement)
commit transaction
I understand that this is wrong, because a transaction is used when you want to update multiple tables.
I want to understand, from a theoretical point of view, what the implications are of using the code above.
Is there any difference in updating the whatever table with and without the transaction? Are there any extra locks or something?
Perhaps the transaction was included due to prior or possible future code which may involve other data. Perhaps that developer simply makes a habit of wrapping code in transactions, to be 'safe'?
But if the statement literally involves only a single update to a single row, there really is no benefit to that code being there in this case. A transaction does not necessarily 'lock' anything, though the actions performed inside it may, of course. It just makes sure that all the actions contained therein are performed all-or-nothing.
Note that a transaction is not about multiple tables, it's about multiple updates. It assures multiple updates happen all-or-none.
So if you were updating the same table twice, there would be a difference with or without the transaction. But your example shows only a single update statement, presumably updating only a single record.
In fact, it's probably pretty common that transactions encapsulate multiple updates to the same table. Imagine the following:
INSERT INTO Transactions (AccountNum, Amount) VALUES (1, 200)
INSERT INTO Transactions (AccountNum, Amount) VALUES (2, -200)
That should be wrapped into a transaction, to assure that the money is transferred correctly. If one fails, so must the other.
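To make the all-or-nothing behaviour explicit, here is a minimal sketch of how those two inserts might be wrapped; the TRY/CATCH scaffolding is my own illustration, not code from the question:
BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO Transactions (AccountNum, Amount) VALUES (1, 200);
    INSERT INTO Transactions (AccountNum, Amount) VALUES (2, -200);
    COMMIT TRANSACTION;  -- both rows become permanent together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- if either insert fails, neither row persists
    THROW;  -- re-raise the error (SQL Server 2012+; use RAISERROR on older versions)
END CATCH;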
I understand that this is wrong because transaction is used when you want to update multiple tables.
Not necessarily. This involves one table only - and just 2 rows:
--- transaction begin
BEGIN TRANSACTION ;
UPDATE tableX
SET Balance = Balance + 100
WHERE id = 42 ;
UPDATE tableX
SET Balance = Balance - 100
WHERE id = 73 ;
COMMIT TRANSACTION ;
--- transaction end
Hopefully your colleague's code looks more like this, otherwise SQL will issue a syntax error.
As per Ypercube's comment, there is no real purpose in placing one statement inside a transaction, but possibly this is a coding standard or similar.
begin transaction -- increments @@TRANCOUNT to 1
update table whatever with whatever
commit transaction -- decrements @@TRANCOUNT to 0
Often, when issuing ad hoc statements directly against the database, it is a good idea to wrap your statements in a transaction, just in case something goes wrong and you need to roll back, i.e.:
begin transaction -- Just in case my query goofs up
update table whatever with whatever
select ... from table ... -- check that the correct updates / deletes / inserts happened
-- commit transaction -- Only commit if the above check succeeds.
I have a program that connects to an Oracle database and performs operations on it. I now want to adapt that program to also support an SQL Server database.
In the Oracle version, I use "SELECT FOR UPDATE WAIT" to lock specific rows I need. I use it in situations where the update is based on the result of the SELECT and other sessions can absolutely not modify it simultaneously, so they must manually lock it first. The system is highly subject to sessions trying to access the same data at the same time.
For example:
Two users try to fetch the row in the database with the highest priority, mark it as busy, perform operations on it, and mark it as available again for later use.
In Oracle, the logic would go basically like this:
BEGIN TRANSACTION;
SELECT ITEM_ID FROM TABLE_ITEM WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1'
AND ITEM_STATUS = 'available' AND ROWNUM = 1 FOR UPDATE WAIT 5;
UPDATE [locked item_id] SET ITEM_STATUS = 'unavailable';
COMMIT TRANSACTION;
Note that the queries are built dynamically in my code. Also note that when the previously most favorable row is marked as unavailable, the second user will automatically go for the next one and so on. Furthermore, different users working on different categories will not have to wait for each other's locks to be released. Worst comes to worst, after 5 seconds, an error would be returned and the operation would be cancelled.
So finally, the question is: how do I achieve the same results in SQL Server? I have been looking at locking hints which, in theory, seem like they should work. However, the only locks that prevent other locks are UPDLOCK and XLOCK, which both only seem to work at a table level.
Those locking hints that do work at a row level are all shared locks, which also do not satisfy my needs (both users could lock the same row at the same time, both mark it as unavailable and perform redundant operations on the corresponding item).
Some people seem to add a "time modified" column so sessions can verify that they are the ones who modified it, but this sounds like there would be a lot of redundant and unnecessary accesses.
You're probably looking for WITH (UPDLOCK, HOLDLOCK). The UPDLOCK hint makes the SELECT take update locks (the lock type an UPDATE acquires before modifying a row) instead of shared locks, so no other session can grab the same rows for update. The HOLDLOCK hint tells SQL Server to keep the locks until the transaction ends.
FROM TABLE_ITEM with (updlock, holdlock)
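To make that concrete, here is one way the Oracle block from the question might translate; the TOP (1)/ORDER BY, the @item_id variable, and the IF check are my additions, so treat this as a sketch rather than a drop-in replacement:
BEGIN TRANSACTION;
DECLARE @item_id int;
-- updlock takes update locks on the row it reads; holdlock keeps them until
-- the transaction ends, so no other session can claim the same row meanwhile
SELECT TOP (1) @item_id = ITEM_ID
FROM TABLE_ITEM WITH (UPDLOCK, HOLDLOCK)
WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1' AND ITEM_STATUS = 'available'
ORDER BY ITEM_PRIORITY DESC;
IF @item_id IS NOT NULL
    UPDATE TABLE_ITEM SET ITEM_STATUS = 'unavailable' WHERE ITEM_ID = @item_id;
COMMIT TRANSACTION;
A second session running the same block waits at the SELECT until the first commits; by then the row is no longer 'available', so it picks up the next one.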
As the documentation says:
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply to the appropriate level of granularity.
So the solution is to use WITH (XLOCK, ROWLOCK):
BEGIN TRANSACTION;
SELECT TOP (1) ITEM_ID
FROM TABLE_ITEM WITH (XLOCK, ROWLOCK)
WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1' AND ITEM_STATUS = 'available';
UPDATE [locked item_id] SET ITEM_STATUS = 'unavailable';
COMMIT TRANSACTION;
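The Oracle example also specifies WAIT 5; the closest SQL Server equivalent I know of is a session-level lock timeout, which makes a blocked statement fail with error 1222 instead of waiting indefinitely:
-- wait at most 5 seconds for locks, then fail with error 1222
-- ("Lock request time out period exceeded")
SET LOCK_TIMEOUT 5000;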
In SQL Server there are locking hints, but they apply to individual statements rather than spanning a set of statements the way the Oracle example you provided does. The way to do it in SQL Server is to set an isolation level on the transaction that contains the statements you want to execute. See this MSDN page, but the general structure would look something like:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
select * from ...
update ...
COMMIT TRANSACTION;
SERIALIZABLE is the highest isolation level. See the link for other options. From MSDN:
SERIALIZABLE specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
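To see the last point (the key-range protection) in action with the asker's table, something like this should block; the column values are invented for the demonstration:
-- session 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT ITEM_ID FROM TABLE_ITEM WHERE ITEM_CATEGORY = 'CT1' AND ITEM_STATUS = 'available';
-- leave the transaction open
-- session 2: this INSERT waits until session 1 commits or rolls back,
-- because it would add a row into the range session 1 has read
INSERT INTO TABLE_ITEM (ITEM_ID, ITEM_PRIORITY, ITEM_CATEGORY, ITEM_STATUS)
VALUES (999, 20, 'CT1', 'available');
-- session 1
COMMIT TRANSACTION;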
Have you tried WITH (ROWLOCK)?
BEGIN TRAN
UPDATE your_table WITH (ROWLOCK)
SET your_field = a_value
WHERE <a predicate>
COMMIT TRAN
If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others?
In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)?
I have a stored procedure defined something like this in SQL Server 2000:
CREATE PROCEDURE toplevel_proc ..
AS
BEGIN
...
while @row_count <= @max_rows
begin
select @parameter ... where rownum = @row_count
exec nested_proc @parameter
select @row_count = @row_count + 1
end
END
First off, there is no such thing as a nested transaction in SQL Server.
However, you can use SAVEPOINTs, as per this example (too long to reproduce here, sorry) from fellow SO user Remus Rusanu.
Edit: AlexKuznetsov mentioned (he deleted his answer though) that this won't work if a transaction is doomed. This can happen with SET XACT_ABORT ON or some trigger errors.
From BOL:
ROLLBACK TRANSACTION without a savepoint_name or transaction_name rolls back to the beginning of the transaction. When nesting transactions, this same statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement.
I also found the following from another thread here:
Be aware that SQL Server transactions aren't really nested in the way you might think. Once an explicit transaction is started, a subsequent BEGIN TRAN increments @@TRANCOUNT while a COMMIT decrements the value. The entire outermost transaction is committed when a COMMIT results in a zero @@TRANCOUNT. But a ROLLBACK without a savepoint rolls back all work, including the outermost transaction.
If you need nested transaction behavior, you'll need to use SAVE TRANSACTION instead of BEGIN TRAN and use ROLLBACK TRAN [savepoint_name] instead of ROLLBACK TRAN.
So it would appear possible.
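A minimal sketch of what that could look like inside the loop from the question, following the SAVE TRANSACTION pattern quoted above; the TRY/CATCH, the savepoint name, and the XACT_STATE() check are my additions, and note that TRY/CATCH and XACT_STATE() need SQL Server 2005 or later, so this would not run as-is on the SQL Server 2000 instance mentioned in the question:
while @row_count <= @max_rows
begin
    -- select @parameter ... (as in the question)
    save transaction before_nested_call  -- a savepoint, not a true nested transaction
    begin try
        exec nested_proc @parameter
    end try
    begin catch
        -- xact_state() = 1: the transaction is still committable, so undo only this
        -- iteration's work and keep the results of the earlier iterations
        if xact_state() = 1
            rollback transaction before_nested_call
        -- xact_state() = -1: the transaction is doomed (e.g. SET XACT_ABORT ON),
        -- and only a full rollback is possible, losing the earlier iterations too
    end catch
    select @row_count = @row_count + 1
end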
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback
commit
-- end code
You could say to just write the log before the transaction starts, but that is not so easy, because the transaction starts before this stored procedure is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (declared with @, not a #temp table) to hold the log info. Table variables survive a transaction rollback.
See this article.
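A minimal sketch of the idea; @log and its single column are placeholders, and dbo.logtable is the table from the question:
declare @log table (msg varchar(max))
begin tran
insert into @log values ('about to do the main work')
-- [[main code here]]
rollback  -- undoes everything in the transaction, but not the table variable
-- @log still holds its rows, so persist them now
insert into dbo.logtable select msg from @log  -- adjust columns to your log table's schema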
I do this one of two ways, depending on my needs at the time. Both involve using a variable, which retains its value following a rollback.
1) Declare a variable, DECLARE @Log varchar(max), and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction. I'll insert the value into the real log table after the commit or the rollback, as necessary. I'll usually only insert the @Log value into the real log table when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Declare a table variable, DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)). Insert into this as you progress through your transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I will insert the contents of this table into the actual log table after the commit or the rollback, as necessary.
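A small illustration of the OUTPUT-clause idea in option 2; the Items table and ItemID column are invented for the example:
declare @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000))
-- capture which rows the DELETE actually touched, from inside the transaction
delete from Items
output 'DELETE item ' + cast(deleted.ItemID as varchar(20)) into @LogTable (RowValue)
where ItemID = 1234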
If the parent transaction rolls back, the logging data will roll back as well, since SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. This can open its own connection to the database outside the transaction and enter and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The c# code starts a transaction, then calls a s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH (NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour you can use a savepoint inside a named transaction:
begin transaction a
create table #a (i int)
select * from #a
save transaction b
create table #b (i int)
select * from #a
select * from #b
rollback transaction b
select * from #a
rollback transaction a
In SQL Server, if you want a ‘sub-transaction’, you should use save transaction xxxx, which works like an Oracle savepoint.
Is there a way to lock a SELECT in a transaction? That is, if a SELECT occurs, no more SELECTs are executed while the first one has not finished.
Thanks!
Not sure I understand your question correctly, but if you want to impose more rigorous locking than SQL Server's default, you can either bump up the isolation level or use a locking hint. This can be useful if you first need to SELECT something and then later, based on the value you SELECTed, do an UPDATE. To avoid a phantom UPDATE from another transaction (wherein the value you previously SELECTed is changed between your SELECT and your UPDATE), you can impose a more restrictive lock on the SELECT statement.
E.g.:
select * from mytable with (holdlock, xlock)
Notice that the SELECT statement above takes an exclusive lock and holds it for the duration of the transaction. You would also want to wrap your statements in an explicit transaction, as in:
begin transaction
select * from mytable with (holdlock, xlock) -- exclusive lock held for the entire transaction
-- more code here...
update mytable set col='whatever' where ...
commit transaction
Be wary, of course, for long-running transactions.
Might want to look at Isolation Level instead of a hint
You'll need an isolation level of SERIALIZABLE:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
In this case, other transactions that try to modify the data you have read will block until your transaction completes; to make other plain SELECTs wait as well, you would need an exclusive lock hint such as XLOCK, as shown above. The default level is usually READ COMMITTED.
It seems you're looking for a pessimistic locking strategy, but without actually locking your data. Look into application locks in SQL Server.
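For illustration, a minimal sketch of an application lock guarding a SELECT; the resource name and MyTable are placeholders:
BEGIN TRANSACTION;
-- take an exclusive app lock; other sessions requesting the same resource name
-- wait here (up to 5 seconds) regardless of any row or table locks
EXEC sp_getapplock @Resource = 'MyTable_Select',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction',
                   @LockTimeout = 5000;
SELECT * FROM MyTable;
COMMIT TRANSACTION;  -- the application lock is released with the transaction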
Changing your isolation levels is generally your best choice.
But, if for whatever reason that's not an option for you - you could also do an exclusive table lock...
select * from MyTable with (tablockx)
That would prevent any other selects on the table until your transaction is finished.