I'm maintaining a RowVersion column in Table A. My stored procedure goes like this:
Start transaction
    Read RowVersion from Table A as rw1
    ...
    Some calculations
    ...
    Read RowVersion from Table A as rw2
    Update some tables, including Table A
    IF (rw1 == rw2)
        COMMIT
    ELSE
        ROLLBACK
Currently I'm using READ COMMITTED as the isolation level, but when the procedure updates Table A, the RowVersion changes as well.
Goal: when two or more users are logged in to the system and press a button at the same time (which executes this SP), only the first one should run the SP; the others should not be allowed to execute it concurrently.
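For reference, here is a minimal T-SQL sketch of the flow described above (the table name TableA, the @Id parameter, and the integer RowVersion column are assumptions, not the real schema):

CREATE PROCEDURE dbo.UpdateWithVersionCheck @Id int
AS
BEGIN
    BEGIN TRANSACTION;

    DECLARE @rw1 int, @rw2 int;

    SELECT @rw1 = RowVersion FROM dbo.TableA WHERE Id = @Id;

    -- ... some calculations ...

    SELECT @rw2 = RowVersion FROM dbo.TableA WHERE Id = @Id;

    -- update some tables, including Table A (which bumps RowVersion)
    UPDATE dbo.TableA SET RowVersion = RowVersion + 1 WHERE Id = @Id;

    -- the version check happens only at the end, as in the pseudocode above
    IF (@rw1 = @rw2)
        COMMIT TRANSACTION;
    ELSE
        ROLLBACK TRANSACTION;
END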
I have created a database transaction and I am inserting records into Table1 of an H2 DB, with no commit done yet.
In the middle of this process, after inserting half of the records, I execute a CREATE statement (creating Table2).
Table2 is created, and along with it the previous INSERT statements also get committed to the DB.
After this, I insert more records into Table1. If there is a failure during insertion, I still see the records in Table1 that were inserted before the CREATE statement for Table2.
Because of this, I see some records in the DB even after the transaction failure, where I was expecting ZERO records.
Why is this happening?
Because CREATE TABLE is a DDL statement, not a DML statement, and DDL statements usually commit any open transaction.
If you want to avoid this, create all the objects you need for the import before you import the first record.
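As a hypothetical illustration of the effect (H2 syntax; Table1 is assumed to have a single integer column):

SET AUTOCOMMIT FALSE;
INSERT INTO Table1 VALUES (1);   -- not committed yet
CREATE TABLE Table2 (id INT);    -- DDL: implicitly commits the INSERT above
INSERT INTO Table1 VALUES (2);
ROLLBACK;                        -- only undoes work after the DDL; row 1 stays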
EDIT 2019-03-22
Although this topic is a bit old, I'd like to mention one thing that could help. You could create a procedure that uses PRAGMA AUTONOMOUS_TRANSACTION (an Oracle feature) and executes a SQL statement via EXECUTE IMMEDIATE:
PROCEDURE exec_sql_autonomous(p_sql VARCHAR2)
AS
    PRAGMA AUTONOMOUS_TRANSACTION;  -- runs independently of the caller's transaction
BEGIN
    EXECUTE IMMEDIATE p_sql;
    COMMIT;  -- commits only the autonomous transaction, not the caller's
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        RAISE;
END;
This way you may be able to create a table while the data-inserting transaction is in progress, without that transaction being committed as a side effect of the table creation.
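For example, a hypothetical call (assuming the procedure above is compiled as a standalone procedure in the current schema):

BEGIN
    exec_sql_autonomous('CREATE TABLE table2 (id NUMBER)');
END;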
I have a table whose values can be altered by different users, with around 100k rows.
I made a stored procedure that issues BEGIN TRAN and, at the last part, either commits or rolls back the changes depending on the situation.
The problem we're encountering now is a lock on that table. For example, while the first user is executing the stored procedure through the system, the other users can't select from the table or execute the stored procedure themselves, because the table is locked.
So is there any way I can avoid the lock other than using dirty reads? Or a way to roll back the changes without using BEGIN TRAN, since that is the main reason why the table is locked up?
Yes, you can at least (quick & dirty) enable the SNAPSHOT isolation level for transactions. Readers then work from row versions instead of waiting on writers' locks, which prevents this kind of blocking inside transactions.
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
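Once those options are enabled, a session can opt in explicitly; a minimal sketch (dbo.MyTable is a placeholder name):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.MyTable;  -- reads row versions; not blocked by concurrent writers
COMMIT TRANSACTION;

With READ_COMMITTED_SNAPSHOT ON, even ordinary READ COMMITTED reads stop blocking on writers, so the calling code may not need any changes.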
Right now I am having an issue with a stored procedure that is locking up when running.
It's a conversion from Sybase.
The stored procedure originally would do
TRUNCATE TABLE appInfo
Then it repopulates the data within the same stored procedure, but in SQL Server this seems to be causing locks for the users.
It's not a high-traffic database.
The change I tried was to do
BEGIN TRAN
DELETE FROM appInfo
COMMIT TRAN
Then repopulate the data; but with this version the users are getting a NO_DATA_FOUND error.
So if I TRUNCATE, they get data, but it causes a lock.
If I DELETE, there is no data found.
Does anyone have any insight into this condition and a solution? I was thinking of moving the TRUNCATE out to a separate stored procedure called from within the parent procedure, but that might just be pushing the issue down the road rather than actually solving it.
Thanks in advance
When you truncate a table, the entire table is locked (from MSDN, https://technet.microsoft.com/en-us/library/ms177570%28v=sql.105%29.aspx: "TRUNCATE TABLE always locks the table and page but not each row."). When you issue a DELETE, it locks a row, deletes it, then locks the next row and deletes it, and your users keep hitting the table while that is happening. I would go with TRUNCATE because it's almost always faster.
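One way to close the empty-table window described in the question is to keep the truncate and the reload inside a single transaction, so readers are blocked briefly but never see an empty table. A sketch, assuming the data is reloaded from a staging table (dbo.appInfo_staging is a placeholder name):

BEGIN TRANSACTION;

TRUNCATE TABLE appInfo;  -- takes a schema-modification lock, held until commit

INSERT INTO appInfo
SELECT * FROM dbo.appInfo_staging;  -- repopulate before anyone can read the empty table

COMMIT TRANSACTION;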
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback
commit
-- end code
You could say "just do the log before the transaction starts", but that is not as easy because the transaction starts before this sproc is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (declared with @, not a #temp table) to hold the log info. Table variables survive a transaction rollback.
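A minimal demonstration of that behaviour (dbo.logtable is from the question; its Msg column is an assumption):

DECLARE @log table (Msg varchar(5000));

BEGIN TRAN;
INSERT INTO @log (Msg) VALUES ('about to run the main code');
-- [[main code here]]
ROLLBACK TRAN;

-- The table variable kept its row through the rollback:
INSERT INTO dbo.logtable (Msg) SELECT Msg FROM @log;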
I do this one of two ways, depending on my needs at the time. Both involve using a variable, since variables retain their value following a rollback.
1) Declare a @Log varchar(max) variable and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction, and insert it into the log after the commit or the rollback as necessary. I'll usually only insert the @Log value into the real log table when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Declare a @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as you progress through the transaction. I like using the OUTPUT clause to capture the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I'll insert its contents into the actual log table after the commit or the rollback as necessary.
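A sketch of the second approach with the OUTPUT clause (dbo.Items with its ItemID and Discontinued columns, and dbo.RealLog, are hypothetical names):

DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));

BEGIN TRAN;

-- OUTPUT captures the IDs of the rows actually deleted, straight into the table variable:
DELETE dbo.Items
OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(10)) INTO @LogTable (RowValue)
WHERE Discontinued = 1;

ROLLBACK TRAN;  -- the deletes are undone, but @LogTable keeps its rows

INSERT INTO dbo.RealLog (RowValue) SELECT RowValue FROM @LogTable;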
If the parent transaction rolls back, the logging data will roll back as well; SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. It can open its own connection to the database outside the transaction and insert and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure and is able to alter the stored proc:
"The C# code starts a transaction, then calls a sproc, and at the end it commits or rolls back the transaction. I only have easy access to the sproc."
I had a similar situation, so I modified the stored procedure to log my desired output to a table and then put a time delay at the end of the stored procedure:
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
Then, in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH(NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose: being able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year-old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour, you can use named transactions and savepoints:
begin transaction a
create table #a (i int)
select * from #a          -- #a exists: returns an empty result set
save transaction b        -- set a savepoint named b
create table #b (i int)
select * from #a
select * from #b          -- #b exists: returns an empty result set
rollback transaction b    -- roll back to the savepoint: #b is gone, #a survives
select * from #a          -- still works
rollback transaction a    -- roll back the outer transaction: #a is gone too
In SQL Server, if you want a "sub-transaction" you should use save transaction xxxx, which works like an Oracle savepoint.
I have a website which is used by all branches of a store; it records customer purchases into a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial number to it. The stored procedure that does this calls a UDF to get a new SerialNumber before inserting the record, like below:
CREATE PROCEDURE mytransaction_Insert
AS BEGIN
    INSERT INTO myTransactions (column1, column2, column3, ..., SerialNumber)
    VALUES (Value1, Value2, Value3, ..., dbo.getTransactionNSerialNumber());
END
CREATE FUNCTION dbo.getTransactionNSerialNumber()
RETURNS int
AS
BEGIN
    RETURN ISNULL((SELECT TOP (1) SerialNumber
                   FROM myTransactions WITH (READUNCOMMITTED)
                   ORDER BY SerialNumber DESC), 0) + 1;
END
The website is being used by many users in different stores at the same time, and it is creating many duplicate (identical) SerialNumbers. So I wrapped the insert in a SQL transaction at the READ COMMITTED level and still got duplicate serial numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate serial numbers (!!HOW!!) but also sporadic deadlocks between calls to the same stored procedure. This is what I tried (with try/catch blocks and rollbacks omitted):
CREATE PROCEDURE mytransaction_Insert
AS BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    BEGIN TRANSACTION ins;
        INSERT INTO myTransactions (column1, column2, column3, ..., SerialNumber)
        VALUES (Value1, Value2, Value3, ..., dbo.getTransactionNSerialNumber());
    COMMIT TRANSACTION ins;

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
I even copied the body of the serial number function directly into the stored procedure instead of calling the UDF, and still got duplicate SerialNumbers. So, how can a stored procedure create something like the C# lock() {} block?
By the way, I have to implement the transaction serial number using this same pattern, and I can't change SerialNumber to an identity field or anything similar. For various reasons the SerialNumber has to be generated inside the database, and its generation can't be moved to the application level.
Sorry, but I have already tried this without READUNCOMMITTED in the function and I still get duplicate SerialNumbers.
As for the IDENTITY column: this app is going to be used by other companies that require different serial numbering schemes, so we can't simply change the column to an identity.
You have READUNCOMMITTED in the UDF. This will cause it to ignore any exclusive locks held by other transactions.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE is not the same as an applock; it is not a "critical section" in the database, it just controls the locking behaviour of subsequent statements in the transaction.
Take out the READUNCOMMITTED and it should start working as you expect.
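A sketch of what that can look like once READUNCOMMITTED is removed; the UPDLOCK and HOLDLOCK hints are an assumption added here so that concurrent callers queue on the read instead of both seeing the same maximum (column1/@Value1 stand in for the real column and parameter lists):

CREATE PROCEDURE mytransaction_Insert
    @Value1 varchar(100)  -- placeholder for the real parameter list
AS BEGIN
    BEGIN TRANSACTION;

    DECLARE @next int;

    -- UPDLOCK + HOLDLOCK keep this read locked until commit,
    -- so two concurrent calls cannot see the same maximum.
    SELECT @next = ISNULL(MAX(SerialNumber), 0) + 1
    FROM myTransactions WITH (UPDLOCK, HOLDLOCK);

    INSERT INTO myTransactions (column1, SerialNumber)
    VALUES (@Value1, @next);

    COMMIT TRANSACTION;
END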
Of course, this ignores the fact that you've essentially re-implemented an IDENTITY column. If your serial numbers are really incremental then you should throw all of this away and replace it with a simple IDENTITY column. You claim that you "can't", but don't provide any justification for that statement; it looks to me like you almost certainly can.
What you are missing is a unique constraint (or primary key) on the SerialNumber column of your transactions table. If you had that, a duplicate entry would fail instead of being committed.
But I would state clearly that you should use an IDENTITY column (as said by @Aaronaught) in SQL. It will start at whatever seed you give it and increment forward or backward. If you need your orders to start at a given number, seed it there. And if you need an identifier that is unique and also happens to be an integer value, use identity.
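To make that concrete, a hypothetical table definition combining both suggestions (the seed of 1000 and the column names are illustrative only):

CREATE TABLE myTransactions (
    SerialNumber int IDENTITY(1000, 1) NOT NULL,  -- starts at 1000, increments by 1
    column1 varchar(100) NULL,
    CONSTRAINT UQ_myTransactions_SerialNumber UNIQUE (SerialNumber)  -- rejects any duplicate
);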