I have a parent and a child table in MS SQL Server 2008 and I am trying to perform a save call using Hibernate with cascade enabled.
Table specs that matter: the parent table has an identity column and the child table has a varchar column of length 50.
Environment: Spring, JBoss 7.1.1, Hibernate 3, JTA transactions (the transaction is started/committed/rolled back from the code by obtaining it from JNDI).
The issue arises when the data I insert into the child table's varchar column is longer than 50 characters.
The insert on the child table is fired only when I execute the commit, and it fails because of the column length. I get an exception, catch it, and perform a rollback, but the rollback fails because the transaction is already in an inactive state.
Issue: the parent data is getting committed, which is not what I want. How do I make sure that the transaction is rolled back completely?
My stored procedure looks like this.
I'm maintaining a RowVersion column in Table A.
BEGIN TRANSACTION;

SELECT @rw1 = RowVersion FROM TableA;   -- read RowVersion as rw1

-- ... some calculations ...

SELECT @rw2 = RowVersion FROM TableA;   -- read RowVersion again as rw2

-- update some tables, including TableA

IF (@rw1 = @rw2)
    COMMIT TRANSACTION;
ELSE
    ROLLBACK TRANSACTION;
Currently I'm using READ COMMITTED as the isolation level, but when the procedure updates Table A, the RowVersion also changes.
Goal: when two or more users are logged in to the system and press a button at the same time (which will execute this SP), only the first one should actually execute the SP; the others should not be allowed to, as sketched below.
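One way to get that serialization is an exclusive application lock, so a second caller fails (or waits) instead of running the body. A minimal sketch using sp_getapplock; the resource name 'TableA_button' is a made-up placeholder:

BEGIN TRANSACTION;

DECLARE @res int;
-- Take an exclusive app lock tied to this transaction;
-- @LockTimeout = 0 makes a second caller fail immediately instead of waiting.
EXEC @res = sp_getapplock @Resource = 'TableA_button',
                          @LockMode = 'Exclusive',
                          @LockOwner = 'Transaction',
                          @LockTimeout = 0;
IF @res < 0
BEGIN
    ROLLBACK TRANSACTION;  -- someone else got there first
    RETURN;
END;

-- ... read RowVersion, do the calculations, update the tables ...

COMMIT TRANSACTION;  -- the app lock is released with the transaction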
I have three tables in SQL Server:
Employee
EmployeeDetails
EmployeeHistory
I wrote a trigger on the Employee table so that if an entry is inserted into Employee, it also inserts a row into EmployeeHistory; that is working fine.
Now I have created a stored procedure with a transaction that inserts records into Employee and then EmployeeDetails. If something goes wrong after the record has been inserted into Employee and the transaction is rolled back, will the row inserted into EmployeeHistory also be removed or not?
By default, the option XACT_ABORT is ON inside a trigger. When this option is ON, any error that occurs aborts the batch, so your whole transaction will be rolled back.
The Microsoft documentation says:
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When SET XACT_ABORT is OFF, in some cases only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Depending upon the severity of the error, the entire transaction may be rolled back even when SET XACT_ABORT is OFF.
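A trigger always runs inside the transaction of the statement that fired it, so rolling back the outer transaction removes the history row too. A minimal repro sketch, assuming simplified, initially empty tables with made-up columns:

BEGIN TRANSACTION;

INSERT INTO Employee (Name) VALUES ('Alice');  -- trigger fires, inserts into EmployeeHistory

-- Something goes wrong before EmployeeDetails is populated:
ROLLBACK TRANSACTION;

SELECT COUNT(*) FROM Employee;         -- 0: the insert is gone
SELECT COUNT(*) FROM EmployeeHistory;  -- 0: the trigger's insert is gone too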
I have created a database transaction and I am inserting records into Table1 of an H2 database, with no commits done yet.
In the middle of this process, after inserting half of the records, I execute one CREATE statement (creating Table2).
Table2 is created, and along with it the previous INSERT statements also get committed to the DB.
After this, I insert more records into Table1. If an insertion fails, I still see the records in Table1 that were inserted before the CREATE statement for Table2.
Because of this, I see some records in the DB even after the transaction fails, when I was expecting ZERO records.
Why is this happening?
Because CREATE TABLE is a DDL statement, not a DML statement, and a DDL statement usually commits any open transaction.
If you want to avoid this, you should create all the objects you need for the import before you import the first record. The sketch below shows the effect.
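A minimal sketch of the behaviour, assuming an H2 session with auto-commit turned off; the table and values are placeholders:

SET AUTOCOMMIT FALSE;

INSERT INTO Table1 VALUES (1);   -- not committed yet
CREATE TABLE Table2 (id INT);    -- DDL: implicitly commits the INSERT above
INSERT INTO Table1 VALUES (2);   -- belongs to a new transaction
ROLLBACK;                        -- only undoes the second INSERT

SELECT * FROM Table1;            -- row 1 is still there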
EDIT 2019-03-22
Although this topic is a bit old, I'd like to mention one thing that could help. In Oracle you could create a procedure that uses PRAGMA AUTONOMOUS_TRANSACTION and executes a SQL statement via EXECUTE IMMEDIATE:
PROCEDURE exec_sql_autonomous(p_sql VARCHAR2)
AS
    -- Runs p_sql in its own transaction, independent of the caller's.
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    EXECUTE IMMEDIATE p_sql;
    COMMIT;  -- commits only the autonomous transaction
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        RAISE;
END;
This way you may be able to create a table while the data-inserting transaction is in progress, without that transaction being committed by the table creation.
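A call might look like this (the DDL string is just an example):

BEGIN
    exec_sql_autonomous('CREATE TABLE Table2 (id NUMBER)');
END;
/

The CREATE TABLE is committed inside the autonomous transaction, while the caller's pending inserts stay uncommitted.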
Right now I am having an issue with a stored procedure that is locking up when running.
It's a conversion from Sybase.
The stored procedure originally would do
TRUNCATE TABLE appInfo
Then it repopulates the data within the same stored procedure, but in SQL Server this seems to be causing locks for the users.
It's not a high-traffic database.
The change I tried was to do
BEGIN TRAN
DELETE FROM appInfo
COMMIT TRAN
Then repopulate the data, but with this the users are getting a NO_DATA_FOUND error.
So if I TRUNCATE, they get data, but it causes a lock.
If I do a DELETE, there is no data found.
Does anyone have any insight into this condition and a solution? I was thinking of moving the TRUNCATE out to a separate stored procedure called from within the parent procedure, but that might just be pushing the issue down the road rather than actually solving it.
Thanks in advance
When you truncate a table, the entire table is locked (from MSDN, https://technet.microsoft.com/en-us/library/ms177570%28v=sql.105%29.aspx: "TRUNCATE TABLE always locks the table and page but not each row"). When you issue a DELETE, it locks a row, deletes it, then locks the next row and deletes it, while your users continue to hit the table. I would go with TRUNCATE because it is almost always faster.
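As for the NO_DATA_FOUND error: if the DELETE is committed before the repopulation runs, readers can catch the table empty in between. A sketch that keeps the delete and the reload in one transaction, so readers see either the old rows or the new ones (the column list and source query are placeholders):

BEGIN TRAN;

DELETE FROM appInfo;  -- row locks; concurrent readers wait instead of seeing an empty table

INSERT INTO appInfo (appKey, appValue)  -- placeholder columns
SELECT appKey, appValue
FROM dbo.appInfoStaging;                -- placeholder source

COMMIT TRAN;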
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
-- start code
begin tran
   insert into dbo.logtable ([something])  -- log entry that should survive
   [[main code here]]
rollback  -- or commit
-- end code
You could say "just write the log before the transaction starts", but that is not as easy as it sounds, because the transaction starts before this S-Proc is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (e.g. DECLARE @log TABLE (...)) to hold the log info; table variables survive a transaction rollback.
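A minimal sketch of the pattern, assuming dbo.logtable has a single varchar message column (a made-up schema):

DECLARE @log TABLE (msg varchar(500));

BEGIN TRAN;
    INSERT INTO @log VALUES ('step 1 done');
    -- [[main code here]]
ROLLBACK;  -- the table variable keeps its rows

INSERT INTO dbo.logtable (msg)  -- persist the log after the rollback
SELECT msg FROM @log;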
I do this one of two ways, depending on my needs at the time. Both involve using a variable, since variables retain their value following a rollback.
1) Declare a @Log varchar(max) variable and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction, and insert the @Log value into the real log table after the commit or the rollback as necessary. I'll usually only do so when there is an error (in the CATCH block) or when I'm trying to debug a problem.
2) Declare a table variable such as DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) and insert into it as you progress through the transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I insert this table into the actual log table after the commit or the rollback as necessary.
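A sketch of the second approach with the OUTPUT clause; dbo.Items, dbo.RealLog, and the WHERE predicate are placeholders:

DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));

BEGIN TRAN;

    DELETE FROM dbo.Items
    OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(10))
        INTO @LogTable (RowValue)
    WHERE ItemID = 1234;  -- placeholder predicate

ROLLBACK;  -- @LogTable still holds the messages

INSERT INTO dbo.RealLog (Message)  -- placeholder log table
SELECT RowValue FROM @LogTable;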
If the parent transaction rolls back, the logging data will roll back as well, since SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging: it can open its own connection to the database outside the transaction, and enter and commit the log data there.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure and is able to alter it:
"The C# code starts a transaction, then calls an S-Proc, and at the end it commits or rolls back the transaction. I only have easy access to the S-Proc."
I had a similar situation, so I modified the stored procedure to log my desired output to a table and then put a time delay at the end of the stored procedure:
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH (NOLOCK) below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose: being able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year-old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested-transaction behaviour, you can use named transactions with savepoints:
begin transaction a
create table #a (i int)
select * from #a            -- #a exists (empty)
save transaction b          -- savepoint b
create table #b (i int)
select * from #a
select * from #b            -- #b exists (empty)
rollback transaction b      -- rolls back to savepoint b: #b is gone
select * from #a            -- #a is still there
rollback transaction a      -- rolls back the whole transaction: #a is gone too
In SQL Server, if you want a 'sub-transaction' you should use save transaction xxxx, which works like an Oracle savepoint.