Transact-SQL transaction rollback not working properly when using GO commands - sql-server

I have a migration script written in Transact-SQL which is using transactions in order to have a proper rollback if something goes wrong during the execution.
Unfortunately, this rollback behaviour is not working as expected when I'm using some GO utility statements in my script.
The issue can be reproduced with a simple script:
BEGIN TRANSACTION
-- Create a table with two nullable columns
CREATE TABLE [dbo].[t1](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
-- add one row having one NULL column
INSERT INTO [dbo].[t1] VALUES(NEWID(), NULL)
-- set one column as NOT NULLABLE
-- this fails because of the previous insert
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL
GO
-- create a table as next action, so that we can test whether the rollback happened properly
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NOT NULL
)
GO
COMMIT TRANSACTION
When I execute this script, I get the following output:
(1 row affected)
Msg 515, Level 16, State 2, Line 23
Cannot insert the value NULL into column 'name', table 'test-transaction.dbo.t1'; column does not allow nulls. UPDATE fails.
The statement has been terminated.
Msg 3902, Level 16, State 1, Line 31
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
As expected, it complains that the column 'name' contains a NULL value, but only the corresponding GO batch fails. The next batch is executed and the table t2 is successfully created.
My understanding of the GO documentation is that it should not impact T-SQL transactions, but this is not the case in my example.
How can I make the whole transaction roll back if any of the GO batches fails?
PS: if I remove the GO statements, the transaction rollback works as expected. But I do need those GO statements in order to ensure that some parts of the script are executed before others.

Some errors roll back the transaction. Don't bother figuring out which ones, because there's no simple rule.
A multi-batch script should have a single error-handler scope that rolls back the transaction on error and commits at the end. In T-SQL you can do this with dynamic SQL, e.g.:
BEGIN TRANSACTION
BEGIN TRY
EXEC('
-- Create a table with two nullable columns
CREATE TABLE [dbo].[t1](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
')
EXEC('
-- add one row having one NULL column
INSERT INTO [dbo].[t1] VALUES(NEWID(), NULL)
')
-- set one column as NOT NULLABLE
-- this fails because of the previous insert
EXEC('
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL
')
EXEC('
-- create a table as next action, so that we can test whether the rollback happened properly
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NOT NULL
)
')
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK;
THROW;
END CATCH
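A different sketch, not part of the answer above: SET XACT_ABORT ON makes most run-time errors doom the whole transaction and abort the current batch, so every later batch only needs a @@TRANCOUNT guard. Statements here are taken from the question's script:
SET XACT_ABORT ON;  -- any run-time error now rolls back the open transaction
BEGIN TRANSACTION;
GO
IF @@TRANCOUNT = 0 RETURN;  -- transaction already rolled back; skip this batch
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL;
GO
IF @@TRANCOUNT = 0 RETURN;
CREATE TABLE [dbo].[t2]([id] [nvarchar](36) NOT NULL);
GO
IF @@TRANCOUNT > 0 COMMIT TRANSACTION;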
With SQLCMD you can use the -b option to abort the script on error.
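For example (the file name and connection details here are assumed):
sqlcmd -S myServer -d myDatabase -E -b -i migration.sql
With -b, sqlcmd stops sending batches after the first error and returns a non-zero exit code, instead of carrying on with the later batches the way SSMS does.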

Related

Insert values in second table using triggers

I have one table called [FridgeTemperture]; when any record is inserted, it should add one value to the new table MpSensors. But records are not being inserted into the new table when a record is inserted.
Error
Explicit value must be specified for identity column in table
'MpSensors' either when IDENTITY_INSERT is set to ON or when a replication
user is inserting into a NOT FOR REPLICATION identity column.
CREATE TRIGGER [dbo].[FridgeTemperature_INSERT]
ON [dbo].[FridgeTemperture]
AFTER INSERT
AS
BEGIN
SET IDENTITY_INSERT MpSensors ON;
SET NOCOUNT ON;
DECLARE @fridge_temp varchar(10)
INSERT INTO MpSensors(fridge_temp)
VALUES(@fridge_temp)
SET IDENTITY_INSERT MpSensors OFF;
END
GO
table schema
CREATE TABLE [dbo].[MpSensors](
[id] [int] IDENTITY(1,1) NOT NULL,
[fridge_temp] [varchar](10) NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[FridgeTemperture](
[Id] [int] IDENTITY(1,1) NOT NULL,
[ShopId] [nvarchar](4) NULL,
[Fridgetemp] [decimal](4, 2) NOT NULL,
[UpdatedDate] [datetime2](7) NOT NULL
)
GO
You don't need SET IDENTITY_INSERT ON if you are not attempting to insert values into the identity column. Also, your current INSERT statement, if you lose the SET IDENTITY_INSERT, will simply insert a single NULL row for every INSERT statement completed successfully on the FridgeTemperture table.
When using triggers, you have access to the records affected by the statement that fired the trigger via the auto-generated tables called inserted and deleted.
I think you are after something like this:
CREATE TRIGGER [dbo].[FridgeTemperature_INSERT]
ON [dbo].[FridgeTemperture]
AFTER INSERT
AS
BEGIN
INSERT INTO MpSensors(fridge_temp)
SELECT CAST(Fridgetemp as varchar(10))
FROM inserted
END
Though I can't really see any benefit of storing the same value in two different places, and in two different data types.
Update
Following our conversation in the comments, you can simply use an update statement in the trigger instead of an insert statement:
UPDATE MpSensors
SET fridge_temp = (
SELECT TOP 1 CAST(Fridgetemp as varchar(10))
FROM inserted
ORDER BY Id DESC
)
This should give you the latest record in case you have an insert statement that inserts more than a single record into the FridgeTemperture table in a single statement.
CREATE TRIGGER [dbo].[FridgeTemperature_INSERT]
ON [dbo].[FridgeTemperture]
AFTER INSERT
AS
BEGIN
UPDATE MpSensors
SET fridge_temp = CAST(Fridgetemp as varchar(10))
FROM inserted
END
You need to use a SELECT statement with CAST in the trigger, as [fridge_temp] is a varchar in the MpSensors table. Try it like this:
CREATE TRIGGER <trigger_name>
ON <table_name>
AFTER Insert
AS
BEGIN
INSERT INTO <table_name>(column_name)
Select CAST(column_name as varchar(10))
FROM inserted
END
The inserted table stores copies of the affected rows during INSERT and UPDATE statements. During an insert or update transaction, new rows are added to both the inserted table and the trigger table. The rows in the inserted table are copies of the new rows in the trigger table.
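To see why the set-based form matters, here is a hypothetical demo against the schema from the question: a single INSERT of three rows fires the trigger once, with all three rows present in inserted.
-- one statement, one trigger firing, three rows in inserted
INSERT INTO [dbo].[FridgeTemperture] (ShopId, Fridgetemp, UpdatedDate)
VALUES (N'S1', 3.50, SYSDATETIME()),
(N'S1', 4.10, SYSDATETIME()),
(N'S2', 2.75, SYSDATETIME())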

TRANSACTION rollback not working as expected

I'm doing some DB schema re-structuring.
I have a script that looks broadly like this:
BEGIN TRAN LabelledTransaction
--Remove FKs
ALTER TABLE myOtherTable1 DROP CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 DROP CONSTRAINT <constraintStuff>
--Remove PK
ALTER TABLE myTable DROP CONSTRAINT PK_for_myTable
--Add replacement id column with new type and IDENTITY
ALTER TABLE myTable ADD id_new int Identity(1, 1) NOT NULL
GO
ALTER TABLE myTable ADD CONSTRAINT PK_for_myTable PRIMARY KEY CLUSTERED (id_new)
GO
SELECT * FROM myTable
--Change referencing table types
ALTER TABLE myOtherTable1 ALTER COLUMN col_id int NULL
ALTER TABLE myOtherTable2 ALTER COLUMN col_id int NOT NULL
--Change referencing table values
UPDATE myOtherTable1 SET consignment_id = Target.id_new FROM myOtherTable1 AS Source JOIN <on key table>
UPDATE myOtherTable2 SET consignment_id = Target.id_new FROM myOtherTable2 AS Source JOIN <on key table>
--Replace old column with new column
ALTER TABLE myTable DROP COLUMN col_id
GO
EXEC sp_rename 'myTable.id_new', 'col_id', 'Column'
GO
--Reinstate any OTHER PKs disabled
ALTER TABLE myTable ADD CONSTRAINT <PK defn>
--Reinstate FKs
ALTER TABLE myOtherTable1 WITH CHECK ADD CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 WITH CHECK ADD CONSTRAINT <constraintStuff>
SELECT * FROM myTable
-- Reload out-of-date views
EXEC sp_refreshview 'someView'
-- Remove obsolete sequence
DROP SEQUENCE mySeq
ROLLBACK TRAN LabelledTransaction
Obviously that's all somewhat redacted, but the fine detail isn't the important thing in here.
Naturally, it's quite hard to locate all the things that need to be turned off/edited before the core change (even with some meta-queries to help me), so I don't always get the script correct first time.
But I put in the ROLLBACK in order to ensure that the failed attempts left the DB unchanged.
But what I actually see is that the ROLLBACK doesn't occur if there were errors in the TRAN. I think I get errors about "no matching TRAN for the rollback"?
My first instinct was that it was about the GO statements, but https://stackoverflow.com/a/11121382/1662268 suggests that labeling the TRAN should have fixed that?
What's happening? Why don't the changes get rolled back properly if there are errors?
How can I write and test these scripts in such a way that I don't have to manually revert any partial changes if the script isn't perfect first time?
EDIT:
Additional comments based on the first answer.
If the linked answer is not applicable to this query, could you expand on why that is, and why it's different from the example that they had given in their answer?
I can't (or rather, I believe that I can't) remove the GOs, because the script above requires the GOs in order to compile. If I remove the GOs then later statements that depend on the newly added/renamed columns don't compile, and the query can't run.
Is there any way to work around this, to remove the GOs?
If you have any error which automatically causes the transaction to be rolled back then the transaction will roll back as part of the current batch.
Then control will return to the client tool, which will then send the next batch to the server, and this next batch (and subsequent ones) will not be wrapped in any transaction.
Finally, when the final batch is executed that tries to run the rollback then you'll get the error message you received.
So, you need to protect each batch from running when it's not protected by a transaction.
One way to do it would be to insert our old friend GOTO:
GO
IF @@TRANCOUNT=0 GOTO NBATCH
...Rest of Code
NBATCH:
GO
or SET FMTONLY:
GO
IF @@TRANCOUNT=0 BEGIN
SET FMTONLY ON
END
...Rest of Code
GO
Of course, this won't address all issues - some statements need to be the first or only statement in a batch. To resolve these, we have to combine one of the above techniques with an EXEC of some form:
GO
IF @@TRANCOUNT=0 BEGIN
SET FMTONLY ON
END
EXEC sp_executesql N'/*Code that needs to be in its own batch*/'
GO
(You'll also have to employ this technique if a batch of code relies on work a previous batch has performed which introduces new database objects (tables, columns, etc), since if that previous batch never executed, the new object will not exist)
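For instance, applied to one of the statements from the question's script (a sketch combining the FMTONLY guard above with sp_executesql; PK_for_myTable and id_new are the names used in the question):
GO
IF @@TRANCOUNT=0 BEGIN
SET FMTONLY ON
END
-- id_new only exists if the earlier batches actually ran,
-- so defer compilation of this statement to run time:
EXEC sp_executesql N'ALTER TABLE myTable ADD CONSTRAINT PK_for_myTable PRIMARY KEY CLUSTERED (id_new)'
GO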
I've also just discovered the existence of the -b option for the sqlcmd tool. The following script generates two errors when run through SSMS:
begin transaction
go
set xact_abort on
go
create table T(ID int not null,constraint CK_ID check (ID=4))
go
insert into T(ID) values (3)
go
rollback
Errors:
Msg 547, Level 16, State 0, Line 7
The INSERT statement conflicted with the CHECK constraint "CK_ID". The conflict occurred in database "TestDB", table "dbo.T", column 'ID'.
Msg 3903, Level 16, State 1, Line 9
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
However, the same script saved as Abortable.sql and run with the following commandline:
sqlcmd -b -E -i Abortable.sql -S .\SQL2014 -d TestDB
Generates the single error:
Msg 547, Level 16, State 1, Server .\SQL2014, Line 1
The INSERT statement conflicted with the CHECK constraint "CK_ID". The conflict
occurred in database "TestDB", table "dbo.T", column 'ID'.
So, it looks like running your scripts from the commandline and using the -b option may be another approach to take. I've just scoured the SSMS options/properties to see if I can find something equivalent to -b but I've not found it.
Remove the 'GO'; that finishes the transaction.
Only ROLLBACK if it completes; just use TRY/CATCH:
BEGIN TRANSACTION;
BEGIN TRY
--Remove FKs
ALTER TABLE myOtherTable1 DROP CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 DROP CONSTRAINT <constraintStuff>
--Remove PK
ALTER TABLE myTable DROP CONSTRAINT PK_for_myTable
--Add replacement id column with new type and IDENTITY
ALTER TABLE myTable ADD id_new int Identity(1, 1) NOT NULL
ALTER TABLE myTable ADD CONSTRAINT PK_for_myTable PRIMARY KEY CLUSTERED (id_new)
SELECT * FROM myTable
--Change referencing table types
ALTER TABLE myOtherTable1 ALTER COLUMN col_id int NULL
ALTER TABLE myOtherTable2 ALTER COLUMN col_id int NOT NULL
--Change referencing table values
UPDATE myOtherTable1 SET consignment_id = Target.id_new FROM myOtherTable1 AS Source JOIN <on key table>
UPDATE myOtherTable2 SET consignment_id = Target.id_new FROM myOtherTable2 AS Source JOIN <on key table>
--Replace old column with new column
ALTER TABLE myTable DROP COLUMN col_id
EXEC sp_rename 'myTable.id_new', 'col_id', 'Column'
--Reinstate any OTHER PKs disabled
ALTER TABLE myTable ADD CONSTRAINT <PK defn>
--Reinstate FKs
ALTER TABLE myOtherTable1 WITH CHECK ADD CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 WITH CHECK ADD CONSTRAINT <constraintStuff>
SELECT * FROM myTable
-- Reload out-of-date views
EXEC sp_refreshview 'someView'
-- Remove obsolete sequence
DROP SEQUENCE mySeq
ROLLBACK TRANSACTION
END TRY
BEGIN CATCH
print 'Error caught'
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION; -- undo any partial work if the TRY failed
select ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;

MSSQL trigger or constraint to prevent or allow update/delete based on column value

I have a table of Customers, and I want to prevent update/insert when the row has Status column that is anything other than 1.
What would be the best way to create that kind of functionality directly on the server?
You can create a stored procedure and force its usage to update the table.
create procedure updateCustomers @custID int, @newValue nvarchar(50)
with execute as 'superUser'
as
update customers
set someColumn=@newValue
where custID=@custID and status=1
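To actually force its usage, deny direct modification rights on the table and grant EXECUTE on the procedure instead; a minimal sketch, with a hypothetical role name:
-- someAppRole is a hypothetical role; substitute your own principals
DENY UPDATE ON dbo.customers TO someAppRole;
GRANT EXECUTE ON dbo.updateCustomers TO someAppRole;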
1) If you want to prevent any INSERT/UPDATE action when the new Status is different from 1, then the following trigger will prevent such actions. Anyway, this requirement makes no sense for a trigger because it's much simpler to use a check constraint thus: ALTER TABLE dbo.Customer ADD CONSTRAINT ... CHECK ([Status] = 1) (spelled out after the trigger below).
CREATE TRIGGER dbo.trgIU_Customer_PreventIU
ON dbo.Customer
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON
IF EXISTS(SELECT * FROM inserted i WHERE i.[Status] IS NOT NULL AND i.[Status] <> 1)
BEGIN
RAISERROR('Can''t insert or update rows with Status <> 1', 16, 1)
ROLLBACK
END
END
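For reference, the check-constraint alternative mentioned in 1), written out with a hypothetical constraint name (a NULL Status still passes, because a CHECK only rejects rows for which the predicate is false, matching the trigger's NULL handling):
ALTER TABLE dbo.Customer
ADD CONSTRAINT CK_Customer_Status CHECK ([Status] = 1);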
2) But if you want to simply avoid any update when current [Status] is 1 then following trigger could be used:
CREATE TRIGGER dbo.trgU_Customer_PreventU
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON
IF EXISTS(SELECT * FROM deleted d WHERE d.[Status] = 1)
BEGIN
RAISERROR('Can''t update rows with Status = 1', 16, 1)
ROLLBACK
END
END
You can try something like this:
CREATE TRIGGER [dbo].[PreventUpdate]
ON [dbo].[Customers]
INSTEAD OF UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF EXISTS
(
-- deleted holds the current values of the rows being updated
SELECT 1 FROM deleted d
WHERE d.Status <> 1 )
BEGIN
RAISERROR(...);
END
ELSE
BEGIN
-- Your UPDATE code goes here
END
END
GO
What would be the best way
Do not use a trigger for this purpose. Triggers exist to enforce referential integrity, and using them for other purposes will lead to tears.
Deny users privileges on the table, and provide stored procedures instead, such as doublet28 suggests. That gives you the opportunity to provide a meaningful error message, and leaves the DBA free to update the table without tripping on the trigger (which will be needed sooner than later).

Null values INSERTED in trigger

I want to copy content of one table to another table in the same database.
For this I wrote a trigger on the source table which fires AFTER INSERT, UPDATE. There are 2 uniqueidentifier fields in the table which generate values based on newid() as the default binding. Based on this uniqueidentifier I check whether the record is present in the destination table: if it is present it will update, and if not it will insert the dataset into the table.
The problem is that when I insert a new record, the inserted table in the trigger gives me NULL values for the uniqueidentifier fields.
In my case only one row is either updated or inserted, so no cursor is used.
Below is my code; I am getting null values in @OriginalTable_MoveDataUID and @OriginalTable_ProcedureUID. Both MoveDataUID and ProcedureUID are uniqueidentifier fields.
Please share your thoughts or any alternative for this.
ALTER TRIGGER [dbo].[spec_ref_movedata_procedures_ToUpdate]
ON [dbo].[spec_ref_movedata_procedures]
AFTER INSERT, UPDATE
AS
BEGIN
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRANSACTION
DECLARE @OriginalTable_MoveDataUID NVarchar (100)
DECLARE @OriginalTable_ProcedureUID NVarchar (100)
DECLARE @PresentInHistoryYesNo int
SELECT @OriginalTable_MoveDataUID = MoveDataUID, @OriginalTable_ProcedureUID = ProcedureUID FROM INSERTED
-- inserted for checking purpose
INSERT INTO ERP_Test_NK_spec_ref_movedata_procedures_history_2 (MovedataUID,ProcedureUID) VALUES
(@OriginalTable_MoveDataUID,@OriginalTable_ProcedureUID)
SELECT @PresentInHistoryYesNo = count(*) from spec_ref_movedata_procedures_history WHERE MoveDataUID=@OriginalTable_MoveDataUID AND ProcedureUID=@OriginalTable_ProcedureUID
IF @PresentInHistoryYesNo = 0
BEGIN
-- insert operations
print 'insert record'
END
ELSE IF @PresentInHistoryYesNo = 1
BEGIN
-- update operations
print 'update record'
END
COMMIT TRANSACTION
SET XACT_ABORT OFF
END
Instead of using variables, you could do this:
INSERT INTO ERP_Test_NK_spec_ref_movedata_procedures_history_2 (MovedataUID,ProcedureUID)
SELECT MoveDataUID,ProcedureUID FROM INSERTED
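The existence check can be made set-based in the same way. A sketch against the question's history table; the actual insert/update operations are elided in the question, so only the insert-if-missing branch is shown:
-- add history rows only for inserted rows not already present
INSERT INTO spec_ref_movedata_procedures_history (MoveDataUID, ProcedureUID)
SELECT i.MoveDataUID, i.ProcedureUID
FROM inserted i
WHERE NOT EXISTS (
SELECT 1 FROM spec_ref_movedata_procedures_history h
WHERE h.MoveDataUID = i.MoveDataUID AND h.ProcedureUID = i.ProcedureUID
)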

DROP TABLE fails for temp table

I have a client application that creates a temp table, then performs a bulk insert into the temp table, then executes some SQL using the table before deleting it.
Pseudo-code:
open connection
begin transaction
CREATE TABLE #Temp ([Id] int NOT NULL)
bulk insert 500 rows into #Temp
UPDATE [OtherTable] SET [Status]=0 WHERE [Id] IN (SELECT [Id] FROM #Temp) AND [Group]=1
DELETE FROM #Temp WHERE [Id] IN (SELECT [Id] FROM [OtherTable] WHERE [Group]=1)
INSERT INTO [OtherTable] ([Group], [Id]) SELECT 1 as [Group], [DocIden] FROM #Temp
DROP TABLE #Temp
COMMIT TRANSACTION
CLOSE CONNECTION
This is failing with an error on the DROP statement:
Cannot drop the table '#Temp', because it does not exist or you do not have permission.
I can't imagine how this failure could occur without something else going on first, but I don't see any other failures occurring before this.
Is there anything that I'm missing that could be causing this to happen?
Possibly something is happening in the session in between?
Try checking for the existence of the table before it's dropped:
IF object_id('tempdb..#Temp') is not null
BEGIN
DROP TABLE #Temp
END
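On SQL Server 2016 and later, the same guard can be written in one statement:
DROP TABLE IF EXISTS #Temp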
I've tested this on SQL Server 2005, and you can drop a temporary table in the transaction that created it:
begin transaction
create table #temp (id int)
drop table #temp
commit transaction
Which version of SQL Server are you using?
You might reconsider why you are dropping the temp table at all. A local temporary table is automatically deleted when the connection ends. There's usually no need to drop it explicitly.
A global temporary table starts with a double hash (e.g. ##MyTable). But even a global temp table is automatically deleted when no connection refers to it.
I think you aren't creating the table at all, because the statement
CREATE TABLE #Temp ([Id] AS int)
is incorrect. Please, write it as
CREATE TABLE #Temp ([Id] int)
and see if it works.
BEGIN TRAN
IF object_id('tempdb..#TABLE_NAME') is not null
BEGIN
DROP TABLE #TABLE_NAME
END
COMMIT TRAN
Note: replace TABLE_NAME with your temp table's name. Temporary tables always live in tempdb, so that is the database to check, regardless of your current database.
