Running SQL in one transaction has different results to running it separately (SQL Server)

I can't work out why this is the case, but if I run:
BEGIN TRANSACTION
ALTER TABLE [TABLE NAME] DISABLE TRIGGER [TRIGGER NAME];
-- Some query
ALTER TABLE [TABLE NAME] ENABLE TRIGGER [TRIGGER NAME];
COMMIT TRANSACTION
Where 'some query' depends on the trigger being disabled, I get an error (because the trigger has not actually been disabled).
However, if I run the ALTER statements separately, it works fine.
I've tried the DISABLE TRIGGER syntax instead, and I've tried BEGIN and END instead of BEGIN TRANSACTION etc.
What am I misunderstanding here? Why do these ALTER statements not appear to take effect 'in time'?
EDIT
I'd like to rephrase the question in favour of clarity to accompany the bounty:
Why must we separate batches of DDL and DML?

Since you are executing them in the same batch, you are altering the table AFTER SQL Server has compiled the batch:
1. SQL Server compiles your statement (including all the individual DDL and DML commands in the batch).
2. SQL Server alters your table to disable the trigger.
3. SQL Server runs your "query", but it was compiled assuming the trigger was there, so the trigger runs.
4. SQL Server alters your table to re-enable the trigger.
To solve this, you can still do it within a transaction, but you will need to separate the statements into batches: insert GO between them if running from SSMS or a similar tool, or issue the statements individually if calling from code.
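A minimal sketch of the batch-separated version (the table, trigger, and column names are placeholders). Note that GO is a batch separator understood by SSMS and sqlcmd, not a T-SQL statement, and an open transaction can span batches on the same connection:

```sql
BEGIN TRANSACTION
GO
ALTER TABLE dbo.MyTable DISABLE TRIGGER MyTrigger;
GO
-- Some query that relies on the trigger being disabled;
-- it is now compiled in its own batch, after the DISABLE took effect.
UPDATE dbo.MyTable SET SomeColumn = 1;
GO
ALTER TABLE dbo.MyTable ENABLE TRIGGER MyTrigger;
GO
COMMIT TRANSACTION
GO
```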

Mixing DDL and DML in a single transaction is not advised, as it can produce undesired results:
ALTER TABLE [TABLE NAME] DISABLE TRIGGER [TRIGGER NAME];
BEGIN TRANSACTION
-- Some query
COMMIT TRANSACTION
ALTER TABLE [TABLE NAME] ENABLE TRIGGER [TRIGGER NAME];

Related

Undo insert query in SQL Server

I ran an insert query on a table in SQL Server by mistake and 100 rows got inserted. I want to undo it but I do not have rollback.
You can only delete the rows now, but for the future you would need to surround your query in a transaction, or change the auto-commit behaviour manually.
By default, SSMS auto-commits each statement. To alter this behaviour, turn implicit transactions on:
SET IMPLICIT_TRANSACTIONS ON
more here:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-implicit-transactions-transact-sql?view=sql-server-ver15
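For future runs, a sketch of wrapping the insert in an explicit transaction so it can still be undone (table and column names here are placeholders):

```sql
BEGIN TRANSACTION;

INSERT INTO dbo.MyTable (SomeColumn)
VALUES (1);

-- Inspect the result before deciding what to do:
SELECT * FROM dbo.MyTable;

ROLLBACK TRANSACTION;   -- undo the insert
-- or: COMMIT TRANSACTION;   -- keep it
```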

How do I make ALTER COLUMN idempotent?

I have a migration script with the following statement:
ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] int NOT NULL
What will happen if I run that twice? Will it change anything the second time? MS SQL Management Studio just reports "Command(s) completed successfully", but with no details on whether they actually did anything.
If it's not already idempotent, how do I make it so?
I would say that the second time, SQL Server checks the metadata, sees that nothing has changed, and does nothing.
But if you don't like the possibility of multiple executions, you can add a simple condition to your script:
CREATE TABLE Tasks(SortOrder VARCHAR(100));

IF NOT EXISTS (SELECT 1
               FROM INFORMATION_SCHEMA.COLUMNS
               WHERE [TABLE_NAME] = 'Tasks'
                 AND [COLUMN_NAME] = 'SortOrder'
                 AND IS_NULLABLE = 'NO'
                 AND DATA_TYPE = 'INT')
BEGIN
    ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] INT NOT NULL
END
SqlFiddleDemo
When you execute it a second time, the query runs, but since the table has already been altered it has no effect; the script makes no change when executed twice.
Here is a good MSDN read about this: Inside ALTER TABLE:
Let's look at what SQL Server does internally when performing an ALTER TABLE command. SQL Server can carry out an ALTER TABLE command in any of three ways:
- SQL Server might need to change only metadata.
- SQL Server might need to examine all the existing data to make sure it's compatible with the change, but then change only metadata.
- SQL Server might need to physically change every row.
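As an illustration of those three cases (the dbo.Demo table is hypothetical, and which case applies can vary by SQL Server version and settings):

```sql
CREATE TABLE dbo.Demo (SortOrder INT NULL, Notes VARCHAR(100) NULL);

-- 1. Metadata only: adding a nullable column needs no data changes.
ALTER TABLE dbo.Demo ADD Comment VARCHAR(50) NULL;

-- 2. Examine existing data, then change only metadata: every row must be
--    checked for NULLs before the column can be marked NOT NULL.
ALTER TABLE dbo.Demo ALTER COLUMN SortOrder INT NOT NULL;

-- 3. Physically change every row: varchar -> nvarchar rewrites the stored values.
ALTER TABLE dbo.Demo ALTER COLUMN Notes NVARCHAR(100) NULL;
```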

Understanding SQL Server query execution and transactions

I'm quite experienced with SQL databases, but mostly with Oracle and MySQL.
Now I'm dealing with SQL Server 2012 (Management Studio 2008) and facing a weird behaviour that I cannot explain.
Consider these 3 queries and a source table of 400k rows:
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES]
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY (ID_TARJETA)
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
If I run them one after the other, it runs OK (total: ~7 sec).
However, if I select them all and run them as a single batch, it runs badly (total: ~60 sec).
Finally, if I wrap it all in a transaction, it runs OK again:
BEGIN TRANSACTION;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES]
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY(ID_TARJETA)
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
COMMIT;
The whole picture makes no sense to me: considering that creating transactions looks quite expensive, the first scenario should be the slow one and the second should work far better. Am I wrong?
The question is quite important for me since I'm building these packages of queries programmatically (JDBC), and I need a way to tune their performance.
The only difference between the two snippets provided is that the first uses the default transaction mode and the second uses an explicit transaction.
Since SQL Server's default transaction mode is autocommit, each individual statement is its own transaction.
You can find more information about transaction modes here.
You can try this to see if it runs in 60 sec too:
BEGIN TRANSACTION;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES];
COMMIT;
BEGIN TRANSACTION;
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY(ID_TARJETA);
COMMIT;
BEGIN TRANSACTION;
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
COMMIT;

DDL scripts in a transaction block takes effect even when there are errors

I have some SQL script files where I am making some DDL changes as part of a commit block:
BEGIN TRANSACTION
-- CREATE/ALTER TABLE, COLUMNS, CONSTRAINTS etc.
COMMIT
Sometimes when the script fails, I still see the DDL changes applied in the database, although the whole thing is in a transaction block. What am I missing here?
Just because a particular statement causes an error, that doesn't mean that other statements won't also execute. Look at the documentation for XACT_ABORT:
In the first set of statements, the error is generated, but the other statements execute successfully and the transaction is successfully committed
If you want to roll back a transaction when an error occurs, you need to enclose your code in a TRY...CATCH block (or, older style, check @@ERROR after every statement and GOTO a label where a ROLLBACK will occur).
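A minimal sketch of the TRY...CATCH pattern, with SET XACT_ABORT ON added so that errors which bypass the CATCH block still doom the transaction (the DDL inside is a placeholder):

```sql
SET XACT_ABORT ON;  -- also roll back on severe errors TRY...CATCH cannot intercept

BEGIN TRY
    BEGIN TRANSACTION;

    -- CREATE/ALTER TABLE, COLUMNS, CONSTRAINTS etc.
    ALTER TABLE dbo.MyTable ADD NewColumn INT NULL;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise the original error (SQL Server 2012+)
END CATCH;
```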
@MartinSmith is correct when he says that this is not the case with SQL Server. MSDN states: "... You can use all Transact-SQL statements in an explicit transaction, except for the following statements:
ALTER DATABASE
ALTER FULLTEXT CATALOG
ALTER FULLTEXT INDEX
BACKUP
CREATE DATABASE
CREATE FULLTEXT CATALOG
CREATE FULLTEXT INDEX
DROP DATABASE
DROP FULLTEXT CATALOG
DROP FULLTEXT INDEX
RECONFIGURE
RESTORE"
(Note: the often-repeated claim that a DDL statement internally commits and cannot be rolled back applies to databases such as Oracle and MySQL, not to SQL Server, where most DDL can be rolled back inside an explicit transaction.)

SQL Server: pause a trigger

I am working with SQL Server 2005, and I have a trigger on a table that copies any deletions into another table. I cannot remove this trigger completely. My problem is that we have now developed an archiving strategy for this table, and I need a way of "pausing" the trigger while the stored procedure that does the archiving runs.
A little more detail would be useful on how the procedure is accessing the data, but assuming you are just reading the data and then deleting it from the table, and wish to disable the trigger for this process, you can do the following:
DISABLE TRIGGER trg ON tbl;
then
ENABLE TRIGGER trg ON tbl;
for the duration of the procedure.
This only works for SQL 2005+
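A sketch of how the archiving procedure might wrap those two statements (all object and column names here are made up, and the error handling re-raises the old SQL 2005 way since THROW does not exist there):

```sql
CREATE PROCEDURE dbo.ArchiveMyTable
AS
BEGIN
    SET NOCOUNT ON;

    DISABLE TRIGGER trg ON dbo.tbl;

    BEGIN TRY
        -- copy old rows to the archive table, then delete them
        INSERT INTO dbo.tbl_Archive
        SELECT * FROM dbo.tbl WHERE CreatedOn < DATEADD(YEAR, -1, GETDATE());

        DELETE FROM dbo.tbl WHERE CreatedOn < DATEADD(YEAR, -1, GETDATE());
    END TRY
    BEGIN CATCH
        ENABLE TRIGGER trg ON dbo.tbl;  -- make sure the trigger comes back on
        DECLARE @msg NVARCHAR(2048);
        SELECT @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);         -- re-raise (no THROW on SQL 2005)
        RETURN;
    END CATCH;

    ENABLE TRIGGER trg ON dbo.tbl;
END
```

One caveat: DISABLE TRIGGER affects all sessions, not just the archiving one, which is why the Context_Info approach below exists.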
An alternative method is to use Context_Info to disable it for a single session, while allowing other sessions to continue to fire the trigger.
Context_Info is a variable which belongs to the session. Its value can be changed using SET Context_Info.
The trigger will look something like this:
USE AdventureWorks;
GO
-- creating the table in AdventureWorks database
IF OBJECT_ID('dbo.Table1') IS NOT NULL
DROP TABLE dbo.Table1
GO
CREATE TABLE dbo.Table1(ID INT)
GO
-- Creating a trigger
CREATE TRIGGER TR_Test ON dbo.Table1 FOR INSERT,UPDATE,DELETE
AS
DECLARE @Cinfo VARBINARY(128)
SELECT @Cinfo = Context_Info()
IF @Cinfo = 0x55555
    RETURN
PRINT 'Trigger Executed'
-- Actual code goes here
-- For simplicity, I did not include any code
GO
If you want to prevent the trigger from being executed you can do the following:
SET Context_Info 0x55555
INSERT dbo.Table1 VALUES(100)
Before issuing the INSERT statement, the context info is set to a value. In the trigger, we are first checking if the value of context info is the same as the value declared. If yes, the trigger will simply return without executing its code, otherwise the trigger will fire.
source: http://www.mssqltips.com/tip.asp?tip=1591
If DISABLE TRIGGER/ENABLE TRIGGER is not an option for some reason, you can create a table with a single row that serves as a flag for the trigger.
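A sketch of that flag-table alternative (the table and trigger names are invented; unlike Context_Info, note that flipping the flag pauses the trigger for every session, not just your own):

```sql
CREATE TABLE dbo.TriggerFlags (TriggerName SYSNAME PRIMARY KEY, IsEnabled BIT NOT NULL);
INSERT INTO dbo.TriggerFlags VALUES ('TR_Test', 1);
GO
CREATE TRIGGER TR_Test ON dbo.Table1 FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- bail out early if the flag says "paused"
    IF EXISTS (SELECT 1 FROM dbo.TriggerFlags
               WHERE TriggerName = 'TR_Test' AND IsEnabled = 0)
        RETURN;

    PRINT 'Trigger Executed'
    -- actual trigger code goes here
END
GO
-- Pause:  UPDATE dbo.TriggerFlags SET IsEnabled = 0 WHERE TriggerName = 'TR_Test';
-- Resume: UPDATE dbo.TriggerFlags SET IsEnabled = 1 WHERE TriggerName = 'TR_Test';
```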
