Can I conditionally avoid hitting a trigger completely? - sql-server

We had to use a trigger to sync an old system to the new system until we can fully deprecate the old system. The new system doesn't need this trigger at all and, in fact, exits out immediately on the condition that it's the new app.
The impact on the old system is acceptable.
However, the impact on the new system is not because the new system processes many, many more records on a single update. Merely executing the trigger changes an update from 10 seconds (already "UGH") to over a minute and a half.
The new system performs acceptably by disabling the trigger in code (VS Core with EntityFramework, btw), running the update and then re-enabling the trigger, all within a transaction. There is disagreement among my colleagues about whether or not the trigger is disabled for the other application while the transaction is being processed.
I have already seen this post:
https://dba.stackexchange.com/questions/204339/sql-server-how-to-disable-trigger-for-an-update-only-for-your-current-session
And the first answer is the solution I am using. My colleagues tell me it won't work; I believe it will. But the second and later answers seem to contradict the first one.
My testing proved out the first answer as well but I need to be 100% sure on this.
TIA

However, the impact on the new system is not because the new system processes many, many more records on a single update.
You should find a way to batch the updates into fewer statements. The trigger fires per statement, not per row. E.g. EF Core does batching automatically, or you can use a TVP or SqlBulkCopy into a temp table, etc.
DISABLE TRIGGER within a transaction eliminates the possibility of other users updating while the trigger is disabled
Yes. You can easily verify that disabling the trigger takes a Sch-M lock on the table for the duration of the transaction, which is incompatible with all other table access.
eg
use tempdb
drop table if exists t
create table t(id int primary key)
go
create trigger t_t on t after insert
as
begin
select 'trigger running' msg
end
go
begin transaction
go
disable trigger t_t on t
go
select object_name(resource_associated_entity_id) table_name, resource_lock_partition, request_mode, request_status
from sys.dm_tran_locks
where request_session_id = @@spid
and resource_type = 'OBJECT'
order by 1,2
rollback

Related

What is the proper way to write insert triggers in SQL Server?

My question is a little bit theoretical because I don't have a concrete working example, but I think it's worth answering.
What is the proper way to write insert triggers in SQL Server?
Let's say I create a trigger like this (more or less pseudocode)
CREATE TRIGGER MY_TRIGGER
ON MY_TABLE
FOR INSERT AS
DECLARE @myVariable;
DECLARE InsertedRows CURSOR FAST_FORWARD FOR SELECT A_COLUMN FROM INSERTED;
OPEN InsertedRows;
FETCH NEXT FROM InsertedRows INTO @myVariable;
...
INSERT INTO ANOTHER_TABLE (
CODE,
DATE_INSERTED
) VALUES (
@myVariable,
GETDATE()
);
...etc
Now what if someone else creates another trigger on the same table, and that trigger changes some columns on the inserted rows? Something like this
CREATE TRIGGER ANOTHER_TRIGGER
ON MY_TABLE
FOR INSERT AS
UPDATE MY_TABLE
SET A_COLUMN = something
WHERE ID IN (SELECT ID FROM INSERTED);
...etc
Then my trigger (if it fires after the other trigger) operates on stale data, because the INSERTED data are no longer the same as the actual rows in the table, which have been changed by the other trigger. Right?
Summary:
Trigger A updates newly inserted rows on table T; trigger B then operates on stale data because the update from trigger A is not visible in the INSERTED pseudo-table that trigger B reads. BUT if trigger B operated directly on the table instead of on the INSERTED pseudo-table, it would see the data as updated by trigger A.
Is that true? Should I always work with the data from the table itself and not from the INSERTED table?
I'd usually recommend against having multiple triggers. For just two, you can, if you want to, define what order you want them to run in. Once you have a few more though, you have no control over the order in which the non-first, non-last triggers run.
It also increasingly makes it difficult just to reason about what's happening during insert.
I'd instead recommend having a single trigger per table, per action, that accomplishes all tasks that should happen for that action. If you're concerned about the size of the code that results, that's usually an indication that you ought to be moving that code out of the trigger altogether - triggers should be fast and light.
Instead, you should start thinking about having the trigger just record an action and then use e.g. service broker or a SQL Server job that picks up those records and performs additional processing. Importantly, it does that within its own transactions rather than delaying the original INSERT.
I would also caution against the current code you're showing in example 1. Rather than using a cursor and inserting rows one by one, consider writing an INSERT ... SELECT statement that references inserted directly and inserts all new rows into the other table.
One thing you should absolutely avoid in a trigger is using a CURSOR!
A trigger should be very nimble, small, fast - and a cursor is anything but! After all, it's being executed in the context of the transaction that caused it to fire. Don't delay completion of that transaction unnecessarily!
You need to also be aware that Inserted will contain multiple rows and write your trigger accordingly, but please use set-based techniques - not cursors and while loops - to keep your trigger quick and fast.
Don't do heavy, time-consuming work in a trigger. Just updating a few columns, or making an entry into another table - that's fine - but NO heavy lifting, and no e-mail sending etc.!
My Personal Guide to SQL Trigger Happiness
The trigger should be light and fast. Expensive triggers make for a slow database for EVERYBODY (and not incidentally unhappiness for everybody concerned including the trigger author)
One trigger per operation-table combo, please. That is, at most one insert trigger on the foo table. Though the same trigger handling multiple operations on a table is not necessarily bad.
Don't forget that the inserted and deleted tables may contain more than a single row or even no rows at all. A happy trigger (and more importantly happy database users and administrators) will be well-behaved no matter how many rows are involved in the operation.
Do not Not NOT NOT ever use cursors in triggers. Server-side cursors are usually an abuse of good practice, though there are rare circumstances where their use is justified. A trigger is NEVER one of them. Prefer instead a series of set-oriented DML statements to anything resembling a cursor.
Remember there are two classes of triggers - AFTER triggers and INSTEAD OF triggers. Consider this when writing a trigger.
Never overlook that triggers (AFTER or INSTEAD OF) begin execution with @@TRANCOUNT one greater than that of the context in which the statement that fired them runs.
Prefer declarative referential integrity (DRI) over triggers as a means of keeping data in the database consistent. Some application integrity rules require triggers. But DRI has come a long way over the years and functions like row_number() make triggers less necessary.
Triggers are transactional. If you tried to do a circular update as you've described, it should result in a deadlock - the first update will block the second from completing.
While looking at this code though: you're cursoring through the INSERTED pseudo-table to do the inserts, and nothing in the example requires that behaviour. If you just insert directly from the full INSERTED table, you'd get a definite improvement, and also fewer firings of your second trigger.

Updating same table with trigger on insert

One of our tables has a column for saving troubleshooting information, it is an XML data type, pertaining to the row so if an issue arises we can quickly see everything that happened for that transaction. This has become an issue because it grows the database size drastically. After a month there is generally no need to retrieve this information and it is wasting valuable space.
Our solution is to null out the XML log column after it is a month old by using an insert trigger. Our concern is, will this affect the performance of the table enough to be noticeable and potentially cause problems?
Below is what we are trying to achieve:
CREATE PROCEDURE [dbo].[sp_ClearTransactionXmlLogs]
AS
UPDATE [dbo].[CCResponse]
SET [TransactionXML] = NULL
WHERE [DateSaved] < DATEADD(MONTH,-1,GETDATE())
AND [TransactionXML] IS NOT NULL;
GO
CREATE TRIGGER [dbo].[tr_ClearTransactionXmlLogs]
ON [dbo].[CCResponse]
AFTER INSERT
AS EXEC sp_ClearTransactionXmlLogs;
Rather than having this run as a trigger every time an insert happens, why not schedule it as a nightly job, part of your database maintenance jobs?
Triggers are usually used to perform an action, after the main operation (insert, update, delete), on the records that changed.
If you never execute an insert, your CCResponse rows remain with TransactionXML not null.
Instead of a trigger, IMHO, I would use a scheduled job.

changetable doesn't have current changes when in a trigger?

I have the following trigger to save the changes to a log table. However, it does not catch the changes made by the statement that fired the trigger. Why not? Or is there another solution?
alter trigger trigger_xxx on table1 after delete, update, insert
as
begin
declare @lastVersion bigint = coalesce((select max(SYS_CHANGE_VERSION) from [log]), 0)
insert into [log]
([SourceColumnDescriptionPattern], SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT)
SELECT [SourceColumnDescriptionPattern], SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT
FROM changetable(changes [table1], @lastVersion) as ct
end
Change Tracking is intended for synchronization purposes. For example, you can use it to find out whether an application-side cache needs to be refreshed. Therefore you do not want that information to show up before the transaction is committed. As your trigger executes within the transaction, the changes are not visible yet.
Why are you trying to duplicate the information available in Change Tracking? Can't you just use those functions and DMVs instead of your own?
Assuming you have a good reason, your best bet is probably to use a trigger and capture the affected primary key yourself, together with other pertinent information like a timestamp. However, there is no really good way to enforce that this trigger executes before all others, so you might still end up in the same situation. You could try sp_settriggerorder in your case: http://msdn.microsoft.com/en-us/library/ms186762.aspx It might be enough in your situation.

How to bypass trigger on SQL Server 2008

I want to bypass a trigger in some cases; can anyone help me with it?
I tried the approach from this link but was not able to find a solution.
Thanks in advance
Step 1 Disable Trigger
DISABLE TRIGGER Person.uAddress ON Person.Address;
http://msdn.microsoft.com/en-us/library/ms189748.aspx
Step 2 Do stuff
UPDATE Person.Address SET HouseNumber = REPLACE(HouseNumber, ' ', '');
Step 3 Enable Trigger
ENABLE TRIGGER Person.uAddress ON Person.Address;
http://msdn.microsoft.com/en-us/library/ms182706.aspx
--
Must say, use with care!
You can't prevent a trigger from running altogether.
What you can do is add conditions inside it, for example:
CREATE TRIGGER trigger_name
ON table
AFTER INSERT
AS
BEGIN
    IF (your condition)
    BEGIN
        --code
    END
END
Just be careful if you have an INSTEAD OF trigger: if you don't code the insert yourself, nothing will be inserted into the table.
You can suppress the trigger by checking for the existence of a temp table. The code for which the trigger needs to be suppressed should create a temp table (say #suppress_trigger). In your trigger, check for the existence of this temp table and return.
Example:
CREATE TABLE [dbo].[dummy](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Val] [char](1) NULL)
--create a history table which gets populated through trigger
CREATE TABLE [dbo].[dummy_hist](
[Id] [int] NULL,
[Val] [char](1) NULL)
GO
CREATE TRIGGER [dbo].[trig_Insert]
ON [dbo].[dummy]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
if OBJECT_ID('tempdb..#Dummy_escape_trig') is not NULL
RETURN
INSERT INTO dummy_hist
SELECT * FROM inserted
END
GO
--Proc for which trigger needs to be suppressed
CREATE PROCEDURE [dbo].[ins_dummy]
@val AS CHAR(1)
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #Dummy_escape_trig (id int)
INSERT INTO dummy
VALUES(@val)
END
@Manish: I do not think bypassing a trigger would be a good option from a best-practices perspective. Instead, I would evaluate and take into consideration the set of conditions required to fire the trigger, and filter on them inside it.
There are some practical problems with disabling a trigger in a production environment. I'm presuming you will be disabling table triggers specifically (as opposed to database or server-wide):
You need ALTER permissions on the table operated on. This is generally considered problematic for security reasons. It gets far more problematic at the database and server level.
The disabling is not limited to your specific connection/session, but will affect all events that fire the trigger, from whatever sessions, while it is disabled. If you have any expectation of concurrency in the code in question, then you have to assume the same code is going to be called from other threads.
What is missing from the earlier suggestions is the full consideration of the concurrency management that using a semaphore of any kind requires. In particular, that temp table defines a semaphore for a protected section of code (in the multi-threading sense), where the semaphore is the existence of the temp table. If you actually disable the trigger, then to use that temp table for its intended purpose correctly, there has to be a test just before the protected code creates the temp table, to see if it exists already. This may require a global temp table, or you can do a more clever search of tempdb to see if it exists in any session.
The protected code has to test whether the semaphore exists, and enter a blocking state if it intends to wait for the resource (the trigger, in this case) to be available as expected. This gets complex, and has little value in exchange for the complexity.
Opinion: While it's easy to disable/enable triggers, there are a lot of considerations to be careful of if you do temporarily disable them as part of some specific use case, because you are then engineering what should be a concurrency management strategy.
Unless you're relatively comfortable with the nuts and bolts of concurrency management, or unless you have absolutely no requirement for the code to be reentrant/concurrent, you will (likely sporadically) have problems if you use DISABLE TRIGGER but don't consider these factors.
The safest path I can see, considering it all, is to not disable the trigger, use a local temp table as a semaphore that affects only the current session, and carefully code the exit path so that the temp table is definitely destroyed at the end of the protected code, as already suggested. It still requires a check/block, even though it will only matter on the same connection, and particularly with parallel execution on the same connection.
The absolutely safest way to do this is to create a sproc for the protected code alone. The sproc checks/blocks, then, if it proceeds (and you'll need to check for a deadlock error after the blocking code exits), creates the temp table. Since temp tables are destroyed when the sproc returns, any path out of the protected code will handle the semaphore.
But temp tables are available throughout the session - not just within the sproc (while the sproc is running), or even just within a batch. SQL Server supports parallel queries on a single session, so a temp table created in one thread of the session is visible in any others. That means it can be seen OUTSIDE the sproc, in the same session, and in fact the same code could be run at that time. That's why you STILL need real concurrency management in this scenario.
And finally, my apologies for the convoluted comment. I find just about every discussion of multi-threading and concurrency management turns into that, because while the concepts aren't all that difficult, the coding practices have long been considered delicate and fragile, and prone to developer error.
Create a different user, and inside the trigger check for the current user before executing the logic.
Another option would be to include Application Name=MyAppName; in your connection string and in the trigger add a conditional using the SQL APP_NAME function.
IF(APP_NAME() = 'MyAppName')

Simultaneous Triggers in SQL Server

In Microsoft SQL Server :
I've added an insert trigger to my table ACCOUNTS that does an insert into table BLAH based upon the inserted values.
The inserts come from only one place, and they happen one at a time. (By that I mean that there are never two inserts in a transaction - two web users could, theoretically, click submit and have their inserts done in a near-simultaneous way.)
Do I need to adapt the trigger to handle more than one row being in inserted, the special table created for triggers - or does each individual insert transaction launch the trigger separately?
Each insert calls the trigger. However, if a single insert adds more than one row the trigger is only called once, so your trigger has to be able to handle multiple records.
The granularity is at the INSERT statement level not at the transaction level.
So no, if you have two transactions inserting into the same table they will each call the trigger ATOMICALLY.
BOb
In your situation each insert happens in its own transaction and fires the trigger individually, so you should be fine. If there were ever a circumstance where you had two inserts within the same transaction, you would have to modify the trigger to do either a set-based insert from the 'inserted' table or some kind of cursor if additional processing is necessary.
If you do only one insert per transaction, I don't see any reason for more rows to be in inserted, unless there is a possibility of recursive trigger calls.
Still, it could cause you trouble if you changed the behavior of your application in the future and forgot to change the triggers. So, just to be sure, I would rather implement the trigger as if inserted could contain multiple rows.
