I have two tables, Orders and App.
App is a "helper" table that is populated from Orders and then passes the information on, via a web service, to smart phones.
To populate App, we created a parameterized stored procedure which runs at specific times, passing data from Orders to App.
But some updates to Orders are not caught by this stored procedure, so we were asked to create a trigger on Orders which executes this SP in these specific instances. This, too, works fine.
The problem starts when updates arrive from smart phones to the table App. The same parameterized SP runs "in reverse" to update the fields in Orders, and this works well - except that doing so can fire our supposedly selective trigger, resulting in redundant updates. To demonstrate:
New row in Orders > SP > Row is written in App > App updated by application > SP > Corresponding row in Orders is updated > Trigger catches this update, firing the SP again.
In this chain, only the last step is a problem.
I have tried using DISABLE TRIGGER and ENABLE TRIGGER within the SP to avoid this problem, but this is risky business and certainly cannot be the best possible way.
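For reference, the disable/enable attempt inside the SP looked roughly like this (the trigger name below is just a placeholder):
DISABLE TRIGGER dbo.TR_Orders_AfterUpdate ON dbo.Orders;
-- ... updates to Orders that should not re-fire the trigger ...
ENABLE TRIGGER dbo.TR_Orders_AfterUpdate ON dbo.Orders;
The obvious risk is that the trigger is disabled for every session, not just this one, so legitimate updates from elsewhere slip past it while the SP runs.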
The solution I'm working on now is by using a field which is updated during application updates to Orders, but is not updated at any other time. For instance:
UPDATE Orders
SET Orders.StartTime = getdate(),
    Orders.EndTime = CASE ... END,
    Orders.Unique_Field = X
WHERE Orders.ID = @APPID
In standard updates to Orders, the field Unique_Field is not included in any INSERT or UPDATE statements. However, in some updates coming from App, the value the SP writes to this field may itself be NULL.
My question is: What is the proper and safe way to tell my trigger to ignore any updates that arrive from my SP?
At present, my trigger looks like this:
AFTER UPDATE, INSERT
NOT FOR REPLICATION
AS
BEGIN
    DECLARE @BUILDORDERCHECK AS DATETIME
    DECLARE @ORDERDATECHECK AS DATETIME
    DECLARE @ORDERNO AS INT
    DECLARE @CHECKER AS TINYINT

    SELECT @BUILDORDERCHECK = I.UpdateRecordDate,
           @ORDERDATECHECK = I.OrderDate,
           @ORDERNO = I.OrderNo,
           @CHECKER = CASE WHEN NOT EXISTS (SELECT Unique_Field FROM Inserted) THEN 1 ELSE 0 END
    FROM Inserted I

    IF @BUILDORDERCHECK IS NOT NULL
       AND @ORDERDATECHECK >= dateadd(day, -2, getdate())
       AND @CHECKER = 1
       -- Does not fire from BuildOrder
       -- Does not fire on tasks older than 2 days
    BEGIN
        EXECUTE [dbo].[Asp_Apper;1] 0, -- CallCode, DO NOT CHANGE
                                    1, -- Auto,
                                    1, -- AOK,
                                    0, -- CancelMsg,
                                    0, -- TrailerNo
                                    1  -- RejectMsg
    END
END
@BUILDORDERCHECK and @ORDERDATECHECK work fine and behave as expected, but I need to find the right way to tell my trigger to check and see if Unique_Field was included in the update statement without being entangled by NULLs. As I said, Unique_Field can be updated by the SP to a value of NULL, so simply checking for NULL doesn't work.
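To illustrate, this is roughly the kind of column-level check I'm hoping for, assuming UPDATE() in the trigger reflects whether Unique_Field appeared in the SET list of the triggering statement (on an INSERT it reports every column as updated, hence the extra test against Deleted):
IF UPDATE(Unique_Field) AND EXISTS (SELECT 1 FROM Deleted)
    RETURN -- the statement came from the SP; skip the rest of the trigger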
Thanking you all in advance for any thoughts...
EDIT: It's already been pointed out that this trigger seems to ignore cases where more than one row is updated, which is accurate. Usually, we wouldn't build triggers like this; but in this case, updates to Orders are only ever row-by-row, and never in groups. The only time that this isn't the case is when the SP runs, which we want to ignore anyway.
I would use CONTEXT_INFO() and SET CONTEXT_INFO, something like this:
In the trigger, add a check at the top that bails out if a particular context value is set:
IF ISNULL(CONTEXT_INFO(),0x0) = 0x49204C696B6520426967204275747473
RETURN
And then in the (parts of) the stored procedures where you want to take actions that are ignored, just set that same value:
SET CONTEXT_INFO 0x49204C696B6520426967204275747473;
--Code that shouldn't cause the trigger to fire
SET CONTEXT_INFO 0x0
This keeps things nicely contained (unlike disabling the trigger, which has global effects).
Also, I know you've already stated in comments that this trigger only needs to work for single-row updates, but any trigger that doesn't properly deal with multiple rows in inserted would be an automatic failure in code review for me (or, at the very least, it should check the number of rows and raise a clear error when the single-row assumption isn't met).
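For what it's worth, a minimal guard of that kind at the top of the trigger could look something like this:
IF (SELECT COUNT(*) FROM inserted) > 1
BEGIN
    RAISERROR('This trigger only supports single-row updates.', 16, 1)
    ROLLBACK TRANSACTION
    RETURN
END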
Related
I am trying to write a trigger for what was previously done as a rather simple job, so that it fires immediately after a change. This was the code for the job.
UPDATE GrdFelde
SET GrdInhalt = 0
WHERE (GrdNummer LIKE 'BEST[A-Z][A-Z]%2') AND (GrdInhalt <> 0)
This is what I have so far.
CREATE TRIGGER [dbo].[GrdFelde_UTrig_Custom] ON [dbo].[GrdFelde] FOR UPDATE AS
SET NOCOUNT ON
IF UPDATE(GrdInhalt)
BEGIN
UPDATE GrdFelde
SET GrdInhalt = 0
WHERE (GrdNummer LIKE 'BEST[A-Z][A-Z]%2') AND (GrdInhalt <> 0)
END
I am new to triggers and not sure if this works. My problem here is, this is a table that stores all changes to a user interface, so it updates quite often, and I don't want to cause performance problems. Is it possible that the trigger only fires when the WHERE criteria are met? And if yes, where would I put this statement?
The trigger will be fired whenever an UPDATE statement is executed on the table. This cannot be controlled (short of disabling the trigger entirely).
You can, however, write it for better performance.
The UPDATE() function will return 1 even if the update/insert statement fails, so you probably don't want to rely on it as your only indicator.
You have no reference to the inserted or deleted tables in your trigger, which means it may affect records that were not included in the original update statement that fired it.
I would probably write that trigger like this:
CREATE TRIGGER [dbo].[GrdFelde_UTrig_Custom] ON [dbo].[GrdFelde]
FOR UPDATE AS
SET NOCOUNT ON
UPDATE t
SET GrdInhalt = 0
FROM GrdFelde t
JOIN INSERTED i ON t.<PKColumn(s)> = i.<PKColumn(s)>
JOIN DELETED d ON t.<PKColumn(s)> = d.<PKColumn(s)>
WHERE t.GrdNummer LIKE 'BEST[A-Z][A-Z]%2'
AND t.GrdInhalt <> 0
AND ISNULL(CAST(i.GrdInhalt AS INT), -1) <> ISNULL(CAST(d.GrdInhalt AS INT), -1)
GO
Please note:
By joining the inserted and deleted tables, I'm ensuring the trigger only changes the rows affected by the statement that fired it.
Change <PKColumn(s)> to the column(s) that makes up the primary key of the table.
I'm casting to int and specifying -1 for NULL values to handle a change from NULL to a value or from a value to NULL. If your column is already an int, the cast is redundant. If -1 is a valid value, you might want to consider casting to varchar(11) and replacing NULL with an empty string.
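If a sentinel value is a concern, one alternative sketch is to spell the NULL cases out explicitly; this would be a drop-in replacement for the WHERE clause above:
WHERE t.GrdNummer LIKE 'BEST[A-Z][A-Z]%2'
  AND t.GrdInhalt <> 0
  AND (i.GrdInhalt <> d.GrdInhalt
       OR (i.GrdInhalt IS NULL AND d.GrdInhalt IS NOT NULL)
       OR (i.GrdInhalt IS NOT NULL AND d.GrdInhalt IS NULL))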
I have a table Conservation_Dev with these columns (amongst 30 other):
I3_IDENTITY - a BIGINT and a unique key
STATE - a 2-letter US state abbreviation
ZONE - where I want to store the time zone for this record
I also have a table TimeZoneCodes that maps US states to time zones (forget the fact that some states are in more than one time zone):
state_code - the 2-letter abbreviation for the state
time_zone - the text with the time zone (EST, CST, etc)
I have data being loaded into Conservation_Dev without the time zone data and that is something that I can't control. I want to create an after insert trigger that updates the record. Doing some research on previous threads I came up with the following:
CREATE TRIGGER [dbo].[PopulateTimeZoneBasedOnUSState]
ON [dbo].[Conservation_Dev]
AFTER INSERT
AS
BEGIN
UPDATE [dbo].[Conservation_Dev]
SET [ZONE] = (SELECT [time_zone]
FROM [dbo].[TimeZoneCodes] Z
WHERE Z.[state_code] = i.[STATE])
FROM Inserted i
WHERE [I3_IDENTITY] = i.[I3_IDENTITY]
END
I get an error, though:
Ambiguous column name 'I3_IDENTITY'
Also, is this the right way to do this? Will this be a problem if the data load is, say, 5 or 10 thousand records at a time through an SSIS import package?
Try:
CREATE TRIGGER [dbo].[PopulateTimeZoneBasedOnUSState]
ON [dbo].[Conservation_Dev]
AFTER INSERT
AS
BEGIN
UPDATE A
SET A.[ZONE] = Z.[time_zone]
FROM [dbo].[Conservation_Dev] as A
INNER JOIN Inserted as i
ON A.[I3_IDENTITY] = i.[I3_IDENTITY]
INNER JOIN [dbo].[TimeZoneCodes] as Z
ON Z.[state_code] = i.[STATE]
END
My vote would be to move this update into the SSIS package so that the insert logic is all in one place. I might then move to a stored procedure that runs after the data is loaded. A trigger is going to fire on every insert, and I think set-based queries/updates would perform better. Triggers can also be hard to find from a troubleshooting standpoint, though if they are well documented this may not be much of an issue. That said, the trigger will ensure ZONE is populated as soon as a record is inserted.
The problem with the trigger you are creating is that SQL Server doesn't know which I3_IDENTITY you are referring to in the first part of the WHERE clause. This should fix it:
CREATE TRIGGER [dbo].[PopulateTimeZoneBasedOnUSState]
ON [dbo].[Conservation_Dev]
AFTER INSERT
AS
BEGIN
UPDATE [dbo].[Conservation_Dev]
SET [ZONE] = (SELECT TOP 1 [time_zone] FROM [dbo].[TimeZoneCodes] Z WHERE Z.[state_code] = i.[STATE])
FROM Inserted i
WHERE [dbo].[Conservation_Dev].[I3_IDENTITY] = i.[I3_IDENTITY]
END
This would be a table-level update that updates all time zones in one sweep. I would use something like this in a stored procedure after the initial inserts were complete.
CREATE PROCEDURE dbo.UpdateAllZones
AS
    UPDATE c
    SET c.ZONE = z.time_zone
    FROM dbo.Conservation_Dev c
    INNER JOIN dbo.TimeZoneCodes z ON c.STATE = z.state_code
GO
Executed as
EXEC dbo.UpdateAllZones
First ever post, please be gentle...
I have a need to update one column when a table is either updated or row(s) inserted, thus I've created a trigger (AFTER INSERT, UPDATE). The problem is that it's recursive due to the fact that the insert includes an update statement, thus firing the trigger again.
I've also tried separating the INSERT and UPDATE into two different triggers, but I've run into problems with sp_settriggerorder() and trigger_nestlevel(), because there are other triggers in place due to out-of-the-box application defaults.
My question is: is there any way to use an IF clause to tell whether the update came from the application itself or from my trigger? That way, if it's my trigger, I could simply branch to a RETURN and it would no longer be recursive.
CREATE TRIGGER [dbo].[JobCardMetlInsertUpdateItemDesc]
ON [dbo].[JobCardMetl] AFTER INSERT
AS
BEGIN TRANSACTION [Description]
UPDATE JobCardMetl
SET JobCardMetl.Description = item.Description
FROM JobCardMetl
INNER JOIN item ON JobCardMetl.Item = item.item
WHERE JobCardMetl.RecordDate = (SELECT MAX(JobCardMetl.RecordDate)
FROM JobCardMetl)
COMMIT TRANSACTION [Description]
Your trigger is very suspicious: It does not reference the INSERTED pseudotable. This means that your trigger is updating records unaffected by the INSERT, always a huge code-smell.
The usual solution to the problem of recursive triggers is to be careful about which columns are being updated (i.e. use UPDATE()) and which rows, and let the natural business logic stop the recursion (i.e. the nested trigger should find nothing to update, because the guard checks don't qualify).
Ultimately you can use the logical sledgehammer: SET CONTEXT_INFO and CONTEXT_INFO(). You check it, set it and clean it up in your trigger. If it is already set, you know you're nested inside the trigger. The cleaning-up part is critical. You also pray no other app/dev does the same, as there is only one context info per session (SQL Server 2016 improves on this).
You could check whether the description is still different from what you want it to be updated to. If the same, you do not update. That way you avoid the endless recursion.
Also, with the WHERE condition you seem to want to limit the update to the currently inserted record, but for that you can use the virtual INSERTED table, which has the records that have been inserted.
Finally, it seems overkill to start a new transaction for an atomic statement. Note that the trigger will anyway execute within the transaction in which the triggering INSERT statement executes.
So taking all that together, you could make your trigger as follows (I assume RecordDate uniquely identifies a record -- change it to whatever is the primary key):
CREATE TRIGGER [dbo].[JobCardMetlInsertUpdateItemDesc]
ON [dbo].[JobCardMetl] AFTER INSERT
AS
UPDATE j
SET j.Description = item.Description
FROM JobCardMetl j
INNER JOIN item ON j.Item = item.item
INNER JOIN INSERTED i ON i.RecordDate = j.RecordDate
WHERE j.Description IS NULL OR j.Description <> item.Description
I would like to have two columns in my table to store the add-time and update-time. As the name suggests, the add-time is the time when a row was first added; the update-time is the last time a row was updated. I can implement the first by defaulting the value to GETDATE(). As for the second, @Jeremy suggested using triggers here:
On Update: Auto Update Date/Time Field
Is there any easier way?
If I implement a trigger, does that mean two UPDATE statements (or one INSERT and one UPDATE in case the row is just created) have to be executed?
Thanks.
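To be concrete, the default I have in mind for the add-time column is something like this (the column name AddTime is just for illustration):
ALTER TABLE dbo.AddUpdateTime
    ADD AddTime DATETIME NOT NULL
    CONSTRAINT DF_AddUpdateTime_AddTime DEFAULT (GETDATE())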
EDIT: For the second part of the question, this is the trigger I have in my database:
CREATE TRIGGER [dbo].[TR_AddUpdateTime]
ON [dbo].[AddUpdateTime]
AFTER UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for trigger here
UPDATE r
SET UpdateTime = GETDATE()
FROM AddUpdateTime r
JOIN inserted i
ON i.Id = r.Id
END
Does this mean that an additional update statement will be executed whenever I make an update to AddUpdateTime table, or MSSQL is smart enough to recognise that I am updating the same record and save both changes at the same time?
Other ways:
Use a stored procedure to wrap the updates
You can do UPDATE MyTable SET ..., UpdatedWhen = DEFAULT...
You need an UPDATE trigger that itself has one more UPDATE. Using a default on the table means you don't need a trigger for INSERT
You could make sure all inserts and updates go through a stored procedure that inserts the time.
No, the insert trigger will modify the values so that it's only one statement.
Edit: For entity framework could you implement the OnSavingChanges event to insert the update-time field (see here)? This is moving the responsibility from the DB to the Code which you may or may not be comfortable with.
In Entity Framework, you can use a partial class to extend the business logic. In this case, you can use OnPropertyChanged to set the update-time to DateTime.Now. You can use this article on MSDN as guidance.
1) "Auto update" and "triggers" doesn't really sound like the way to go.
2) SQL Server has a (relatively new) "merge" statement. But that doesn't really sound like what you're looking for, either.
3) Instead:
a) If primary key doesn't exist (if "new"), then INSERT. In this case, first time = last time = GETDATE().
b) Otherwise, if the primary key already exists, then UPDATE. Your update will update only the "last time" column (along with the rest of the fields you need to update for this record).
4) Perhaps you can wrap this logic in a stored procedure? (A rough sketch follows below.)
5) Again - the key is to update BOTH "first time" and "last time" the FIRST time, and then update ONLY "last time" on all SUBSEQUENT times.
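A rough sketch of such a procedure might look like this (the table and column names here are assumptions, not from the question):
CREATE PROCEDURE dbo.UpsertMyRow
    @Id INT,
    @SomeValue NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE Id = @Id)
        UPDATE dbo.MyTable
        SET SomeValue = @SomeValue,
            LastTime = GETDATE()   -- subsequent writes touch only "last time"
        WHERE Id = @Id;
    ELSE
        INSERT INTO dbo.MyTable (Id, SomeValue, FirstTime, LastTime)
        VALUES (@Id, @SomeValue, GETDATE(), GETDATE());  -- first time = last time
END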
There might be an easier way, but using triggers will be more effective and will guarantee that, no matter how records are inserted or updated (from .NET code or direct table inserts/updates), those two fields are populated.
To guarantee that only one trigger gets fired each time, combine the insert and update trigger:
CREATE TRIGGER <trigger name> ON TableA for INSERT,UPDATE
And do conditional checking to distinguish between the two actions, for example
IF UPDATE(<column name>)
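A hedged sketch of such a combined trigger, using the deleted table to tell an INSERT from an UPDATE (table and column names are assumptions):
CREATE TRIGGER TR_TableA_SetTimes ON TableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM deleted)
    BEGIN
        -- rows exist in deleted: the statement was an UPDATE
        UPDATE t
        SET t.LastTime = GETDATE()
        FROM TableA t
        JOIN inserted i ON t.Id = i.Id;
    END
    ELSE
    BEGIN
        -- deleted is empty: the statement was an INSERT
        UPDATE t
        SET t.FirstTime = GETDATE(), t.LastTime = GETDATE()
        FROM TableA t
        JOIN inserted i ON t.Id = i.Id;
    END
END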
In my database I have certain data that is important to the functioning of the app (constants, ...). And I have test data that is being generated by testing the site. As the test data is expendable, I delete it regularly. Unfortunately the two types of data occur in the same table, so I cannot do a delete from T but have to do a delete from T where IsDev = 0.
How can I make sure that I do not accidentally delete the non-dev data by forgetting to include the filter? If that happens I have to restore from a production backup, which wastes my time. I would like some sort of foreign-key-like behavior that fails a delete when a certain condition is met. This would also be useful to ensure that my code does not do anything harmful due to a bug.
Well, you could use a trigger that throws an exception if any of the records in the deleted meta-table have IsDev = 1.
CREATE TRIGGER TR_DEL_protect_constants ON MyTable FOR DELETE AS
BEGIN
IF EXISTS(SELECT 1 FROM deleted WHERE IsDev <> 0)
BEGIN
ROLLBACK
RAISERROR('Can''t delete constants', 16, 1)
RETURN
END
END
I'm guessing a bit on the syntax, but you get the idea.
I would use a trigger.
keep a backup of the rows you want to retain in a separate admin table
Seems like you need a trigger on the delete operation that looks at the row and rolls back the transaction if it sees a row that should never be deleted.
Also, you might want to read this article: Prevent accidental update or delete commands of all rows in a SQL Server table
Depending on how transparent you want to make this, you could use an INSTEAD OF trigger that will always remember the WHERE for you.
CREATE TRIGGER TR_IODEL_DevOnly ON YourTable
INSTEAD OF DELETE
AS
BEGIN
DELETE FROM t
FROM Deleted d
INNER JOIN YourTable t
ON d.PrimaryKey = t.PrimaryKey
WHERE t.IsDev = 0
END
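With that in place, even an unfiltered delete only removes the expendable rows:
DELETE FROM YourTable   -- the INSTEAD OF trigger only actually deletes rows where IsDev = 0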
I suggest that instead of writing the delete statement from scratch every time, just create a stored procedure to do the deletions and execute that.
create procedure ResetT as delete from T where IsDev = 0
You could create an extra column IS_TEST in your tables, rename TABLE_NAME to TABLE_NAME_BAK, and create a view TABLE_NAME on TABLE_NAME_BAK so that only rows where IS_TEST is set are displayed in it. Setting IS_TEST to zero for the data you wish to keep, and adding a DEFAULT 1 to the IS_TEST column, should complete the job. It is similar to the procedure required for creating 'soft deletes'.
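A rough sketch of that setup, following the description above (the exact column list and constraint names are up to you):
EXEC sp_rename 'dbo.TABLE_NAME', 'TABLE_NAME_BAK';
ALTER TABLE dbo.TABLE_NAME_BAK
    ADD IS_TEST BIT NOT NULL CONSTRAINT DF_TABLE_NAME_BAK_IS_TEST DEFAULT (1);
GO
CREATE VIEW dbo.TABLE_NAME AS
    SELECT * FROM dbo.TABLE_NAME_BAK WHERE IS_TEST = 1;
GO
-- DELETE FROM dbo.TABLE_NAME now only removes rows where IS_TEST = 1;
-- set IS_TEST = 0 on the rows you want to protect.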