I have a table that uses an INSTEAD OF INSERT trigger. The trigger manipulates the data for each inserted row before inserting it. Because the inserted pseudo-table is not modifiable, the trigger was implemented using a local temp (#) table. At the end of the trigger, a SELECT from the temp table returns the inserted data to the calling client. When I do an insert in SSMS, I can see the data that is returned, and the columns all have names and values. My trigger looks like this:
Create TRIGGER [dbo].[RealTableInsteadOfInsert] on [dbo].[RealTable]
INSTEAD OF INSERT
AS
BEGIN
set nocount on;
declare @lastDigit int;
if exists (select * from inserted)
Begin
Select *
into #tempInserted
from inserted
.... Logic to add and manipulate column data .....
INSERT INTO RealTable(id, col1, col2, col3,....)
Select *
from #tempInserted;
Select id as insertId, *
from #tempInserted;
END
End
My question is: how can I capture the output of the INSTEAD OF trigger into a table for further processing of the returned data? I can't use an OUTPUT clause on the INSERT statement, because the temp table no longer exists by then and the values that were calculated/modified on insert are not present in the inserted table. Is my only option to update the trigger to use a global temp table instead of a local one?
You have two main options:
Send data through service broker. This is complicated and potentially slow. On the plus side, it does let you do your further processing work decoupled from the original transaction, which is nice in certain use cases.
Write the data to a real table. This can also give you transactional decoupling, but you don't get automatic activation of the decoupled processing logic if you want that.
OK, but if you write to a real table, and a lot of processes are causing the trigger to fire, how do you find "your" data? Putting aside the fact that a global temp table has the same problem unless you want to get dynamic, one solution is to use a process keyed table, as described by Erland Sommarskog in his data sharing article.
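For illustration, here is a minimal sketch of the process-keyed-table idea, assuming SQL Server 2016+ so the key can be passed through SESSION_CONTEXT; the results table, its columns, and the key name are hypothetical:

-- Hypothetical process-keyed table that the trigger writes into
CREATE TABLE dbo.RealTableInsertResults (
    ProcessID uniqueidentifier NOT NULL,  -- "your" key, supplied by the caller
    InsertId  int              NOT NULL,
    Col1      varchar(50)      NULL
    -- ... remaining columns ...
);

-- Caller: pick a key, make it visible to the trigger, do the insert, then read back only your rows
DECLARE @pid uniqueidentifier = NEWID();
EXEC sp_set_session_context N'process_id', @pid;

INSERT INTO dbo.RealTable (col1, col2, col3) VALUES ('a', 'b', 'c');  -- fires the trigger

SELECT *
FROM dbo.RealTableInsertResults
WHERE ProcessID = @pid;

-- Inside the trigger, instead of the final SELECT, write the rows out with the caller's key:
-- INSERT INTO dbo.RealTableInsertResults (ProcessID, InsertId, Col1, ...)
-- SELECT CONVERT(uniqueidentifier, SESSION_CONTEXT(N'process_id')), id, col1, ... FROM #tempInserted;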
Returning data from triggers (i.e., to a client) is a bad idea; the ability to do this at all is deprecated and will eventually be removed. I know you don't need to do this in your solution, just giving you a heads up.
Hi, I'd like to know whether there is any way to create a rollback copy of a table in SQL Server. In case I run a wrong INSERT or UPDATE statement, I'd like to recover my data as it was before those statements.
SELECT *
INTO myBackupTableName
FROM Yourtable
Creates a backup of the table.
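To put the data back from that copy if something goes wrong, one rough option (assuming no IDENTITY column or foreign keys get in the way; otherwise you will need IDENTITY_INSERT and explicit column lists) is:

-- Caution: this wipes the current contents and reloads everything from the backup copy
DELETE FROM Yourtable;

INSERT INTO Yourtable
SELECT * FROM myBackupTableName;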
Assuming that we are discussing a production environment and workload: The more I think about this question/requirement the more strongly I believe rollback is the best answer.
Summarizing suggestions already made:
SELECT ... INTO a backup table will create a copy of the table, but if you revert to it you will potentially lose data from other users or batches.
Using the OUTPUT ... INTO clause on an INSERT or UPDATE statement will get you the changed rows but not the original values.
Another answer would be to create an audit table and use a trigger to populate it. Your audit table would need to include enough details regarding the batch to identify it for rollback. This could end up being quite a rabbit hole. Once you have the trigger on the base table and the audit table you will then need to create the code to use this audit table to revert your batch. The trigger could become a performance issue and if the database has enough changes from enough other users then you still would not be able to revert your batch without possibly losing other users' work.
You can wrap your update AND your validation code inside the same proc, and if the validation fails, only your changes are rolled back.
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/begin-transaction-transact-sql
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/rollback-transaction-transact-sql
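A rough sketch of that pattern (the table, column, and @OrderID value are made-up placeholders):

DECLARE @OrderID int = 42;  -- hypothetical row to modify

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Orders
       SET Quantity = Quantity - 1
     WHERE OrderID = @OrderID;

    -- validation: undo only this batch's changes if something looks wrong
    IF EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderID = @OrderID AND Quantity < 0)
        THROW 50000, 'Validation failed - quantity went negative.', 1;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH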
I'm designing a new DB schema for a SQL Server 2012 database.
Each table should get two extra columns, modified and created, which should be set automatically as soon as a row is inserted or updated.
I don't know the best way to get there.
I'm assuming that triggers are the best way to handle it.
I was trying to find examples with triggers, but the tutorials I found insert data into another table, etc.
I assumed it's a quite common scenario, but I couldn't find the answer yet.
The created column is simple - just a DATETIME2(3) column with a default constraint that gets set when a new row is inserted:
Created DATETIME2(3)
CONSTRAINT DF_YourTable_Created DEFAULT (SYSDATETIME())
So when you insert a row into YourTable and don't specify a value for Created, it will be set to the current date & time.
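For illustration, a minimal table using that default (the table and other columns are placeholders):

CREATE TABLE dbo.YourTable (
    ID      int IDENTITY(1,1) PRIMARY KEY,
    Name    nvarchar(100) NOT NULL,
    Created DATETIME2(3)
        CONSTRAINT DF_YourTable_Created DEFAULT (SYSDATETIME())
);

INSERT INTO dbo.YourTable (Name) VALUES (N'example');  -- Created is filled in automatically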
The modified column is a bit more work, since you'll need to write a trigger for the AFTER UPDATE case to set it - you cannot declaratively tell SQL Server to do this for you....
Modified DATETIME2(3)
and then
CREATE TRIGGER updateModified
ON dbo.YourTable
AFTER UPDATE
AS
UPDATE dbo.YourTable
SET modified = SYSDATETIME()
FROM Inserted i
WHERE dbo.YourTable.PrimaryKey = i.PrimaryKey
You need to join the Inserted pseudo table, which contains all rows that were updated, to your base table on that table's primary key.
And you'll have to create this AFTER UPDATE trigger for each table that you want to have a modified column in.
Generally, you can have the following columns:
LastModifiedBy
LastModifiedOn
CreatedBy
CreatedOn
where LastModifiedBy and CreatedBy are references to a users table (UserID) and the LastModifiedOn and CreatedOn columns are date and time columns.
You have the following options:
Solution without triggers - I have read somewhere that "the best way to write triggers is not to write any", and you should know that they generally hurt performance. So, if you can avoid them, it is better to do so, even though using triggers may look like the easiest thing to do in some cases.
So, just edit all your INSERT and UPDATE statements to include the current UserID and the current date and time. If no user ID can be determined (anonymous user), you can use 0 instead and let the default value of the columns be NULL when no user ID is specified. When you see NULL values being inserted, you should find the "guilty" statements and edit them.
Solution with triggers - you can create an AFTER INSERT, UPDATE trigger and populate the user columns there. It's easy to get the current date and time in the context of the trigger (use GETUTCDATE(), for example). The issue here is that triggers do not accept parameters, so if you are not inserting the user ID value, you cannot pass it to the trigger. How do you find the current user?
You can use SET CONTEXT_INFO and CONTEXT_INFO. Before each of your INSERT and UPDATE statements you use SET CONTEXT_INFO to add the current user ID to the session context, and in the trigger you use the CONTEXT_INFO function to extract it.
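A minimal sketch of that round trip (encoding the ID in the first 4 bytes is just one convention; user ID 42 is made up):

-- Caller, before the INSERT/UPDATE: stash the current user ID in the session context
DECLARE @ctx varbinary(128) = CAST(42 AS varbinary(4));  -- 42 = hypothetical user ID
SET CONTEXT_INFO @ctx;

-- Inside the trigger: read the first 4 bytes back as an int
DECLARE @userId int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);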
So, when using triggers you still need to edit all your INSERT and UPDATE statements - that's why I prefer not to use them.
Anyway, if you only need the date and time columns and not the created/modified by columns, using triggers is more robust and easier, as you will not have to edit any other statements now or in the future.
With SQL Server 2016 we can now use the SESSION_CONTEXT function to read session details. The details are set using sp_set_session_context (as read-only or read-write). Things are a little more user-friendly:
EXEC sp_set_session_context 'user_id', 4;
SELECT SESSION_CONTEXT(N'user_id');
A nice example.
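For example, an update trigger could then read that value back - a sketch only; the table, key column, and audit columns are hypothetical:

CREATE TRIGGER trg_YourTable_SetModified
ON dbo.YourTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE t
       SET LastModifiedOn = GETUTCDATE(),
           LastModifiedBy = CONVERT(int, SESSION_CONTEXT(N'user_id'))
    FROM dbo.YourTable t
    INNER JOIN inserted i ON i.ID = t.ID;
END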
Attention: the above works fine, but not in all cases.
I lost a lot of time and found this helpful:
create TRIGGER yourtable_update_insert
ON yourtable
AFTER UPDATE
as
begin
set nocount on;
update t set modified=getdate(), modifiedby = suser_sname()
from yourtable t
inner join inserted i on t.uniqueid=i.uniqueid
end
go
set nocount on; is needed, otherwise you get this error in SSMS:
Microsoft SQL Server Management Studio
No row was updated.
The data in row 5 was not committed.
Error Source: Microsoft.SqlServer.Management.DataTools.
Error Message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows(2 rows).
Correct the errors and retry or press ESC to cancel the change(s).
CREATE TRIGGER [dbo].[updateModified]
ON [dbo].[Transaction_details]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE t
SET ModifedDate = GETDATE()   -- or SYSDATETIME()
FROM dbo.Transaction_details t
JOIN inserted i ON t.TransactionID = i.TransactionID;
END
One important thing to consider is that you should always have the inserted / updated time for all of your tables and rows be from the same time source. There is a danger - if you do not use triggers - that different applications making direct updates to your tables will be on machines that have different times on their clocks, or that there will not be consistent use of local vs. UTC in the application layer.
Consider a case where the system making the insert or update query that directly sets the updated / modified time value has a clock that is 5 minutes behind (unlikely, but worth considering) or is using local time versus UTC. If another system is polling using an interval of 1 minute, it might miss the update.
For a number of reasons, I never expose my tables directly to applications. To handle this situation, I create a view on the table explicitly listing the fields to be accessed (including the updated / modified time field). I then use an INSTEAD OF UPDATE, INSERT trigger on the view and explicitly set the updatedAt time using the database server's clock. This way I can guarantee that the timebase for all records in the database is identical.
This has a few benefits:
It only makes one insert to the base table, and you don't have to worry about cascading triggers being called
It allows me to control at the field level what information I expose to the business layer or to other consumers of my data
It allows me to secure the view independently from the base table
It works great on SQL Azure.
Take a look at this example of the trigger on the view:
ALTER TRIGGER [MR3W].[tgUpdateBuilding] ON [MR3W].[vwMrWebBuilding]
INSTEAD OF UPDATE, INSERT AS
BEGIN
SET NOCOUNT ON
IF EXISTS(SELECT * FROM DELETED)
BEGIN
UPDATE [dbo].[Building]
SET
 [BuildingName] = i.BuildingName
,[isActive] = i.isActive
,[updatedAt] = getdate()
FROM dbo.Building b
inner join inserted i on i.BuildingId = b.BuildingId
END
ELSE
BEGIN
INSERT INTO [dbo].[Building]
(
[BuildingName]
,[isActive]
,[updatedAt]
)
SELECT
[BuildingName]
,[isActive]
,getdate()
FROM INSERTED
END
END
I hope this helps, and I would welcome comments if there are reasons this is not the best solution.
This solution might not work for all use cases, but wherever possible it's a very clean way.
Create a stored procedure for inserting/updating rows in the table and only use this SP for modifying the table. In the stored procedure you can always set the created and updated columns as required, e.g. setting updatedTime = GETUTCDATE().
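A minimal sketch of that approach, with a made-up table and columns:

CREATE PROCEDURE dbo.usp_SaveWidget
    @WidgetID int = NULL,          -- NULL means insert, otherwise update
    @Name     nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;

    IF @WidgetID IS NULL
        INSERT INTO dbo.Widget (Name, CreatedOn, ModifiedOn)
        VALUES (@Name, GETUTCDATE(), GETUTCDATE());
    ELSE
        UPDATE dbo.Widget
           SET Name       = @Name,
               ModifiedOn = GETUTCDATE()
         WHERE WidgetID   = @WidgetID;
END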
I have 2 triggers (Insert and Update)
CREATE TRIGGER my_InsertTrigger
AFTER INSERT ON `table`
FOR EACH ROW
BEGIN
-- Update a previous record if found
END //
CREATE TRIGGER my_UpdateTrigger
AFTER UPDATE ON `table`
FOR EACH ROW
BEGIN
.....
END //
According to my understanding, triggers cannot be fired programmatically/manually; they only fire when an insert/update/delete happens on the table.
So my question is: in the case mentioned above, will the insert trigger (my_InsertTrigger) invoke the update trigger (my_UpdateTrigger)?
Thanks in advance.
When I tested this with SQL Server trigger syntax (just inserting one row), a trigger that causes an update on a table does kick off the update trigger on that table (if it exists).
I have seen this on audit tables and two things can, and will, happen:
1) Performance will suffer.
2) If the audit triggers start kicking each other off (by having cross triggers), you can run out of nesting room (triggers can nest only 32 levels deep).
Consider this trigger:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
END
I've got a table, someTable, and I'm trying to prevent people from inserting bad records. For the purpose of this question, a bad record has a field "someField" that is all numeric.
Of course, the right way to do this is NOT with a trigger, but I don't control the source code... just the SQL database. So I can't really prevent the insertion of the bad row, but I can delete it right away, which is good enough for my needs.
The trigger works, with one problem... when it fires, it never seems to delete the just-inserted bad record... it deletes any OLD bad records, but it doesn't delete the just-inserted bad record. So there's often one bad record floating around that isn't deleted until somebody else comes along and does another INSERT.
Is this a problem in my understanding of triggers? Are newly-inserted rows not yet committed while the trigger is running?
Triggers cannot modify the changed data (Inserted or Deleted), otherwise you could get infinite recursion as the changes invoked the trigger again. One option would be for the trigger to roll back the transaction.
Edit: The reason for this is that the SQL standard says inserted and deleted rows cannot be modified by the trigger. The underlying reason is that the modifications could cause infinite recursion. In the general case, this evaluation could involve multiple triggers in a mutually recursive cascade. Having a system intelligently decide whether to allow such updates is computationally intractable, essentially a variation on the halting problem.
The accepted solution to this is not to permit the trigger to alter the changing data, although it can roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo after insert as
begin
delete inserted
where isnumeric (SomeField) = 1
end
go
Msg 286, Level 16, State 1, Procedure FooInsert, Line 5
The logical tables INSERTED and DELETED cannot be updated.
Something like this will roll back the transaction.
create table Foo (
FooID int
,SomeField varchar (10)
)
go
create trigger FooInsert
on Foo for insert as
if exists (
select 1
from inserted
where isnumeric (SomeField) = 1) begin
rollback transaction
end
go
insert Foo values (1, '1')
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
You can reverse the logic. Instead of deleting an invalid row after it has been inserted, write an INSTEAD OF trigger to insert only if you verify the row is valid.
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
DECLARE @isnum TINYINT;
SELECT @isnum = ISNUMERIC(somefield) FROM inserted;
IF (@isnum = 1)
INSERT INTO sometable SELECT * FROM inserted;
ELSE
RAISERROR('somefield must be numeric', 16, 1)
WITH SETERROR;
END
If your application doesn't want to handle errors (as Joel says is the case in his app), then don't RAISERROR. Just make the trigger silently not do an insert that isn't valid.
I ran this on SQL Server Express 2005 and it works. Note that INSTEAD OF triggers do not cause recursion if you insert into the same table for which the trigger is defined.
Here's my modified version of Bill's code:
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
INSERT INTO sometable SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 1;
INSERT INTO sometableRejects SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 0;
END
This lets the insert always succeed, and any bogus records get thrown into your sometableRejects where you can handle them later. It's important to make your rejects table use nvarchar fields for everything - not ints, tinyints, etc - because if they're getting rejected, it's because the data isn't what you expected it to be.
This also solves the multiple-record insert problem, which will cause Bill's trigger to fail. If you insert ten records simultaneously (like if you do a select-insert-into) and just one of them is bogus, Bill's trigger would have flagged all of them as bad. This handles any number of good and bad records.
I used this trick on a data warehousing project where the inserting application had no idea whether the business logic was any good, and we did the business logic in triggers instead. Truly nasty for performance, but if you can't let the insert fail, it does work.
I think you can use a CHECK constraint - that is exactly what they were invented for.
ALTER TABLE someTable
ADD CONSTRAINT someField_check CHECK (ISNUMERIC(someField) = 1) ;
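A quick demo with made-up values (assuming someField is the only required column): once the constraint is in place, a violating row is rejected at insert time rather than cleaned up afterwards:

INSERT INTO someTable (someField) VALUES ('123');  -- satisfies the check as written (ISNUMERIC = 1)
INSERT INTO someTable (someField) VALUES ('abc');  -- fails with a CHECK constraint violation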
My previous answer (also right, but maybe a bit of an overkill):
I think the right way is to use INSTEAD OF trigger to prevent the wrong data from being inserted (rather than deleting it post-factum)
UPDATE: DELETE from a trigger works on both MSSql 7 and MSSql 2008.
I'm no relational guru, nor a SQL standards wonk. However - contrary to the accepted answer - MSSQL deals just fine with both recursive and nested trigger evaluation. I don't know about other RDBMSs.
The relevant options are 'recursive triggers' and 'nested triggers'. Nested triggers are limited to 32 levels and are enabled by default (the 'nested triggers' server option defaults to 1). Recursive triggers are off by default, and there's no talk of a limit - but frankly, I've never turned them on, so I don't know what happens with the inevitable stack overflow. I suspect MSSQL would just kill your spid (or there is a recursion limit).
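For reference, a sketch of where those two switches live (the database name is a placeholder):

-- Recursive triggers: a per-database option, OFF by default
ALTER DATABASE YourDatabase SET RECURSIVE_TRIGGERS ON;

-- Nested triggers: a server-wide option via sp_configure, 1 (enabled) by default
EXEC sp_configure 'nested triggers', 1;
RECONFIGURE;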
Of course, that just shows that the accepted answer has the wrong reason, not that it's incorrect. However, prior to INSTEAD OF triggers, I recall writing ON INSERT triggers that would merrily UPDATE the just inserted rows. This all worked fine, and as expected.
A quick test of DELETEing the just inserted row also works:
CREATE TABLE Test ( Id int IDENTITY(1,1), Column1 varchar(10) )
GO
CREATE TRIGGER trTest ON Test
FOR INSERT
AS
SET NOCOUNT ON
DELETE FROM Test WHERE Column1 = 'ABCDEF'
GO
INSERT INTO Test (Column1) VALUES ('ABCDEF')
--SCOPE_IDENTITY() should be the same, but doesn't exist in SQL 7
PRINT @@IDENTITY --Will print 1. Run it again, and it'll print 2, 3, etc.
GO
SELECT * FROM Test --No rows
GO
You have something else going on here.
From the CREATE TRIGGER documentation:
deleted and inserted are logical (conceptual) tables. They are structurally similar to the table on which the trigger is defined, that is, the table on which the user action is attempted, and hold the old values or new values of the rows that may be changed by the user action. For example, to retrieve all values in the deleted table, use: SELECT * FROM deleted
So that at least gives you a way of seeing the new data.
I can't see anything in the docs which specifies that you won't see the inserted data when querying the normal table though...
I found this reference:
create trigger myTrigger
on SomeTable
for insert
as
if (select count(*)
    from inserted
    where IsNumeric(SomeField) = 1) <> 0
/* Cancel the insert and print a message.*/
begin
    rollback transaction
    print 'You can''t do that!'
end
/* Otherwise, allow it. */
else
    print 'Added successfully.'
I haven't tested it, but logically it looks like it should do what you're after... rather than deleting the inserted data, it prevents the insertion completely, thus not requiring you to undo the insert. It should perform better and should therefore ultimately handle a higher load with more ease.
Edit: Of course, there is the potential that if the insert happened inside an otherwise valid transaction, the whole transaction could be rolled back, so you would need to take that scenario into account and decide whether the insertion of an invalid data row should constitute a completely invalid transaction...
Is it possible the INSERT is valid, but that a separate UPDATE is done afterwards that is invalid but wouldn't fire the trigger?
The techniques outlined above describe your options pretty well. But what are the users seeing? I can't imagine how a basic conflict like this between you and whoever is responsible for the software can't end up in confusion and antagonism with the users.
I'd do everything I could to find some other way out of the impasse - because other people could easily see any change you make as escalating the problem.
EDIT:
I'll score my first "undelete" and admit to posting the above when this question first appeared. I of course chickened out when I saw that it was from JOEL SPOLSKY. But it looks like it landed somewhere near. Don't need votes, but I'll put it on the record.
IME, triggers are so seldom the right answer for anything other than fine-grained integrity constraints outside the realm of business rules.
MS SQL Server has settings to prevent recursive trigger firing. These are configured via the 'nested triggers' option of the sp_configure stored procedure and the RECURSIVE_TRIGGERS database option, where you can turn nested or recursive triggers on or off.
In this case, if you turn off recursive triggers, it would be possible to link to the record from the inserted table via the primary key and make changes to the record.
In the specific case in the question, it is not really a problem, because the result is to delete the record, which won't refire this particular trigger, but in general that could be a valid approach. We implemented optimistic concurrency this way.
The code for your trigger that could be used in this way would be:
ALTER TRIGGER myTrigger
ON someTable
AFTER INSERT
AS BEGIN
DELETE someTable
FROM someTable
INNER JOIN inserted ON inserted.primarykey = someTable.primarykey
WHERE ISNUMERIC(inserted.someField) = 1
END
Your "trigger" is doing something that a "trigger" is not suppose to be doing. You can simple have your Sql Server Agent run
DELETE FROM someTable
WHERE ISNUMERIC(someField) = 1
every second or so. While you're at it, how about writing a nice little SP to stop the programming folk from inserting errors into your table? One good thing about SPs is that the parameters are type-safe.
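A minimal sketch of that kind of gatekeeper procedure (names mirror the question; it assumes someField is the only column you need to supply, and the error text is made up):

CREATE PROCEDURE dbo.usp_InsertSomeTable
    @someField varchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    -- reject the "bad" all-numeric values up front
    IF ISNUMERIC(@someField) = 1
    BEGIN
        RAISERROR('someField must not be all numeric.', 16, 1);
        RETURN;
    END

    INSERT INTO someTable (someField) VALUES (@someField);
END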
I stumbled across this question looking for details on the sequence of events during an insert statement & trigger. I ended up coding some brief tests to confirm how SQL 2016 (EXPRESS) behaves - and thought it would be appropriate to share as it might help others searching for similar information.
Based on my test, it is possible to select data from the "inserted" table and use that to update the inserted data itself. And, of interest to me, the inserted data is not visible to other queries until the trigger completes, at which point the final result is visible (at least as best as I could tell). I didn't test this for recursive triggers, etc. (I would expect a nested trigger would have full visibility of the inserted data in the table, but that's just a guess).
For example - assuming we have the table "table" with an integer field "field" and primary key field "pk" and the following code in our insert trigger:
declare @value int, @pk int;

select @value = field, @pk = pk from inserted;
update [table] set field = @value + 1 where pk = @pk;

waitfor delay '00:00:15';
If we insert a row with the value 1 for "field", the row will end up with the value 2. Furthermore - if I open another window in SSMS and try:
select * from [table] where pk = @pk
where @pk is the primary key I originally inserted, the query returns nothing until the 15 seconds expire and then shows the updated value (field = 2).
I was interested in what data is visible to other queries while the trigger is executing (apparently no new data). I tested with an added delete as well:
declare @value int, @pk int;

select @value = field, @pk = pk from inserted;
update [table] set field = @value + 1 where pk = @pk;
delete from [table] where pk = @pk;
waitfor delay '00:00:15'
Again, the insert took 15sec to execute. A query executing in a different session showed no new data - during or after execution of the insert + trigger (although I would expect any identity would increment even if no data appears to be inserted).