Is it possible to access the new values for inserted records in a transaction from a trigger on a different table in the same transaction?
When you cause a trigger to fire within a transaction, the trigger runs inside that same transaction, so it can see all the rows previously written in the transaction.
For example:
create table t1(id int)
create table t2(id int)
go
create trigger tt2 on t2 after insert
as
begin
select * from t1;
end
go
begin transaction
insert into t1(id) values (1)
insert into t2(id) values (1)
rollback
This outputs:
(1 row affected)
id
-----------
1
(1 row affected)
(1 row affected)
So a trigger on t2 can see all the records in t1, including rows written earlier in the current transaction. But there is no inherent way to tell which rows in t1 were affected by the current transaction.
And you could easily cause deadlocks doing this.
A few possible solutions would be:
1. Create a column (TransactionID uniqueidentifier) in both tables, generate a new GUID just after the transaction has started, and then insert that ID into both tables. Then, in the trigger, you get the TransactionID from Inserted and read the second table WHERE TransactionID = ... (see the sketch after this list). Consider indexes if those tables are large.
2. Use the OUTPUT clause of the first insert to capture the new IDs into a supplementary table. Use that table inside the trigger to know the new IDs from the current transaction, truncate it just before committing the transaction, and don't use it for any other processes / code.
PS. For (1), you can use INT or BIGINT for TransactionID instead, but then you need some mechanism to generate a new unique ID per transaction.
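For instance, a minimal sketch of option 1, in the spirit of the t1/t2 example above; the names ta, tb, and tta and the TransactionID column are illustrative assumptions, not from the question:
create table ta (id int, TransactionID uniqueidentifier)
create table tb (id int, TransactionID uniqueidentifier)
go
create trigger tta on tb after insert
as
begin
-- return only the ta rows tagged with this transaction's GUID
select a.id
from ta a
join inserted i on a.TransactionID = i.TransactionID
end
go
begin transaction
declare @tid uniqueidentifier = newid() -- new GUID, generated just after the transaction starts
insert into ta (id, TransactionID) values (1, @tid)
insert into tb (id, TransactionID) values (1, @tid) -- the trigger now sees only this transaction's ta rows
rollback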
I am loading data from a JSON file into a table "main.jsontable". The trigger's job is to insert the data for all the different countries from "main.jsontable" into the "main.country" table. My problem is that the trigger needs to handle inserting multiple rows. My current code is:
create or alter trigger main.afterParsing
on main.jsontable
after insert
as
begin
declare @country nvarchar(50);
insert into main.country(countryName)
values(@country)
end;
but I get this error (obviously because triggers can only handle inserting 1 row at a time):
Cannot insert the value NULL into column 'countryName', table 'assignment2.main.country'; column does not allow nulls. INSERT fails.
Does anyone know how I can use the trigger to insert multiple rows?
Thanks
You need to use the Inserted pseudo table, and you need to understand that it can contain multiple rows (if your INSERT statement inserted more than one row at once), so you must treat it accordingly, using a proper set-based approach.
Try something like this:
create or alter trigger main.afterParsing
on main.jsontable
after insert
as
begin
insert into main.country(countryName)
select countryName
from Inserted
-- if the data being inserted could have NULL in the
-- countryName - filter those entries out
where countryName is not null
end;
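For example, a multi-row insert like this (assuming main.jsontable has a countryName column, as the trigger implies) fires the trigger once and copies only the non-NULL names:
insert into main.jsontable (countryName)
values (N'France'), (N'Japan'), (NULL)
-- the trigger fires once for the whole statement; France and Japan are
-- copied into main.country, and the NULL row is filtered out by the WHERE clause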
If I execute a procedure that drops a table and then recreates it using SELECT INTO, and the procedure raises an exception after dropping the table, does the table drop take effect or not?
Unless you wrap the statements in a transaction, the table will be dropped, since each statement runs as its own implicit (autocommit) transaction.
Below are some tests.
create table t1
(
id int not null primary key
)
create table t11 (id int) -- the table the test will drop
go
drop table t11
insert into t1
select 1 union all select 1 -- fails with a primary key violation
Table t11 will be dropped, even though the insert raises an exception.
One more example:
drop table orderstest
print 'dropped table'
waitfor delay '00:00:05'
select * into orderstest
from Orders
Now, after 2 seconds, kill the session; you can still see that orderstest has been dropped.
I checked with some statements other than SELECT INTO, and I don't see a reason why SELECT INTO would behave differently; this applies even if you wrap the statements in a stored procedure.
If you want to roll back everything, use a transaction, or better, use SET XACT_ABORT ON. For example:
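A sketch of that pattern, reusing the orderstest example from above:
set xact_abort on -- any run-time error now rolls back the whole transaction
begin transaction
drop table orderstest
select * into orderstest from Orders -- if this fails, the DROP above is rolled back too
commit transaction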
Yes, the dropped table will be gone. I have had this issue when scripting a new primary key. Depending on the table, the generated script saves all the data to a table variable in memory, drops the table, creates a new one with the new PK, then reloads the data. If the data violates the new PK, the statement fails and the table variable is discarded, leaving me with a new table and no data.
My practice is to create the new table with a slightly different name, load the data, swap the table names, and then, once all the data is confirmed loaded, drop the original table, along these lines:
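A sketch of that swap, assuming the live table is dbo.Orders and the replacement was loaded into dbo.Orders_new (both names are hypothetical):
-- wrapping both renames in a transaction keeps the swap atomic
begin transaction
exec sp_rename 'dbo.Orders', 'Orders_old'
exec sp_rename 'dbo.Orders_new', 'Orders'
commit transaction
-- once the data in the renamed table is confirmed, drop the original
drop table dbo.Orders_old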
Let's say we have three tables in an MSSQL database: A, B, and C.
I have a trigger on A after update. This trigger takes some values from B and inserts them into C.
The transaction performs operations in the following order:
Begin transaction.
Insert a record into table B.
Update the record previously inserted into table B in step 2.
Update table A (trigger is triggered).
The trigger inserts into table C some values from the row inserted in step 2.
End of transaction.
The problem is that the values inserted into table C, based on the row in table B, are only the values as they were right after the B record was inserted; they do not contain the changes made by the update operation.
Why?
Trigger code:
CREATE TRIGGER MYTRIGGER ON [dbo].[A] AFTER INSERT, UPDATE AS
BEGIN
INSERT INTO C (SOME_TEXT_VALUE) SELECT SOME_TEXT_VALUE FROM B;
END
GO
I have a quick question for you... If I run the following commands:
BEGIN TRANSACTION
<commands...>
DELETE FROM <table>
COMMIT TRANSACTION
And while the above transaction is running, an insert is carried out on the table. Will the delete:
remove the data added after the transaction started
only remove data that existed at the start of the transaction or that was added as part of the transaction
Hope someone can help.
You need to dig deeper into the topic of locks and transaction isolation levels. Look at this example, which may be more common than the one in the previous answer. The INSERT is not blocked here because the DELETE only locks the set of keys it needs for the delete operation.
And anyway, before the DELETE operation starts, if the other queries in this transaction are not holding locks on this table, there is no reason for SQL Server to prevent INSERT operations from other transactions.
CREATE TABLE t (Id int PRIMARY KEY)
GO
INSERT INTO t VALUES(1)
GO
BEGIN TRAN
DELETE FROM t
-- separate window
INSERT INTO t VALUES(2)
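-- the INSERT of key 2 succeeds immediately: under the default READ COMMITTED
-- level, the DELETE holds exclusive key locks only on the existing key 1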
I assume that you are running your code in one SPID, the insert runs on another SPID, and the isolation level is the default one in SQL Server: READ COMMITTED.
In short, the answer is NO, as the INSERT will wait for the DELETE to end. Tested like this:
1) Setup:
-- drop table dbo.Test
CREATE TABLE dbo.Test
(
Id INT NOT NULL,
Value NVARCHAR(4000)
)
GO
INSERT INTO Test (Id, Value)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)), text
from sys.messages
GO
2) In query window 1
BEGIN TRANSACTION
DELETE FROM dbo.Test where ID > 100000
3) In query window 2
INSERT INTO Test (Id, Value)
SELECT 100000000 + ROW_NUMBER() OVER (ORDER BY (SELECT 1)), text
from sys.messages
sp_who2 'active' shows that the second query (SPID) is blocked by the first query, so it is waiting to acquire a lock.
4) In query window 1
COMMIT -- query 1 will finish
5) The second query will finish.
So, INSERT has to wait until DELETE finishes.
Yes. Why not? If the other queries in your transaction are not holding locks on your_table, SQL Server will only start locking your_table (with update locks) after the DELETE operation starts. So, before that, all other processes can successfully add new rows to the table.
The DELETE in your case will delete all committed data that existed in your table before the DELETE operation started. Uncommitted data written earlier in this same transaction will be deleted as well. For example:
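A small sketch illustrating both points; your_table stands in for the real table:
create table your_table (Id int)
insert into your_table values (1) -- committed before the transaction starts
go
begin transaction
insert into your_table values (2) -- uncommitted data of this transaction
delete from your_table -- removes both the committed row and row 2
select count(*) from your_table -- returns 0 inside the transaction
commit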
I am facing a problem with a trigger.
I created a trigger for a table like this:
ALTER TRIGGER [dbo].[manageAttributes]
ON [dbo].[tr_levels]
AFTER insert
AS
BEGIN
set nocount on
declare @levelid int
select @levelid=levelid from inserted
insert into testtable(testid) values(@levelid)
-- Insert statements for trigger here
END
But when I insert rows into table tr_levels like this:
insert into tr_levels (column1,column2) values(1,2)
the trigger fired perfectly.
But when I tried to insert rows in bulk, like this:
insert into tr_levels (column1,column2) values(1,2),(3,4),(5,6)..
the trigger doesn't fire for all the rows; it fires only once, for the first row. Is that a bug in SQL, or is there a solution to make the trigger fire for all rows inserted in a bulk insert query?
No, it does fire for all rows - once - but you're ignoring the other rows by acting as if inserted only contains one. select @scalar_variable = column from inserted will arbitrarily retrieve the value from one of the rows and ignore the others. Write a set-based insert using inserted in a FROM clause.
You need to treat inserted as a table that can contain 0, 1 or multiple rows. So, something like:
ALTER TRIGGER [dbo].[manageAttributes]
ON [dbo].[tr_levels]
AFTER insert
AS
BEGIN
set nocount on
insert into testtable(testid)
select levelid from inserted
END
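With that set-based version, a multi-row insert like the one from the question fires the trigger once and copies a levelid for every row (assuming levelid is populated, e.g. as an identity or default):
insert into tr_levels (column1,column2) values(1,2),(3,4),(5,6)
-- the trigger fires once; inserted contains all three rows,
-- so three levelid values land in testtable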
You have the same issue many people have: you think the trigger is fired per row. It is not - it is fired per operation. And inserted is a table; you take one (random) value and ignore the rest. Fix that and it will work.
Triggers fire once per statement on the base table. So if you insert 5 rows in one statement, the trigger fires once and inserted has all 5 rows.