I insert data into my table using BULK INSERT to speed things up. Now I want to add a trigger to the table, but the trigger runs once per BULK INSERT statement, whereas I need to know which rows were inserted by the latest bulk insert.
So, is there a query that tells me which rows a BULK INSERT inserted?
If you have an IDENTITY ID column, you could make a note of the highest ID before the BULK INSERT; every row with an ID greater than that value was inserted by the bulk insert.
You cannot have per-row triggers in SQL Server, nor can you really do anything else (like an OUTPUT clause) to capture the inserted rows during a BULK INSERT. You just have to look at what's in the database before and after the BULK INSERT.
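A minimal sketch of that watermark approach; the table name, ID column, and file path are illustrative:

-- Record the high-water mark before loading (assumes ID is the IDENTITY column)
DECLARE @LastID INT;
SELECT @LastID = ISNULL(MAX(ID), 0) FROM dbo.MyTable;

BULK INSERT dbo.MyTable
FROM 'C:\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Every row above the recorded watermark came from this bulk insert
SELECT *
FROM dbo.MyTable
WHERE ID > @LastID;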
I am trying to use BULK INSERT to add rows to an existing table from a .csv file. For now I have a small file for testing purposes with the following formatting:
UserID,Username,Firstname,Middlename,Lastname,City,Email,JobTitle,Company,Manager,StartDate,EndDate
273,abc,dd,dd,dd,dd,dd,dd,dd,dd,dd,dd
274,dfg,dd,dd,dd,dd,dd,dd,dd,dd,dd,dd
275,hij,dd,dd,dd,dd,dd,dd,dd,dd,dd,dd
And this is what my query currently looks like:
BULK INSERT DB_NAME.dbo.Users
FROM 'C:\data.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
When I execute this query it returns 1 row affected. I checked the table and noticed that all the data from the file was inserted as a single row.
What could be causing this? What I am trying to accomplish is to insert each line of the file as its own row in the table.
The first column is actually an IDENTITY column, so in the file I just specified an integer, even though it will be overwritten by the auto-generated ID; I am not sure yet how to tell the query to start inserting from the second field.
There are more columns in the actual table than specified in the file, as not everything needs to be filled. Could that be causing it?
The problem is that you are loading data into the first column. To skip a column, create a view over your table containing just the columns you want to load, and BULK INSERT into the view. See the example below (from MSDN: https://msdn.microsoft.com/en-us/library/ms179250.aspx):
USE AdventureWorks2012;
GO
CREATE VIEW v_myTestSkipCol AS
SELECT Col1, Col3
FROM myTestSkipCol;
GO
BULK INSERT v_myTestSkipCol
FROM 'C:\myTestSkipCol2.dat'
WITH (FORMATFILE = 'C:\myTestSkipCol2.xml');
GO
What I would recommend instead is to create a staging table that matches the file exactly, load the data into that, and then use an INSERT statement to copy it into your permanent table. This approach is much more robust and flexible; for example, after loading the staging table you can perform data validation or cleanup before loading the permanent table.
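A minimal sketch of the staging-table approach, using the column list from your file (the column types are assumptions):

-- Staging table mirrors the file layout exactly, including the throwaway UserID column
CREATE TABLE dbo.Users_Staging
(
    UserID     INT,
    Username   NVARCHAR(100),
    Firstname  NVARCHAR(100),
    Middlename NVARCHAR(100),
    Lastname   NVARCHAR(100),
    City       NVARCHAR(100),
    Email      NVARCHAR(255),
    JobTitle   NVARCHAR(100),
    Company    NVARCHAR(100),
    Manager    NVARCHAR(100),
    StartDate  NVARCHAR(50),
    EndDate    NVARCHAR(50)
);

BULK INSERT dbo.Users_Staging
FROM 'C:\data.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Copy into the permanent table, letting its IDENTITY column generate the IDs
INSERT INTO dbo.Users (Username, Firstname, Middlename, Lastname, City, Email,
                       JobTitle, Company, Manager, StartDate, EndDate)
SELECT Username, Firstname, Middlename, Lastname, City, Email,
       JobTitle, Company, Manager, StartDate, EndDate
FROM dbo.Users_Staging;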
I have the same application on different hosts bulk inserting into the same table. Each bulk insert fires a trigger. The structure of the table is as follows:
Hostname   Quantity
--------   --------
BOX_1      10
BOX_1      15
BOX_1      20
BOX_1      11
If I have the following code as part of the trigger:
DECLARE @hostname VARCHAR(20)
SELECT @hostname = Hostname
FROM INSERTED
Each bulk insert contains only one hostname, since the application only captures data from the box it's running on. But if two machines bulk insert into the same table simultaneously, could the INSERTED table be a combination of bulk inserts from different machines?
Or will the triggers execute sequentially, meaning the INSERTED table will always contain data from only one application at a time?
I need to know whether my code setting the @hostname variable has any possibility of seeing more than one hostname.
The INSERTED (and DELETED) table will only ever contain rows from the statement that caused the trigger to fire.
See here: https://msdn.microsoft.com/en-us/library/ms191300(v=sql.110).aspx
The inserted table stores copies of the affected rows during INSERT and UPDATE statements. During an insert or update transaction, new rows are added to both the inserted table and the trigger table. The rows in the inserted table are copies of the new rows in the trigger table.
The rows in these tables are effectively scoped to the insert/update/delete statement that caused the trigger to fire initially.
See also here for some more info: SQL Server Trigger Isolation / Scope Documentation
But bear in mind in your trigger design that some other insert or update operation (a manual bulk insert, or data maintenance) might cause your trigger to fire, and the assumption that all rows share one hostname may no longer hold. That will probably happen years down the line, after you've moved on or forgotten about this trigger!
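A defensive, set-based sketch that does not assume a single hostname per statement; the trigger name and the dbo.HostData / dbo.HostTotals tables are illustrative, with Hostname and Quantity taken from the question:

CREATE TRIGGER trg_HostData_Insert ON dbo.HostData
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Aggregate per hostname, so the trigger stays correct even if one
    -- statement ever inserts rows for more than one host
    INSERT INTO dbo.HostTotals (Hostname, TotalQuantity)
    SELECT Hostname, SUM(Quantity)
    FROM inserted
    GROUP BY Hostname;
END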
I am facing a problem with a trigger.
I created a trigger for a table like this:
ALTER TRIGGER [dbo].[manageAttributes]
ON [dbo].[tr_levels]
AFTER insert
AS
BEGIN
set nocount on
declare @levelid int
select @levelid = levelid from inserted
insert into testtable(testid) values(@levelid)
-- Insert statements for trigger here
END
But when I insert a row into the table tr_levels like this:
insert into tr_levels (column1, column2) values (1, 2)
the trigger fires perfectly.
But when I try to insert into the table in bulk, like this:
insert into tr_levels (column1, column2) values (1,2),(3,4),(5,6)..
the trigger doesn't fire for all the rows. It fires only once, for the first row. Is this a bug in SQL Server, or is there a way to make the trigger handle all the rows inserted by a bulk insert query?
No, it does fire for all rows, once, but you're ignoring the other rows by acting as if inserted contains only one. select @scalar_variable = column from inserted will arbitrarily retrieve a value from one of the rows and ignore the others. Write a set-based insert using inserted in a FROM clause.
You need to treat inserted as a table that can contain 0, 1 or multiple rows. So, something like:
ALTER TRIGGER [dbo].[manageAttributes]
ON [dbo].[tr_levels]
AFTER insert
AS
BEGIN
set nocount on
insert into testtable(testid)
select levelid from inserted
END
You have the same issue many people have: you think the trigger fires per row. It does not; it fires per operation. And inserted is a table; you take one (arbitrary) value and ignore the rest. Fix that and it will work.
Triggers fire once per statement in the base table. So if you insert 5 rows in one statement, the trigger fires once and inserted has the 5 rows.
I am not sure if this question is an obvious one. I need to delete a load of data, and DELETE is expensive. I want something like truncating the table, but not fully, so that the space is released and the high-water mark is moved.
Is there any feature that would let me truncate a table based on a condition, for selected rows only?
That depends on how your table is organised.
1) If your (large) table is partitioned on a condition similar to the one you want to delete by (e.g. you want to delete the previous month's data and your table is partitioned by month), you could truncate only that partition instead of the entire table (see the sketch after the example below).
2) The other option, provided you have some downtime, is to copy the data you want to keep into a holding table, truncate the original table, and then load the data back:
-- keep the rows you need in a holding table
create table my_table_keep as
select * from my_table
where <condition>;

truncate table my_table;

insert into my_table
select * from my_table_keep;
commit;

-- since the amount of data might change considerably,
-- you might want to collect statistics again
exec dbms_stats.gather_table_stats
    (ownname => 'SCHEMA_NAME',
     tabname => 'MY_TABLE');
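For option 1, a one-statement sketch, assuming the table is range-partitioned by month; the partition name p_prev_month is illustrative:

-- Truncate a single partition, keeping global indexes usable
alter table my_table truncate partition p_prev_month update indexes;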
I am working on SQL Server. I want to insert records into a particular table, say (a), which contains two columns: id (an IDENTITY field) and name (nvarchar(max)). After a record is inserted into table (a), a trigger should fire and insert the identity field value into table b. I am using an AFTER INSERT trigger for this purpose, but I do not understand how to get the identity field value inside the trigger so it can be inserted into table b.
This is what I am using:
create trigger tri_inserts on (a)
after insert
as
begin
insert into b (id, name) values (?,?)
end
create trigger tri_inserts on a
after insert
as
set nocount on
insert into b (id, name)
SELECT id, name FROM INSERTED
GO
@gbn has the best solution, but I want you to understand why the SELECT clause is better than a VALUES clause in a trigger. Triggers fire once for each batch of records inserted, updated, or deleted, so the inserted and deleted pseudotables may hold one record or a million, and the trigger has to handle either case. If you use a VALUES clause, the action only happens for one record out of the million, which causes data integrity issues. If you instead loop through the records in a cursor and use the VALUES clause, performance will be horrible once you get a large number of records. When I came to this job, we had one such trigger: a 40,000-record insert took 45 minutes. Removing the cursor and using a set-based solution built on the SELECT clause (although a much more complex one than the example) reduced the same insert to around 40 seconds.
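For contrast, a sketch of the row-by-row anti-pattern described above, using the same illustrative tables a and b from this thread; it is shown only as what to avoid, since the set-based trigger earlier does the same work in one statement:

create trigger tri_inserts_slow on a
after insert
as
set nocount on
declare @id int, @name nvarchar(max)
-- Anti-pattern: walk inserted one row at a time with a cursor
declare c cursor local fast_forward for
    select id, name from inserted
open c
fetch next from c into @id, @name
while @@FETCH_STATUS = 0
begin
    -- one singleton insert per row: correct, but painfully slow on large batches
    insert into b (id, name) values (@id, @name)
    fetch next from c into @id, @name
end
close c
deallocate c
GO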