SQL trigger with IDENTITY_INSERT - sql-server

I have two tables: Table1 holds all companies, and Table2 holds only the companies whose names start with A.
Table1 company (companyId int, companyName varchar(50), companySize int)
Table2 companyStartWithA (companyId int, companyName varchar(50), companySize int)
What I want to do is create a trigger so that when I insert, update, or delete something in Table1, the same change is automatically applied to Table2.
My code:
CREATE TRIGGER A_TRG_InsertSyncEmp
ON company
AFTER INSERT
AS
BEGIN
INSERT INTO companyStartWithA
SELECT *
FROM INSERTED
WHERE inserted.companyName LIKE 'A%'
END
And I get an error:
An explicit value for the identity column in table 'companyStartWithA' can only be specified when a column list is used and IDENTITY_INSERT is ON.
What can I do?
Thanks

The problem is that you're not explicitly specifying the columns in the INSERT statement, and that you're using SELECT * to fill the data. Both are big no-nos: you should always explicitly list the columns you want to insert into, and the columns you want to select. Doing so fixes the error, because the identity column in companyStartWithA is no longer being assigned a value:
CREATE TRIGGER A_TRG_InsertSyncEmp
ON company
AFTER INSERT
AS
BEGIN
INSERT INTO companyStartWithA (companyName, companySize)
SELECT companyName, companySize
FROM INSERTED
WHERE inserted.companyName LIKE 'A%'
END
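Since you also asked about UPDATE and DELETE: if you do keep the duplicate table, each of those operations needs its own trigger as well. Here is a rough sketch for the delete case only, assuming rows can be matched on companyName (companyStartWithA generates its own identity values, so companyId won't line up between the two tables):
CREATE TRIGGER A_TRG_DeleteSyncEmp
ON company
AFTER DELETE
AS
BEGIN
    -- Remove the mirrored rows for any deleted companies that start with A
    DELETE a
    FROM companyStartWithA AS a
    INNER JOIN DELETED AS d
        ON a.companyName = d.companyName
    WHERE d.companyName LIKE 'A%'
END
An AFTER UPDATE trigger would need similar delete-and-reinsert bookkeeping, which is exactly the kind of work the view below avoids.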
But as Sean Lange quite correctly commented, this should really be just a view rather than a separate table:
CREATE VIEW dbo.CompanyStartsWithA
AS
SELECT companyId, companyName, companySize
FROM dbo.Company
WHERE companyName LIKE 'A%'
Then you don't need any messy triggers at all: just insert into dbo.Company, and every company whose name starts with an A is automatically visible in this view.
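For illustration (assuming companyId is generated automatically on dbo.Company, e.g. as an identity column):
-- Insert into the base table only; no trigger required.
INSERT INTO dbo.Company (companyName, companySize)
VALUES ('Acme Ltd', 120);

-- The new row is immediately visible through the view.
SELECT companyId, companyName, companySize
FROM dbo.CompanyStartsWithA;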

Related

How to compare varbinary data type in where clause

I have a linked server that pulls user details from a specific Organisation Unit via a scheduled SQL Server Agent job.
The table created to hold the user details has a column for the ObjectGUID value, and its type is defined as varbinary(50) (I am not sure why).
The process checks whether there is a new user by comparing the ObjectGUID against the saved Users table, and if a new value is found it inserts the new user into the table.
However, I have noticed that the comparison is not actually working properly.
SELECT
tbl.objectGUID AS UserGUID
FROM [dbo].[ActiveDirectoryUsers] tbl
WHERE tbl.objectGUID NOT IN (SELECT UserGUID FROM dbo.Users)
When I create a new user, the new user appears in the ActiveDirectoryUsers view,
but when the WHERE clause is added to compare the results with the Users table, the result is always empty. It looks like I need to cast or convert the varbinary to varchar and then do the comparison. I tried casting the varbinary to varchar and to uniqueidentifier, but it still does not work.
Any idea how would I do the comparisons?
Update
CREATE VIEW [dbo].[ActiveDirectoryUsers] AS
SELECT "SAMAccountName" AS sAMAccountName, "mail" AS Email,
"objectGUID" AS objectGUID
FROM OpenQuery(ADSI, 'SELECT SAMAccountName, mail, objectGUID
FROM ''ldapconnectionstring.com''')
An example of objectGUID in the Users table
0x1DBCC071C69C8242B4895D42750969B1
You do not need to cast the varbinary to anything in particular to use it in a WHERE clause.
Your problem is that you are using NOT IN against a subquery that returns NULL values.
Execute my code first as it is (it will return 1 row), then uncomment the NULL value in the insert and execute it again.
This time you'll get 0 rows:
declare @t1 table (guid varbinary(50))
insert into @t1
values(0x1DBCC071C69C8242B4895D42750969B1)--, (null);
declare @t2 table (guid varbinary(50))
insert into @t2
values(0x1DBCC071C69C8242B4895D42750969B1), (0x1DBCC071C69C8242B4895D42750969B2);
select *
from @t2 t2
where t2.guid not in (select guid from @t1);
To fix your problem, try to use NOT EXISTS instead of NOT IN like this:
select *
from @t2 t2
where not exists (select *
                  from @t1 t1
                  where t1.guid = t2.guid);
In your case the code should be like this:
SELECT tbl.objectGUID AS UserGUID
FROM [dbo].[ActiveDirectoryUsers] tbl
WHERE not exists (SELECT *
FROM dbo.Users u
where u.UserGUID = tbl.objectGUID );
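If you prefer to keep NOT IN, filtering the NULLs out of the subquery also avoids the problem (NOT EXISTS is still the safer habit), for example:
SELECT tbl.objectGUID AS UserGUID
FROM [dbo].[ActiveDirectoryUsers] tbl
WHERE tbl.objectGUID NOT IN (SELECT u.UserGUID
                             FROM dbo.Users u
                             WHERE u.UserGUID IS NOT NULL);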

Triggers after insert

I have a table that has the following columns, and I cannot modify the schema (i.e. I can't modify the table or add an identity field).
What I want is to create a trigger to update one column to treat it as an identity field.
accountno   firstname   lastname   keyField
jku45555    John        Doe        123
Now what I want is: when I insert the next record, grab the keyField value of the previous record and update the newly inserted record to 124 (keep in mind this is a varchar field).
I need the best possible way of doing this, and like I said, modifying the table is not an option. Thanks!
You want to do something like this... For a table named "Foo" with two columns, FirstName and KeyFieldId (both varchar), this trigger will do what you want:
-------------------------------------------------------------------------
-- These lines will create a test table and test data
--DEBUG: CREATE TABLE Foo (FirstName varchar(20), KeyFieldId varchar(10))
--DEBUG: INSERT INTO Foo VALUES ('MyName', '145')
--
CREATE TRIGGER test_Trigger ON Foo
INSTEAD OF INSERT
AS
BEGIN
DECLARE @maxKeyFieldId int;
SELECT @maxKeyFieldId = MAX(CAST(KeyFieldId AS int)) FROM Foo;
WITH RowsToInsert AS (
SELECT *, ROW_NUMBER() OVER (ORDER BY (CAST(KeyFieldId AS int))) AS RowNum
FROM inserted
) INSERT INTO Foo (FirstName, KeyFieldId)
SELECT FirstName, @maxKeyFieldId + RowNum
FROM RowsToInsert;
END
Things to note here:
Create an INSTEAD OF INSERT trigger
Find the "max" value of the INTEGER value of your KeyFieldID
Create a CTE that selects everything from the 'inserted' collection
Add a column to the CTE for a Row Number
Do the actual INSERT by adding row number to the max KeyFieldID
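A quick way to see the trigger above working (values are just illustrative), using the DEBUG table and seed row ('MyName', '145'):
-- Insert two rows without a key; the INSTEAD OF trigger assigns 146 and 147.
INSERT INTO Foo (FirstName) VALUES ('Alice'), ('Bob');

SELECT FirstName, KeyFieldId FROM Foo;
-- Expected: MyName 145, plus Alice and Bob with 146 and 147 (in some order).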

How to get ID from one table and associate with record in another table in SQL Server

I've tried searching for the answer to this one to no avail. There is no good logic behind the way this was set up. The guy does not know what he's doing, but it's what I have to work with (long story).
I'm using SQL Server 2008 R2. I need to take records from one table and transfer the data to 4 separate tables, all with a one-to-one relationship (I know - not smart). I need to get the value of the identity field in the first table the data is inserted into, then populate the other 3 tables with the same ID and disperse the data accordingly. For example:
OldTable: Field1, Field2, Field3, Field4
NewTable1: Identity field, Field1
NewTable2: ID, Field2
NewTable3: ID, Field3
NewTable4: ID, Field4
I'd like to handle this in a stored procedure. I'd like to do a loop, but I read that loops in SQL are inadvisable.
Loop moving through each record in OldTable... (??)
INSERT INTO NewTable1
(Field1)
Select Field1 from OldTable
INSERT INTO NewTable2
(ID, Field2)
Select SCOPE_IDENTITY?, Field2 From OldTable Where OldTable.ID = ??
etc for other 2 tables
Loop to next record in OldTable
I am not sure how to use SCOPE_IDENTITY, but I have a feeling this will be involved in how I accomplish this.
Also, I'm probably going to need to setup a trigger for whenever a new record is created in NewTable1. I know, it's insanity, but I can't do anything about it, just have to work around it.
So, I need to know
1: the best way to initially populate the tables
2: how to make triggers for new records
The solution to 1 might involve 2.
Please help!
You can use the output clause of the merge statement to get a mapping between the existing primary key in OldTable and the newly generated identity ID in NewTable1. (A plain INSERT ... OUTPUT cannot reference columns from the source table, which is why MERGE is used here; the on 1 = 0 condition simply forces every source row into the "not matched" branch.)
-- Temp table to hold the mapping between OldID and ID
create table #ID
(
OldID int primary key,
ID int
);
-- Add rows to NewTable1 and capture the ID's in #ID
merge NewTable1 as T
using OldTable as S
on 1 = 0
when not matched by target then
insert(Field1) values(S.Field1)
output S.ID, inserted.ID into #ID(OldID, ID);
-- Add rows to NewTable2 using #ID to get the correct value for each row
insert into NewTable2(ID, Field2)
select I.ID, O.Field2
from #ID as I
inner join OldTable as O
on I.OldID = O.ID
insert into NewTable3(ID, Field3)
select I.ID, O.Field3
from #ID as I
inner join OldTable as O
on I.OldID = O.ID
insert into NewTable4(ID, Field4)
select I.ID, O.Field4
from #ID as I
inner join OldTable as O
on I.OldID = O.ID
drop table #ID;
See also Using merge..output to get mapping between source.id and target.id
How about using the OUTPUT clause of the INSERT statement? This assumes Field1 is a unique key on OldTable, since the newly inserted rows have to be matched back to the old rows by that value...
Declare @IDinserted table(ID int, Field1 varchar(255));
Insert Into NewTable1(Field1)
Output inserted.ID, inserted.Field1 into @IDinserted
Select Field1 from OldTable;
Insert Into NewTable2(ID, Field2)
Select i.ID, o.Field2
from @IDinserted i Inner Join OldTable o
on i.Field1=o.Field1;

Subquery returned more than 1 value. This is not permitted

ALTER TRIGGER t1
ON dbo.Customers
FOR INSERT
AS
BEGIN TRANSACTION
/* variables */
DECLARE
@customerid bigint
SELECT @customerid = id FROM inserted
SET IDENTITY_INSERT dbo.new_table ON
DECLARE
@maxid bigint
SELECT @maxid = MAX(ID) FROM new_table
INSERT INTO new_table (ID, ParentID, Foo, Bar, Buzz)
SELECT ID+@maxid, ParentID+@maxid, Foo, Bar, Buzz FROM initial_table
SET IDENTITY_INSERT dbo.new_table OFF
/* execute */
COMMIT TRANSACTION
GO
fails with:
SQL Server Subquery returned more than 1 value. This is not permitted
when the subquery follows =, !=, <, <= , >, >= or when the subquery is
used as an expression
How to fix it?
What I am trying to do is
insert id and parentid, each increased by @maxid,
from initial_table
into new_table.
Thanks
new_table
id (bigint)
parentid (bigint - linked to id)
foo | bar | buzz (others are nvarchar, not really important)
initial table
id (bigint)
parentid (bigint - linked to id)
foo | bar | buzz (others are nvarchar, not really important)
I suspect you are battling a few different errors.
1.
You are inserting values that violate a unique constraint in new_table.
Avoid the existence error by joining against the table you are inserting into. Adjust the join condition to match your table's constraint:
insert into new_table (ID, ParentID, Foo, Bar, Buzz)
select i.ID+@maxid, i.ParentID+@maxid, i.Foo, i.Bar, i.Buzz
from initial_table i
left join new_table n on
i.ID+@maxid = n.ID or
i.ParentID+@maxid = n.ParentID
where n.ID is null --make sure it's not already there
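An equivalent sketch of the same existence check, written with NOT EXISTS instead of the left join (arguably a little easier to read):
insert into new_table (ID, ParentID, Foo, Bar, Buzz)
select i.ID+@maxid, i.ParentID+@maxid, i.Foo, i.Bar, i.Buzz
from initial_table i
where not exists (select 1
                  from new_table n
                  where n.ID = i.ID+@maxid
                     or n.ParentID = i.ParentID+@maxid);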
2.
Somewhere, a subquery has returned multiple rows where you expect one.
The subquery error is either in the code that inserts into dbo.Customer (triggering t1), or perhaps in a trigger defined on new_table. I do not see anything in the posted code that would throw the subquery exception.
Triggers (a.k.a. landmines) that insert into tables which themselves have triggers defined on them are a recipe for pain. If possible, try to refactor some of this logic out of triggers and into code you can logically follow.
First, you have to assume there will be more than one record in inserted or deleted. You should never assign a value from the inserted or deleted table to a scalar variable in a SQL Server trigger. It will cause a problem as soon as an insert includes more than one record, and sooner or later it will.
Next, you should never consider setting IDENTITY_INSERT ON in a trigger. What were you thinking? If you have an identity field then use it; don't try to manually create a value.
Next, the subquery issue apparently comes from the other trigger, where you are also assuming only one record at a time will be processed. I suspect you will need to examine every trigger in your database and fix this basic problem.
Now when you run this part of the code:
INSERT INTO new_table (ID, ParentID, Foo, Bar, Buzz)
SELECT ID+@maxid, ParentID+@maxid, Foo, Bar, Buzz FROM initial_table
you are trying to insert all the records in initial_table, not just the ones in inserted. And since your trigger on the other table is incorrectly written, you are hitting an error which is actually hiding the error you would get when you try to insert 2000 records with the same PK into the new table. Worse, if you don't have a PK, it will happily insert all of them every time you insert one record.
You have a trigger containing the statement:
SELECT @customerid = id FROM inserted
The inserted table contains a row for each row that was inserted (or updated, for UPDATE triggers). A statement was executed that inserted more than one row, the trigger fired, and your single-row assumption was exposed.
Recode the trigger to operate on rowsets, not a single row.
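As a rough sketch of what operating on rowsets means here (the target table and columns are placeholders, since the question doesn't show what t1 actually does with @customerid):
ALTER TRIGGER t1
ON dbo.Customers
FOR INSERT
AS
BEGIN
    -- Work with the whole inserted rowset instead of pulling one id into a variable.
    INSERT INTO dbo.CustomersLog (CustomerId)   -- hypothetical target table
    SELECT i.id
    FROM inserted AS i;
END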
When using a subquery in any kind of SELECT, try to tune your query so that the subquery returns only one value, not multiple.
If multiple values are needed, restructure the query so that the table becomes part of the main query.
For example:
Select col1, (select col2 from table2 where table2.col3=table1.col4) from table1;
If the subquery returns multiple rows, the query fails; rewrite it as:
Select col1, col2 from table1,table2 where table2.col3=table1.col4;
I hope you get the point.
You shouldn't SELECT it, you should SET it.
SET @maxid = (SELECT MAX(ID) FROM another_table)

SELECT INTO a table variable in T-SQL

Got a complex SELECT query, from which I would like to insert all rows into a table variable, but T-SQL doesn't allow it.
Along the same lines, you cannot use a table variable with SELECT INTO or INSERT EXEC queries.
http://odetocode.com/Articles/365.aspx
Short example:
declare @userData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
)
SELECT name, location
INTO @userData
FROM myTable
INNER JOIN otherTable ON ...
WHERE age > 30
The data in the table variable would be later used to insert/update it back into different tables (mostly copy of the same data with minor updates). The goal of this would be to simply make the script a bit more readable and more easily customisable than doing the SELECT INTO directly into the right tables.
Performance is not an issue, as the rowcount is fairly small and it's only manually run when needed.
...or just tell me if I'm doing it all wrong.
Try something like this:
DECLARE @userData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
);
INSERT INTO @userData (name, oldlocation)
SELECT name, location FROM myTable
INNER JOIN otherTable ON ...
WHERE age > 30;
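The rows captured in @userData can then be reused later in the same batch, for example (targetTable and its matching columns are just placeholders here):
UPDATE t
SET t.location = u.oldlocation
FROM targetTable AS t
INNER JOIN @userData AS u
    ON u.name = t.name;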
The purpose of SELECT INTO is (per the docs, my emphasis)
To create a new table from values in another table
But you already have a target table! So what you want is
The INSERT statement adds one or more new rows to a table. You can specify the data values in the following ways:
...
By using a SELECT subquery to specify the data values for one or more rows, such as:
INSERT INTO MyTable
(PriKey, Description)
SELECT ForeignKey, Description
FROM SomeView
And in this syntax, it's allowed for MyTable to be a table variable.
You can also use common table expressions to store temporary data sets. They are more elegant and ad hoc friendly, but note that a CTE only exists for the single statement that follows it, so it won't help if you need to reuse the rows across several statements:
WITH userData (name, oldlocation)
AS
(
SELECT name, location
FROM myTable INNER JOIN
otherTable ON ...
WHERE age>30
)
SELECT *
FROM userData -- you can also reuse the recordset in subqueries and joins
You could try using temporary tables, if you are not doing it from an application (it may be OK to run this manually):
SELECT name, location INTO #userData FROM myTable
INNER JOIN otherTable ON ...
WHERE age>30
You skip the effort of declaring the table that way, which helps for ad hoc queries. This creates a local temp table, which is only visible to the session that created it; that may be a problem if you are running the query from an app.
If you require it to run from an app, use a table variable declared this way:
DECLARE @userData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
);
INSERT INTO @userData
SELECT name, location FROM myTable
INNER JOIN otherTable ON ...
WHERE age > 30;
Edit: as many of you mentioned, I have updated the visibility wording from connection to session. Creating temp tables is not an option for web applications, as sessions can be reused; stick to table variables in those cases.
Try to use INSERT instead of SELECT INTO:
DECLARE @UserData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
)
INSERT @UserData (name, oldlocation)
SELECT name, location
FROM myTable
First, create a temp table:
Step 1:
create table #tblOm_Temp (
Name varchar(100),
Age Int ,
RollNumber bigint
)
Step 2: Insert some values into the temp table.
insert into #tblOm_Temp values('Om Pandey',102,1347)
Step 3: Declare a table variable to hold the temp table data.
declare @tblOm_Variable table(
Name Varchar(100),
Age int,
RollNumber bigint
)
Step 4: Select the values from the temp table and insert them into the table variable.
insert into @tblOm_Variable select * from #tblOm_Temp
Finally, the values are inserted from the temp table into the table variable.
Step 5: Check the inserted values in the table variable.
select * from @tblOm_Variable
OK, now with enough effort I am able to insert into a #temp table using the code below:
INSERT #TempWithheldTable
SELECT
    a.SuspendedReason,
    a.SuspendedNotes,
    a.SuspendedBy,
    a.ReasonCode
FROM OPENROWSET(
    BULK 'C:\DataBases\WithHeld.csv',
    FORMATFILE = N'C:\DataBases\Format.txt',
    ERRORFILE = N'C:\Temp\MovieLensRatings.txt'
) AS a;
The main thing here is selecting the columns to insert.
One reason to use SELECT INTO is that it allows you to use IDENTITY:
SELECT IDENTITY(INT,1,1) AS Id, name
INTO #MyTable
FROM (SELECT name FROM AnotherTable) AS t
This would not work with a table variable, which is too bad...
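A possible workaround, not from the original answer: generate the sequential Id with ROW_NUMBER() and use a plain INSERT into the table variable instead (column sizes here are assumed):
DECLARE @MyTable TABLE (Id int NOT NULL, name varchar(50) NOT NULL);

-- Number the rows ourselves instead of relying on IDENTITY(...) with SELECT INTO.
INSERT INTO @MyTable (Id, name)
SELECT ROW_NUMBER() OVER (ORDER BY name) AS Id, name
FROM AnotherTable;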
