I have two tables, Table_1 and Table_2.
Table_1 has columns PK (autoincrementing int) and Value (nchar(10)).
Table_2 has FK (int), Key (nchar(10)) and Value (nchar(10)).
That is to say, Table_1 is a table of data and Table_2 is a key-value store where one row in Table_1 may correspond to 0, 1 or more keys and values in Table_2.
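For reference, here is the schema as described (a sketch; an IDENTITY column is assumed for the autoincrementing PK):
CREATE TABLE Table_1 (
    PK int IDENTITY(1,1) PRIMARY KEY,
    [Value] nchar(10)
)
CREATE TABLE Table_2 (
    FK int,            -- references Table_1.PK
    [Key] nchar(10),
    [Value] nchar(10)
)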
I'd like to write code that programmatically builds up a query that inserts one row into Table_1 and a variable number of rows into Table_2 using the primary key from Table_1.
I can do it easily with one row:
INSERT INTO Table_1 ([Value])
OUTPUT INSERTED.PK, 'Test1Key', 'Test1Val' INTO Table_2 (FK, [Key], [Value])
VALUES ('Test')
But SQL Server doesn't seem to like multiple OUTPUT clauses. This fails:
INSERT INTO Table_1 ([Value])
OUTPUT INSERTED.PK, 'Test1Key', 'Test1Val' INTO Table_2 (FK, [Key], [Value])
OUTPUT INSERTED.PK, 'Test2Key', 'Test2Val' INTO Table_2 (FK, [Key], [Value])
OUTPUT INSERTED.PK, 'Test3Key', 'Test3Val' INTO Table_2 (FK, [Key], [Value])
VALUES ('Test')
Is there any way to make this work?
I had to put the code in an answer; it looks ugly in a comment...
CREATE TABLE #Tmp(PK int, value nchar(10))
INSERT INTO Table_1 ([Value])
OUTPUT INSERTED.PK, INSERTED.[Value] INTO #Tmp
SELECT 'Test'
INSERT INTO Table_2 (FK, [Key], Value)
SELECT PK, 'Test1Key', 'Test1Val' FROM #Tmp
UNION ALL SELECT PK, 'Test2Key', 'Test2Val' FROM #Tmp
UNION ALL SELECT PK, 'Test3Key', 'Test3Val' FROM #Tmp
Btw, SQL Server won't let you do it all in one query without some ugly hack...
Try putting the inserted PK value into a variable, then inserting into Table_2 with three INSERT..VALUES statements or one INSERT..SELECT statement.
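A minimal sketch of that approach (table and column names from the question; it relies on SCOPE_IDENTITY(), which only returns the last identity value, so it works here because only one row goes into Table_1):
DECLARE @pk int
INSERT INTO Table_1 ([Value])
VALUES ('Test')
SET @pk = SCOPE_IDENTITY()  -- the identity value just generated in this scope
INSERT INTO Table_2 (FK, [Key], [Value])
VALUES (@pk, 'Test1Key', 'Test1Val'),
       (@pk, 'Test2Key', 'Test2Val'),
       (@pk, 'Test3Key', 'Test3Val')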
Can anyone help me with this?
Sample Query:
DECLARE @id_scope TABLE (ID_TBA_PK int)
INSERT INTO TABLE_A
SELECT ID_TBA_PK, ID_LET_FK, NAME, ADDRESS FROM TABLE_A
WHERE ID_LET_FK = @ID_LET_FK
SET @id_scope = SCOPE_IDENTITY() -- but I must get multiple identity values,
-- because the insert above adds multiple rows and ID_TBA_PK is an auto-increment column.
Then insert into the other table:
INSERT INTO TABLE_B
SELECT ID_TBA_FK = @id_scope, NAME, ADDRESS FROM TABLE_B
WHERE ID_TBA_FK = @ID_TBA_FK
-- (multiple inserts into TABLE_B)
Add the OUTPUT clause to your INSERT. Something like this:
INSERT TABLE_A (NAME, ADDRESS, etc.)
OUTPUT Inserted.ID_TBA_PK INTO @id_scope
SELECT ID_LET_FK, NAME, ADDRESS FROM TABLE_A
WHERE ID_LET_FK = @ID_LET_FK
In this line, I am assuming ID_TBA_PK is your new identity column:
OUTPUT Inserted.ID_TBA_PK INTO @id_scope
So the general case is:
INSERT [table] (columns)
OUTPUT INSERTED.[column] into [#other table]
SELECT columns from .....
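Here is a small self-contained demo of that general case (hypothetical temp tables, just to show the shape):
CREATE TABLE #src (Name varchar(50))
CREATE TABLE #dest (Id int IDENTITY(1,1), Name varchar(50))
CREATE TABLE #captured (NewId int)
INSERT INTO #src (Name) VALUES ('a'), ('b'), ('c')
INSERT INTO #dest (Name)
OUTPUT INSERTED.Id INTO #captured (NewId)
SELECT Name FROM #src
SELECT NewId FROM #captured  -- one row per inserted row, i.e. every new identity value
DROP TABLE #captured
DROP TABLE #dest
DROP TABLE #src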
I have a situation in which I need to insert some values from a query into a table that has an identity PK. For some of the records, I need also to insert values in another table which has a 1-to-1 (partial) relationship:
CREATE TABLE A (
Id int identity primary key clustered,
Somevalue varchar(100),
SomeOtherValue int)
CREATE TABLE B (Id int primary key clustered,
SomeFlag bit)
DECLARE @inserted TABLE (NewId int, OldId int)
INSERT INTO A (Somevalue)
OUTPUT Inserted.Id INTO @inserted (NewId)
SELECT SomeValue
FROM A
WHERE <certain condition>
INSERT INTO B (Id, SomeFlag)
SELECT
i.NewId, B.SomeFlag
FROM @inserted i
JOIN A ON <some condition>
JOIN B ON A.Id = B.Id
The problem is that the query from A in the first INSERT/SELECT returns records that can only be differentiated by their Id, which I cannot insert. Unfortunately I cannot change the structure of the A table to store the "previous" Id, which would solve my problem.
Any idea that could lead to a solution?
With INSERT ... OUTPUT ... SELECT ... you can't output columns that are not in the target table. You can try MERGE instead:
MERGE INTO A AS tgt
USING (SELECT Id, SomeValue FROM A WHERE <your conditions>) AS src
    ON 0 = 1
WHEN NOT MATCHED THEN
    INSERT (SomeValue)
    VALUES (src.SomeValue)
OUTPUT inserted.Id, src.Id  -- this is your new Id / old Id mapping
INTO @inserted (NewId, OldId)
;
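Once @inserted holds the new/old Id pairs, the follow-up insert into B is a straightforward join. A sketch (reusing the question's @inserted table variable and column names):
INSERT INTO B (Id, SomeFlag)
SELECT i.NewId, b.SomeFlag
FROM @inserted i
JOIN B b ON b.Id = i.OldId  -- look up the original row's flag via the old Id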
SCOPE_IDENTITY() returns the last identity value generated by the current session and current scope. You could stick that into a #table and use that to insert into B
SELECT SCOPE_IDENTITY() AS newid INTO #c
Though your INSERT INTO B join conditions imply to me that the value in B is already known?
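For completeness, a sketch of that approach (note that SCOPE_IDENTITY() returns only the last identity value, so it really only fits a single-row insert; the flag value below is just a placeholder):
INSERT INTO A (Somevalue) VALUES ('example')
SELECT SCOPE_IDENTITY() AS newid INTO #c  -- one value only, from the last insert
INSERT INTO B (Id, SomeFlag)
SELECT newid, 1 FROM #c  -- placeholder flag
DROP TABLE #c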
I'm making a query that will delete all rows from table1 whose table1.id column matches a table2.id value.
The table1.id column is nvarchar(max) and holds XML in a format like this:
<customer><name>Paulo</name><gender>Male</gender><id>12345</id></customer>
EDIT:
The id is just one part of a huge XML value, so the ending tag may not match the starting tag.
I've tried using .nodes(), but it only applies to xml columns, and changing the column's datatype is not an option. So far this is my code using PATINDEX:
DELETE t1
FROM table1 t1
WHERE PATINDEX('%12345%',id) != 0
But what I need is to search for all the values from table2.id, which contains values like these:
12345
67890
10000
20000
30000
Any approach would be fine, such as sp_executesql and/or a while loop. Or is there a better approach than using PATINDEX? Thanks!
Select *
--Delete A
From Table1 A
Join Table2 B on CharIndex('id>'+SomeField+'<',ID)>0
I don't know the name of the field in Table2, so I've called it SomeField. I am also assuming it is a varchar; if not, use cast(SomeField as varchar(25)).
EDIT - This is what I tested. It should work
Declare #Table1 table (id varchar(max))
Insert Into #Table1 values
('<customer><name>Paulo</name><gender>Male</gender><id>12345</id></customer>'),
('<customer><name>Jane</name><gender>Female</gender><id>7895</id></customer>')
Declare #Table2 table (SomeField varchar(25))
Insert into #Table2 values
('12345'),
('67890'),
('10000'),
('20000'),
('30000')
Select *
--Delete A
From #Table1 A
Join #Table2 B on CharIndex('id>'+SomeField+'<',ID)>0
;with cteBase as (
Select *,XMLData=cast(id as xml) From Table1
)
Select *
From cteBase
Where XMLData.value('(customer/id)[1]','int') in (12345,67890,10000,20000,30000)
If you are satisfied with the results, change the final Select * to Delete
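If you would rather drive the filter from table2 than a hard-coded list, a sketch (the column name SomeField is an assumption, as above, and this assumes the id column casts cleanly to xml):
;with cteBase as (
    Select *, XMLData = cast(id as xml) From Table1
)
Delete c
From cteBase c
Join Table2 t2 on c.XMLData.value('(customer/id)[1]', 'varchar(25)') = t2.SomeField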
I have a table called table1 which has duplicate values. It looks like this:
new
pen
book
pen
like
book
book
pen
But I want to remove the duplicated rows from that table and insert the distinct values into another table called table2.
table2 should look like this:
new
pen
book
like
How can I do this in SQL Server?
Let's assume the field is named name:
INSERT INTO table2 (name)
SELECT name FROM table1 GROUP BY name
That query would get you all the unique names.
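Equivalently, a SELECT DISTINCT produces the same set:
INSERT INTO table2 (name)
SELECT DISTINCT name FROM table1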
You could even put them into a table variable if you wanted:
DECLARE #Table2 TABLE (name VARCHAR(50))
INSERT INTO #Table2 (name)
SELECT name FROM table1 GROUP BY name
or you could use a temp table:
CREATE TABLE #Table2 (name VARCHAR(50))
INSERT INTO #Table2 (name)
SELECT name FROM table1 GROUP BY name
You can do this easily with an INSERT that SELECTs from a CTE where you use ROW_NUMBER(), like:
DECLARE #YourTable table (YourColumn varchar(10))
DECLARE #YourTable2 table (YourColumn varchar(10))
INSERT INTO #YourTable VALUES ('new')
INSERT INTO #YourTable VALUES ('pen')
INSERT INTO #YourTable VALUES ('book')
INSERT INTO #YourTable VALUES ('pen')
INSERT INTO #YourTable VALUES ('like')
INSERT INTO #YourTable VALUES ('book')
INSERT INTO #YourTable VALUES ('book')
INSERT INTO #YourTable VALUES ('pen')
;WITH OrderedResults AS
(
SELECT
YourColumn, ROW_NUMBER() OVER (PARTITION BY YourColumn ORDER BY YourColumn) AS RowNumber
FROM #YourTable
)
INSERT INTO #YourTable2
(YourColumn)
SELECT YourColumn FROM OrderedResults
WHERE RowNumber=1
SELECT * FROM #YourTable2
OUTPUT:
YourColumn
----------
book
like
new
pen
(4 row(s) affected)
You can do this easily with a DELETE on a CTE where you use ROW_NUMBER(), like:
--this will just remove them from your original table
DECLARE #YourTable table (YourColumn varchar(10))
INSERT INTO #YourTable VALUES ('new')
INSERT INTO #YourTable VALUES ('pen')
INSERT INTO #YourTable VALUES ('book')
INSERT INTO #YourTable VALUES ('pen')
INSERT INTO #YourTable VALUES ('like')
INSERT INTO #YourTable VALUES ('book')
INSERT INTO #YourTable VALUES ('book')
INSERT INTO #YourTable VALUES ('pen')
;WITH OrderedResults AS
(
SELECT
YourColumn, ROW_NUMBER() OVER (PARTITION BY YourColumn ORDER BY YourColumn) AS RowNumber
FROM #YourTable
)
DELETE OrderedResults
WHERE RowNumber!=1
SELECT * FROM #YourTable
OUTPUT:
YourColumn
----------
new
pen
book
like
(4 row(s) affected)
I posted something on deleting duplicates a couple of weeks ago using DELETE TOP X. That only handles a single set of duplicates, obviously. However, in the comments I was given this little jewel by Joshua Patchak:
;WITH cte(rowNumber) AS
(SELECT ROW_NUMBER() OVER (PARTITION BY [List of Natural Key Fields]
ORDER BY [List of Order By Fields])
FROM dbo.TableName)
DELETE FROM cte WHERE rowNumber>1
This will get rid of all of the duplicates in the table.
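Filled in for this question's single-column table (assuming the column is named name, as the other answers assume):
;WITH cte (rowNumber) AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY name ORDER BY name)
    FROM dbo.table1
)
DELETE FROM cte WHERE rowNumber > 1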
Here is the original post if you want to read the discussion. Duplicate rows in a table.
I have two tables, with a foreign key constraint on TableB referencing TableA's KeyA column. I was doing manual inserts until now, as there were only a few rows to be added. Now I need to do a bulk insert, so my question is: if I insert multiple rows into TableA, how can I get all those identity values and insert them into TableB along with the other column values? Please see the script below.
INSERT INTO Tablea
([KeyA]
,[Value] )
SELECT 4 ,'StateA'
UNION ALL
SELECT 5 ,'StateB'
UNION ALL
SELECT 6 ,'StateC'
INSERT INTO Tableb
([KeyB]
,[fKeyA] -- get the value from the inserted row in TableA
,[Desc])
SELECT 1 ,4,'Value1'
UNION ALL
SELECT 2 ,5,'Value2'
UNION ALL
SELECT 3 ,6, 'Value3'
You can use the OUTPUT clause of INSERT to do this. Here is an example:
CREATE TABLE #temp (id [int] IDENTITY (1, 1) PRIMARY KEY CLUSTERED, Val int)
CREATE TABLE #new (id [int], val int)
INSERT INTO #temp (val) OUTPUT inserted.id, inserted.val INTO #new VALUES (5), (6), (7)
SELECT id, val FROM #new
DROP TABLE #new
DROP TABLE #temp
The result set returned includes the inserted IDENTITY values.
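Applied to the question's tables, a sketch might look like this (assuming KeyA is the IDENTITY column on Tablea, so it is left out of the column list, and that Value is distinct enough to map each new key back to its Tableb row; column types are guesses):
DECLARE @newA TABLE (KeyA int, Value varchar(50))
INSERT INTO Tablea ([Value])
OUTPUT INSERTED.KeyA, INSERTED.[Value] INTO @newA (KeyA, Value)
VALUES ('StateA'), ('StateB'), ('StateC')
INSERT INTO Tableb ([KeyB], [fKeyA], [Desc])
SELECT src.KeyB, a.KeyA, src.[Desc]
FROM (VALUES (1, 'StateA', 'Value1'),
             (2, 'StateB', 'Value2'),
             (3, 'StateC', 'Value3')) AS src (KeyB, Value, [Desc])
JOIN @newA a ON a.Value = src.Value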
SCOPE_IDENTITY() sometimes returns an incorrect value. See the use of OUTPUT in the workarounds section.