Insert multiple rows WITHOUT repeating the "INSERT INTO ..." part of the statement? - sql-server

I know I've done this before years ago, but I can't remember the syntax, and I can't find it anywhere due to pulling up tons of help docs and articles about "bulk imports".
Here's what I want to do, but the syntax is not exactly right... please, someone who has done this before, help me out :)
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy'),
(124, 'Jonny'),
(125, 'Sally')
I know that this is close to the right syntax. I might need the word "BULK" in there, or something, I can't remember. Any idea?
I need this for a SQL Server 2005 database. I've tried this code, to no avail:
DECLARE @blah TABLE
(
    ID INT NOT NULL PRIMARY KEY,
    Name VARCHAR(100) NOT NULL
)
INSERT INTO @blah (ID, Name)
VALUES (123, 'Timmy')
VALUES (124, 'Jonny')
VALUES (125, 'Sally')
SELECT * FROM @blah
I'm getting Incorrect syntax near the keyword 'VALUES'.

Your syntax almost works in SQL Server 2008 (but not in SQL Server 2005¹):
CREATE TABLE MyTable (id int, name char(10));
INSERT INTO MyTable (id, name) VALUES (1, 'Bob'), (2, 'Peter'), (3, 'Joe');
SELECT * FROM MyTable;
id | name
---+---------
1 | Bob
2 | Peter
3 | Joe
¹ When the question was answered, it was not made evident that it was referring to SQL Server 2005. I am leaving this answer here, since I believe it is still relevant.

For SQL Server 2005, you can build the rows with SELECT ... UNION ALL:
INSERT INTO dbo.MyTable (ID, Name)
SELECT 123, 'Timmy'
UNION ALL
SELECT 124, 'Jonny'
UNION ALL
SELECT 125, 'Sally'
For SQL Server 2008, you can do it in one VALUES clause, exactly as per the statement in your question (you just need to add a comma to separate each set of values).
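For example, using the same data as in the question:
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy'),
       (124, 'Jonny'),
       (125, 'Sally');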

If your data is already in your database you can do:
INSERT INTO MyTable(ID, Name)
SELECT ID, NAME FROM OtherTable
If you need to hard-code the data, then SQL Server 2008 and later versions let you do the following...
INSERT INTO MyTable (Name, ID)
VALUES ('First',1),
('Second',2),
('Third',3),
('Fourth',4),
('Fifth',5)

When using the INSERT INTO ... VALUES syntax, as in Daniel Vassallo's answer, there is one annoying limitation.
From MSDN:
The maximum number of rows that can be constructed by inserting rows directly in the VALUES list is 1000.
The easiest way to work around this limitation is to use a derived table:
INSERT INTO dbo.Mytable(ID, Name)
SELECT ID, Name
FROM (
VALUES (1, 'a'),
(2, 'b'),
--...
-- more than 1000 rows
)sub (ID, Name);
This will work from SQL Server 2008 onwards.

This will achieve what you're asking about:
INSERT INTO table1 (ID, Name)
VALUES (123, 'Timmy'),
(124, 'Jonny'),
(125, 'Sally');
For future developers, you can also insert from another table:
INSERT INTO table1 (ID, Name)
SELECT
ID,
Name
FROM table2
Or even from multiple tables:
INSERT INTO table1 (column2, column3)
SELECT
t2.column,
t3.column
FROM table2 t2
INNER JOIN table3 t3
ON t2.ID = t3.ID

You could do this (ugly but it works):
INSERT INTO dbo.MyTable (ID, Name)
select * from
(
select 123, 'Timmy'
union all
select 124, 'Jonny'
union all
select 125, 'Sally'
...
) x

You can use a union:
INSERT INTO dbo.MyTable (ID, Name)
SELECT ID, Name FROM (
SELECT 123, 'Timmy'
UNION ALL
SELECT 124, 'Jonny'
UNION ALL
SELECT 125, 'Sally'
) AS X (ID, Name)

This looks OK for SQL Server 2008. For SS2005 & earlier, you need to repeat the VALUES statement.
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy')
VALUES (124, 'Jonny')
VALUES (125, 'Sally')
EDIT: My bad. You have to repeat the INSERT INTO for each row in SS2005.
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy')
INSERT INTO dbo.MyTable (ID, Name)
VALUES (124, 'Jonny')
INSERT INTO dbo.MyTable (ID, Name)
VALUES (125, 'Sally')

It would be easier to use XML in SQL Server to insert multiple rows; otherwise it becomes very tedious.
View the full article with code explanations here: http://www.cyberminds.co.uk/blog/articles/how-to-insert-multiple-rows-in-sql-server.aspx
Copy the following code into SQL Server to view a sample.
declare @test nvarchar(max)
set @test = '<topic><dialog id="1" answerId="41">
<comment>comment 1</comment>
</dialog>
<dialog id="2" answerId="42" >
<comment>comment 2</comment>
</dialog>
<dialog id="3" answerId="43" >
<comment>comment 3</comment>
</dialog>
</topic>'

declare @testxml xml
set @testxml = cast(@test as xml)

declare @answerTemp Table(dialogid int, answerid int, comment varchar(1000))

insert @answerTemp
SELECT ParamValues.ID.value('@id', 'int'),
       ParamValues.ID.value('@answerId', 'int'),
       ParamValues.ID.value('(comment)[1]', 'VARCHAR(1000)')
FROM @testxml.nodes('topic/dialog') as ParamValues(ID)

USE YourDB
GO
INSERT INTO MyTable (FirstCol, SecondCol)
SELECT 'First' ,1
UNION ALL
SELECT 'Second' ,2
UNION ALL
SELECT 'Third' ,3
UNION ALL
SELECT 'Fourth' ,4
UNION ALL
SELECT 'Fifth' ,5
GO
Or you can use another way:
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES
('First',1),
('Second',2),
('Third',3),
('Fourth',4),
('Fifth',5)

I've been using the following:
INSERT INTO [TableName] (ID, Name)
values (NEWID(), NEWID())
GO 10
It will add ten rows with unique GUIDs for ID and Name.
Note: do not end the last line (GO 10) with ';', because it will throw the error: "A fatal scripting error occurred. Incorrect syntax was encountered while parsing GO."

According to INSERT (Transact-SQL) for SQL Server 2005, you can't omit INSERT INTO dbo.MyTable; you have to specify it every time or use another syntax/approach.

In PostgreSQL, you can do it as follows.
A generic example for a two-column table:
INSERT INTO <table_name_here>
(<column_1>, <column_2>)
VALUES
(<column_1_value>, <column_2_value>),
(<column_1_value>, <column_2_value>),
(<column_1_value>, <column_2_value>),
...
(<column_1_value>, <column_2_value>);
See a real-world example below.
A - Create the table:
CREATE TABLE Worker
(
id serial primary key,
code varchar(256) null,
message text null
);
B - Insert bulk values
INSERT INTO Worker
(code, message)
VALUES
('a1', 'this is the first message'),
('a2', 'this is the second message'),
('a3', 'this is the third message'),
('a4', 'this is the fourth message'),
('a5', 'this is the fifth message'),
('a6', 'this is the sixth message');

This works very fast and efficiently in SQL.
Suppose you have a table Sample with four columns a, b, c, d, where a, b, and d are int and c is varchar(50).
CREATE TABLE [dbo].[Sample](
[a] [int] NULL,
[b] [int] NULL,
[c] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[D] [int] NULL
)
You can insert multiple records into this table using the following query, without repeating the insert statement:
DECLARE @LIST VARCHAR(MAX)
SET @LIST = 'SELECT 1, 1, ''Charan Ghate'', 11
SELECT 2, 2, ''Mahesh More'', 12
SELECT 3, 3, ''Mahesh Nikam'', 13
SELECT 4, 4, ''Jay Kadam'', 14'
INSERT SAMPLE (a, b, c, d) EXEC(@LIST)
Also, with C# you can use SqlBulkCopy (bulkcopy = new SqlBulkCopy(con)).
For example, you can insert 10 rows at a time:
DataTable dt = new DataTable();
dt.Columns.Add("a");
dt.Columns.Add("b");
dt.Columns.Add("c");
dt.Columns.Add("d");

for (int i = 0; i < 10; i++)
{
    DataRow dr = dt.NewRow();
    dr["a"] = 1;
    dr["b"] = 2;
    dr["c"] = "Charan";
    dr["d"] = 4;
    dt.Rows.Add(dr);
}

SqlConnection con = new SqlConnection("Connection String");
using (SqlBulkCopy bulkcopy = new SqlBulkCopy(con))
{
    con.Open();
    bulkcopy.DestinationTableName = "Sample";
    bulkcopy.WriteToServer(dt);
    con.Close();
}

Others here have suggested a couple of multi-record syntaxes. Expounding upon that, I suggest you insert into a temp table first, and insert into your main table from there.
The reason for this is that loading the data from a query can take longer, and you may end up locking the table or pages longer than is necessary, which slows down other queries running against that table.
-- Make a temp table with the needed columns
select top 0 *
into #temp
from MyTable (nolock)
-- load data into it at your leisure (nobody else is waiting for this table or these pages)
insert #temp (ID, Name)
values (123, 'Timmy'),
(124, 'Jonny'),
(125, 'Sally')
-- Now that all the data is in SQL, copy it over to the real table. This runs much faster in most cases.
insert MyTable (ID, Name)
select ID, Name
from #temp
-- cleanup
drop table #temp
Also, your IDs should probably be identity(1,1) and you probably shouldn't be inserting them, in the vast majority of circumstances. Let SQL decide that stuff for you.
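For instance, a minimal sketch (assuming you are free to define the table this way; the names mirror the question and are only illustrative) that lets SQL Server assign the IDs:
-- Hypothetical table definition with an identity column; adjust names to your schema
CREATE TABLE dbo.MyTable
(
    ID int IDENTITY(1,1) PRIMARY KEY,
    Name varchar(100) NOT NULL
);

-- No ID supplied; SQL Server generates it
INSERT INTO dbo.MyTable (Name)
VALUES ('Timmy'), ('Jonny'), ('Sally');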

Oracle: Insert Multiple Rows
In a multitable insert, you insert computed rows derived from the rows returned from the evaluation of a subquery into one or more tables.
Unconditional INSERT ALL: to add multiple rows to a table at once, you use the following form of the INSERT statement:
INSERT ALL
INTO table_name (column_list) VALUES (value_list_1)
INTO table_name (column_list) VALUES (value_list_2)
INTO table_name (column_list) VALUES (value_list_3)
...
INTO table_name (column_list) VALUES (value_list_n)
SELECT 1 FROM DUAL; -- SubQuery
Specify ALL followed by multiple insert_into_clauses to perform an unconditional multitable insert. Oracle Database executes each insert_into_clause once for each row returned by the subquery.
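For example, a minimal sketch of an unconditional INSERT ALL; the suppliers table and its columns are made up for illustration only:
INSERT ALL
    INTO suppliers (supplier_id, supplier_name) VALUES (1000, 'IBM')
    INTO suppliers (supplier_id, supplier_name) VALUES (2000, 'Microsoft')
    INTO suppliers (supplier_id, supplier_name) VALUES (3000, 'Google')
SELECT 1 FROM dual;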
MySQL: Insert Multiple Rows
INSERT INTO table_name (column_list)
VALUES
(value_list_1),
(value_list_2),
...
(value_list_n);
Single-row insert query:
INSERT INTO table_name (col1,col2) VALUES(val1,val2);

Create a table to insert multiple records at the same time:
CREATE TABLE TEST
(
id numeric(10,0),
name varchar(40)
)
After that, create a stored procedure to insert multiple records:
CREATE PROCEDURE AddMultiple
(
    @category varchar(2500)
)
AS
BEGIN
    declare @categoryXML xml;
    set @categoryXML = cast(@category as xml);

    INSERT INTO TEST (id, name)
    SELECT
        x.v.value('@user', 'VARCHAR(50)'),
        x.v.value('.', 'VARCHAR(50)')
    FROM @categoryXML.nodes('/categories/category') x(v)
END
GO
Execute the procedure:
EXEC AddMultiple @category = '<categories>
<category user="13284">1</category>
<category user="132">2</category>
</categories>';
Then check by querying the table:
select * from TEST;

Related

Detect duplicate row within a batch - T-SQL

I have the following scenario:
I have a transaction in which a batch of records is being sent from one table (source) to another (target). The transaction is defined within TRY-CATCH statements to detect errors. Having a PK constraint defined on the target table, I want to detect records/rows which violate the constraint and then isolate these records into a separate table (duplicates). The TRY will detect the violation; however, which T-SQL statement(s) can identify those rows and isolate only them?
Below is a simplified script that identifies duplicates on PK values and extracts one record per PK into the target table and the others into a duplicates table:
DROP TABLE IF EXISTS #source;
DROP TABLE IF EXISTS #target;
DROP TABLE IF EXISTS #duplicates;
CREATE TABLE #source ( pk INT, value INT )
INSERT INTO #source
VALUES (10, 100), (10, 101), (10, 102), (10, 103), (20, 200), (20, 201)
SELECT *
INTO #target
FROM ( SELECT PkRowId = ROW_NUMBER() OVER(PARTITION BY pk ORDER BY value), pk, value FROM #source ) AS sub
WHERE PkRowId = 1;
SELECT *
INTO #duplicates
FROM ( SELECT PkRowId = ROW_NUMBER() OVER(PARTITION BY pk ORDER BY value), pk, value FROM #source ) AS sub
WHERE PkRowId > 1;
SELECT * FROM #target;
SELECT * FROM #duplicates;
You can control which record gets PkRowId 1 with the ORDER BY, if necessary.
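For example, to keep the row with the highest value per pk instead, reverse the sort in both ROW_NUMBER() calls (a small variation on the script above):
SELECT PkRowId = ROW_NUMBER() OVER(PARTITION BY pk ORDER BY value DESC), pk, value
FROM #source;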

SQL Server - How can I insert a text file into a table without bulk insert

I want to insert a text file into a table without the BULK INSERT command.
What is the script for it? I cannot find it anywhere.
There is a script for bulk insert, but I need to do a normal insert.
The only way I could think of is as follows:
File content is as follows:
id, col1, col2
2, A, B
4, A, B
declare @test table (id int, col1 varchar(10), col2 varchar(10))
declare @inter table (op varchar(50))

insert into @inter
exec xp_cmdshell 'type E:\Data\readit.txt'

insert into @test
select substring(op, 0, charindex(',', op)),
       substring(reverse(op), 0, charindex(',', reverse(op))),
       replace(replace(replace(op, substring(op, 0, charindex(',', op)), ''), substring(reverse(op), 0, charindex(',', reverse(op))), ''), ',', '')
from @inter where op <> (select top 1 op from @inter)

select * from @test
The result is:
id col1 col2
2 B A
4 B A
As you can see, it's extremely complicated, so you should use BULK INSERT.
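For comparison, a sketch of the BULK INSERT alternative, assuming the same file layout (one header row, comma-separated values); the target table and the options are illustrative:
CREATE TABLE #import (id int, col1 varchar(10), col2 varchar(10));

BULK INSERT #import
FROM 'E:\Data\readit.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

SELECT * FROM #import;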

Get inserted table identity value and update another table

I have two tables, with a foreign key constraint on TableB referencing TableA's KeyA column. I was doing manual inserts until now, as there were only a few rows to be added. Now I need to do a bulk insert, so my question is: if I insert multiple rows into TableA, how can I get all those identity values and insert them into TableB along with the other column values? Please see the script below.
INSERT INTO Tablea
([KeyA]
,[Value] )
SELECT 4 ,'StateA'
UNION ALL
SELECT 5 ,'StateB'
UNION ALL
SELECT 6 ,'StateC'
INSERT INTO Tableb
([KeyB]
,[fKeyA] -- Get value from the inserted row in TableA
,[Desc])
SELECT 1 ,4,'Value1'
UNION ALL
SELECT 2 ,5,'Value2'
UNION ALL
SELECT 3 ,6, 'Value3'
You can use the OUTPUT clause of INSERT to do this. Here is an example:
CREATE TABLE #temp (id [int] IDENTITY (1, 1) PRIMARY KEY CLUSTERED, Val int)
CREATE TABLE #new (id [int], val int)
INSERT INTO #temp (val) OUTPUT inserted.id, inserted.val INTO #new VALUES (5), (6), (7)
SELECT id, val FROM #new
DROP TABLE #new
DROP TABLE #temp
The result set returned includes the inserted IDENTITY values.
SCOPE_IDENTITY() sometimes returns an incorrect value; see the use of OUTPUT in the workarounds section of its documentation.
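Applied to the question's tables, a sketch might look like this (it assumes Tablea has an identity column, called TableAId here purely for illustration; adjust the names to your schema):
-- TableAId is a hypothetical identity column on Tablea
DECLARE @NewA TABLE (TableAId int, KeyA int);

INSERT INTO Tablea ([KeyA], [Value])
OUTPUT inserted.TableAId, inserted.KeyA INTO @NewA (TableAId, KeyA)
VALUES (4, 'StateA'), (5, 'StateB'), (6, 'StateC');

-- Join the captured identities back in when filling TableB
INSERT INTO Tableb ([KeyB], [fKeyA], [Desc])
SELECT v.KeyB, n.TableAId, v.[Desc]
FROM (VALUES (1, 4, 'Value1'), (2, 5, 'Value2'), (3, 6, 'Value3')) AS v (KeyB, KeyA, [Desc])
INNER JOIN @NewA n ON n.KeyA = v.KeyA;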

Using merge..output to get mapping between source.id and target.id

Very simplified, I have two tables, Source and Target.
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50))
insert into @Source values ('Row 1'), ('Row 2')
I would like to move all rows from @Source to @Target and know the TargetID for each SourceID, because there are also tables SourceChild and TargetChild that need to be copied as well, and I need to add the new TargetID into the TargetChild.TargetID FK column.
There are a couple of solutions to this.
Use a while loop or cursors to insert one row at a time (RBAR) into Target and use scope_identity() to fill the FK of TargetChild.
Add a temp column to @Target and insert SourceID into it. You can then join on that column to fetch the TargetID for the FK in TargetChild.
SET IDENTITY_INSERT ON for @Target and handle assigning the new values yourself. You get a range that you then use in TargetChild.TargetID.
I'm not all that fond of any of them. The one I have used so far is cursors.
What I would really like to do is to use the output clause of the insert statement:
insert into @Target(TargetName)
output inserted.TargetID, S.SourceID
select SourceName
from @Source as S
But it is not possible:
The multi-part identifier "S.SourceID" could not be bound.
But it is possible with a merge.
merge @Target as T
using @Source as S
on 0=1
when not matched then
    insert (TargetName) values (SourceName)
output inserted.TargetID, S.SourceID;
Result:
TargetID    SourceID
----------- -----------
2           1
4           3
I want to know if you have used this, whether you have any thoughts about the solution, or whether you see any problems with it. It works fine in simple scenarios, but perhaps something ugly could happen when the query plan gets really complicated due to a complicated source query. The worst scenario would be that the TargetID/SourceID pairs actually aren't a match.
MSDN has this to say about the from_table_name of the output clause.
Is a column prefix that specifies a table included in the FROM clause of a DELETE, UPDATE, or MERGE statement that is used to specify the rows to update or delete.
For some reason they don't say "rows to insert, update or delete" only "rows to update or delete".
Any thoughts are welcome, and totally different solutions to the original problem are much appreciated.
In my opinion this is a great use of MERGE and OUTPUT. I've used it in several scenarios and haven't experienced any oddities to date.
For example, here is a test setup that clones a Folder and all Files (identity) within it into a newly created Folder (guid).
DECLARE @FolderIndex TABLE (FolderId UNIQUEIDENTIFIER PRIMARY KEY, FolderName varchar(25));
INSERT INTO @FolderIndex
       (FolderId, FolderName)
VALUES (newid(), 'OriginalFolder');

DECLARE @FileIndex TABLE (FileId int identity(1,1) PRIMARY KEY, FileName varchar(10));
INSERT INTO @FileIndex
       (FileName)
VALUES ('test.txt');

DECLARE @FileFolder TABLE (FolderId UNIQUEIDENTIFIER, FileId int, PRIMARY KEY (FolderId, FileId));
INSERT INTO @FileFolder
       (FolderId, FileId)
SELECT FolderId,
       FileId
FROM   @FolderIndex
       CROSS JOIN @FileIndex; -- just to illustrate

DECLARE @sFolder TABLE (FromFolderId UNIQUEIDENTIFIER, ToFolderId UNIQUEIDENTIFIER);
DECLARE @sFile TABLE (FromFileId int, ToFileId int);

-- copy Folder Structure
MERGE @FolderIndex fi
USING ( SELECT 1 [Dummy],
               FolderId,
               FolderName
        FROM   @FolderIndex [fi]
        WHERE  FolderName = 'OriginalFolder'
      ) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT (FolderId, FolderName)
     VALUES (newid(), 'copy_' + FolderName)
OUTPUT d.FolderId,
       INSERTED.FolderId
INTO @sFolder (FromFolderId, toFolderId);

-- copy File structure
MERGE @FileIndex fi
USING ( SELECT 1 [Dummy],
               fi.FileId,
               fi.[FileName]
        FROM   @FileIndex fi
               INNER JOIN @FileFolder fm ON fi.FileId = fm.FileId
               INNER JOIN @FolderIndex fo ON fm.FolderId = fo.FolderId
        WHERE  fo.FolderName = 'OriginalFolder'
      ) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT ([FileName])
     VALUES ([FileName])
OUTPUT d.FileId,
       INSERTED.FileId
INTO @sFile (FromFileId, toFileId);

-- link new files to Folders
INSERT INTO @FileFolder (FileId, FolderId)
SELECT sfi.toFileId, sfo.toFolderId
FROM   @FileFolder fm
       INNER JOIN @sFile sfi ON fm.FileId = sfi.FromFileId
       INNER JOIN @sFolder sfo ON fm.FolderId = sfo.FromFolderId

-- return
SELECT *
FROM   @FileIndex fi
       JOIN @FileFolder ff ON fi.FileId = ff.FileId
       JOIN @FolderIndex fo ON ff.FolderId = fo.FolderId
I would like to add another example to complement @Nathan's example, as I found it somewhat confusing.
Mine uses real tables for the most part, and not temp tables.
I also got my inspiration from another example.
-- Copy the FormSectionInstance
DECLARE @FormSectionInstanceTable TABLE(OldFormSectionInstanceId INT, NewFormSectionInstanceId INT)

;MERGE INTO [dbo].[FormSectionInstance]
USING
(
    SELECT
        fsi.FormSectionInstanceId [OldFormSectionInstanceId]
        , @NewFormHeaderId [NewFormHeaderId]
        , fsi.FormSectionId
        , fsi.IsClone
        , @UserId [NewCreatedByUserId]
        , GETDATE() NewCreatedDate
        , @UserId [NewUpdatedByUserId]
        , GETDATE() NewUpdatedDate
    FROM [dbo].[FormSectionInstance] fsi
    WHERE fsi.[FormHeaderId] = @FormHeaderId
) tblSource ON 1=0 -- use always false condition
WHEN NOT MATCHED
THEN INSERT
    ( [FormHeaderId], FormSectionId, IsClone, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
    VALUES ( [NewFormHeaderId], FormSectionId, IsClone, NewCreatedByUserId, NewCreatedDate, NewUpdatedByUserId, NewUpdatedDate)
OUTPUT tblSource.[OldFormSectionInstanceId], INSERTED.FormSectionInstanceId
INTO @FormSectionInstanceTable(OldFormSectionInstanceId, NewFormSectionInstanceId);

-- Copy the FormDetail
INSERT INTO [dbo].[FormDetail]
    (FormHeaderId, FormFieldId, FormSectionInstanceId, IsOther, Value, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
SELECT
    @NewFormHeaderId, FormFieldId, fsit.NewFormSectionInstanceId, IsOther, Value, @UserId, CreatedDate, @UserId, UpdatedDate
FROM [dbo].[FormDetail] fd
INNER JOIN @FormSectionInstanceTable fsit ON fsit.OldFormSectionInstanceId = fd.FormSectionInstanceId
WHERE [FormHeaderId] = @FormHeaderId
Here's a solution that doesn't use MERGE (which I've had problems with many times, so I try to avoid it if possible). It relies on two memory tables (you could use temp tables if you want) with IDENTITY columns that get matched, and, importantly, on using ORDER BY when doing the INSERT, and WHERE conditions that match between the two INSERTs... the first one holds the source IDs and the second one holds the target IDs.
-- Setup... We have a table that we need to know the old IDs and new IDs after copying.
-- We want to copy all of DocID=1
DECLARE @newDocID int = 99;
DECLARE @tbl table (RuleID int PRIMARY KEY NOT NULL IDENTITY(1, 1), DocID int, Val varchar(100));
INSERT INTO @tbl (DocID, Val) VALUES (1, 'RuleA-2'), (1, 'RuleA-1'), (2, 'RuleB-1'), (2, 'RuleB-2'), (3, 'RuleC-1'), (1, 'RuleA-3')

-- Create a break in IDENTITY values.. just to simulate more realistic data
INSERT INTO @tbl (Val) VALUES ('DeleteMe'), ('DeleteMe');
DELETE FROM @tbl WHERE Val = 'DeleteMe';
INSERT INTO @tbl (DocID, Val) VALUES (6, 'RuleE'), (7, 'RuleF');

SELECT * FROM @tbl t;

-- Declare TWO temp tables each with an IDENTITY - one will hold the RuleID of the items we are copying, other will hold the RuleID that we create
DECLARE @input table (RID int IDENTITY(1, 1), SourceRuleID int NOT NULL, Val varchar(100));
DECLARE @output table (RID int IDENTITY(1,1), TargetRuleID int NOT NULL, Val varchar(100));

-- Capture the IDs of the rows we will be copying by inserting them into the @input table
-- Important - we must specify the sort order - best thing is to use the IDENTITY of the source table (t.RuleID) that we are copying
INSERT INTO @input (SourceRuleID, Val) SELECT t.RuleID, t.Val FROM @tbl t WHERE t.DocID = 1 ORDER BY t.RuleID;

-- Copy the rows, and use the OUTPUT clause to capture the IDs of the inserted rows.
-- Important - we must use the same WHERE and ORDER BY clauses as above
INSERT INTO @tbl (DocID, Val)
OUTPUT Inserted.RuleID, Inserted.Val INTO @output (TargetRuleID, Val)
SELECT @newDocID, t.Val FROM @tbl t
WHERE t.DocID = 1
ORDER BY t.RuleID;

-- Now @input and @output should have the same # of rows, and the order of both inserts was the same, so the IDENTITY columns (RID) can be matched
-- Use this as the map from old-to-new when you are copying sub-table rows
-- Technically, @input and @output don't even need the 'Val' columns, just RID and RuleID - they were included here to prove that the rules matched
SELECT i.*, o.* FROM @output o
INNER JOIN @input i ON i.RID = o.RID

-- Confirm the matching worked
SELECT * FROM @tbl t

SQL Server Merge statement

I am doing a merge statement in my stored procedure. I need to count the rows during updates and inserts. If I use a common variable to get the affected rows (for both update and insert), how can I tell which count came from the update and which came from the insert? Please give me a better way.
You can create a table variable to hold the action type, then OUTPUT the pseudo-column $action into it.
Example
/*Table to use as Merge Target*/
DECLARE @A TABLE (
    [id] [int] NOT NULL PRIMARY KEY CLUSTERED,
    [C] [varchar](200) NOT NULL)

/*Insert some initial data to be updated*/
INSERT INTO @A
SELECT 1, 'A' UNION ALL SELECT 2, 'B'

/*Table to hold actions*/
DECLARE @Actions TABLE(act CHAR(6))

/*Do the Merge*/
MERGE @A AS target
USING (VALUES (1, '@a'), (2, '@b'), (3, 'C'), (4, 'D'), (5, 'E')) AS source (id, C)
ON (target.id = source.id)
WHEN MATCHED THEN
    UPDATE SET C = source.C
WHEN NOT MATCHED THEN
    INSERT (id, C)
    VALUES (source.id, source.C)
OUTPUT $action INTO @Actions;

/*Check the result*/
SELECT act, COUNT(*) AS Cnt
FROM @Actions
GROUP BY act
Returns
act Cnt
------ -----------
INSERT 3
UPDATE 2
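If you then need the two counts in separate variables, a short sketch (the variable names here are only illustrative):
-- Illustrative variable names; read the counts back from the @Actions table variable
DECLARE @InsertCount int, @UpdateCount int;

SELECT @InsertCount = COUNT(CASE WHEN act = 'INSERT' THEN 1 END),
       @UpdateCount = COUNT(CASE WHEN act = 'UPDATE' THEN 1 END)
FROM @Actions;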
