For the following example, I set the shipping method to 'UPS'.
CREATE TABLE [dbo].[Customer] (CustomerID int primary key, ShipMethodRef INT)
INSERT INTO [dbo].[Customer] VALUES (5497, 20);
CREATE TABLE [dbo].ShipMethod(ShipMethodID int PRIMARY KEY, Name varchar(10));
INSERT INTO [dbo].ShipMethod VALUES (20, 'Fedex'), (21, 'UPS')
UPDATE [dbo].[Customer]
set ShipMethodRef = CASE WHEN EXISTS (SELECT ShipMethodID from [dbo].[ShipMethod]
WHERE [dbo].[ShipMethod].Name = 'UPS')
THEN (SELECT ShipMethodID from [dbo].[ShipMethod]
WHERE [dbo].[ShipMethod].Name = 'UPS')
ELSE curTable.ShipMethodRef END
OUTPUT ShipMethod.Name as ShipMethodName
FROM [dbo].[Customer] curTable
JOIN [dbo].ShipMethod ShipMethod ON curTable.ShipMethodRef = ShipMethod.ShipMethodID
WHERE CustomerID=5497;
The OUTPUT clause returns 'Fedex'. How can I change it to reflect the post-update state, i.e. that the customer's shipping method is now 'UPS' (their shipping method ID is now 21)?
I don't think this can be done in a single statement, except in the way Martin showed in his comment, but you can capture the output from inserted into a table variable or a temporary table and then select from that, joined to the translation tables.
Here's how I would do that (note the update statement is simplified):
DECLARE @UpdatedIds AS TABLE (ShipMethodID int);
UPDATE [dbo].[Customer]
SET ShipMethodRef = COALESCE((
SELECT ShipMethodID
FROM [dbo].[ShipMethod]
WHERE [dbo].[ShipMethod].Name = 'UPS'
), ShipMethodRef)
OUTPUT inserted.ShipMethodRef INTO @UpdatedIds
FROM [dbo].[Customer]
WHERE CustomerID=5497;
SELECT SM.ShipMethodID, SM.Name
FROM [dbo].ShipMethod AS SM
JOIN @UpdatedIds AS Updated
ON SM.ShipMethodID = Updated.ShipMethodID
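With the sample data above, the update sets the customer's ShipMethodRef to 21, so the final SELECT now reflects the post-update state:

ShipMethodID Name
------------ ----------
21           UPS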
I have been trying to get acclimated to set-based processing with SQL Server. Below is a simplified version of cursor processing for this task. It involves creating an order from the items in a shopping cart: the order is created, line items are added to the order details table, the total is accumulated and eventually updated on the order table. Can anyone suggest how to do this with a set-based approach instead of a cursor?
One other question: in most cases the cursor will process at most 10 or 12 line items at a time. Is that enough of a reason not to consider the set-based approach?
declare getCart2 cursor for
select MemberID,ProductID,Quantity,Price
from Carts
where MemberID = @MemberID

open getCart2
fetch next from getCart2 into @MemberID,@ProductID,@Quantity,@Price

Insert into Orders
(MemberID,TotalAmount)
Values
(@MemberID, 0.00)

set @OrderID = @@Identity
set @TotalAmount = 0.00

while @@FETCH_STATUS = 0 Begin
    Insert into OrderDetails
    (OrderID,ProductID,Quantity)
    Values
    (@OrderID,@ProductID,@Quantity)

    set @TotalAmount = @TotalAmount + (@Quantity * @Price)
    set @PrevMemberID = @MemberID

    fetch next from getCart2 into @MemberID,@ProductID,@Quantity,@Price
End

close getCart2
deallocate getCart2

Update Orders
Set TotalAmount = @TotalAmount
Where OrderID = @OrderID
Thanks for your help.
Here's one approach:
In this case I am creating a table variable that will store the new order IDs.
Then it performs the insertions into the Orders table and, after that, into OrderDetails.
Finally, it computes the TotalAmount and updates it on the Orders table.
Although you don't have it in your code (and neither does mine), I recommend running this code inside a transaction (see the sketch after the results below).
Hope it helps you improve your performance.
USE [tempdb];
GO
SET NOCOUNT ON;
IF OBJECT_ID(N'dbo.Carts', N'U') IS NOT NULL DROP TABLE [dbo].[Carts];
IF OBJECT_ID(N'dbo.Orders', N'U') IS NOT NULL DROP TABLE [dbo].[Orders];
IF OBJECT_ID(N'dbo.OrderDetails', N'U') IS NOT NULL DROP TABLE [dbo].[OrderDetails];
GO
-- Creates the tables like you have
CREATE TABLE [dbo].[Carts] (MemberID INT, ProductID INT, Quantity INT, Price DECIMAL(10, 2));
CREATE TABLE [dbo].[Orders] (OrderID INT IDENTITY(1, 1), MemberID INT, TotalAmount DECIMAL(10, 2));
CREATE TABLE [dbo].[OrderDetails] (OrderID INT, ProductID INT, Quantity INT);
-- Inserts dummy data
INSERT INTO [dbo].[Carts] VALUES (1001, 80, 5, 25.00);
INSERT INTO [dbo].[Carts] VALUES (1002, 120, 2, 12.90);
INSERT INTO [dbo].[Carts] VALUES (1010, 70, 3, 12.00)
INSERT INTO [dbo].[Carts] VALUES (1034, 176, 5, 45.00);
-- Temporary table that stores the inserted Order ID's
DECLARE @OrdersToProcess TABLE (OrderID INT, MemberID INT);
-- Inserts all Orders
INSERT INTO Orders (MemberID, TotalAmount)
OUTPUT inserted.OrderID, inserted.MemberID INTO @OrdersToProcess
SELECT MemberID, 0
FROM [dbo].[Carts]
-- Inserts order details
INSERT INTO OrderDetails (OrderID, ProductID, Quantity)
SELECT OrderID, ProductID, Quantity
FROM [dbo].[Carts] C
INNER JOIN @OrdersToProcess O ON C.MemberID = O.MemberID;
-- Updates order totals
UPDATE [dbo].[Orders]
SET TotalAmount = T.Total FROM
(
SELECT OrderID, SUM(Quantity * Price) AS [Total]
FROM [dbo].[Carts] C
INNER JOIN @OrdersToProcess O ON C.MemberID = O.MemberID
GROUP BY OrderID
) T
WHERE [dbo].[Orders].OrderID = T.OrderID
SELECT * FROM [dbo].[Orders];
SELECT * FROM [dbo].[OrderDetails];
Results: the two final SELECT statements show the populated Orders and OrderDetails tables.
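As noted above, the three data-modification statements would ideally run as a single unit. A minimal sketch of wrapping them in a transaction with basic error handling (same tables as in the script above):

BEGIN TRY
    BEGIN TRANSACTION;

    -- 1) insert the orders, capturing the new OrderIDs into @OrdersToProcess
    -- 2) insert the order details
    -- 3) update the order totals
    -- (the same three statements as above go here)

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- re-raise the error (SQL Server 2012+; use RAISERROR on older versions)
END CATCH;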
As I understand your problem, this stored procedure should be called when a particular member presses the check-out button, so it should create a single order with all the items in the cart of that member.
You can use something like this:
INSERT INTO Orders (MemberID, TotalAmount)
VALUES (@MemberID, 0)

SET @OrderID=SCOPE_IDENTITY()

INSERT INTO OrderDetails (OrderID, ProductID, Quantity)
SELECT @OrderID, ProductID, Quantity
FROM [dbo].[Carts] C
WHERE C.MemberID=@MemberID

UPDATE dbo.Orders SET TotalAmount=(
    SELECT SUM(c.Quantity*c.Price)
    FROM dbo.Carts c
    WHERE c.MemberID=@MemberID
) WHERE OrderID=@OrderID
It's true that this reads the Carts table twice, but with a proper index (on the MemberID column) that should be fast enough.
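For example, an index along these lines (the name is just illustrative) lets both the OrderDetails insert and the total calculation seek straight to that member's cart rows:

CREATE NONCLUSTERED INDEX IX_Carts_MemberID
    ON dbo.Carts (MemberID)
    INCLUDE (ProductID, Quantity, Price); -- covers the columns both queries read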
Very simplified, I have two tables Source and Target.
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50))
insert into @Source values ('Row 1'), ('Row 2')
I would like to move all rows from @Source to @Target and know the TargetID for each SourceID, because there are also the tables SourceChild and TargetChild that need to be copied as well, and I need to add the new TargetID into the TargetChild.TargetID FK column.
There are a couple of solutions to this.
Use a while loop or cursors to insert one row (RBAR) to Target at a time and use scope_identity() to fill the FK of TargetChild.
Add a temp column to @Target and insert SourceID. You can then join on that column to fetch the TargetID for the FK in TargetChild (see the sketch below).
SET IDENTITY_INSERT ON for @Target and handle assigning the new values yourself. You get a range that you then use in TargetChild.TargetID.
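Option 2, for instance, might look like this minimal sketch (a variant of @Target with the extra SourceID column, since a table variable can't be altered; the column name is purely illustrative):

declare @TargetWithSource table (TargetID int identity(2,2), TargetName varchar(50), SourceID int)

insert into @TargetWithSource (TargetName, SourceID)
select SourceName, SourceID
from @Source

-- the old-to-new mapping can now be read back directly
select SourceID, TargetID
from @TargetWithSource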
I'm not all that fond of any of them. The one I've used so far is cursors.
What I would really like to do is to use the output clause of the insert statement.
insert into @Target(TargetName)
output inserted.TargetID, S.SourceID
select SourceName
from @Source as S
But it is not possible; it fails with:
The multi-part identifier "S.SourceID" could not be bound.
But it is possible with a merge.
merge @Target as T
using @Source as S
on 0=1
when not matched then
insert (TargetName) values (SourceName)
output inserted.TargetID, S.SourceID;
Result
TargetID SourceID
----------- -----------
2 1
4 3
I want to know if you have used this, and whether you have any thoughts about the solution or see any problems with it. It works fine in simple scenarios, but perhaps something ugly could happen when the query plan gets really complicated due to a complicated source query. The worst scenario would be that the TargetID/SourceID pairs actually aren't a match.
MSDN has this to say about the from_table_name of the output clause.
Is a column prefix that specifies a table included in the FROM clause of a DELETE, UPDATE, or MERGE statement that is used to specify the rows to update or delete.
For some reason they don't say "rows to insert, update or delete" only "rows to update or delete".
Any thoughts are welcome, and totally different solutions to the original problem are much appreciated.
In my opinion this is a great use of MERGE and OUTPUT. I've used it in several scenarios and haven't experienced any oddities to date.
For example, here is a test setup that clones a Folder and all Files (identity) within it into a newly created Folder (guid).
DECLARE @FolderIndex TABLE (FolderId UNIQUEIDENTIFIER PRIMARY KEY, FolderName varchar(25));
INSERT INTO @FolderIndex
(FolderId, FolderName)
VALUES(newid(), 'OriginalFolder');
DECLARE @FileIndex TABLE (FileId int identity(1,1) PRIMARY KEY, FileName varchar(10));
INSERT INTO @FileIndex
(FileName)
VALUES('test.txt');
DECLARE @FileFolder TABLE (FolderId UNIQUEIDENTIFIER, FileId int, PRIMARY KEY(FolderId, FileId));
INSERT INTO @FileFolder
(FolderId, FileId)
SELECT FolderId,
FileId
FROM @FolderIndex
CROSS JOIN @FileIndex; -- just to illustrate
DECLARE @sFolder TABLE (FromFolderId UNIQUEIDENTIFIER, ToFolderId UNIQUEIDENTIFIER);
DECLARE @sFile TABLE (FromFileId int, ToFileId int);
-- copy Folder Structure
MERGE @FolderIndex fi
USING ( SELECT 1 [Dummy],
FolderId,
FolderName
FROM @FolderIndex [fi]
WHERE FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT
(FolderId, FolderName)
VALUES (newid(), 'copy_'+FolderName)
OUTPUT d.FolderId,
INSERTED.FolderId
INTO @sFolder (FromFolderId, toFolderId);
-- copy File structure
MERGE @FileIndex fi
USING ( SELECT 1 [Dummy],
fi.FileId,
fi.[FileName]
FROM @FileIndex fi
INNER
JOIN @FileFolder fm ON
fi.FileId = fm.FileId
INNER
JOIN @FolderIndex fo ON
fm.FolderId = fo.FolderId
WHERE fo.FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT ([FileName])
VALUES ([FileName])
OUTPUT d.FileId,
INSERTED.FileId
INTO @sFile (FromFileId, toFileId);
-- link new files to Folders
INSERT INTO @FileFolder (FileId, FolderId)
SELECT sfi.toFileId, sfo.toFolderId
FROM @FileFolder fm
INNER
JOIN @sFile sfi ON
fm.FileId = sfi.FromFileId
INNER
JOIN @sFolder sfo ON
fm.FolderId = sfo.FromFolderId
-- return
SELECT *
FROM @FileIndex fi
JOIN @FileFolder ff ON
fi.FileId = ff.FileId
JOIN @FolderIndex fo ON
ff.FolderId = fo.FolderId
I would like to add another example to complement @Nathan's, as I found it somewhat confusing.
Mine uses real tables for the most part, and not temp tables.
I also got my inspiration from here: another example
-- Copy the FormSectionInstance
DECLARE @FormSectionInstanceTable TABLE(OldFormSectionInstanceId INT, NewFormSectionInstanceId INT)
;MERGE INTO [dbo].[FormSectionInstance]
USING
(
SELECT
fsi.FormSectionInstanceId [OldFormSectionInstanceId]
, @NewFormHeaderId [NewFormHeaderId]
, fsi.FormSectionId
, fsi.IsClone
, @UserId [NewCreatedByUserId]
, GETDATE() NewCreatedDate
, @UserId [NewUpdatedByUserId]
, GETDATE() NewUpdatedDate
FROM [dbo].[FormSectionInstance] fsi
WHERE fsi.[FormHeaderId] = @FormHeaderId
) tblSource ON 1=0 -- use always false condition
WHEN NOT MATCHED
THEN INSERT
( [FormHeaderId], FormSectionId, IsClone, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
VALUES( [NewFormHeaderId], FormSectionId, IsClone, NewCreatedByUserId, NewCreatedDate, NewUpdatedByUserId, NewUpdatedDate)
OUTPUT tblSource.[OldFormSectionInstanceId], INSERTED.FormSectionInstanceId
INTO @FormSectionInstanceTable(OldFormSectionInstanceId, NewFormSectionInstanceId);
-- Copy the FormDetail
INSERT INTO [dbo].[FormDetail]
(FormHeaderId, FormFieldId, FormSectionInstanceId, IsOther, Value, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
SELECT
@NewFormHeaderId, FormFieldId, fsit.NewFormSectionInstanceId, IsOther, Value, @UserId, CreatedDate, @UserId, UpdatedDate
FROM [dbo].[FormDetail] fd
INNER JOIN @FormSectionInstanceTable fsit ON fsit.OldFormSectionInstanceId = fd.FormSectionInstanceId
WHERE [FormHeaderId] = @FormHeaderId
Here's a solution that doesn't use MERGE (which I've had problems with many times, so I try to avoid it if possible). It relies on two memory tables (you could use temp tables if you want) with IDENTITY columns that get matched, and, importantly, on using ORDER BY when doing the INSERTs and WHERE conditions that match between the two INSERTs... the first one holds the source IDs and the second one holds the target IDs.
-- Setup... We have a table that we need to know the old IDs and new IDs after copying.
-- We want to copy all of DocID=1
DECLARE @newDocID int = 99;
DECLARE @tbl table (RuleID int PRIMARY KEY NOT NULL IDENTITY(1, 1), DocID int, Val varchar(100));
INSERT INTO @tbl (DocID, Val) VALUES (1, 'RuleA-2'), (1, 'RuleA-1'), (2, 'RuleB-1'), (2, 'RuleB-2'), (3, 'RuleC-1'), (1, 'RuleA-3')
-- Create a break in IDENTITY values.. just to simulate more realistic data
INSERT INTO @tbl (Val) VALUES ('DeleteMe'), ('DeleteMe');
DELETE FROM @tbl WHERE Val = 'DeleteMe';
INSERT INTO @tbl (DocID, Val) VALUES (6, 'RuleE'), (7, 'RuleF');
SELECT * FROM @tbl t;
-- Declare TWO temp tables each with an IDENTITY - one will hold the RuleID of the items we are copying, other will hold the RuleID that we create
DECLARE @input table (RID int IDENTITY(1, 1), SourceRuleID int NOT NULL, Val varchar(100));
DECLARE @output table (RID int IDENTITY(1,1), TargetRuleID int NOT NULL, Val varchar(100));
-- Capture the IDs of the rows we will be copying by inserting them into the @input table
-- Important - we must specify the sort order - best thing is to use the IDENTITY of the source table (t.RuleID) that we are copying
INSERT INTO @input (SourceRuleID, Val) SELECT t.RuleID, t.Val FROM @tbl t WHERE t.DocID = 1 ORDER BY t.RuleID;
-- Copy the rows, and use the OUTPUT clause to capture the IDs of the inserted rows.
-- Important - we must use the same WHERE and ORDER BY clauses as above
INSERT INTO @tbl (DocID, Val)
OUTPUT Inserted.RuleID, Inserted.Val INTO @output(TargetRuleID, Val)
SELECT @newDocID, t.Val FROM @tbl t
WHERE t.DocID = 1
ORDER BY t.RuleID;
-- Now @input and @output should have the same # of rows, and the order of both inserts was the same, so the IDENTITY columns (RID) can be matched
-- Use this as the map from old-to-new when you are copying sub-table rows
-- Technically, @input and @output don't even need the 'Val' columns, just RID and RuleID - they were included here to prove that the rules matched
SELECT i.*, o.* FROM @output o
INNER JOIN @input i ON i.RID = o.RID
-- Confirm the matching worked
SELECT * FROM @tbl t
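As a follow-up, if there were a child table keyed by RuleID, the old-to-new map can drive its copy as well. A minimal sketch (the @subRules table is purely hypothetical, and it is assumed to already hold rows for the original RuleIDs):

DECLARE @subRules table (SubRuleID int IDENTITY(1, 1), RuleID int, SubVal varchar(100));

-- copy the child rows that belong to the original rules, re-pointed at the new RuleIDs via the RID match
INSERT INTO @subRules (RuleID, SubVal)
SELECT o.TargetRuleID, s.SubVal
FROM @subRules s
INNER JOIN @input i ON i.SourceRuleID = s.RuleID
INNER JOIN @output o ON o.RID = i.RID;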
I know I've done this before years ago, but I can't remember the syntax, and I can't find it anywhere due to pulling up tons of help docs and articles about "bulk imports".
Here's what I want to do, but the syntax is not exactly right... please, someone who has done this before, help me out :)
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy'),
(124, 'Jonny'),
(125, 'Sally')
I know that this is close to the right syntax. I might need the word "BULK" in there, or something, I can't remember. Any idea?
I need this for a SQL Server 2005 database. I've tried this code, to no avail:
DECLARE @blah TABLE
(
ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL
)
INSERT INTO @blah (ID, Name)
VALUES (123, 'Timmy')
VALUES (124, 'Jonny')
VALUES (125, 'Sally')
SELECT * FROM @blah
I'm getting Incorrect syntax near the keyword 'VALUES'.
Your syntax almost works in SQL Server 2008 (but not in SQL Server 2005¹):
CREATE TABLE MyTable (id int, name char(10));
INSERT INTO MyTable (id, name) VALUES (1, 'Bob'), (2, 'Peter'), (3, 'Joe');
SELECT * FROM MyTable;
id | name
---+---------
1 | Bob
2 | Peter
3 | Joe
¹ When the question was answered, it was not made evident that the question was referring to SQL Server 2005. I am leaving this answer here, since I believe it is still relevant.
INSERT INTO dbo.MyTable (ID, Name)
SELECT 123, 'Timmy'
UNION ALL
SELECT 124, 'Jonny'
UNION ALL
SELECT 125, 'Sally'
For SQL Server 2008, you can do it in one VALUES clause, exactly as per the statement in your question (you just need to add a comma to separate each row of values)...
If your data is already in your database you can do:
INSERT INTO MyTable(ID, Name)
SELECT ID, NAME FROM OtherTable
If you need to hard code the data then SQL 2008 and later versions let you do the following...
INSERT INTO MyTable (Name, ID)
VALUES ('First',1),
('Second',2),
('Third',3),
('Fourth',4),
('Fifth',5)
Using INSERT INTO ... VALUES syntax like in Daniel Vassallo's answer
there is one annoying limitation:
From MSDN
The maximum number of rows that can be constructed by inserting rows directly in the VALUES list is 1000
The easiest way to avoid this limitation is to use a derived table, like:
INSERT INTO dbo.Mytable(ID, Name)
SELECT ID, Name
FROM (
VALUES (1, 'a'),
(2, 'b'),
--...
-- more than 1000 rows
)sub (ID, Name);
LiveDemo
This will work starting from SQL Server 2008+
This will achieve what you're asking about:
INSERT INTO table1 (ID, Name)
VALUES (123, 'Timmy'),
(124, 'Jonny'),
(125, 'Sally');
For future developers, you can also insert from another table:
INSERT INTO table1 (ID, Name)
SELECT
ID,
Name
FROM table2
Or even from multiple tables:
INSERT INTO table1 (column2, column3)
SELECT
t2.column,
t3.column
FROM table2 t2
INNER JOIN table3 t3
ON t2.ID = t3.ID
You could do this (ugly but it works):
INSERT INTO dbo.MyTable (ID, Name)
select * from
(
select 123, 'Timmy'
union all
select 124, 'Jonny'
union all
select 125, 'Sally'
...
) x
You can use a union:
INSERT INTO dbo.MyTable (ID, Name)
SELECT ID, Name FROM (
SELECT 123, 'Timmy'
UNION ALL
SELECT 124, 'Jonny'
UNION ALL
SELECT 125, 'Sally'
) AS X (ID, Name)
This looks OK for SQL Server 2008. For SS2005 & earlier, you need to repeat the VALUES statement.
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy')
VALUES (124, 'Jonny')
VALUES (125, 'Sally')
EDIT: My bad. You have to repeat the 'INSERT INTO' for each row in SS2005.
INSERT INTO dbo.MyTable (ID, Name)
VALUES (123, 'Timmy')
INSERT INTO dbo.MyTable (ID, Name)
VALUES (124, 'Jonny')
INSERT INTO dbo.MyTable (ID, Name)
VALUES (125, 'Sally')
It would be easier to use XML in SQL Server to insert multiple rows; otherwise it becomes very tedious.
View the full article with code explanations here: http://www.cyberminds.co.uk/blog/articles/how-to-insert-multiple-rows-in-sql-server.aspx
Copy the following code into SQL Server to view a sample.
declare @test nvarchar(max)
set @test = '<topic><dialog id="1" answerId="41">
<comment>comment 1</comment>
</dialog>
<dialog id="2" answerId="42" >
<comment>comment 2</comment>
</dialog>
<dialog id="3" answerId="43" >
<comment>comment 3</comment>
</dialog>
</topic>'
declare @testxml xml
set @testxml = cast(@test as xml)
declare @answerTemp Table(dialogid int, answerid int, comment varchar(1000))
insert @answerTemp
SELECT ParamValues.ID.value('@id','int') ,
ParamValues.ID.value('@answerId','int') ,
ParamValues.ID.value('(comment)[1]','VARCHAR(1000)')
FROM @testxml.nodes('topic/dialog') as ParamValues(ID)
USE YourDB
GO
INSERT INTO MyTable (FirstCol, SecondCol)
SELECT 'First' ,1
UNION ALL
SELECT 'Second' ,2
UNION ALL
SELECT 'Third' ,3
UNION ALL
SELECT 'Fourth' ,4
UNION ALL
SELECT 'Fifth' ,5
GO
OR YOU CAN USE ANOTHER WAY
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES
('First',1),
('Second',2),
('Third',3),
('Fourth',4),
('Fifth',5)
I've been using the following:
INSERT INTO [TableName] (ID, Name)
values (NEWID(), NEWID())
GO 10
It will add ten rows with unique GUIDs for ID and Name.
Note: do not end the last line (GO 10) with ';' because it will throw error: A fatal scripting error occurred. Incorrect syntax was encountered while parsing GO.
According to INSERT (Transact-SQL) (SQL Server 2005), you can't omit INSERT INTO dbo.Blah; you have to specify it every time or use another syntax/approach.
In PostgreSQL, you can do it as follows;
A generic example for a two-column table:
INSERT INTO <table_name_here>
(<column_1>, <column_2>)
VALUES
(<column_1_value>, <column_2_value>),
(<column_1_value>, <column_2_value>),
(<column_1_value>, <column_2_value>),
...
(<column_1_value>, <column_2_value>);
See a real-world example here:
A - Create the table
CREATE TABLE Worker
(
id serial primary key,
code varchar(256) null,
message text null
);
B - Insert bulk values
INSERT INTO Worker
(code, message)
VALUES
('a1', 'this is the first message'),
('a2', 'this is the second message'),
('a3', 'this is the third message'),
('a4', 'this is the fourth message'),
('a5', 'this is the fifth message'),
('a6', 'this is the sixth message');
This works very fast and efficiently in SQL.
Suppose you have a table Sample with 4 columns a, b, c, d, where a, b, and d are int and column c is varchar(50).
CREATE TABLE [dbo].[Sample](
[a] [int] NULL,
[b] [int] NULL,
[c] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[D] [int] NULL
)
So you can insert multiple records into this table using the following query, without repeating the INSERT statement:
DECLARE @LIST VARCHAR(MAX)
SET @LIST='SELECT 1, 1, ''Charan Ghate'',11
SELECT 2,2, ''Mahesh More'',12
SELECT 3,3,''Mahesh Nikam'',13
SELECT 4,4, ''Jay Kadam'',14'
INSERT SAMPLE (a, b, c,d) EXEC(@LIST)
Also, with C#, using SqlBulkCopy (bulkcopy = new SqlBulkCopy(con)) you can insert 10 rows at a time:
DataTable dt = new DataTable();
dt.Columns.Add("a");
dt.Columns.Add("b");
dt.Columns.Add("c");
dt.Columns.Add("d");
for (int i = 0; i < 10; i++)
{
DataRow dr = dt.NewRow();
dr["a"] = 1;
dr["b"] = 2;
dr["c"] = "Charan";
dr["d"] = 4;
dt.Rows.Add(dr);
}
SqlConnection con = new SqlConnection("Connection String");
using (SqlBulkCopy bulkcopy = new SqlBulkCopy(con))
{
con.Open();
bulkcopy.DestinationTableName = "Sample";
bulkcopy.WriteToServer(dt);
con.Close();
}
Others here have suggested a couple of multi-row syntaxes. Expounding upon that, I suggest you insert into a temp table first, and insert into your main table from there.
The reason for this is loading the data from a query can take longer, and you may end up locking the table or pages longer than is necessary, which slows down other queries running against that table.
-- Make a temp table with the needed columns
select top 0 *
into #temp
from MyTable (nolock)
-- load data into it at your leisure (nobody else is waiting for this table or these pages)
insert #temp (ID, Name)
values (123, 'Timmy'),
(124, 'Jonny'),
(125, 'Sally')
-- Now that all the data is in SQL, copy it over to the real table. This runs much faster in most cases.
insert MyTable (ID, Name)
select ID, Name
from #temp
-- cleanup
drop table #temp
Also, your IDs should probably be identity(1,1) and you probably shouldn't be inserting them, in the vast majority of circumstances. Let SQL decide that stuff for you.
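For example, if MyTable were defined with an identity key (just an illustration of the point above; the column sizes are assumptions), the inserts would omit ID entirely and let SQL Server assign it:

CREATE TABLE dbo.MyTable
(
    ID   int IDENTITY(1,1) PRIMARY KEY, -- SQL Server assigns the values
    Name varchar(100) NOT NULL
);

INSERT INTO dbo.MyTable (Name)
VALUES ('Timmy'), ('Jonny'), ('Sally');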
Oracle Insert Multiple Rows
In a multitable insert, you insert computed rows derived from the rows returned from the evaluation of a subquery into one or more tables.
Unconditional INSERT ALL: To add multiple rows to a table at once, you use the following form of the INSERT statement:
INSERT ALL
INTO table_name (column_list) VALUES (value_list_1)
INTO table_name (column_list) VALUES (value_list_2)
INTO table_name (column_list) VALUES (value_list_3)
...
INTO table_name (column_list) VALUES (value_list_n)
SELECT 1 FROM DUAL; -- SubQuery
Specify ALL followed by multiple insert_into_clauses to perform an unconditional multitable insert. Oracle Database executes each insert_into_clause once for each row returned by the subquery.
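A concrete (hypothetical) example of the unconditional form, inserting three rows into a single table named employees:

INSERT ALL
    INTO employees (employee_id, name) VALUES (1, 'Alice')
    INTO employees (employee_id, name) VALUES (2, 'Bob')
    INTO employees (employee_id, name) VALUES (3, 'Carol')
SELECT 1 FROM DUAL;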
MySQL Server Insert Multiple Rows
INSERT INTO table_name (column_list)
VALUES
(value_list_1),
(value_list_2),
...
(value_list_n);
Single Row insert Query
INSERT INTO table_name (col1,col2) VALUES(val1,val2);
Created a table to insert multiple records at the same time.
CREATE TABLE TEST
(
id numeric(10,0),
name varchar(40)
)
After that created a stored procedure to insert multiple records.
CREATE PROCEDURE AddMultiple
(
@category varchar(2500)
)
as
BEGIN
declare @categoryXML xml;
set @categoryXML = cast(@category as xml);
INSERT INTO TEST(id, name)
SELECT
x.v.value('@user','VARCHAR(50)'),
x.v.value('.','VARCHAR(50)')
FROM @categoryXML.nodes('/categories/category') x(v)
END
GO
Executed the procedure
EXEC AddMultiple @category = '<categories>
<category user="13284">1</category>
<category user="132">2</category>
</categories>';
Then checked by query the table.
select * from TEST;
I have an Id column in this view, but it jumps from 40,000 to 7,000,000.
I don't want my crazy stored procedure to loop until it reaches 7,000,000, so I was wondering if I could create a column that was the row number. It would be an expression of some sort, but I don't know how to make it. Please assist!
Thank you in advance.
You really should do what you're doing with updates, not loops. But if you insist...
declare @ID int
declare @LastID int

select @LastID = 0

while (1 = 1)
begin
    select @ID = min(Id)
    from [vCategoryClaimsData]
    where Id > @LastID

    -- if no ID found then we've reached the end of the table
    if @ID is null break

    -- look up the data for @ID
    SELECT @claim_Number = dbo.[vCategoryClaimsData].[Claim No],
    ...
    where Id = @ID

    -- do your processing here
    ...

    -- set @LastID to the ID you just processed
    select @LastID = @ID
end
Make sure the Id column is indexed. This will allow skipping the non-sequential Id values.
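Assuming the view reads from a base table (dbo.CategoryClaims below is just a placeholder name), an index along these lines is what supports that min(Id) seek on each iteration:

CREATE NONCLUSTERED INDEX IX_CategoryClaims_Id
    ON dbo.CategoryClaims (Id);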
That being said, it looks like the processing you're doing could be handled with update statements. That would be much more efficient, and eliminate many of the problems others have brought up.
If you can, try and re-write the stored procedure to use sets versus row based processing.
To do what you need, you'll use the ROW_NUMBER function. To do this, I've provided some sample code below.
USE tempdb
GO
IF OBJECT_ID('tempdb.dbo.IDRownumbersView') IS NOT NULL
DROP VIEW dbo.IDRownumbersView
IF OBJECT_ID('tempdb.dbo.IDRownumbersTable') IS NOT NULL
DROP TABLE dbo.IDRownumbersTable
CREATE TABLE dbo.IDRownumbersTable
(
RowID int PRIMARY KEY CLUSTERED
,CharValue varchar(5)
,DateValue datetime
)
INSERT INTO IDRownumbersTable VALUES (10, 'A', GETDATE())
INSERT INTO IDRownumbersTable VALUES (20, 'B', GETDATE())
INSERT INTO IDRownumbersTable VALUES (30, 'C', GETDATE())
INSERT INTO IDRownumbersTable VALUES (40, 'D', GETDATE())
INSERT INTO IDRownumbersTable VALUES (50, 'E', GETDATE())
INSERT INTO IDRownumbersTable VALUES (100, 'F', GETDATE())
INSERT INTO IDRownumbersTable VALUES (110, 'G', GETDATE())
INSERT INTO IDRownumbersTable VALUES (120, 'H', GETDATE())
GO
CREATE VIEW dbo.IDRownumbersView
AS
SELECT ROW_NUMBER() OVER (ORDER BY RowID ASC) AS RowNumber
,RowID
,CharValue
,DateValue
FROM dbo.IDRownumbersTable
GO
SELECT * FROM dbo.IDRownumbersView
Relational tables have no row numbers. You can project a row number into a result by using the built in ROW_NUMBER() OVER (ORDER BY ...) function.
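Applied to the view from your question, that projection might look like this (assuming the remaining columns stay as they are):

SELECT ROW_NUMBER() OVER (ORDER BY Id) AS RowNumber,
       *
FROM dbo.vCategoryClaimsData;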
Your procedure has many, many problems. It uses the loop @counter as a lookup key (!!!). It assumes key stability between iterations (i.e. it assumes @counter+1 is the next key, ignoring any concurrent insert/delete). It assumes stability inside the loop (no transactions, no locking to ensure the validity of EXISTS).
What you're trying to do is emulate a keyset-driven cursor. Just use a keyset cursor.
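A minimal sketch of what that might look like (variable and column names borrowed from the loop above; the variable's type is an assumption):

DECLARE @claim_Number varchar(50); -- type is illustrative

DECLARE claims_cursor CURSOR KEYSET FOR
    SELECT [Claim No]
    FROM dbo.[vCategoryClaimsData]
    ORDER BY Id;

OPEN claims_cursor;
FETCH NEXT FROM claims_cursor INTO @claim_Number;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- do your processing here

    FETCH NEXT FROM claims_cursor INTO @claim_Number;
END

CLOSE claims_cursor;
DEALLOCATE claims_cursor;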