I'm trying to insert many records from a DataTable into a SQL Server table through a stored procedure, and I need to insert all of the DataTable's rows together in one call. I'm using C# and ADO.NET, and I'd like to know about table-valued parameter types and whether using one would help here.
To pass many rows efficiently to a stored procedure, use a table-valued parameter (TVP). The C# application sets the parameter's SqlDbType to Structured and assigns a DataTable as its value. For maximum performance, make sure the DataTable column types match the server-side table type's column types (including the maximum length for string columns).
Below is an example excerpt from the Microsoft documentation:
// Assumes connection is an open SqlConnection object.
using (connection)
{
    // Create a DataTable containing only the newly added rows.
    DataTable addedCategories = CategoriesDataTable.GetChanges(DataRowState.Added);

    // Configure the SqlCommand and the table-valued parameter.
    SqlCommand insertCommand = new SqlCommand("usp_InsertCategories", connection);
    insertCommand.CommandType = CommandType.StoredProcedure;
    SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@tvpNewCategories", addedCategories);
    tvpParam.SqlDbType = SqlDbType.Structured;

    // Execute the command.
    insertCommand.ExecuteNonQuery();
}
Here are the T-SQL snippets to create the table type and the stored procedure.
CREATE TYPE dbo.CategoryTableType AS TABLE
( CategoryID int, CategoryName nvarchar(50) );
GO

CREATE PROC dbo.usp_InsertCategories
    @tvpNewCategories dbo.CategoryTableType READONLY
AS
INSERT INTO dbo.Categories (CategoryID, CategoryName)
SELECT nc.CategoryID, nc.CategoryName FROM @tvpNewCategories AS nc;
GO
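For quick testing from SSMS, the type and procedure above can be exercised directly in T-SQL. This is a sketch assuming a dbo.Categories target table exists; the sample category values are made up:

```sql
-- Declare a variable of the table type and fill it with sample rows.
DECLARE @newCategories dbo.CategoryTableType;

INSERT INTO @newCategories (CategoryID, CategoryName)
VALUES (10, N'Beverages'),
       (11, N'Condiments');

-- Pass the variable as the TVP argument.
EXEC dbo.usp_InsertCategories @tvpNewCategories = @newCategories;
```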
Even for trivial inserts, a TVP can deliver performance similar to SqlBulkCopy (many thousands of rows per second), with the added advantage of a stored procedure interface.
Related
When our applications submit SQL via ADO.NET with input parameters, the parameter definitions default to nvarchar. If the corresponding column is defined as varchar and is indexed, the index is not used, resulting in a scan instead of a seek. We are converting from Teradata to SQL Server, so at this point in the conversion this is a systemic issue. The applications team submitted this to me:
When we define anything as a String in code, the ADO.NET provider automatically assumes that is an
NVarchar in SQL Server.
One of their proposed solutions is to apply the fix only to tables with over 1,000 rows. I think this is faulty on many levels, but I'm looking for additional input.
I am a Teradata DBA transitioning to a MSSQL DBA.
I would assume this behavior in ADO.NET is configurable. To me it is obvious that the input parameter definition needs to match the column definition in the table, especially if the column is part of an index; otherwise the mismatch results in a full table scan.
Can anybody help me with (1) how to set the input parameter definition to match the table definition, and (2) if this is systemic, why it would be a bad idea to fix only those parameters and queries where the table is over 1,000 rows?
As the code below shows, you can explicitly specify the data type of each parameter:
connection.Open();
SqlCommand command = new SqlCommand(null, connection);

// Create and prepare a SQL statement.
command.CommandText =
    "INSERT INTO Region (RegionID, RegionDescription) " +
    "VALUES (@id, @desc)";
SqlParameter idParam = new SqlParameter("@id", SqlDbType.Int, 0);
SqlParameter descParam =
    new SqlParameter("@desc", SqlDbType.Text, 100);
idParam.Value = 20;
descParam.Value = "First Region";
command.Parameters.Add(idParam);
command.Parameters.Add(descParam);
command.Prepare();
command.ExecuteNonQuery();
See Microsoft's description of prepared statements.
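To address the nvarchar default specifically, a minimal sketch of the fix looks like this. The table and column names (dbo.Customers, CustomerName varchar(50)) are hypothetical stand-ins for your own schema:

```csharp
SqlCommand cmd = new SqlCommand(
    "SELECT CustomerID FROM dbo.Customers WHERE CustomerName = @name",
    connection);

// Declare the parameter explicitly as varchar(50) to match the column
// definition. AddWithValue would infer nvarchar from the .NET string,
// forcing an implicit conversion that prevents an index seek.
cmd.Parameters.Add("@name", SqlDbType.VarChar, 50).Value = "Contoso";
```

The key point is to call Parameters.Add with an explicit SqlDbType and length rather than AddWithValue, so the parameter's type matches the indexed column.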
I am working on a project that consists of transferring a few thousand Excel rows to a SQL Server database (its dialect is called T-SQL, if I'm right?). I put together some logic in VBA to shape up the data.
Let me give you some context first. The data I'm about to transfer are invoice files. Each row holds the stock item code, price, invoice number, invoice date, client name, etc. These need to be transferred to the database of a proprietary ERP system.
There are two tables in the database which I'm interested in for now. The first holds the header data for the invoice (client data, date, invoice number, invoice total, etc.). The second holds the information on the stock items (what has been sold, how many, and for how much money, etc.).
After each insert into the first table, I have to get the inserted row's primary key in order to insert rows into the second table, which requires the first table's PK on each row.
Now, my approach was to use T-SQL's SCOPE_IDENTITY() function. When I try it directly on the database via SQL Server Management Studio, it works without a hitch.
But when I try to use it in the code, it returns an empty recordset.
Code I'm using is as follows:
Public Function Execute(query As String, Optional is_batch As Boolean = False) As ADODB.Recordset
    If conn.State = 0 Then
        OpenConnection
    End If

    Set rs = conn.Execute(query) 'this is the actual query to be executed

    Dim identity As ADODB.Recordset 'this rs is supposed to hold the PK of the inserted row, but comes back empty
    Set identity = conn.Execute("SELECT SCOPE_IDENTITY();")

    If TypeName(identity.Fields(0).Value) = "Null" Then
        pInsertedId = -1
    Else
        pInsertedId = identity.Fields(0).Value 'saved in an object variable, to access it easily later on
    End If

    Set Execute = rs 'to be returned to the caller
    'closing the connection is handled outside this procedure
End Function
When I run this in VBA, the second query, SELECT SCOPE_IDENTITY();, just returns an empty recordset. The same query works successfully when run on the db directly.
Actually, I'm able to pull this off by other means. There is a UUID column that I'm supposed to insert into the row in the first table; I can simply query the table with this UUID and get the PK. But I'm curious why this won't work.
Any ideas?
Your second query doesn't insert any data, so no identity value is generated in its scope, as defined in the official documentation for SCOPE_IDENTITY():
Returns the last identity value inserted into an identity column in the same scope. A scope is a module: a stored procedure, trigger, function, or batch. Therefore, if two statements are in the same stored procedure, function, or batch, they are in the same scope.
Your code is effectively the same as inserting data in one query window in SSMS and then querying SCOPE_IDENTITY() in another query window. That isn't how it works: you must query it in the same scope, i.e. the same stored procedure, trigger, function, or batch. Otherwise, generate the ID values yourself and insert them along with the data.
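A sketch of the fix in the same ADO/VBA setup: send the INSERT and the SCOPE_IDENTITY() query as one batch in a single conn.Execute call, so both statements share a scope. The table and column names here are hypothetical:

```vb
Dim rs As ADODB.Recordset
Dim sql As String

' Run the INSERT and the identity query as ONE batch. SET NOCOUNT ON
' suppresses the INSERT's rows-affected result, so the first recordset
' returned by Execute is the SELECT.
sql = "SET NOCOUNT ON; " & _
      "INSERT INTO dbo.InvoiceHeader (InvoiceNumber) VALUES ('INV-001'); " & _
      "SELECT SCOPE_IDENTITY() AS NewId;"

Set rs = conn.Execute(sql)
pInsertedId = rs.Fields("NewId").Value
```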
I can use code like this to extract the columns from a SQL Server 2012 table:
var sqlConnection = new SqlConnection(conns);
var dt = sqlConnection.GetSchema(
    SqlClientMetaDataCollectionNames.Columns,
    new string[] { null, null, "mytable", null });
However, I am unable to determine the right kind of schema query to get the columns from my user-defined Table type. How is that done?
All ideas appreciated (Using .NET 4.5.1).
Not sure if this is still a relevant question, but I just received an answer to something nearly identical. Check out the answer to this question:
Retrieve UDT table structure in VB.NET
The answer shows how to get a "sanitized" type name from sys.types, then builds a simple SQL query of the form:
declare @t MyUDTType; select * from @t;
and returns the resulting empty DataTable, whose schema describes the table type, to the calling application.
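A hedged sketch of that approach in C#, assuming a table type named dbo.MyUDTType (substitute your own type name, e.g. from a sys.types lookup):

```csharp
var dt = new DataTable();
using (var conn = new SqlConnection(conns))
using (var cmd = new SqlCommand(
    "DECLARE @t dbo.MyUDTType; SELECT * FROM @t;", conn))
{
    conn.Open();
    // The SELECT returns zero rows, but the reader's schema
    // describes the table type's columns and their data types.
    using (var reader = cmd.ExecuteReader())
    {
        dt.Load(reader);
    }
}
// dt.Columns now mirrors the UDT's column definitions.
```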
I have a stored procedure in SQL Server 2012 that I need to pass multiple parameters to. My application builds a report for all employees or for selected employees; I have a checked list box so the user can choose certain employees instead of all of them, and I want to use those selections in the stored procedure's WHERE clause. I've read that in SQL Server 2012 you can pass a table as a parameter with multiple values, but I can't find a good example that fits my situation. Any help would be greatly appreciated.
Let's say you are passing EmployeeIDs, and they are integers. First, create a table type in the database:
CREATE TYPE dbo.EmployeesTVP AS TABLE(EmployeeID INT PRIMARY KEY);
Now your stored procedure can say:
CREATE PROCEDURE dbo.GetEmployees
    @empTVP dbo.EmployeesTVP READONLY
AS
BEGIN
    SET NOCOUNT ON;

    SELECT EmployeeID, Name FROM dbo.Employees AS e
    WHERE EXISTS (SELECT 1 FROM @empTVP WHERE EmployeeID = e.EmployeeID);
END
GO
And if you want to handle the all scenario, you can say:
IF EXISTS (SELECT 1 FROM @empTVP)
    SELECT EmployeeID, Name FROM dbo.Employees AS e
    WHERE EXISTS (SELECT 1 FROM @empTVP WHERE EmployeeID = e.EmployeeID);
ELSE
    SELECT EmployeeID, Name FROM dbo.Employees;
(You could combine these with an OR conditional, but I tend to find this just gives the optimizer fits.)
Then you create a DataTable in your C# code from the check boxes, and pass it to your stored procedure as a parameter of type Structured. You can fill in the rest, but:
DataTable tvp = new DataTable();
// define / populate the DataTable from the check boxes

using (connectionObject)
{
    SqlCommand cmd = new SqlCommand("dbo.GetEmployees", connectionObject);
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter tvparam = cmd.Parameters.AddWithValue("@empTVP", tvp);
    tvparam.SqlDbType = SqlDbType.Structured;
    // ... execute the cmd, grab a reader, etc.
}
Complementary to Aaron's response:
1/ Send your data packed as XML and use an xml data type for your parameter,
or
2/ Join your employee ID values into a single comma-separated string, then split those values on the server.
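As a sketch of the split-string approach: STRING_SPLIT is available from SQL Server 2016 onward, so on the 2012 instance from the question you would need a user-defined split function instead. The query below assumes the same hypothetical dbo.Employees table:

```sql
DECLARE @ids varchar(max) = '3,7,42';

SELECT EmployeeID, Name
FROM dbo.Employees AS e
WHERE EXISTS (SELECT 1
              FROM STRING_SPLIT(@ids, ',') AS s
              WHERE CAST(s.value AS int) = e.EmployeeID);
```

Note that TVPs are usually preferable: a delimited string loses type safety and the optimizer has no statistics on the split output.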
I'm using SqlBulkCopy to insert/update from a .NET DataTable object to a SQL Server table that includes a sql_variant column. However, SqlBulkCopy insists on storing DateTime values put into that column as sql type 'datetime', when what I need is 'datetime2'.
My DataTable is defined like this:
DataTable dataTable = new DataTable();
dataTable.Columns.Add(new DataColumn("VariantValue", typeof(object))); //this represents my sql_variant column
Then I throw some data in there that requires a 'datetime2' to store.
DataRow row = dataTable.NewRow();
row[0] = DateTime.MinValue;
dataTable.Rows.Add(row);
And then I use SqlBulkCopy to get that data into Sql Server:
using (SqlBulkCopy bulk = new SqlBulkCopy(myConnection))
{
    bulk.DestinationTableName = "tblDestination";
    bulk.WriteToServer(dataTable);
}
My bulk copy will fail if a DateTime value is present in the data table that falls outside the range of the sql 'datetime' type (such as '1/1/0001'). That's why the column needs to be of type 'datetime2'.
When you're writing normal insert statements that insert into a sql_variant column you can control what the type of the variant column is by using CAST or CONVERT. For example:
insert into [tblDestination] (VariantValue) values (CAST('1/1/0001' AS datetime2))
Then if you were to display the actual type of the variant column like this:
SELECT SQL_VARIANT_PROPERTY(VariantValue, 'BaseType') AS basetype FROM tblDestination
You'd see that indeed it is being stored as a 'datetime2'.
But I'm using SqlBulkCopy and, as far as I know, there's no place to tell it that .net DateTime objects should be stored in columns of type 'datetime2' and not 'datetime'. There's no place on the DataTable object, that I know of, to declare this either. Can anyone help me figure out how to get this to happen?
According to the MSDN page for SqlBulkCopy (under "Remarks"):
SqlBulkCopy will fail when bulk loading a DataTable column of type SqlDateTime into a SQL Server column whose type is one of the date/time types added in SQL Server 2008.
So, SqlBulkCopy won't be able to handle datetime2 values here. Instead, I'd suggest one of two options:
Insert each row individually (i.e., use a foreach over your DataTable), handling the data type there. (It may help to wrap the insert in a stored procedure and use SqlCommand.Parameters to type the data.)
Bulk insert into a temp table of strings, then transfer the data to your primary table in SQL, converting data types as necessary. (I think this gets unnecessarily complicated, but you may be able to eke out some performance for large data sets.)
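A hedged sketch of the first option, reusing the tblDestination table and dataTable from the question. Typing the parameter explicitly as SqlDbType.DateTime2 is what controls the base type stored inside the sql_variant column:

```csharp
using (var cmd = new SqlCommand(
    "INSERT INTO tblDestination (VariantValue) VALUES (@v)", myConnection))
{
    // Declare the parameter as datetime2 so values outside the
    // datetime range (such as DateTime.MinValue) are accepted and
    // the variant's BaseType comes out as 'datetime2'.
    cmd.Parameters.Add("@v", SqlDbType.DateTime2);

    foreach (DataRow row in dataTable.Rows)
    {
        cmd.Parameters["@v"].Value = row[0];
        cmd.ExecuteNonQuery();
    }
}
```

Preparing the command (or wrapping the insert in a stored procedure, as suggested above) keeps the per-row overhead modest, though it will still be slower than a true bulk load.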