Importing a large (2.5GB) XML file into SQL Server - sql-server

Hi, I am trying to import a large XML file into a table on my SQL Server (2014) instance.
I have used the code below for smaller files and thought it would be OK since this is a one-off, but I kicked it off yesterday and the query was still running when I came into work today, so this is obviously the wrong route.
Here is the code:
CREATE TABLE files_index_bulk
(
Id INT IDENTITY PRIMARY KEY,
XMLData XML,
LoadedDateTime DATETIME
)
INSERT INTO files_index_bulk(XMLData, LoadedDateTime)
SELECT CONVERT(XML, BulkColumn, 2) AS BulkColumn, GETDATE()
FROM OPENROWSET(BULK 'c:\scripts\icecat\files.index.xml', SINGLE_BLOB) AS x;
SELECT * FROM files_index_bulk
Can anyone point out another way of doing this, please? I've looked around at importing large files and it keeps coming back to using bulk insert, which I am already doing.
Thanks in advance.
Here is the table I want to pull all the data into:
USE [ICECATtesting]
GO
/****** Object: Table [dbo].[files_index] Script Date: 28/04/2017 20:10:44 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[files_index](
[Product_ID] [int] NULL,
[path] [varchar](100) NULL,
[Updated] [varchar](50) NULL,
[Quality] [varchar](50) NULL,
[Supplier_id] [int] NULL,
[Prod_ID] [varchar](1) NULL,
[Catid] [int] NULL,
[On_Market] [int] NULL,
[Model_Name] [varchar](250) NULL,
[Product_View] [int] NULL,
[HighPic] [varchar](1) NULL,
[HighPicSize] [int] NULL,
[HighPicWidth] [int] NULL,
[HighPicHeight] [int] NULL,
[Date_Added] [varchar](150) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
And here is a snippet of the XML file:
<ICECAT-interface xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://data.icecat.biz/xsd/files.index.xsd">
<files.index Generated="20170427010009">
<file path="export/level4/EN/11.xml" Product_ID="11" Updated="20170329110432" Quality="SUPPLIER" Supplier_id="2" Prod_ID="PS300E-03YNL-DU" Catid="151" On_Market="0" Model_Name="Satellite 3000-400" Product_View="587591" HighPic="" HighPicSize="0" HighPicWidth="0" HighPicHeight="0" Date_Added="20050627000000">
</file>
<file path="export/level4/EN/12.xml" Product_ID="12" Updated="20170329110432" Quality="ICECAT" Supplier_id="7" Prod_ID="91.42R01.32H" Catid="151" On_Market="0" Model_Name="TravelMate 740LF" Product_View="40042" HighPic="http://images.icecat.biz/img/norm/high/12-31699.jpg" HighPicSize="19384" HighPicWidth="170" HighPicHeight="192" Date_Added="20050627000000">
</file>
<file path="export/level4/EN/13.xml" Product_ID="13" Updated="20170329110432" Quality="SUPPLIER" Supplier_id="2" Prod_ID="PP722E-H390W-NL" Catid="151" On_Market="0" Model_Name="Portégé 7220CT / NW2" Product_View="37021" HighPic="http://images.icecat.biz/img/norm/high/13-31699.jpg" HighPicSize="27152" HighPicWidth="280" HighPicHeight="280" Date_Added="20050627000000">
</file>

The max size of an XML column value in SQL Server is 2GB. It will not be possible to import a 2.5GB file into a single XML column.
UPDATE
Since your underlying objective is to transform XML elements within the file into table rows, you don't need to stage the entire file contents into a single XML column. You can avoid the 2GB limitation, reduce memory requirements, and improve performance by shredding the XML in client code and using a bulk insert technique to insert batches of multiple rows.
The example PowerShell script below uses an XmlTextReader to avoid reading the entire XML into a DOM, and uses SqlBulkCopy to insert batches of many rows at once. The combination of these techniques should allow you to insert millions of rows in minutes rather than hours. The same techniques can be implemented in a custom app or an SSIS script task.
I noticed a couple of the table columns specify varchar(1) even though the XML attribute values contain many characters. You'll need to either expand the length of those columns or transform the source values.
[String]$global:connectionString = "Data Source=YourServer;Initial Catalog=YourDatabase;Integrated Security=SSPI";
[System.Data.DataTable]$global:dt = New-Object System.Data.DataTable;
[System.Xml.XmlTextReader]$global:xmlReader = New-Object System.Xml.XmlTextReader("C:\FilesToImport\files.xml");
[Int32]$global:batchSize = 10000;
Function Add-FileRow() {
$newRow = $dt.NewRow();
$null = $dt.Rows.Add($newRow);
$newRow["Product_ID"] = $global:xmlReader.GetAttribute("Product_ID");
$newRow["path"] = $global:xmlReader.GetAttribute("path");
$newRow["Updated"] = $global:xmlReader.GetAttribute("Updated");
$newRow["Quality"] = $global:xmlReader.GetAttribute("Quality");
$newRow["Supplier_id"] = $global:xmlReader.GetAttribute("Supplier_id");
$newRow["Prod_ID"] = $global:xmlReader.GetAttribute("Prod_ID");
$newRow["Catid"] = $global:xmlReader.GetAttribute("Catid");
$newRow["On_Market"] = $global:xmlReader.GetAttribute("On_Market");
$newRow["Model_Name"] = $global:xmlReader.GetAttribute("Model_Name");
$newRow["Product_View"] = $global:xmlReader.GetAttribute("Product_View");
$newRow["HighPic"] = $global:xmlReader.GetAttribute("HighPic");
$newRow["HighPicSize"] = $global:xmlReader.GetAttribute("HighPicSize");
$newRow["HighPicWidth"] = $global:xmlReader.GetAttribute("HighPicWidth");
$newRow["HighPicHeight"] = $global:xmlReader.GetAttribute("HighPicHeight");
$newRow["Date_Added"] = $global:xmlReader.GetAttribute("Date_Added");
}
try
{
# init data table schema
$da = New-Object System.Data.SqlClient.SqlDataAdapter("SELECT * FROM dbo.files_index WHERE 0 = 1;", $global:connectionString);
$null = $da.Fill($global:dt);
$bcp = New-Object System.Data.SqlClient.SqlBulkCopy($global:connectionString);
$bcp.DestinationTableName = "dbo.files_index";
$recordCount = 0;
while($xmlReader.Read() -eq $true)
{
if(($xmlReader.NodeType -eq [System.Xml.XmlNodeType]::Element) -and ($xmlReader.Name -eq "file"))
{
Add-FileRow;
$recordCount += 1;
if(($recordCount % $global:batchSize) -eq 0)
{
$bcp.WriteToServer($dt);
$dt.Rows.Clear();
Write-Host "$recordCount file elements processed so far";
}
}
}
if($dt.Rows.Count -gt 0)
{
$bcp.WriteToServer($dt);
}
$bcp.Close();
$xmlReader.Close();
Write-Host "$recordCount file elements imported";
}
catch
{
throw;
}

Try this. It's just another method that I have used for some time, and it's pretty fast (though it could be faster). I pull a huge XML DB from a gaming company every night; this is how I get it and import it.
$xml = new XMLReader();
$xml->open($xml_file); // file is your xml file you want to parse
while($xml->read() && $xml->name != 'game') { ; } // get past the header to your first record (game in my case)
while($xml->name == 'game') { // now while we are in this record
$element = new SimpleXMLElement($xml->readOuterXML());
$gameRec = $this->createGameRecord($element, $os); // this is my function to reduce some clutter - and I use it elsewhere too
/* this looks confusing, but it is not. There are over 20 fields, and instead of typing them all out, I just made a string. */
$sql = "INSERT INTO $table (";
foreach($gameRec as $field=>$game){
$sql .= " $field,";
}
$sql = rtrim($sql, ",");
$sql .=") values (";
foreach($gameRec as $field=>$game) {
$sql .= " :$field,";
}
$sql = rtrim($sql,",");
$sql .= ") ON DUPLICATE KEY UPDATE "; // online game doesn't have a gamerank - not my choice LOL, so I adjust that for here
switch ($os) {
case 'pc' : $sql .= "gamerank = ".$gameRec['gamerank'] ; break;
case 'mac': $sql .= "gamerank = ".$gameRec['gamerank'] ; break;
case 'pl' : $sql .= "playercount = ".$gameRec['playercount'] ; break;
case 'og' :
$playercount = $this->getPlayerCount($gameRec['gameid']);
$sql .= "playercount = ".$playercount['playercount'] ;
break;
}
try {
$stmt = $this->connect()->prepare($sql);
$stmt->execute($gameRec);
} catch (PDOException $e) {// Kludge
echo 'os: '.$os.'<br/>table: '.$table.'<br/>XML LINK: '.$comprehensive_xml.'<br/>Current Record:<br/><pre>'.print_r($gameRec).'</pre><br/>'.
'SQL: '.$sql.'<br/>';
die('Line:33<br/>Function: pullBFG()<BR/>Cannot add game record <br/>'.$e->getMessage());
}
// VERY VERY VERY IMPORTANT: do not forget these 2 lines, or it will go into an endless loop - I know, I've done it; it locks up your system after a bit hahaha
$xml->next('game');
unset($element);
}// while there are games
This should get you started. Obviously, adjust "game" to match your XML records, and trim out the fat I have in here.
Here is createGameRecord($element, $type='pc'):
Basically it turns the element into an array for use elsewhere, and it makes it easy to add to the DB with the single line seen above: $stmt->execute($gameRec);, where $gameRec is the array returned from this function. PDO knows $gameRec is an array and will parse it out as it inserts. delHardReturns() is another of my functions; it strips hard returns (\r, \n, etc.), which seem to mess up the SQL. I think SQL has a function for that, but I have not pursued it.
Hope you find this useful.
private function createGameRecord($element, $type='pc') {
if( ($type == 'pc') || ($type == 'og') ) { // player count is handled separately
$game = array(
'gamename' => strval($element->gamename),
'gameid' => strval($element->gameid),
'genreid' => strval($element->genreid),
'allgenreid' => strval($element->allgenreid),
'shortdesc' => $this->delHardReturns(strval($element->shortdesc)),
'meddesc' => $this->delHardReturns(strval($element->meddesc)),
'bullet1' => $this->delHardReturns(strval($element->bullet1)),
'bullet2' => $this->delHardReturns(strval($element->bullet2)),
'bullet3' => $this->delHardReturns(strval($element->bullet3)),
'bullet4' => $this->delHardReturns(strval($element->bullet4)),
'bullet5' => $this->delHardReturns(strval($element->bullet5)),
'longdesc' => $this->delHardReturns(strval($element->longdesc)),
'foldername' => strval($element->foldername),
'hasdownload' => strval($element->hasdownload),
'hasdwfeature' => strval($element->hasdwfeature),
'releasedate' => strval($element->releasedate)
);
if($type === 'pc') {
$game['hasvideo'] = strval($element->hasvideo);
$game['hasflash'] = strval($element->hasflash);
$game['price'] = strval($element->price);
$game['gamerank'] = strval($element->gamerank);
$game['gamesize'] = strval($element->gamesize);
$game['macgameid'] = strval($element->macgameid);
$game['family'] = strval($element->family);
$game['familyid'] = strval($element->familyid);
$game['productid'] = strval($element->productid);
$game['pc_sysreqos'] = strval($element->systemreq->pc->sysreqos);
$game['pc_sysreqmhz'] = strval($element->systemreq->pc->sysreqmhz);
$game['pc_sysreqmem'] = strval($element->systemreq->pc->sysreqmem);
$game['pc_sysreqhd'] = strval($element->systemreq->pc->sysreqhd);
if(empty($game['gamerank'])) $game['gamerank'] = 99999;
$game['gamesize'] = $this->readableBytes((int)$game['gamesize']);
}// dealing with PC type
if($type === 'og') {
$game['onlineiframeheight'] = strval($element->onlineiframeheight);
$game['onlineiframewidth'] = strval($element->onlineiframewidth);
}
$game['releasedate'] = substr($game['releasedate'],0,10);
} else { // $type == 'pl'
$game['playercount'] = strval($element->playercount);
$game['gameid'] = strval($element->gameid);
}// end else ($type == 'pl')
return $game;
}

Updated: Much faster. I did some research, and while my post above shows one (slow) method, I found another that works even faster - for me, at least.
I'm posting this as a new answer because it is completely different from my previous post.
LOAD XML LOCAL INFILE 'path/to/file.xml' INTO TABLE tablename ROWS IDENTIFIED BY '<xml-identifier>'
Example
<students>
<student>
<name>john doe</name>
<boringfields>bla bla bla......</boringfields>
</student>
</students>
Then the MySQL command would be:
LOAD XML LOCAL INFILE 'path/to/students.xml' INTO TABLE tablename ROWS IDENTIFIED BY '<student>'
The ROWS IDENTIFIED BY value must be wrapped in single quotes and include the angle brackets.
When I switched to this method, I went from roughly 12 minutes to roughly 30 seconds!
One tip that worked for me: run
DELETE FROM tablename
first, otherwise the load will just append to your table.
Ref: https://dev.mysql.com/doc/refman/5.5/en/load-xml.html

Related

linq2db - server side bulkcopy

I'm trying to do a "database side" bulk copy (i.e. SELECT INTO/INSERT INTO) using linq2db. However, my code is trying to bring the dataset over the wire, which is not possible given the size of the DB in question.
My code looks like this:
using (var db = new MyDb()) {
var list = db.SourceTable.
Where(s => s.Year > 2012).
GroupBy(s => new { s.Column1, s.Column2 }).
Select(g => new DestinationTable {
Property1 = "Constant Value",
Property2 = g.First().Column1,
Property3 = g.First().Column2,
Property4 = g.Count(s => s.Column3 == 'Y')
});
db.Execute("TRUNCATE TABLE DESTINATION_TABLE");
db.BulkCopy(new BulkCopyOptions {
BulkCopyType = BulkCopyType.MultipleRows
}, list);
}
The generated SQL looks like this:
BeforeExecute
-- DBNAME SqlServer.2017
TRUNCATE TABLE DESTINATION_TABLE
DataConnection
Query Execution Time (AfterExecute): 00:00:00.0361209. Records Affected: -1.
DataConnection
BeforeExecute
-- DBNAME SqlServer.2017
DECLARE @take Int -- Int32
SET @take = 1
DECLARE @take_1 Int -- Int32
SET @take_1 = 1
DECLARE @take_2 Int -- Int32
...
SELECT
(
SELECT TOP (@take)
[p].[YEAR]
FROM
[dbo].[SOURCE_TABLE] [p]
WHERE
(([p_16].[YEAR] = [p].[YEAR] OR [p_16].[YEAR] IS NULL AND [p].[YEAR] IS NULL) AND ...
...)
FROM SOURCE_TABLE p_16
WHERE p_16.YEAR > 2012
GROUP BY
...
DataConnection
That is all that is logged, as the bulk copy fails with a timeout: SqlException "Execution Timeout Expired".
Please note that running this query as an INSERT INTO statement takes less than 1 second directly in the DB.
PS: Does anyone have recommendations for good code-based ETL tools for large (1TB+) databases? Given the DB size I need things to run in the database and not bring data over the wire. I've tried pyspark, python bonobo, and C# etlbox, and they all move too much data around. I thought linq2db had potential - basically just acting as a C#-to-SQL transpiler - but it is also trying to move data around.
I would suggest rewriting your query, because GROUP BY cannot return a first element. Also, Truncate is part of the library.
var sourceQuery =
from s in db.SourceTable
where s.Year > 2012
select new
{
Source = s,
Count = Sql.Ext.Count(s.Column3 == 'Y' ? (int?)1 : null).Over()
.PartitionBy(s.Column1, s.Column2).ToValue(),
RN = Sql.Ext.RowNumber().Over()
.PartitionBy(s.Column1, s.Column2).OrderByDesc(s.Year).ToValue()
};
db.DestinationTable.Truncate();
sourceQuery.Where(s => s.RN == 1)
.Insert(db.DestinationTable,
e => new DestinationTable
{
Property1 = "Constant Value",
Property2 = e.Source.Column1,
Property3 = e.Source.Column2,
Property4 = e.Count
});
After some investigation I stumbled onto this issue, which led me to the solution. The code above needs to change to:
db.Execute("TRUNCATE TABLE DESTINATION_TABLE");
db.SourceTable.
Where(s => s.Year > 2012).
GroupBy(s => new { s.Column1, s.Column2 }).
Select(g => new DestinationTable {
Property1 = "Constant Value",
Property2 = g.First().Column1,
Property3 = g.First().Column2,
Property4 = g.Count(s => s.Column3 == 'Y')
}).Insert(db.DestinationTable, e => e);
Documentation of the linq2db project leaves a bit to be desired; however, in terms of functionality it is looking like a great project for ETLs (without horrible thousands-of-lines copy/paste SQL/SSIS scripts).

Cannot understand how Entity Framework generates a SQL statement for an update operation using timestamp?

I have the following method inside my asp.net mvc web application :
var rack = IT.ITRacks.Where(a => !a.Technology.IsDeleted && a.Technology.IsCompleted);
foreach (var r in rack)
{
long? it360id = technology[r.ITRackID];
if (it360resource.ContainsKey(it360id.Value))
{
long? CurrentIT360siteid = it360resource[it360id.Value];
if (CurrentIT360siteid != r.IT360SiteID)
{
r.IT360SiteID = CurrentIT360siteid.Value;
IT.Entry(r).State = EntityState.Modified;
count = count + 1;
}
}
IT.SaveChanges();
}
When I checked SQL Server Profiler I noticed that EF generated the following SQL statement:
exec sp_executesql N'update [dbo].[ITSwitches]
set [ModelID] = @0, [Spec] = null, [RackID] = @1, [ConsoleServerID] = null, [Description] = null, [IT360SiteID] = @2, [ConsoleServerPort] = null
where (([SwitchID] = @3) and ([timestamp] = @4))
select [timestamp]
from [dbo].[ITSwitches]
where @@ROWCOUNT > 0 and [SwitchID] = @3',N'@0 int,@1 int,@2 bigint,@3 int,@4 binary(8)',@0=1,@1=539,@2=1502,@3=1484,@4=0x00000000000EDCB2
I cannot understand the purpose of the following section:
select [timestamp]
from [dbo].[ITSwitches]
where @@ROWCOUNT > 0 and [SwitchID] = @3',N'@0 int,@1 int,@2 bigint,@3 int,@4 binary(8)',@0=1,@1=539,@2=1502,@3=1484,@4=0x00000000000EDCB2
Can anyone advise?
Entity Framework uses timestamps to check whether a row has changed. If the row has changed since the last time EF retrieved it, then it knows it has a concurrency problem.
Here's an explanation:
http://www.remondo.net/entity-framework-concurrency-checking-with-timestamp/
This is because EF (and you) want to update the client-side object with the newly generated rowversion value.
First the update is executed. If this succeeds (because the rowversion is still the one you had on the client), a new rowversion is generated by the database and EF retrieves that value. Suppose you immediately wanted to make a second update: that would be impossible if you didn't have the new rowversion.
This happens with all properties that are marked as identity or computed (by DatabaseGeneratedOption).
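For illustration, here is a minimal sketch (entity and property names are hypothetical, not taken from the question) of how such a concurrency token is typically declared with EF data annotations. The [Timestamp] attribute maps the property to a SQL Server rowversion column, which EF then compares in the UPDATE's WHERE clause and re-selects afterwards - exactly the pattern visible in the profiler output above.
using System.ComponentModel.DataAnnotations;

public class ITSwitch
{
    // Key and property names are assumptions for this sketch.
    [Key]
    public int SwitchID { get; set; }

    public long? IT360SiteID { get; set; }

    // Maps to a SQL Server rowversion column. EF includes the current value
    // in the UPDATE's WHERE clause and selects the new value after the update.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}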

Update or insert from csv into sql server

I have a CSV file containing user info, say
first_name;last_name;user_id;full_name
column separator is ;, row terminator is \n.
What I need to do is to insert or update into users table. Unique key is user_id: if record with this user_id already exists, I need to update, if it doesn't I need to insert.
However, there are some problems that prevent me from using management studio data-import or bulk insert.
First, there are more fields in the users table (not just four), and the order of columns in the CSV file does not correspond to the order of columns in the table. So I need to be able to specify which column from the file goes to which column in the table.
Secondly, some additional fields need to be filled. For example, users.email = users.user_id. Here is another obstacle: though users.email = users.user_id holds for a newly inserted row, users.email may change in the future, so I cannot just insert user_id and then run update [users] set [email] = [user_id].
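One common way to handle both obstacles (not taken from the answers below) is to bulk copy the file into a staging table with explicit column mappings, then MERGE the staging rows into users, updating matched user_ids and inserting unmatched ones with email = user_id. A minimal C# sketch of that pattern follows; the connection string, file path, and staging table dbo.users_staging are assumptions, and the naive split ignores quoted fields.
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class CsvUpsert
{
    static void Main()
    {
        var connStr = "Data Source=YourServer;Initial Catalog=YourDb;Integrated Security=SSPI";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // 1. Read the CSV into an in-memory table (header row skipped).
            var dt = new DataTable();
            dt.Columns.Add("user_id");
            dt.Columns.Add("first_name");
            dt.Columns.Add("last_name");
            dt.Columns.Add("full_name");
            foreach (var line in File.ReadLines(@"C:\data\users.csv").Skip(1))
            {
                var f = line.Split(';'); // first_name;last_name;user_id;full_name
                dt.Rows.Add(f[2], f[0], f[1], f[3]);
            }

            // 2. Bulk copy into a staging table, mapping columns by name so the
            //    file's column order does not need to match the table's.
            using (var bcp = new SqlBulkCopy(conn) { DestinationTableName = "dbo.users_staging" })
            {
                foreach (DataColumn c in dt.Columns)
                    bcp.ColumnMappings.Add(c.ColumnName, c.ColumnName);
                bcp.WriteToServer(dt);
            }

            // 3. Upsert: update existing user_ids; insert new ones and
            //    initialize email = user_id only for the inserted rows.
            var merge = @"
MERGE dbo.users AS t
USING dbo.users_staging AS s ON t.user_id = s.user_id
WHEN MATCHED THEN UPDATE SET
    t.first_name = s.first_name, t.last_name = s.last_name, t.full_name = s.full_name
WHEN NOT MATCHED THEN INSERT
    (user_id, first_name, last_name, full_name, email)
    VALUES (s.user_id, s.first_name, s.last_name, s.full_name, s.user_id);";
            new SqlCommand(merge, conn).ExecuteNonQuery();
        }
    }
}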
Use fgetcsv.
Example with two columns:
<?php
$row = 0;
$update = "";
//column separator
$separator = ";";
//two variables holding the indexes of the columns to read/modify
$idx_nom = 0;
$idx_rubrique = 1;
//open the file for reading
if (($handle = fopen("test.csv", "r")) !== FALSE)
{
//walk the file line by line, storing the data in an array
while (($data = fgetcsv($handle, 1000, ";")) !== FALSE)
{
//only process from the second line onward - the first holds the column headers
if ($row != 0)
{
//conditions on one column
if (stristr($data[$idx_nom], 'chemise'))
{
$data[$idx_rubrique] = 1;
}
else if (stristr($data[$idx_nom], 'costume'))
{
$data[$idx_rubrique] = 2;
}
else if (stristr($data[$idx_nom], 'cravate'))
{
$data[$idx_rubrique] = 3;
}
}
$update .= implode($separator,$data)."\r\n";
$row++;
}
fclose($handle);
}
//write the updated rows back to the csv
$ouvre=fopen("test.csv","w+");
fwrite($ouvre,$update);
fclose($ouvre);
?>

How to insert a file into sql-server via tiny_tds?

In a data importing script:
client = TinyTds.Client.new(...)
insert_str = "INSERT INTO [...] (...) VALUE (...)"
client.execute(insert_str).do
So far so good.
However, how can I attach a .pdf file into the varbinary field (SQL Server 2000)?
I've recently had the same issue, and using activerecord was not really suited to what I wanted to do...
So, without using activerecord:
client = TinyTds.Client.new(...)
data = "0x" + File.open(file, 'rb').read.unpack('H*').first
insert_str = "INSERT INTO [...] (...) VALUE (... #{data})"
client.execute(insert_str).do
To send proper varbinary data, you need to read the file, convert it to a hexadecimal string with unpack('H*').first, and prepend '0x' to the result.
Here is PHP-MSSQL code to save binary data:
mssql_query("SET TEXTSIZE 2147483647",$link);
$sql = "UPDATE UploadTable SET UploadTable_Data = ".varbinary_encode($data)." WHERE Person_ID = '".intval($p_id)."'";
mssql_query($sql,$link) or
die('cannot upload_resume() in '.__FILE__.' on line '.__LINE__.'.<br/>'.mssql_get_last_message());
function varbinary_encode($data=null) {
$encoded = null;
if (!is_null($data)) {
$a = unpack("H*hex", $data);
$encoded = "0x";
$encoded .= $a['hex'];
}
return $encoded;
}
Here is PHP-MSSQL code to get binary data:
mssql_query("SET TEXTSIZE 2147483647",$link);
$sql = "SELECT * FROM UploadTable WHERE ID = 123";
$db_result = mssql_query($sql,$link);
// work with result like normal
I ended up using activerecord:
require 'rubygems'
require 'tiny_tds'
require 'activerecord-sqlserver-adapter'
..
my_table.create(:file_name => "abc.pdf", :file_data => File.open("abc.pdf", "rb").read)
For SQL Server 2000 support, go for the 2.3.x version of the activerecord-sqlserver-adapter gem.

EF ExecuteStoredCommand with ReturnValue parameter

I'm creating a new application which needs to interface with legacy code :(.
The stored procedure I'm attempting to call uses RETURN for its result. My attempts to execute and consume the return value result in the exception:
InvalidOperationException: When executing a command, parameters must be exclusively database parameters or values.
Changing the stored proc to return the value another way isn't desired, since it either requires updating the legacy app or maintaining a nearly duplicate stored proc.
The legacy stored proc synopsis:
DECLARE @MyID INT
INSERT INTO MyTable ...
SELECT @MyID = SCOPE_IDENTITY()
RETURN @MyID
My Entity Framework / DbContext attempt, which yields the above InvalidOperationException:
SqlParameter parm = new SqlParameter() {
ParameterName = "#MyID",
Direction = System.Data.ParameterDirection.ReturnValue
};
DbContext.Database.ExecuteSqlCommand("EXEC dbo.MyProc", parm);
Looking for any and all solutions which don't require the stored proc to be modified.
You can capture the return value of the stored procedure into an output parameter instead:
SqlParameter parm = new SqlParameter() {
ParameterName = "#MyID",
SqlDbType = SqlDbType.Int,
Direction = System.Data.ParameterDirection.Output
};
Database.ExecuteSqlCommand("exec #MyId = dbo.MyProc", parm);
int id = (int)parm.Value;
I know it's a bit late, but this works for me:
var param = new SqlParameter("@Parameter1", txtBoxORsmth.Text);
someVariable = ctx.Database.SqlQuery<int>("EXEC dbo.MyProc @Parameter1", param).First();
You don't have to use ExecuteSqlCommand.
You can just get the underlying connection from DbContext.Database.Connection and use raw ADO.NET (CreateCommand(), ExecuteNonQuery(), ...)
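A minimal sketch of that approach, reusing dbo.MyProc from the question (ctx stands in for your DbContext instance):
using System.Data;
using System.Data.SqlClient;

var conn = (SqlConnection)ctx.Database.Connection;
if (conn.State != ConnectionState.Open)
    conn.Open();

using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = "dbo.MyProc";
    cmd.CommandType = CommandType.StoredProcedure;

    // Raw ADO.NET supports ReturnValue parameters directly.
    SqlParameter ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
    ret.Direction = ParameterDirection.ReturnValue;

    cmd.ExecuteNonQuery();
    int myId = (int)ret.Value; // the value supplied by RETURN @MyID
}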
The error message
InvalidOperationException: When executing a command, parameters must be exclusively database parameters or values.
means that you're not providing the right type (or something else that isn't shown in your code snippet) in the params list of SqlParameters.
In my case I had forgotten to remove a MergeOption because I changed the way the SQL command was executed.
This extension method will do all the dirty work for you. See the fuller description on SO here.
This will return the int from a stored proc using DBContext:
var newId = DbContext.Database.SqlQuery<int>("EXEC dbo.MyProc @MyID = {0}", parm).First();
Create the following table for the with-parameter example for testing, or change the second SQL query to match your own schema.
CREATE TABLE [dbo].[Test](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NOT NULL,
CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
//========================= =================================//
public void Test()
{
using (var db = new DbContext())
{
string sql = "dbo.MyProc"; //With Out Parameter
int id1 = (int)db.Database.SqlQuery<decimal>(sql).FirstOrDefault();
db.SaveChanges();
//Or
sql = "INSERT Test(Name) values({0}) SELECT SCOPE_IDENTITY();"; //With Parameter
int id2 = (int)db.Database.SqlQuery<decimal>(sql, new object[] { "Thulasi Ram.S" }).FirstOrDefault();
db.SaveChanges();
}
}
I tried the ways above, but only this way worked for me (the ToList() call is required):
SqlParameter res = new SqlParameter()
{
ParameterName = "@Count",
Value = 1,
Direction = System.Data.ParameterDirection.Output
};
db.Database.SqlQuery<object>(
"[dbo].[GetWorkerCountBySearchConditions] @Count out",
res
).ToList();
return Convert.ToInt32(res.Value);
I'm sure this isn't the only valid answer, but one that I ultimately used and has been working successfully.
The key seemed to be naming the ReturnValue parameter RetVal.
SqlParameter id = proc.Parameters.Add("RetVal", System.Data.SqlDbType.Int);
id.Direction = System.Data.ParameterDirection.ReturnValue;
Putting it all together:
SqlCommand proc = new SqlCommand("dbo.MyProc", new SqlConnection("<connection string>"));
proc.CommandType = System.Data.CommandType.StoredProcedure;
proc.Connection.Open();
SqlParameter id = proc.Parameters.Add("RetVal", System.Data.SqlDbType.Int);
id.Direction = System.Data.ParameterDirection.ReturnValue;
proc.ExecuteNonQuery();
int newId = Convert.ToInt32(id.Value);
