I have the following table:
CREATE TABLE DI_Simulation
(
[city] nvarchar(255),
[profession] nvarchar(255)
);
I load the data from a URL with a Script Task, where I created a Simulation class with two string attributes. I then deserialize the downloaded JSON data and create output rows.
I specify that the output columns city and profession are of type DT_WSTR, but characters such as [é, à, è, ...] are always replaced...
I tried different collations on both columns, but nothing changed. I also tried forcing UTF-8 conversion in the Script Task, but that didn't work either.
Any suggestions?
EDIT: I should also mention that I have other tables where the insertion works correctly, but this one in particular has the issue, which makes me think the Script Task has something to do with it.
ServicePointManager.Expect100Continue = true;
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Ssl3;
// Convert json string to .net object using the old school JavaScriptSerializer class
string Uri = "https://....";
JavaScriptSerializer serialize = new JavaScriptSerializer
{
MaxJsonLength = Int32.MaxValue,
};
var simulation = serialize.Deserialize<Simulation[]>(DownloadJson(Uri));
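For reference, the Simulation class and the DownloadJson helper used above are not shown in the question; they are presumably something along these lines (property names taken from the column names, bodies are my assumption):
public class Simulation
{
    public string city { get; set; }
    public string profession { get; set; }
}

private static string DownloadJson(string uri)
{
    using (var client = new System.Net.WebClient())
    {
        // WebClient.DownloadString falls back to the system ANSI code page when
        // the response doesn't declare a charset - a likely source of the
        // mangled accented characters.
        return client.DownloadString(uri);
    }
}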
EDIT 2:
WebClient client = new WebClient();
Stream stream = client.OpenRead(Url);
StreamReader streamreader = new StreamReader(stream, System.Text.Encoding.GetEncoding(1252));
var ags = streamreader.ReadToEnd();
/*System.IO.File.WriteAllText(@"C:\Users\hhamdani\Desktop\Data Integration Objetcs\simulation_data.json",
ags,
System.Text.Encoding.GetEncoding(1252));*/
var simulation = serialize.Deserialize<Simulation[]>(ags);
Instead of downloading with DownloadJson, I used a StreamReader to get the JSON data from the URL and forced the encoding. When I save the data to a txt file it looks fine, but in the database it's still the same issue.
Works fine from a script source component based on my reproduction
Setup
Table creation
A trivial table with two columns
CREATE TABLE dbo.[SO_71842511] (
[TestCase] int,
[SomeText] nvarchar(50)
)
SCR Do Unicode
Proof that we can inject Unicode characters into the data flow task from a script source.
Define the Script Component as a Source. Add two columns to the output: one int, one DT_WSTR.
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
public override void CreateNewOutputRows()
{
Output0Buffer.AddRow();
Output0Buffer.SomeText = "e e plain";
Output0Buffer.TestCase = 0;
Output0Buffer.AddRow();
Output0Buffer.SomeText = "é e forward";
Output0Buffer.TestCase = 1;
Output0Buffer.AddRow();
Output0Buffer.SomeText = "à a back";
Output0Buffer.TestCase = 2;
Output0Buffer.AddRow();
Output0Buffer.SomeText = "è e backward";
Output0Buffer.TestCase = 3;
}
}
Results: all four test rows, including the accented characters, arrive in the table intact.
WebClient client = new WebClient();
Stream stream = client.OpenRead(Url);
StreamReader streamreader = new StreamReader(stream, System.Text.Encoding.UTF8);
var ags = streamreader.ReadToEnd();
This did the job for me.
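For anyone wondering why Windows-1252 mangled the text: the server was sending UTF-8, and decoding UTF-8 bytes as code page 1252 turns each accented character into two junk characters. A quick illustration:
// "é" is the byte pair 0xC3 0xA9 in UTF-8; decoding those bytes as
// Windows-1252 yields "Ã©" instead of the original character.
byte[] utf8Bytes = System.Text.Encoding.UTF8.GetBytes("é");
string mangled = System.Text.Encoding.GetEncoding(1252).GetString(utf8Bytes); // "Ã©"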
Thanks @billinkc
Related
I am developing a service using WebApi2 and EntityFramework6.
I have a legacy SQLServer DB that my service must work with.
That DB makes heavy use of the 'hierarchyid' data type, and this type is used internally in the DB's stored procedures.
Seems like EF6 does not support the 'hierarchyid' data type, so I used this fork that adds support for 'hierarchyid'.
While retrieval from the DB works great with the 'hierarchyid' type, my problem is with the stored procedures that need a 'hierarchyid' as a parameter.
The stored procedure looks like this:
CREATE PROCEDURE [dbo].[GetSomethingByNodeId]
(
@startingRoot HIERARCHYID
,@return HIERARCHYID OUTPUT
)
My client code for invoking this stored procedure looks like this:
var param1 = new SqlParameter("@startingRoot", new HierarchyId("/"));
var param2 = new SqlParameter { ParameterName = "@return", Value = 0, Direction = ParameterDirection.Output };
var obj = context.Database.SqlQuery<HierarchyId>("GetSomethingByNodeId @startingRoot, @return out", param1, param2).ToList();
But unfortunately calling this query throws an exception that says:
An unhandled exception of type 'System.ArgumentException' occurred in EntityFramework.SqlServer.dll
Additional information: No mapping exists from object type System.Data.Entity.Hierarchy.HierarchyId to a known managed provider native type.
Any ideas on how I can make this work?
Unfortunately, MetaType.GetMetaTypeFromValue does not allow adding types (all supported types are hardcoded).
I think you can accomplish your goal with nvarchar parameters and conversions.
In your C# code:
var param1 = new SqlParameter("@startingRoot", "/1/");
var param2 = new SqlParameter { ParameterName = "@return", Value = "", Size = 1000, Direction = ParameterDirection.Output };
var ids = context.Database.SqlQuery<HierarchyId>("GetSomethingByNodeId @startingRoot, @return out", param1, param2).ToList();
var returnedId = new HierarchyId(param2.Value.ToString());
In your procedure (I wrote some test code inside):
CREATE PROCEDURE [dbo].[GetSomethingByNodeId]
(
@startingRoot nvarchar(max), @return nvarchar(max) OUTPUT
)
as
declare @hid hierarchyid = hierarchyid::Parse('/1/')
select @return = @hid.ToString()
declare @root hierarchyid = hierarchyid::Parse(@startingRoot)
select @root as field
Also, you can try to use Microsoft.SqlServer.Types and SqlHierarchyId type like this:
var sqlHierarchyId = SqlHierarchyId.Parse("/");
var param1 = new SqlParameter("@startingRoot", sqlHierarchyId) { UdtTypeName = "HierarchyId" };
But, I think, this is the wrong direction.
Oleg's answer is correct: hierarchyid is still not integrated into EF very well, and you should operate with strings in .NET. Here is one more approach, used since the first days of the hierarchyid data type:
Stored Procedure:
CREATE PROCEDURE GetSomethingByNodeId
@startingRoot hierarchyid, -- you don't need nvarchar here; the string coming from the application will be converted to hierarchyid implicitly
@return nvarchar(500) OUTPUT
AS
BEGIN
SELECT @return = @startingRoot.GetAncestor(1).ToString();
END
In the application, add a partial class for your EF data context that calls the SP using plain old ADO.NET. You may well write this differently or use Dapper instead, but the main idea is to pass the parameter to SQL Server as a string, which it converts to hierarchyid implicitly:
public partial class TestEntities
{
public string GetSomethingByNodeId(string startingRoot)
{
using (var connection = new SqlConnection(this.Database.Connection.ConnectionString))
{
var command = new SqlCommand("GetSomethingByNodeId", connection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@startingRoot", startingRoot);
var outParameter = new SqlParameter("@return", SqlDbType.NVarChar, 500);
outParameter.Direction = ParameterDirection.Output;
command.Parameters.Add(outParameter);
connection.Open();
command.ExecuteNonQuery();
return outParameter.Value.ToString();
}
}
}
Then call this method as any other stored procedure using your EF context:
using (var context = new TestEntities())
{
var s = context.GetSomethingByNodeId("/1/1.3/");
}
UPD: here is how the extension method for the legacy hierarchyid procedure call looks with Dapper (to me it looks much better than plain ADO.NET):
public string GetSomethingByNodeId(string startingRoot)
{
using (var connection = new SqlConnection(this.Database.Connection.ConnectionString))
{
var parameters = new DynamicParameters();
parameters.Add("startingRoot", startingRoot);
parameters.Add("return", null, DbType.String, ParameterDirection.Output, 500);
connection.Open();
connection.Execute("GetSomethingByNodeId", parameters, commandType: CommandType.StoredProcedure);
return parameters.Get<string>("return");
}
}
I have been trying to implement a bulk insert operation for a Postgres database using Npgsql version 3.1.2, but I am facing an issue ('insufficient data left in message')
caused by a data type mismatch on the paymentdone column, which is bit(1) in the Postgres table. I have tried the bool, char, and integer data types in C#, but I get the same error.
Code For bulk data insertion
public void BulkInsert(string connectionString, DataTable dataTable)
{
using (var npgsqlConn = new NpgsqlConnection(connectionString))
{
npgsqlConn.Open();
var commandFormat = string.Format(CultureInfo.InvariantCulture, "COPY {0} {1} FROM STDIN BINARY", "logging.testtable", "(firstName,LastName,LogDateTime,RowStatus,active,id,paymentdone)");
using (var writer = npgsqlConn.BeginBinaryImport(commandFormat))
{
foreach (DataRow item in dataTable.Rows)
{
writer.WriteRow(item.ItemArray);
}
}
npgsqlConn.Close();
}
}
DataTable Function
private static void BulkInsert()
{
DataTable table = new DataTable();
table.Columns.Add("firstName", typeof(String));
table.Columns.Add("LastName", typeof(String));
table.Columns.Add("LogDateTime", typeof(DateTime));
table.Columns.Add("RowStatus", typeof(int));
table.Columns.Add("active", typeof(bool));
table.Columns.Add("id", typeof(long));
table.Columns.Add("paymentdone", typeof(bool));
var dataRow = table.NewRow();
dataRow[0] = "Test";
dataRow[1] = "Temp";
dataRow[2] = DateTime.Now;
dataRow[3] = 1;
dataRow[4] = true;
dataRow[5] = 10;
dataRow[6] = true;
table.Rows.Add(dataRow);
BulkInsert(ConfigurationManager.ConnectionStrings["StoreEntities"].ConnectionString, table);
}
This is probably happening because when Npgsql sees a boolean, its default is to send a PostgreSQL boolean and not a BIT(1). When using binary COPY, you must write exactly the types PostgreSQL expects.
One solution is probably to use a .NET BitArray instead of a boolean. Npgsql will infer PostgreSQL BIT() from that type and everything should work.
But a safer solution is simply to call StartRow() and then use the overload of Write() that accepts an NpgsqlDbType. This allows you to unambiguously specify which PostgreSQL type you want to send, as sketched below.
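A minimal sketch of that safer option against the table above (the text/timestamp column types are assumed from the DataTable definition; the important part is the explicit NpgsqlDbType.Bit for paymentdone):
using NpgsqlTypes; // for NpgsqlDbType

using (var writer = npgsqlConn.BeginBinaryImport(commandFormat))
{
    foreach (DataRow item in dataTable.Rows)
    {
        writer.StartRow();
        writer.Write((string)item["firstName"], NpgsqlDbType.Text);
        writer.Write((string)item["LastName"], NpgsqlDbType.Text);
        writer.Write((DateTime)item["LogDateTime"], NpgsqlDbType.Timestamp);
        writer.Write((int)item["RowStatus"], NpgsqlDbType.Integer);
        writer.Write((bool)item["active"], NpgsqlDbType.Boolean);
        writer.Write((long)item["id"], NpgsqlDbType.Bigint);
        writer.Write((bool)item["paymentdone"], NpgsqlDbType.Bit); // matches bit(1)
    }
}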
OK, I managed to upload the Word DOCX into my SQL Server database into a varbinary(max) column.
I can retrieve the DOCX from the database and convert it from varbinary back into a byte array and offer it as a download with:
Byte[] bytes = (Byte[])dt.Rows[0]["TD_DocFile"];
Response.Buffer = true;
Response.Charset = "";
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.ContentType = dt.Rows[0]["TD_DocContentType"].ToString();
Response.AddHeader("content-disposition", "attachment;filename="
+ dt.Rows[0]["TD_DocTitle"].ToString());
Response.BinaryWrite(bytes);
Response.Flush();
Response.End();
Instead of downloading the document, I would prefer to load it into a variable so I can exchange placeholders in it.
I tried to find a way to convert it to a string or something I can use with the DocX library, e.g.
DocX letter = this.document();
The best option I have seen so far is this stream-based version:
public static MemoryStream databaseFileRead(string varID) {
MemoryStream memoryStream = new MemoryStream();
using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetails))
using (var sqlQuery = new SqlCommand(@"SELECT [RaportPlik] FROM [dbo].[Raporty] WHERE [RaportID] = @varID", varConnection)) {
sqlQuery.Parameters.AddWithValue("@varID", varID);
using (var sqlQueryResult = sqlQuery.ExecuteReader())
if (sqlQueryResult != null) {
sqlQueryResult.Read();
var blob = new Byte[(sqlQueryResult.GetBytes(0, 0, null, 0, int.MaxValue))];
sqlQueryResult.GetBytes(0, 0, blob, 0, blob.Length);
//using (var fs = new MemoryStream(memoryStream, FileMode.Create, FileAccess.Write)) {
memoryStream.Write(blob, 0, blob.Length);
//}
}
}
return memoryStream;
}
But I couldn't convert the byte array or the memory stream into something DocX would understand. Maybe I just looked for the wrong conversion. Can someone give me a hint, please?
The content-type field from the database is called TD_DocContentType. I can accept that I am weak on the conversion in this instance; I can't see what I am doing wrong and need a new idea, please.
Kind Regards,
Rene
I found the solution to my problem. It took a while and a lot more research; I had been tackling the problem from the wrong angle.
This solution reads the binary field from the database and writes it as a file to an internal web folder called doctemp.
private void download(DataTable dt)
{
var physicalPath = Server.MapPath("~\\doctemp\\{0}");
string outputFileName = string.Format(physicalPath, dt.Rows[0]["TD_DocTitle"]);
filename = outputFileName;
Byte[] bytes = (Byte[])dt.Rows[0]["TD_DocFile"];
File.WriteAllBytes(outputFileName, bytes);
}
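From there, the DocX library can load the written file and swap placeholders. Note that DocX.Load also has a Stream overload, so the MemoryStream from databaseFileRead above could be loaded directly without the temp file. A sketch (the placeholder text and output name are made up):
// Load the document written by download(), replace a placeholder, save a copy
DocX letter = DocX.Load(filename);
letter.ReplaceText("{CUSTOMER_NAME}", "Rene");
letter.SaveAs(Server.MapPath("~\\doctemp\\letter_filled.docx"));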
I am using SMSCaster to send SMS. It has an option to import CSV files.
Now I want to dynamically create a CSV file of the CellNo column of the Person table, from Visual Studio 2010 connected to SQL Server 2008, so that I can click a button and it creates a CSV file which I can then open in SMSCaster to send SMS.
The solutions available are either manual, or require Microsoft OLEDB when a query is involved... so is there any simple way to turn a query result into a .csv file?
Try this:
Namespace: System.IO
var _lines = new List<string>();
for (int _i = 0; _i < gridView1.Rows.Count; _i++)
{
    // Cells[0] is a placeholder: use the index of the mobile-number column in your grid
    string _mobileNo = gridView1.Rows[_i].Cells[0].Text;
    _lines.Add(_mobileNo);
}
File.WriteAllLines("FileName.csv", _lines);
Here is the solution that worked:
public void gridtoCSVFILE()
{
    List<string> lines = new List<string>();
    for (int i = 0; i < gvStudCellNo.Rows.Count; i++)
    {
        lines.Add(gvStudCellNo.Rows[i].Cells[0].Value.ToString());
    }
    // write once after the loop; this creates the csv in your bin folder and
    // automatically replaces an old file with the new one
    File.WriteAllLines("StudentsCellNo.csv", lines);
}
I have an MVC application. In an action I have a Dictionary<string,int>, where the key is an ID and the value is a sort order number. I want to create a stored procedure that takes each key (id), finds that record in the database, and updates the orderNumber column with the value from the dictionary. I want to call the stored procedure once and pass all the data to it, instead of calling it many times to update the data.
Do you have any ideas?
Thanks!
The accepted answer of using a TVP is generally correct, but needs some clarification based on the amount of data being passed in. Using a DataTable is fine (not to mention quick and easy) for smaller sets of data, but for larger sets it does not scale, given that it duplicates the dataset by placing it in the DataTable simply as a means of passing it to SQL Server. So, for larger sets of data, there is an option to stream the contents of any custom collection. The only real requirement is that you need to define the structure in terms of SqlDb types and iterate through the collection, both of which are fairly trivial steps.
A simplistic overview of the minimal structure is shown below, which is an adaptation of the answer I posted on How can I insert 10 million records in the shortest time possible?, which deals with importing data from a file and is hence slightly different as the data is not currently in memory. As you can see from the code below, this setup is not overly complicated yet highly flexible as well as efficient and scalable.
SQL object # 1: Define the structure
-- First: You need a User-Defined Table Type
CREATE TYPE dbo.IDsAndOrderNumbers AS TABLE
(
ID NVARCHAR(4000) NOT NULL,
SortOrderNumber INT NOT NULL
);
GO
SQL object # 2: Use the structure
-- Second: Use the UDTT as an input param to an import proc.
-- Hence "Tabled-Valued Parameter" (TVP)
CREATE PROCEDURE dbo.ImportData (
@ImportTable dbo.IDsAndOrderNumbers READONLY
)
AS
SET NOCOUNT ON;
-- maybe clear out the table first?
TRUNCATE TABLE SchemaName.TableName;
INSERT INTO SchemaName.TableName (ID, SortOrderNumber)
SELECT tmp.ID,
tmp.SortOrderNumber
FROM @ImportTable tmp;
-- OR --
-- some other T-SQL
-- optional return data
SELECT @NumUpdates AS [RowsUpdated],
@NumInserts AS [RowsInserted];
GO
C# code, Part 1: Define the iterator/sender
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;
private static IEnumerable<SqlDataRecord> SendRows(Dictionary<string,int> RowData)
{
SqlMetaData[] _TvpSchema = new SqlMetaData[] {
new SqlMetaData("ID", SqlDbType.NVarChar, 4000),
new SqlMetaData("SortOrderNumber", SqlDbType.Int)
};
SqlDataRecord _DataRecord = new SqlDataRecord(_TvpSchema);
// read a row, send a row
foreach (KeyValuePair<string,int> _CurrentRow in RowData)
{
// You shouldn't need to call "_DataRecord = new SqlDataRecord" as
// SQL Server already received the row when "yield return" was called.
// Unlike BCP and BULK INSERT, you have the option here to create an
// object, do manipulation(s) / validation(s) on the object, then pass
// the object to the DB or discard via "continue" if invalid.
_DataRecord.SetString(0, _CurrentRow.Key);
_DataRecord.SetInt32(1, _CurrentRow.Value);
yield return _DataRecord;
}
}
C# code, Part 2: Use the iterator/sender
public static void LoadData(Dictionary<string,int> MyCollection)
{
SqlConnection _Connection = new SqlConnection("{connection string}");
SqlCommand _Command = new SqlCommand("ImportData", _Connection);
SqlDataReader _Reader = null; // only needed if getting data back from proc call
SqlParameter _TVParam = new SqlParameter();
_TVParam.ParameterName = "@ImportTable";
// _TVParam.TypeName = "IDsAndOrderNumbers"; //optional for CommandType.StoredProcedure
_TVParam.SqlDbType = SqlDbType.Structured;
_TVParam.Value = SendRows(MyCollection); // method return value is streamed data
_Command.Parameters.Add(_TVParam);
_Command.CommandType = CommandType.StoredProcedure;
try
{
_Connection.Open();
// Either send the data and move on with life:
_Command.ExecuteNonQuery();
// OR, to get data back from a SELECT or OUTPUT clause:
_Reader = _Command.ExecuteReader();
// Do something with _Reader: if using INSERT or MERGE in the stored proc, use an
// OUTPUT clause to return INSERTED.[RowNum], INSERTED.[ID] (where [RowNum] is an
// IDENTITY), then fill a new Dictionary<string, int>(ID, RowNumber) from
// _Reader.GetString(0) and _Reader.GetInt32(1). Return that instead of void.
}
finally
{
if (_Reader != null) _Reader.Dispose(); // only needed if getting data back from the proc call
_Command.Dispose();
_Connection.Dispose();
}
}
Using Table Valued parameters is really not that complex.
given this SQL:
CREATE TYPE MyTableType as TABLE (ID nvarchar(25),OrderNumber int)
CREATE PROCEDURE MyTableProc (@myTable MyTableType READONLY)
AS
BEGIN
SELECT * from @myTable
END
This shows how relatively easy it is; it just selects out the values you sent in, for demo purposes. I am sure you can easily abstract this away in your case.
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
namespace TVPSample
{
class Program
{
static void Main(string[] args)
{
//setup some data
var dict = new Dictionary<string, int>();
for (int x = 0; x < 10; x++)
{
dict.Add(x.ToString(),x+100);
}
//convert to DataTable
var dt = ConvertToDataTable(dict);
using (SqlConnection conn = new SqlConnection("[Your Connection String here]"))
{
conn.Open();
using (SqlCommand comm = new SqlCommand("MyTableProc",conn))
{
comm.CommandType=CommandType.StoredProcedure;
var param = comm.Parameters.AddWithValue("@myTable", dt);
//this is the most important part:
param.SqlDbType = SqlDbType.Structured;
var reader = comm.ExecuteReader(); //or NonQuery, etc.
while (reader.Read())
{
Console.WriteLine("{0} {1}", reader["ID"], reader["OrderNumber"]);
}
}
}
}
//I am sure there is a more elegant way of doing this.
private static DataTable ConvertToDataTable(Dictionary<string, int> dict)
{
var dt = new DataTable();
dt.Columns.Add("ID",typeof(string));
dt.Columns.Add("OrderNumber", typeof(Int32));
foreach (var pair in dict)
{
var row = dt.NewRow();
row["ID"] = pair.Key;
row["OrderNumber"] = pair.Value;
dt.Rows.Add(row);
}
return dt;
}
}
}
Produces
0 100
1 101
2 102
3 103
4 104
5 105
6 106
7 107
8 108
9 109
Stored procedures do not support arrays as inputs. Googling gives a couple of hacks using XML or comma-separated strings, but those are hacks.
A more SQLish way to do this is to create a temporary table (named e.g. #Orders) and insert all the data into it. Then you can call the SP using the same open SqlConnection, and inside the SP use the #Orders table to read the values, as sketched below.
Another solution is to use Table-Valued Parameters, but that requires some more SQL to set up, so I think it is probably easier to use the temp table approach.
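A minimal sketch of the temp-table approach, assuming a proc named UpdateOrderNumbers that reads from #Orders (both names are made up for illustration):
using (var conn = new SqlConnection("{connection string}"))
{
    conn.Open();
    // #Orders lives for the lifetime of this connection, so the proc can see it
    using (var create = new SqlCommand(
        "CREATE TABLE #Orders (ID nvarchar(25) NOT NULL, OrderNumber int NOT NULL)", conn))
    {
        create.ExecuteNonQuery();
    }
    // Stage the dictionary in a DataTable and bulk-load it into the temp table
    var dt = new DataTable();
    dt.Columns.Add("ID", typeof(string));
    dt.Columns.Add("OrderNumber", typeof(int));
    foreach (var pair in dict)
        dt.Rows.Add(pair.Key, pair.Value);
    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#Orders" })
    {
        bulk.WriteToServer(dt);
    }
    // The proc joins #Orders to the target table and updates orderNumber
    using (var cmd = new SqlCommand("UpdateOrderNumbers", conn) { CommandType = CommandType.StoredProcedure })
    {
        cmd.ExecuteNonQuery();
    }
}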