Inserting VARBINARY(MAX) columns with MSSQL/SQLAlchemy

It seems that the answer to this question should already be out there, but after some hours of experimentation I have yet to find a solution that works. What I'm looking to do is insert into an MSSQL database a record that includes a VARBINARY(MAX) column. The source of the data is a BLOB column in an SQLite database, also read using SQLAlchemy, which appears to be rendered as a Python string. Whatever I try, I still receive the following message:
Implicit conversion from data type varchar to varbinary(max) is not allowed. Use the CONVERT function to run this query.
I can see that what is required (or at least what seems to work when I try it manually) at the SQL level is to wrap the bound column in CONVERT(VARBINARY(MAX), ...), but making SQLAlchemy do this has so far frustrated me.
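For reference, a minimal sketch of the statement that does work when run by hand (the table and column names here are hypothetical):
-- Hypothetical table and column; the explicit CONVERT avoids the
-- implicit varchar-to-varbinary(max) conversion error
INSERT INTO dbo.Documents (Payload)
VALUES (CONVERT(VARBINARY(MAX), ?));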
Thanks in advance as always.

Related

Error in bulk upload in SQL Server from Notepad

I have a text file created in Notepad.
While uploading the data with the Import and Export Wizard,
I get the error below.
I have also edited the table structure during creation so that every field uses the ntext data type, but I am still getting the error.
From the errors, you might have two issues:
You probably have some characters that don't fit into the code page.
The field is too short and your data is too long.
The second issue is probably causing the first. Unfortunately, weak error reporting is one of the side effects of using SSIS.
And a little advice: don't use NTEXT; use NVARCHAR instead. It is easier to work with NVARCHAR:
nvarchar(max) vs NText
Make your table use nvarchar(max) instead.
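For example, a minimal sketch of that change, assuming hypothetical table and column names:
-- Hypothetical names: replace the deprecated ntext column with nvarchar(max)
ALTER TABLE dbo.ImportTable
ALTER COLUMN TextField NVARCHAR(MAX) NULL;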
Instead of using the Import/Export Wizard, write an SSIS package and add a Data Conversion transformation in the data flow task between the source and destination. Use the Unicode string (DT_WSTR) type in the conversion transformation.
--
Sumit

How do I count the number of characters in a SQL Server ntext (i.e. memo) field in an Access query?

I want to write an Access query to count the characters in an ntext field in a linked SQL Server table.
In SQL Server, I would just use this command (which won't work in Access):
select datalength(nTextFieldName) -- works on SQL Server but not in Access
In Access, I can only find the Len command, which won't work on ntext fields:
select len(nTextFieldName) -- Access says ntext is not a valid argument to this function
Googling around, I've found a bunch of posts saying to use Len, which is giving me an error.
What is the command?
The ntext type doesn't work with LEN. This type, along with a few others, is deprecated:
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead. For more information, see Using Large-Value Data Types.
The best way to handle this is to convert/cast the data type to one that works, such as varchar(max) or nvarchar(max), and only then take the LEN:
SELECT LEN(CAST(nTextFieldName As nvarchar(max)))
Alternatively, run the count in a pass-through query, which executes on SQL Server, so the server-side function works as-is:
select datalength(nTextFieldName) from table_name;

SSIS getting wrong column type with OLEDB connector

Halfway through an SSIS project, certain table fields were changed from char(30) to nvarchar(30).
However, when running the SSIS packages, an error stating "cannot convert from unicode to non-unicode" appears.
I am trying to transfer data directly from a database source to its destination.
Both connections use the same database schema, so there should be no conversion.
When checking the external column data type, it shows DT_STR, which is no longer the case.
I tried deleting both source and destination in hope that it would clean any sort of cached data, but it did not work.
Any ideas?
Sounds to me like the metadata in your data flow task is cached and needs to be refreshed to reflect the new type.
Open the source, go to Columns, uncheck the column, then check it again. Click OK. The metadata should refresh now.
nvarchar and nchar are Unicode. Conversely, varchar and char are non-Unicode.
http://msdn.microsoft.com/en-us/library/ms187752.aspx
As a result, if you are moving data from one of these types to the other, you will have to perform an additional transformation (CAST or CONVERT). The other option is to look at your adapters, such that char uses the SSIS data type DT_STR and nvarchar uses the SSIS data type DT_WSTR.
http://msdn.microsoft.com/en-us/library/ms141036.aspx
Without knowing how your packages work I cannot be much more specific but hopefully this will get you going.
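For instance, a hedged sketch of the CAST route in the source query (table and column names are hypothetical):
-- Hypothetical names: cast the nvarchar column back to varchar in the
-- source query so the source adapter exposes it as DT_STR
SELECT CAST(SomeField AS varchar(30)) AS SomeField
FROM dbo.SourceTable;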

How can I generate an INSERT script for a table with a VARBINARY(MAX) field?

I have a table with a VARBINARY(MAX) field (SQL Server 2008 with FILESTREAM)
My requirement is that when I go to deploy to production, I can only supply my IT team with a group of SQL scripts to be executed in a certain order. A new table I am making in production has this VARBINARY(MAX) field. Usually with new tables, I will script out the CREATE TABLE script. And, if I have data I need to go with it, I will then script out the INSERT scripts. Not too complicated.
But with VARBINARY(MAX), the stored procedure I was using to generate the INSERT statements fails on that table. I tried selecting that field, printing it, copying it, converting it to hex, etc. The main issue is that it doesn't select all the data in the field. I check DATALENGTH([FileColumn]), and while the source row contains 1,004,382 bytes, the most I can get into the copied or selected data when inserting it again is 8,000 bytes. So basically the data is truncated (i.e. invalid).
How can I do this better? I tried Googling this like crazy but I must be missing something. Remember, I can't access the filesystem. This has to be all scripted.
If this is a one time (or seldom) thing to do, you can try scripting the data out from the SSMS Wizard as described here:
http://sqlblog.com/blogs/eric_johnson/archive/2010/03/08/script-data-in-sql-server-2008.aspx
Or, if you need to do this frequently and want to automate it, you can try the SQL# SQLCLR library (which I wrote and while most of it is free, the function you need here is not). The function to do this is DB_DumpData and it also generates INSERT statements.
But again, if this is a one time or infrequent task, then try the data export wizard that is built into Management Studio. That should allow you to then create the SQL script that you can run in Production. I just tested this on a table with a VARBINARY(MAX) field containing 3,365,964 bytes of data and the Generate Scripts wizard generated an INSERT statement with the entire hex string of 6.73 million characters for that one value.
UPDATE:
Another quick and easy way to do this, in a manner that lets you copy and paste the entire INSERT statement into a SQL script without bothering with BCP or the SSMS Export Wizard, is to convert the value to XML. First, CONVERT the VARBINARY to VARCHAR(MAX) using the optional style of "1", which gives you a hex string starting with "0x". Once you have the hex string of the binary data, you can concatenate that into an INSERT statement, and that entire thing, when converted to XML, can contain the entire VARBINARY field. See the following example:
DECLARE @Binary VARBINARY(MAX) = CONVERT(VARBINARY(MAX),
                                     REPLICATE(
                                         CONVERT(NVARCHAR(MAX), 'test string'),
                                         100000)
                                 );

SELECT 'INSERT INTO dbo.TableName (ColumnName) VALUES (' +
       CONVERT(VARCHAR(MAX), @Binary, 1) + ')' AS [Insert]
FOR XML RAW;
Don't script from SSMS
bcp the data out/in, or use something like SSMS tools to generate INSERT statements
It's more than a bit messed up, but in the past, and on the web, I've seen this done using a base64-encoded string. You use an XML value to wrap the string, and from there you can convert it to a varbinary. Here's an example:
http://blogs.msdn.com/b/sqltips/archive/2008/06/30/converting-from-base64-to-varbinary-and-vice-versa.aspx
I can't speak personally to how effective or performant this is, though, especially for large values. Because it is at best an ugly hack, I'd tuck it away inside a UDF somewhere, so that if a better method is found you can update it easily.
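As a rough sketch of the idea (the variable name is mine and the input value is a stand-in):
-- Decode base64 text into VARBINARY via the XQuery xs:base64Binary constructor
DECLARE @Base64 VARCHAR(MAX) = 'dGVzdCBzdHJpbmc=';  -- base64 for 'test string'
SELECT CAST(N'' AS XML).value(
    'xs:base64Binary(sql:variable("@Base64"))',
    'VARBINARY(MAX)') AS BinaryValue;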
I have never tried anything like this before, but from the documentation for SQL Server 2008 R2, it sounds like using SUBSTRING will work to get the entire varbinary value, although you may have to work with it in chunks, using UPDATEs with the .WRITE clause to append the data.
Updating Large Value Data Types
Use the .WRITE (expression, @Offset, @Length) clause to perform a partial or full update of varchar(max), nvarchar(max), and varbinary(max) data types. For example, a partial update of a varchar(max) column might delete or modify only the first 200 characters of the column, whereas a full update would delete or modify all the data in the column.
For best performance, we recommend that data be inserted or updated in chunk sizes that are multiples of 8040 bytes.
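As an untested sketch of the chunked append (the table, column, and variable names are illustrative):
-- Hypothetical names: append one chunk at a time with the .WRITE clause
-- A NULL @Offset appends @Chunk to the end of the existing value; the
-- column must already be non-NULL (seed it with 0x) before the first .WRITE
UPDATE dbo.TableName
SET FileColumn.WRITE(@Chunk, NULL, NULL)
WHERE Id = @Id;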
Hope this helps.

XML column in SSIS has byte-order-mark

I'm using an OLE DB data source in an SSIS package to pull a column from a database. The column is of the XML data type. In SSIS, it is automatically recognized as data type DT_NTEXT. It's going to a script component where I'm trying to load it into a System.Xml.XmlDocument. This is the code that I'm using to get the XML data into a string:
System.Text.Encoding.Default.GetString(Row.Data.GetBlobData(0, Row.Data.Length))
Is this the correct way?
One odd thing that I'm seeing is that on one server I get a byte-order mark in the resulting string, and on another server I don't. I wouldn't mind knowing why that is, but what I really want is to get this string without the BOM.
Help me, Stack Overflow, you're my only hope...
This is the only way I was able to get it to work:
System.Text.UnicodeEncoding.Unicode.GetString(...).Trim()
The .Trim() removes the BOM. I'm not sure if this is the "right" way, but it's the only thing that's worked so far.
