Problem inserting data into a varchar field using a stored procedure

I am uploading some data from DB2 to SQL Server 2005. The table contains one text field that is 2000 characters long in DB2 and is set up as a varchar(2000) in SQL Server.
When I run my insert in the query browser it processes fine and the data is copied correctly. However, when the same statement runs in my stored procedure, the data for this field only is copied incorrectly and shows up as a series of strange characters.
The field can occasionally contain control characters however the problem occurs even when it doesn't.
Is there a setting I need to apply to my stored procedure to get the insert to work correctly?
Or do I need to use a CAST or CONVERT on the data to get it appearing correctly?
Thanks.
Update: Thanks for your suggestions. It turned out the problem was caused by the software we used to create the linked server containing the DB2 database; see my answer below for the details and the workaround.

It sounds like the data coming from DB2 is in a different character set. You should make sure it isn't EBCDIC or something like that.
Also, try calling your stored procedure from the SQL Server Management Studio (query browser), to see if it works at all.

You might want to change your varchar(2000) to an nvarchar(2000) (in the stored procedure as well as the table; I assume it exists as a parameter). This would allow it to hold two-byte characters. It will depend on the DB2 configuration, but it may be that it's exporting UTF-16 (or similar) rather than UTF-8.
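If you go that route, a minimal sketch of the change (the table, column, and parameter names here are hypothetical):

-- Widen the column so it can hold two-byte characters
ALTER TABLE dbo.TargetTable ALTER COLUMN BigField NVARCHAR(2000);
-- and widen the matching stored procedure parameter the same way:
-- @BigField VARCHAR(2000)  ->  @BigField NVARCHAR(2000)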

This problem was caused by the third-party software that we used to create the linked server to the DB2 database. It could not handle a 2000-character field when run via a stored procedure, but could when run interactively.
I ultimately got around the problem by splitting the field into ten 200-character fields for the upload and then re-joining them once they were in the SQL Server database.
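For illustration, a sketch of that split-and-rejoin approach; the linked server name (DB2LINK) and all schema, table, and column names here are hypothetical:

-- Pull the 2000-character DB2 field as ten 200-character slices,
-- then re-join them on the SQL Server side.
INSERT INTO dbo.TargetTable (BigField)
SELECT Part01 + Part02 + Part03 + Part04 + Part05 +
       Part06 + Part07 + Part08 + Part09 + Part10
FROM OPENQUERY(DB2LINK,
    'SELECT SUBSTR(BIGFIELD, 1, 200)    AS Part01,
            SUBSTR(BIGFIELD, 201, 200)  AS Part02,
            SUBSTR(BIGFIELD, 401, 200)  AS Part03,
            SUBSTR(BIGFIELD, 601, 200)  AS Part04,
            SUBSTR(BIGFIELD, 801, 200)  AS Part05,
            SUBSTR(BIGFIELD, 1001, 200) AS Part06,
            SUBSTR(BIGFIELD, 1201, 200) AS Part07,
            SUBSTR(BIGFIELD, 1401, 200) AS Part08,
            SUBSTR(BIGFIELD, 1601, 200) AS Part09,
            SUBSTR(BIGFIELD, 1801, 200) AS Part10
     FROM MYSCHEMA.MYTABLE');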

Related

Losing one byte when storing data in SQL Server Image column

We have a legacy Classic ASP app that is using ADODB to store a document in an image column in SQL Server. The problem we are running into is that a single byte is getting lost from most files. We have one instance where a file won't lose a byte, so I'm thinking it's potentially size-related.
So far we have:
Verified that the correct byte length is being passed to the ADODB CreateParameter call.
Used the SQL Server BCP tool to verify that the file is corrupt in SQL Server and not corrupted by our extraction. When we extracted the file with BCP it was missing the byte, and we did use the -w parameter.
Does anyone have any advice on what we could try to do next?
UPDATE:
We have done some more research and it appears the byte is disappearing in ADODB. We do this:
cmd.Parameters.Append cmd.CreateParameter("SupportDocImage",adLongVarBinary,adParamInput,lSize,sFilePath)
Where lSize would be 123,139 and sFilePath is the binary data (yes, bad naming).
This actually calls off to a stored procedure, and in that stored procedure we do a DATALENGTH(@Variable) and we instead get 123,138.
Also from what we have seen it is the last byte that gets lost.
Check out this article.
The solution is for Oracle, but it is using ADO all the same.
http://forums.devshed.com/visual-basic-programming-52/inserting-blob-s-via-adodb-stream-using-a-stored-procedure-103030.html
Try adding +1 to the size.
Also, did you set this property when uploading the image?
cmdSQL.Properties("SPPrmsLOB") = True
Regardless, you still have to write a stored procedure that takes a varbinary(max) or image data type and inserts it into the table.
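For reference, a minimal sketch of such a procedure (the table and column names are hypothetical):

CREATE PROCEDURE dbo.SaveSupportDoc
    @SupportDocImage VARBINARY(MAX)
AS
BEGIN
    -- DATALENGTH(@SupportDocImage) here is where the missing byte was observed
    INSERT INTO dbo.SupportDocs (DocImage)
    VALUES (@SupportDocImage);
END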

ColdFusion 8 + MSSQL 2005 and CLOB datatype on resultset

The environment I am working with is CF8 and SQL Server 2005, and the CLOB datatype is disabled in the CF Administrator. My concern is whether there will be a performance ramification from enabling the CLOB datatype in the CF Administrator.
The reason I want/need to enable it is, SQL is building the AJAX XML response. When the response is large, the result is either truncated or returned with multiple rows (depending on how the SQL developer created the stored proc). Enabling CLOB allows the entire result to be returned. The other option I have is to have SQL always return the XML result in multiple rows and have CF join the string for each result row.
Anyone with some experience with this idea or have any thoughts?
Thanks!
I really think that returning CLOB data is likely to be less expensive than concatenating multiple rows of data into an XML string and then parsing it (ick!). What you are trying to do is what CLOB is designed for. JDBC handles it pretty well. The performance hit is probably negligible. After all, you have to return the same amount of character data either way, whether in multiple rows or a single field. And to have to "break it up" on the SQL side and then "reassemble" it on the CF side seems like reinventing the wheel, to be sure.
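For instance, a sketch of returning the whole response as a single XML value instead of concatenated rows (the table and column names are hypothetical):

SELECT (SELECT OrderId, Total
        FROM dbo.Orders
        FOR XML PATH('order'), ROOT('orders'), TYPE) AS ResponseXml;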
I would add that questions like this sometimes mystify me. A modest amount of testing would seem to be able to answer this question to your own satisfaction - no?
I would just have the StoredProc return the data set, or multiple data sets, and just build the XML the way you need it via CF.
I've never needed to use CLOB. I almost always stick to the varchar datatype, and it seems to do the job just fine.
There are also options where you could call the stored proc, which triggers MSSQL to generate an actual XML file (not just a string), and simply returns you the file name. Then you can use CFFILE action="read" to grab the XML string and parse it accordingly. This assumes your web server and DB have a common file storage area.

How can I generate an INSERT script for a table with a VARBINARY(MAX) field?

I have a table with a VARBINARY(MAX) field (SQL Server 2008 with FILESTREAM)
My requirement is that when I go to deploy to production, I can only supply my IT team with a group of SQL scripts to be executed in a certain order. A new table I am making in production has this VARBINARY(MAX) field. Usually with new tables, I will script out the CREATE TABLE script. And, if I have data I need to go with it, I will then script out the INSERT scripts. Not too complicated.
But with VARBINARY(MAX), the stored procedure I was using to generate the INSERT statements fails on that table. I tried selecting that field, printing it, copying it, converting it to hex, etc. The main issue is that it doesn't select all the data in the field. I check DATALENGTH([FileColumn]): if the source row contains 1,004,382 bytes, the most I can get into the copied or selected data when inserting again is 8,000 bytes. So basically it is truncated (i.e. invalid) data.
How can I do this better? I tried Googling this like crazy but I must be missing something. Remember, I can't access the filesystem. This has to be all scripted.
If this is a one time (or seldom) thing to do, you can try scripting the data out from the SSMS Wizard as described here:
http://sqlblog.com/blogs/eric_johnson/archive/2010/03/08/script-data-in-sql-server-2008.aspx
Or, if you need to do this frequently and want to automate it, you can try the SQL# SQLCLR library (which I wrote and while most of it is free, the function you need here is not). The function to do this is DB_DumpData and it also generates INSERT statements.
But again, if this is a one time or infrequent task, then try the data export wizard that is built into Management Studio. That should allow you to then create the SQL script that you can run in Production. I just tested this on a table with a VARBINARY(MAX) field containing 3,365,964 bytes of data and the Generate Scripts wizard generated an INSERT statement with the entire hex string of 6.73 million characters for that one value.
UPDATE:
Another quick and easy way to do this in a manner that would allow you to copy / paste the entire INSERT statement into a SQL script and not have to bother with BCP or SSMS Export Wizard is to just convert the value to XML. First you would CONVERT the VARBINARY to VARCHAR(MAX) using the optional style of "1" which gives you a hex string starting with "0x". Once you have the hex string of the binary data you can concatenate that into an INSERT statement and that entire thing, when converted to XML, can contain the entire VARBINARY field. See the following example:
DECLARE @Binary VARBINARY(MAX) = CONVERT(VARBINARY(MAX),
    REPLICATE(
        CONVERT(NVARCHAR(MAX), 'test string'),
        100000)
    )
SELECT 'INSERT INTO dbo.TableName (ColumnName) VALUES (' +
       CONVERT(VARCHAR(MAX), @Binary, 1) + ')' AS [Insert]
FOR XML RAW;
Don't script from SSMS
bcp the data out/in, or use something like SSMS tools to generate INSERT statements
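A sketch of that bcp round trip, using native format (-n) so the binary survives intact; the database, table, and server names are hypothetical:

bcp MyDb.dbo.Docs out docs.dat -n -T -S MyServer
bcp MyDb.dbo.Docs in docs.dat -n -T -S MyServer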
It's more than a bit messed up, but in the past and on the web I've seen this done using a base64-encoded string. You use an XML value to wrap the string, and from there you can convert it to a varbinary. Here's an example:
http://blogs.msdn.com/b/sqltips/archive/2008/06/30/converting-from-base64-to-varbinary-and-vice-versa.aspx
I can't speak personally to how effective or performant this is, though, especially for large values. Because it is at best an ugly hack, I'd tuck it away inside a UDF somewhere, so that if a better method is found you can update it easily.
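For what it's worth, a minimal sketch of that base64-via-XML conversion; the literal here is just the string 'test' encoded in base64:

DECLARE @b64 VARCHAR(MAX) = 'dGVzdA==';
SELECT CAST(N'' AS XML).value('xs:base64Binary(sql:variable("@b64"))',
                              'VARBINARY(MAX)') AS BinaryValue;  -- 0x74657374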
I have never tried anything like this before, but from the documentation for SQL Server 2008 R2, it sounds like using SUBSTRING will work to get the entire varbinary value, although you may have to work with it in chunks, using UPDATEs with the .WRITE clause to append the data.
Updating Large Value Data Types
Use the .WRITE (expression, @Offset, @Length) clause to perform a partial or full update of varchar(max), nvarchar(max), and varbinary(max) data types. For example, a partial update of a varchar(max) column might delete or modify only the first 200 characters of the column, whereas a full update would delete or modify all the data in the column.
For best performance, we recommend that data be inserted or updated in chunk sizes that are multiples of 8040 bytes.
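For example, a minimal sketch of appending a chunk with .WRITE (the table and column names are hypothetical):

DECLARE @chunk VARBINARY(8000) = 0x0102030405;
UPDATE dbo.DocTable
SET FileColumn.WRITE(@chunk, NULL, 0)  -- a NULL offset appends the chunk to the end
WHERE DocId = 1;                       -- note: the column must already be non-NULL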
Hope this helps.

Increasing Message Size in SQL 2005

I am currently writing a script that intelligently strips a database down into a series of ordered INSERT statements with seeded identity columns that will allow me to place the records into a new database without destroying any keys or relationships.
I am using the PRINT function to write the finished INSERT statements to the message window and saving the query to a file. However, it seems there is a maximum character limit for the message window; is there any way to change that?
This is a database with 120k users, and I will end up with hundreds of thousands of INSERT statements, so the file is going to be pretty big.
I think we've all had this problem at some point, I'll tell you how I ended up fixing it. Every message I wanted to output was inserted into a TEXT column on a separate table (in fact a separate database in my case). Once there you can export it to text, etc.
Unfortunately, no. From http://msdn.microsoft.com/en-us/library/ms176047.aspx:
A message string can be up to 8,000 characters long if it is a non-Unicode string, and 4,000 characters long if it is a Unicode string. Longer strings are truncated. The varchar(max) and nvarchar(max) data types are truncated to data types that are no larger than varchar(8000) and nvarchar(4000).
I had to do something similar a few months back. I wrote a C# application to write the SQL for me.
Why not use:
bcp Utility
or
How to: Run the SQL Server Import and Export Wizard
It would be incredibly slow to populate a large database with an INSERT for every row.
EDIT based on OP's comments
You could create staging tables with a single varchar(max) column that contain the actual INSERT statements. Instead of printing the INSERT, insert it into a staging table. You can then use BCP to export the INSERTs out from the staging tables to a file, and then just run that file (now full of INSERTs).
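A sketch of that staging approach (all names here are hypothetical):

-- Collect the generated statements instead of PRINTing them
CREATE TABLE dbo.ScriptStaging (Stmt VARCHAR(MAX));
INSERT INTO dbo.ScriptStaging (Stmt)
VALUES ('INSERT INTO dbo.Users (UserId, Name) VALUES (1, ''Alice'');');
-- then, from a command prompt, export the whole batch to a file:
-- bcp "SELECT Stmt FROM MyDb.dbo.ScriptStaging" queryout inserts.sql -c -T -S MyServer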
I think it can be done by printing it in chunks.
The code below does it for an NVARCHAR variable named @query; note that SUBSTRING's third argument is a length, not an end position, so each chunk is 4,000 characters (the PRINT limit for Unicode strings):
PRINT SUBSTRING(@query, 1, 4000)
PRINT SUBSTRING(@query, 4001, 4000)
PRINT SUBSTRING(@query, 8001, 4000)
PRINT SUBSTRING(@query, 12001, LEN(@query))

How can I recover Unicode data which displays in SQL Server as '?' characters?

I have a database in SQL Server containing a column which needs to contain Unicode data (it contains user's addresses from all over the world e.g. القاهرة‎ for Cairo)
This column is an nvarchar column with the database default collation (Latin1_General_CI_AS), but I've noticed that data containing non-English characters inserted into it via SQL statements displays as ?????.
The solution seems to be that I wasn't using the N prefix, e.g.
INSERT INTO table (address) VALUES ('القاهرة')
Instead of:
INSERT INTO table (address) VALUES (N'القاهرة')
I was under the impression that Unicode would automatically be converted for nvarchar columns and I didn't need this prefix, but this appears to be incorrect.
The problem is I still have some data in this column which appears as ????? in SQL Server Management Studio and I don't know what it is!
Is the data still there but in an incorrect character encoding preventing it from displaying but still salvageable (and if so how can I recover it?), or is it gone for good?
Thanks,
Tom
To find out what SQL Server really stores, use
SELECT CONVERT(VARBINARY(MAX), 'some text')
I just tried this with umlauted characters and Arabic (copied from Wikipedia; I have no idea what it says), both as plain strings and as N'' Unicode strings.
The results are that Arabic non-Unicode strings really end up as question marks (0x3F) in the conversion to VARCHAR.
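To see the difference directly, a small illustration using the Arabic string from the question (with a Latin1 collation, the VARCHAR version comes back as a run of 0x3F bytes, one '?' per character):

SELECT CONVERT(VARBINARY(MAX), 'القاهرة')  AS WithoutN,  -- 0x3F3F3F3F3F3F3F
       CONVERT(VARBINARY(MAX), N'القاهرة') AS WithN;     -- the original two-byte code units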
SSMS sometimes won't display all characters. I just tried what you had and it worked for me; copy and paste it into Word and it might display correctly.
Usually if SSMS can't display a character it shows boxes, not question marks.
Try writing a small client that will retrieve this data to a file or web page. Also check ALL your code to make sure there are no other inserts or updates that might convert the data to varchar before storing it in the tables.
