I am currently writing a script that intelligently strips a database down into a series of ordered INSERT statements with seeded identity columns that will allow me to place the records into a new database without destroying any keys or relationships.
I am using the PRINT function to write the finished INSERT statements to the message window and saving the query to a file. However, it seems like there is a maximum character limit for the message window; is there any way to change that?
This is a database with 120k users and I will end up with hundreds of thousands of insert statements so the file is gonna be pretty big.
I think we've all had this problem at some point, I'll tell you how I ended up fixing it. Every message I wanted to output was inserted into a TEXT column on a separate table (in fact a separate database in my case). Once there you can export it to text, etc.
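In today's terms that might look like the following sketch; the database and table names are just illustrative, and I've used NVARCHAR(MAX) since TEXT is deprecated:

-- Scratch table to collect the generated statements (names are illustrative)
CREATE TABLE ScriptOutput.dbo.Messages
(
    Id  INT IDENTITY(1,1) PRIMARY KEY,  -- preserves output order
    Msg NVARCHAR(MAX) NOT NULL
);

-- Then, wherever you would have PRINTed
-- (assuming @insertStatement holds one generated INSERT):
INSERT INTO ScriptOutput.dbo.Messages (Msg) VALUES (@insertStatement);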
Unfortunately, no.
From http://msdn.microsoft.com/en-us/library/ms176047.aspx:
A message string can be up to 8,000 characters long if it is a non-Unicode string, and 4,000 characters long if it is a Unicode string. Longer strings are truncated. The varchar(max) and nvarchar(max) data types are truncated to data types that are no larger than varchar(8000) and nvarchar(4000).
I had to do something similar a few months back. I wrote a C# application to write the SQL for me.
Why not use:
bcp Utility
or
How to: Run the SQL Server Import and Export Wizard
It would be incredibly slow to populate a large database with an INSERT for every row.
EDIT based on OP's comments
You could create staging tables with a single varchar(max) column that contain the actual INSERT statements. Instead of printing the INSERT, insert it into a staging table. You can then use BCP to export the INSERTs out from the staging tables to a file, and then just run that file (now full of INSERTs).
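A rough sketch of that approach; the table, column, and server names here are made up for illustration:

-- Staging table holding one generated INSERT per row
CREATE TABLE dbo.InsertStaging (Stmt VARCHAR(MAX) NOT NULL);

-- Build the statements in SQL instead of PRINTing them
INSERT INTO dbo.InsertStaging (Stmt)
SELECT 'INSERT INTO dbo.Users (Id, Name) VALUES ('
       + CAST(Id AS VARCHAR(12)) + ', '''
       + REPLACE(Name, '''', '''''') + ''');'  -- double up embedded quotes
FROM dbo.Users;

-- Export from a command prompt with bcp (character mode, trusted connection):
--   bcp MyDb.dbo.InsertStaging out inserts.sql -c -T -S myserver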
I think it can be done by printing it in chunks.
The code below does it for an NVARCHAR(MAX) variable named @query (note that SUBSTRING's third argument is a length, not an end position):
PRINT SUBSTRING(@query, 1, 4000)
PRINT SUBSTRING(@query, 4001, 4000)
PRINT SUBSTRING(@query, 8001, 4000)
PRINT SUBSTRING(@query, 12001, 4000)
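If the variable can run past 16,000 characters, the same idea generalizes to a loop, so you don't have to hard-code offsets (a sketch, assuming the same @query variable):

DECLARE @i INT = 1, @len INT = LEN(@query);
WHILE @i <= @len
BEGIN
    PRINT SUBSTRING(@query, @i, 4000);  -- 4000 is PRINT's limit for Unicode strings
    SET @i += 4000;
END

One caveat: PRINT ends each chunk with a line break, so a statement straddling a 4,000-character boundary gets split in two; if that matters, chunk on newline boundaries instead.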
It's a recurring issue, I guess, but I couldn't find a proper solution yet. Basically I am trying to insert a huge XML document, i.e. 32,000+ characters, into a CLOB column through a DB2 procedure. The insert fails with the error below; it looks like DB2 is treating the input as a string literal rather than as a CLOB. Can you please suggest what needs to be done?
SP
CREATE OR REPLACE PROCEDURE logging (IN HEADERDATA CLOB(10M))
LANGUAGE SQL
BEGIN
    INSERT INTO Logging(Header) VALUES (HEADERDATA);
    COMMIT;
END
Error
The string constant beginning with
"'<?xml version="1.0" encoding="UTF-8"?><XXXXXXXX xmlns:xsi="http:" is too long..
SQLCODE=-102, SQLSTATE=54002, DRIVER=XXXXXX
Character literals in DB2 are limited to about 32K bytes. To handle larger LOBs you need to avoid having to use SQL literal values.
One way to do this without extra programming is to write your [future] CLOB contents to a file and use IMPORT or LOAD to insert the file's contents into the column.
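With IMPORT, for instance, the delimited file carries a LOB Location Specifier pointing at a side file; roughly like this, with all paths and names illustrative:

-- each row of data.del references its LOB content, e.g.: header.xml.0.32768/
IMPORT FROM data.del OF DEL
    LOBS FROM /tmp/lobs MODIFIED BY lobsinfile
    INSERT INTO Logging (Header)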
Alternatively, you could wrap a simple Java program around your procedure call where you would use PreparedStatement.setClob() to handle your large XML document.
I would like to insert 500,000 characters into a single SQL Server cell, but I can't insert more than 43,679 characters into it.
I tried to create these tables:
create table sample1(name nvarchar(max));
create table sample2(address ntext);
But I didn't succeed.
The issue does not come from the column type:
nvarchar(max) can hold 2GB of data (more than 1 billion chars)
ntext should not be used as it is deprecated and can hold 1GB of data (more than 500 million chars)
The issue seems to be in how you are getting the data in. If you are copy/pasting, paste into Notepad first to check that your data is complete.
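A quick way to check what actually landed in the table, using the sample1 table above:

SELECT LEN(name)        AS CharCount,  -- characters stored
       DATALENGTH(name) AS ByteCount   -- bytes; 2 per character for nvarchar
FROM sample1;

If CharCount shows the full 500,000, the data is intact and only the display is truncating it.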
Note that there is a known issue when you copy/paste in Grid Mode with SSMS 2008+:
SSMS - Can not paste more than 43679 characters from a column in Grid Mode
So you could install SSMS 2005 or see if the workarounds listed here could fit your requirements:
SSMS - Allow large text to be displayed in as a link
I have a table with a VARBINARY(MAX) field (SQL Server 2008 with FILESTREAM)
My requirement is that when I go to deploy to production, I can only supply my IT team with a group of SQL scripts to be executed in a certain order. A new table I am making in production has this VARBINARY(MAX) field. Usually with new tables, I will script out the CREATE TABLE script. And, if I have data I need to go with it, I will then script out the INSERT scripts. Not too complicated.
But with VARBINARY(MAX), the stored procedure I was using to generate the INSERT statements fails on that table. I tried selecting that field, printing it, copying it, converting it to hex, etc. The main issue is that not all the data in the field gets selected: I check with DATALENGTH([FileColumn]), and where the source row contains 1,004,382 bytes, the most I can get into the copied or selected data when inserting it again is 8,000 bytes. So basically it is truncated (i.e. invalid) data.
How can I do this better? I tried Googling this like crazy but I must be missing something. Remember, I can't access the filesystem. This has to be all scripted.
If this is a one time (or seldom) thing to do, you can try scripting the data out from the SSMS Wizard as described here:
http://sqlblog.com/blogs/eric_johnson/archive/2010/03/08/script-data-in-sql-server-2008.aspx
Or, if you need to do this frequently and want to automate it, you can try the SQL# SQLCLR library (which I wrote and while most of it is free, the function you need here is not). The function to do this is DB_DumpData and it also generates INSERT statements.
But again, if this is a one time or infrequent task, then try the data export wizard that is built into Management Studio. That should allow you to then create the SQL script that you can run in Production. I just tested this on a table with a VARBINARY(MAX) field containing 3,365,964 bytes of data and the Generate Scripts wizard generated an INSERT statement with the entire hex string of 6.73 million characters for that one value.
UPDATE:
Another quick and easy way to do this in a manner that would allow you to copy / paste the entire INSERT statement into a SQL script and not have to bother with BCP or SSMS Export Wizard is to just convert the value to XML. First you would CONVERT the VARBINARY to VARCHAR(MAX) using the optional style of "1" which gives you a hex string starting with "0x". Once you have the hex string of the binary data you can concatenate that into an INSERT statement and that entire thing, when converted to XML, can contain the entire VARBINARY field. See the following example:
DECLARE @Binary VARBINARY(MAX) = CONVERT(VARBINARY(MAX),
                                     REPLICATE(
                                         CONVERT(NVARCHAR(MAX), 'test string'),
                                         100000)
                                     );

SELECT 'INSERT INTO dbo.TableName (ColumnName) VALUES (' +
       CONVERT(VARCHAR(MAX), @Binary, 1) + ')' AS [Insert]
FOR XML RAW;
Don't script from SSMS
bcp the data out/in, or use something like the SSMS Tools Pack to generate INSERT statements
It's more than a bit messed up, but in the past, and on the web, I've seen this done using a base64-encoded string: you use an XML value to wrap the string, and from there you can convert it to a varbinary. Here's an example:
http://blogs.msdn.com/b/sqltips/archive/2008/06/30/converting-from-base64-to-varbinary-and-vice-versa.aspx
I can't speak personally to how effective or performant this is, though, especially for large values. Because it is at best an ugly hack, I'd tuck it away inside a UDF somewhere, so that if a better method is found you can update it easily.
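As I recall it, the trick boils down to something like this (dummy values, so treat it as a sketch):

DECLARE @bin VARBINARY(MAX) = 0x48656C6C6F;  -- the bytes of 'Hello'

-- binary -> base64 string
DECLARE @b64 VARCHAR(MAX) =
    CAST(N'' AS XML).value('xs:base64Binary(sql:variable("@bin"))', 'VARCHAR(MAX)');

-- base64 string -> binary again
SELECT CAST(N'' AS XML).value('xs:base64Binary(sql:variable("@b64"))', 'VARBINARY(MAX)') AS RoundTripped;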
I have never tried anything like this before, but from the documentation for SQL Server 2008 R2, it sounds like using SUBSTRING will work to get the entire varbinary value, although you may have to work with it in chunks, using UPDATEs with the .WRITE clause to append the data.
Updating Large Value Data Types
Use the .WRITE (expression, #Offset, #Length) clause to perform a partial or full update of varchar(max), nvarchar(max), and varbinary(max) data types. For example, a partial update of a varchar(max) column might delete or modify only the first 200 characters of the column, whereas a full update would delete or modify all the data in the column.
For best performance, we recommend that data be inserted or updated in chunk sizes that are multiples of 8040 bytes.
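In practice that looks something like the following; the table, column, and values are made up for the example:

-- Append one chunk to the end of the varbinary(max) column;
-- with @Offset = NULL, .WRITE appends and ignores @Length
DECLARE @chunk VARBINARY(8000) = 0x0102030405;
UPDATE dbo.Files
SET FileColumn.WRITE(@chunk, NULL, 0)
WHERE Id = 1;

Repeat per chunk until the whole value has been written.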
Hope this helps.
I have a database in SQL Server containing a column which needs to contain Unicode data (it contains user's addresses from all over the world e.g. القاهرة for Cairo)
This column is an nvarchar column with the database default collation (Latin1_General_CI_AS), but I've noticed that data inserted into it via SQL statements containing non-English characters displays as ?????.
The solution seems to be that I wasn't using the N prefix, e.g.
INSERT INTO table (address) VALUES ('القاهرة')
Instead of:
INSERT INTO table (address) VALUES (N'القاهرة')
I was under the impression that Unicode would automatically be converted for nvarchar columns and I didn't need this prefix, but this appears to be incorrect.
The problem is I still have some data in this column which appears as ????? in SQL Server Management Studio and I don't know what it is!
Is the data still there but in an incorrect character encoding preventing it from displaying but still salvageable (and if so how can I recover it?), or is it gone for good?
To find out what SQL Server really stores, use
SELECT CONVERT(VARBINARY(MAX), 'some text')
I just tried this with umlauted characters and Arabic (copied from Wikipedia; I have no idea what it says), both as plain strings and as N'' Unicode strings.
The results are that Arabic non-Unicode strings really end up as question marks (0x3F) in the conversion to VARCHAR.
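You can see the difference directly with the Cairo value from the question:

SELECT CONVERT(VARBINARY(MAX), 'القاهرة')  AS WithoutN,  -- a run of 0x3F bytes: question marks
       CONVERT(VARBINARY(MAX), N'القاهرة') AS WithN;     -- genuine UTF-16 code units

If your stored column shows a run of 0x3F bytes, the original characters were lost at insert time and cannot be recovered from that column.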
SSMS sometimes won't display all characters. I just tried what you had and it worked for me; copy and paste it into Word and it might display correctly.
Usually, if SSMS can't display a character, it shows boxes, not ?.
Try writing a small client that retrieves this data to a file or web page. Check ALL your code to make sure there are no other inserts or updates that might convert the data to varchar before storing it in tables.
I am uploading some data from DB2 to SQL2005. The table contains one text field that is 2000 characters long in DB2 but is set up as a varchar(2000) in SQL.
When I run my insert in query browser it processes fine and the data is copied correctly, however when the same statement runs in my stored procedure the data for this field only is copied incorrectly and is shown as a series of strange characters.
The field can occasionally contain control characters however the problem occurs even when it doesn't.
Is there a setting I need to apply to my stored procedure in order to get the insert to work correctly?
Or do I need to use a cast or convert on the data to get it appearing correctly?
Update: Thanks for your suggestions. It now appears that the problem was caused by the software we used to create the linked server to the DB2 database. It could not handle a 2,000-character field when run via a stored procedure, but could when run interactively.
I ultimately got around the problem by splicing the field into ten 200-character fields for the upload and then re-joining them once they were in the SQL database.
It sounds like the data coming from DB2 is in a different character set. You should make sure it isn't EBCDIC or something like that.
Also, try calling your stored procedure from the SQL Server Management Studio (query browser), to see if it works at all.
You might want to change your varchar(2000) to an nvarchar(2000) (in the stored procedure as well as the table; I assume it exists as a parameter). This would allow it to hold two-byte characters. It'll depend on the DB2 configuration, but it may be that it's exporting UTF-16 (or similar) rather than UTF-8.
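Something along these lines; the table and column names are placeholders:

ALTER TABLE dbo.Db2Upload ALTER COLUMN TextField NVARCHAR(2000);

And change the matching parameter in the stored procedure from varchar(2000) to nvarchar(2000) as well.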