I want to store values which contain special characters like ' and " in a database table field which has the datatype varchar. For example, I want to store the value 'Sale'sStore' in a table.
How can I store it? Currently I am not able to.
It depends on where you are inserting these strings from. If manually, just escape the quote; in standard SQL you double the single quote (so 'Sale''sStore'), while MySQL also accepts a \ before the '. If you are creating your query programmatically, you should consider using a binding function to build the query (it will solve this automatically) and not build it by concatenating strings.
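For example, here is a minimal T-SQL sketch of the bound-parameter approach using sp_executesql (the table and column names are made up for illustration):

DECLARE @StoreName NVARCHAR(100) = 'Sale''sStore';  -- doubled quote escapes the literal
EXEC sp_executesql
    N'INSERT INTO dbo.Stores (StoreName) VALUES (@StoreName)',
    N'@StoreName NVARCHAR(100)',
    @StoreName = @StoreName;

The bound value itself never needs escaping; only hard-coded literals do.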
I have a client who inserted some Arabic text into SQL Server database column. Now the text displays as ????? only.
I know that this is related to collation and that he should have modified the collation of SQL Server before entering his data. Also I know that he should have used nVarchar, instead of Varchar in his column.
How can I retrieve the data entered in this column, or convert it from ???? back into Arabic? The data is already entered and we need to convert it.
Thanks in advance.
Here are some basic rules which you may want to keep in mind. When storing Unicode data, the column must be of a Unicode data type (nchar, nvarchar, ntext). Another rule is that the value must be prefixed with N on insertion, like below.
INSERT INTO TBL_LANG VALUES ('English',N'I am American')
INSERT INTO TBL_LANG VALUES ('Tamil',N'நான் தமிழன்')
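For the first rule, the table behind those INSERTs would need a layout along these lines (the column names are guesses, since only the INSERT statements are shown):

CREATE TABLE TBL_LANG
(
    LANG VARCHAR(20),   -- plain varchar is fine for ASCII-only values
    TXT  NVARCHAR(200)  -- Unicode text must live in an nchar/nvarchar column
)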
If the data is inserted in the correct manner, then you will get the correct response.
Also have a look at this link:
Convert "???? ??????" into Arabic language
I have a column which was set to Varchar and the database set to SQL_Latin1_General_CP1_CI_AS.
When a user entered their name into our web front end and saved the data, accented characters were not being saved correctly.
The web user was entering "Béala", but this was being saved in the database as "BÃ©ala".
I believe that changing the column from Varchar to NVarchar should prevent this from happening going forward(?), however, I have two questions.
1) How do I perform a select on the existing data in the column and display it correctly?
select CONVERT(NVARCHAR(100),strAddress1) from [dbo].[tblCustomer]
This still shows the data incorrectly.
2) How do I update the data in the column once converted to NVarchar to save the accented characters correctly?
Many thanks,
Ray.
The only idea that comes to mind is to prepare an UPDATE that repairs the badly loaded data: a sequence such as 'Ã©' always corresponds to exactly one character (in this case 'é'), so you have to catch every special character that has been mangled this way and replace it (just a simple UPDATE with REPLACE calls). Of course, the column must first be of the nvarchar type.
That solves problems 1 and 2 (the data will be correct in the table and will display correctly; the update is described above).
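A minimal sketch of such a repair, assuming strAddress1 has already been altered to NVARCHAR; extend the REPLACE chain with every mangled pair that actually occurs in your data:

UPDATE dbo.tblCustomer
SET strAddress1 =
    REPLACE(
        REPLACE(strAddress1, N'Ã©', N'é'),  -- é stored through the wrong codepage
        N'Ã¡', N'á');                       -- likewise for á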
Here is a way to get it back into the normal character scheme:
select 'Réunion', cast('Réunion' as varchar(100)) COLLATE SQL_Latin1_General_CP1253_CI_AI
Moreover, to check all possible collations in SQL Server you can try this query:
SELECT name, description
FROM sys.fn_helpcollations();
I have an old SQL Server 2000 database from which I read data. My problem is that every query involving a string column returns the value padded with blanks to the full width of the column.
e.g., let's say we have a column called NAME CHAR(20). All queries would return:
"John                "
instead of just "John".
Is there a configuration or parameter in my database that causes this, or anything at all that can be changed to avoid it? Thank you.
EDIT:
I'd like to clarify: I'm reading my DB using JPA repositories. I don't want to physically remove the whitespace from the columns, or trim the values manually using RTRIM/LTRIM/REPLACE. I'm just trying to retrieve the column without trailing spaces, without adding any extra strain to the query or trimming the fields programmatically.
You can use RTRIM/LTRIM or REPLACE:
select RTRIM(LTRIM(column_name)) from table_name
or
select replace(column_name, ' ', '') from table_name
Note that REPLACE removes every space, including those inside the value, so RTRIM(LTRIM(...)) is usually the safer choice; and since CHAR only pads on the right, RTRIM alone is enough here.
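Since you'd rather not repeat the trimming in every query, one option is to hide it behind a view (vwTable is a made-up name) and point the JPA entity at that instead:

CREATE VIEW dbo.vwTable AS
SELECT RTRIM(column_name) AS column_name  -- CHAR(20) pads with trailing spaces; strip them once here
FROM dbo.table_name;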
I have a table with a VARBINARY(MAX) field (SQL Server 2008 with FILESTREAM)
My requirement is that when I go to deploy to production, I can only supply my IT team with a group of SQL scripts to be executed in a certain order. A new table I am making in production has this VARBINARY(MAX) field. Usually with new tables, I will script out the CREATE TABLE script. And, if I have data I need to go with it, I will then script out the INSERT scripts. Not too complicated.
But with VARBINARY(MAX), the stored procedure I was using to generate the INSERT statements fails on that table. I tried selecting that field, printing it, copying it, converting it to hex, etc. The main issue is that it doesn't select all the data in the field. I check with DATALENGTH([FileColumn]) and, where the source row contains 1,004,382 bytes, the most I can get into the copied or selected data when inserting again is 8,000 bytes. So basically it is truncated (i.e. invalid) data...
How can I do this better? I tried Googling this like crazy but I must be missing something. Remember, I can't access the filesystem. This has to be all scripted.
If this is a one time (or seldom) thing to do, you can try scripting the data out from the SSMS Wizard as described here:
http://sqlblog.com/blogs/eric_johnson/archive/2010/03/08/script-data-in-sql-server-2008.aspx
Or, if you need to do this frequently and want to automate it, you can try the SQL# SQLCLR library (which I wrote, and while most of it is free, the function you need here is not). The function to do this is DB_DumpData, and it also generates INSERT statements.
But again, if this is a one time or infrequent task, then try the data export wizard that is built into Management Studio. That should allow you to then create the SQL script that you can run in Production. I just tested this on a table with a VARBINARY(MAX) field containing 3,365,964 bytes of data and the Generate Scripts wizard generated an INSERT statement with the entire hex string of 6.73 million characters for that one value.
UPDATE:
Another quick and easy way to do this in a manner that would allow you to copy / paste the entire INSERT statement into a SQL script and not have to bother with BCP or SSMS Export Wizard is to just convert the value to XML. First you would CONVERT the VARBINARY to VARCHAR(MAX) using the optional style of "1" which gives you a hex string starting with "0x". Once you have the hex string of the binary data you can concatenate that into an INSERT statement and that entire thing, when converted to XML, can contain the entire VARBINARY field. See the following example:
-- Build a large test value (REPLICATE on NVARCHAR(MAX) avoids the 8000-byte cap)
DECLARE @Binary VARBINARY(MAX) = CONVERT(VARBINARY(MAX),
                                         REPLICATE(
                                             CONVERT(NVARCHAR(MAX), 'test string'),
                                             100000)
                                        );

-- Style 1 renders the binary as a '0x...' hex literal; FOR XML RAW lets the
-- full multi-million-character string come back as a single value
SELECT 'INSERT INTO dbo.TableName (ColumnName) VALUES (' +
       CONVERT(VARCHAR(MAX), @Binary, 1) + ')' AS [Insert]
FOR XML RAW;
Don't script from SSMS.
bcp the data out/in, or use something like the SSMS Tools Pack to generate INSERT statements.
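For the bcp route, a rough sketch (server, database, and table names are placeholders); native format (-n) round-trips VARBINARY(MAX) without truncation:

bcp MyDb.dbo.MyTable out mytable.dat -n -S DevServer -T
bcp MyDb.dbo.MyTable in mytable.dat -n -S ProdServer -T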
It's more than a bit messed up, but in the past and on the web I've seen this done using a base64-encoded string. You use an XML value to wrap the string, and from there you can convert it to a varbinary. Here's an example:
http://blogs.msdn.com/b/sqltips/archive/2008/06/30/converting-from-base64-to-varbinary-and-vice-versa.aspx
I can't speak personally to how effective or performant this is, though, especially for large values. Because it is at best an ugly hack, I'd tuck it away inside a UDF somewhere, so that if a better method is found you can update it easily.
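A rough sketch of the technique from that post (the variable names are mine), using the xs:base64Binary XQuery constructor in both directions:

-- base64 string -> varbinary
DECLARE @b64 VARCHAR(MAX) = 'SGVsbG8=';  -- base64 for 'Hello'
SELECT CAST('' AS XML).value(
    'xs:base64Binary(sql:variable("@b64"))', 'VARBINARY(MAX)') AS BinaryValue;

-- varbinary -> base64 string, via a hex string (CONVERT style 2 = no 0x prefix)
DECLARE @hex VARCHAR(MAX) = CONVERT(VARCHAR(MAX), 0x48656C6C6F, 2);
SELECT CAST('' AS XML).value(
    'xs:base64Binary(xs:hexBinary(sql:variable("@hex")))', 'VARCHAR(MAX)') AS Base64Text;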
I have never tried anything like this before, but from the documentation for SQL Server 2008 R2, it sounds like using SUBSTRING will work to get the entire varbinary value, although you may have to work with it in chunks, using UPDATEs with the .WRITE clause to append the data.
Updating Large Value Data Types
Use the .WRITE (expression, @Offset, @Length) clause to perform a partial or full update of varchar(max), nvarchar(max), and varbinary(max) data types. For example, a partial update of a varchar(max) column might delete or modify only the first 200 characters of the column, whereas a full update would delete or modify all the data in the column.
For best performance, we recommend that data be inserted or updated in chunk sizes that are multiples of 8040 bytes.
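For instance, a minimal sketch of the chunked approach (dbo.TableName, Id, and ColumnName are placeholder names):

-- Insert the first chunk, then append the rest; per the documentation,
-- a NULL @Offset appends the new data at the end of the existing value.
INSERT INTO dbo.TableName (Id, ColumnName) VALUES (1, 0x0102030405);

UPDATE dbo.TableName
SET ColumnName.WRITE(0x060708090A, NULL, NULL)
WHERE Id = 1;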
Hope this helps.
I have a database in SQL Server containing a column which needs to contain Unicode data (it contains user's addresses from all over the world e.g. القاهرة for Cairo)
This column is an nvarchar column with the database default collation (Latin1_General_CI_AS), but I've noticed that data inserted into it via SQL statements containing non-English characters displays as ?????.
The solution seems to be that I wasn't using the N prefix, e.g. I was doing:
INSERT INTO table (address) VALUES ('القاهرة')
Instead of:
INSERT INTO table (address) VALUES (N'القاهرة')
I was under the impression that Unicode would automatically be converted for nvarchar columns and I didn't need this prefix, but this appears to be incorrect.
The problem is I still have some data in this column which appears as ????? in SQL Server Management Studio and I don't know what it is!
Is the data still there but in an incorrect character encoding preventing it from displaying but still salvageable (and if so how can I recover it?), or is it gone for good?
Thanks,
Tom
To find out what SQL Server really stores, use
SELECT CONVERT(VARBINARY(MAX), 'some text')
I just tried this with umlauted characters and Arabic (copied from Wikipedia, I have no idea) both as plain strings and as N'' Unicode strings.
The result is that Arabic non-Unicode strings really do end up as question marks (0x3F, the literal '?') in the conversion to VARCHAR, so the original characters are gone for good.
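A quick sketch of that check, assuming a Latin1 default collation as in the question (the Arabic string is the example from the question):

-- As VARCHAR each Arabic character collapses to 0x3F ('?');
-- as NVARCHAR you get the real UTF-16 code units back.
SELECT CONVERT(VARBINARY(MAX), 'القاهرة') AS NonUnicodeBytes,
       CONVERT(VARBINARY(MAX), N'القاهرة') AS UnicodeBytes;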
SSMS sometimes won't display all characters. I just tried what you had and it worked for me; copy and paste it into Word and it might display correctly.
Usually, if SSMS can't display a character, you should see boxes, not ?.
Try to write a small client that will retrieve this data to a file or web page. Also check ALL your code to make sure there are no other inserts or updates that might convert the data to varchar before storing it in the tables.