Is it possible to have a varchar column as a primary key with values like 'a ' and 'a'? It always gives the error "Violation of PRIMARY KEY constraint" in MS SQL Server 2008.
Oracle doesn't give any error.
BTW, I'm not implementing it this way; I'm only trying to migrate the data from Oracle to SQL Server.
Regards
The SQL-92 standard dictates that for character string comparison purposes, the strings are padded to be the same length prior to comparison: typically the pad character is a space.
Therefore 'a' and 'a ' compare EQUAL and this violates the PK constraint.
http://support.microsoft.com/kb/316626
I could find nothing to indicate this behaviour has changed since then.
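You can see the padded comparison for yourself; here is a minimal repro (the table name is made up):
-- Trailing spaces are ignored when character strings are compared
SELECT CASE WHEN 'a' = 'a ' THEN 'equal' ELSE 'not equal' END  -- returns 'equal'
-- So two "different" values collide on a varchar primary key
CREATE TABLE dbo.PadTest (Val VARCHAR(10) PRIMARY KEY)
INSERT INTO dbo.PadTest (Val) VALUES ('a')
INSERT INTO dbo.PadTest (Val) VALUES ('a ')  -- fails: Violation of PRIMARY KEY constraint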
You may get away with using varbinary instead of varchar but this may not do what you want either.
You can use a text or ntext column (which one depends on the kind of data you are importing and its length); these preserve spaces. char pads with spaces, so it may not be suitable.
I thought this might have something to do with ANSI_PADDING, but my testing here indicates that for PKs (and possibly UNIQUE indexes as well; not tried) this still doesn't help, unfortunately.
So:
SET ANSI_PADDING ON
This works for non-PK fields - that is, it preserves the trailing space on the insert - but for some reason not for PKs...
See: http://support.microsoft.com/kb/154886/EN-US/
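A quick sketch of what I tested (table name invented):
SET ANSI_PADDING ON
CREATE TABLE dbo.PadDemo (Val VARCHAR(10) NULL)  -- plain column, no PK or unique index
INSERT INTO dbo.PadDemo (Val) VALUES ('a ')
SELECT DATALENGTH(Val) FROM dbo.PadDemo  -- 2: the trailing space was stored
-- A PK on the same column would still treat 'a' and 'a ' as duplicates,
-- because the uniqueness comparison pads the shorter value before comparing.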
Use a datatype that doesn't strip trailing spaces.
You might try storing as a varbinary, and then converting to varchar when you select.
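A sketch of that round trip (names invented; binary comparison treats the trailing space as significant, so both rows are accepted):
CREATE TABLE dbo.BinKey (Val VARBINARY(10) PRIMARY KEY)
INSERT INTO dbo.BinKey (Val) VALUES (CONVERT(VARBINARY(10), 'a'))   -- 0x61
INSERT INTO dbo.BinKey (Val) VALUES (CONVERT(VARBINARY(10), 'a '))  -- 0x6120: no PK violation
SELECT CONVERT(VARCHAR(10), Val) AS Val FROM dbo.BinKey
Bear in mind that sorting and comparison then become byte-wise rather than collation-aware.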
You could add another column to your primary key constraint which holds the length of the data in the Oracle column. This lets you import the data and reconstruct the Oracle data when you need to: a view can use the stored length to add back the missing spaces for display in reports, etc.
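A minimal sketch of that idea, assuming the Oracle length is captured during the migration (all names invented, and DATALENGTH assumes a single-byte collation):
CREATE TABLE dbo.OracleImport (
    Val VARCHAR(50) NOT NULL,   -- imported value, trailing spaces possibly trimmed
    OrigLen INT NOT NULL,       -- LENGTH() of the value as reported by Oracle
    PRIMARY KEY (Val, OrigLen)  -- ('a', 1) and ('a', 2) no longer collide
)
GO
-- Reconstruct the original Oracle value by padding back to the recorded length
CREATE VIEW dbo.OracleImportPadded AS
SELECT Val + REPLICATE(' ', OrigLen - DATALENGTH(Val)) AS OriginalVal
FROM dbo.OracleImport
GO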
Here is my code:
Create table dbo.EXA
(
Name VARCHAR,
ID INT
)
How many characters will the VARCHAR data type hold, since I am not defining the size?
Or is defining
NAME VARCHAR
itself wrong?
You will get EXACTLY ONE CHARACTER - which is typically not what you want. This applies when you define a SQL Server variable, a parameter on a stored procedure, or a table column.
If you don't specify any length in VARCHAR in the context of a conversion using CONVERT or CAST, then the default is 30 characters.
My recommendation would be to ALWAYS explicitly define a length - then it's clear (to you, to T-SQL, and to the poor guy who has to maintain your code in a year) what you wanted/needed.
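Both defaults are easy to demonstrate (a quick sketch):
DECLARE @v VARCHAR = 'Hello'
SELECT @v                                   -- 'H': a bare VARCHAR declaration means VARCHAR(1)
SELECT CAST(REPLICATE('x', 40) AS VARCHAR)  -- 30 x's: CAST/CONVERT default to VARCHAR(30)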
I connected to SQL Server 2005, ran the above, and checked the table after completion; it had automatically set the column to one character.
varchar is equivalent to varchar(1)
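You can confirm this from the catalog as well (assuming the dbo.EXA table from the question has been created):
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'EXA'  -- Name reports a maximum length of 1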
I tried this code - check this out.
I'm new to Microsoft SQL Server. I'm planning to store text in SQL Server and there will be special international characters. Is there a data type specific to Unicode, or am I better off encoding my text with a reference to the Unicode number (i.e. \u0056)?
Use Nvarchar/Nchar (MSDN link). There used to be an Ntext datatype as well, but it's deprecated now in favour of Nvarchar.
The columns take up twice as much space as their non-Unicode counterparts (char and varchar).
Then when "manually" inserting into them, use N to indicate it's unicode text:
INSERT INTO MyTable(SomeNvarcharColumn)
VALUES (N'français')
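For contrast, without the N prefix the literal is first interpreted as varchar in the database's default code page, so characters outside that code page are lost before they ever reach the nvarchar column (a sketch, using the same hypothetical table):
INSERT INTO MyTable(SomeNvarcharColumn)
VALUES ('français')  -- no N: the literal goes through the default code page first
-- Characters outside that code page arrive as '?', even though the column is nvarchar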
When you say special international characters, what do you mean? If "special" means they are just occasional rather than common, then the overhead of nvarchar might not make sense in your situation on a table with a very large number of rows or a lot of indexing.
I'm all for using Unicode where appropriate, but understanding when it is appropriate is important.
If you are mixing data with different implied code pages (Japanese and Chinese in the same database), or you just want to be forward-looking for internationalization and localization, then you want the column to be Unicode and the nvarchar data type, and that's perfectly fine. Unicode is not going to magically solve all sorting problems for you, though.
If you know that you will always be storing mainly ASCII with just occasional foreign characters, you can store your UTF-8 data or HTML-encoded data in varchar. If your data is all in Japanese and code page 932 (or any other single code page), you can still store double-byte characters in varchar; they still take up two bytes. My point is that when you are already in a DBCS collation, international characters are no longer "special". It's not just the data storage, but also any indexes, as well as the working set when dealing with such a column in queries and in other dataflows.
And do not make a blanket rule that all character data should be nvarchar - it's a waste for many columns which are codes or identifiers.
Any time you have a column, go through the same questions:
What is the type of data?
What is the range?
Are NULLs allowed?
What is the limit of the size?
Are there any constraints I should apply now to stop bad data getting in from the beginning?
People have had success using the following code to force Unicode at insert time:
INSERT INTO <table> (text) VALUES (N'<text here>')
Character set behaviour for tables and the strings inside them is specified for the database: if your database has a Unicode collation, strings inside the tables are Unicode. Likewise, for string columns you have to use the nvarchar or nchar data types to make them able to store Unicode strings. But this works only if your database has a UTF-8 or Unicode character set or collation. Read this link for more information: Unicode and SQL Server.
We have a WinForms app which stores data in SQL Server (2000; we are working on porting it to 2008) through ADO.NET (1.1, working on porting to 4.0). Everything works fine if I read data previously written in a Western European locale (e.g. "test", "test ù"), but now we have to be able to mix Western and non-Western alphabets as well (e.g. "test - ۓےۑ" - these are just random Arabic chars).
On the SQL Server side, the database has been set up with the Latin1_General collation and the field is an nvarchar(80). If I run a SQL SELECT statement (e.g. "SELECT * FROM MyTable WHERE field = 'test - ۓےۑ'"; don't mind the "*" or the actual names) from Query Analyzer, I get no results; the same happens if I pass the SQL statement to an ADO.NET DataAdapter to fill a DataTable. My guess is that it has something to do with collation, but I don't know how to correct this: do I have to change the collation (SQL Server) to a different one? Or do I have to set the locale on the DataAdapter/DataTable (ADO.NET)?
Thanks in advance to anyone who will help
Shouldn't you use N when comparing nvarchar with extended char. set?
SELECT * From TestTable WHERE GreekColCaseInsensitive = N'test - ۓےۑ'
Yes, the problem is most likely the collation. The Latin1_General collation does not include the rules to sort and compare non-Latin characters.
MSDN claims:
If you must store character data that reflects multiple languages, you can minimize collation compatibility issues by always using the Unicode nchar, nvarchar, and ntext data types instead of the char, varchar, text data types. Using the Unicode data types eliminates code page conversion issues.
Since you have already complied with this, you should read further on the info about Mixed Collation Environments here.
Additionally, I want to add that just changing a collation is not easily done; check MSDN for SQL 2000:
When you set up SQL Server 2000, it is important to use the correct collation settings. You can change collation settings after running Setup, but you must rebuild the databases and reload the data. It is recommended that you develop a standard within your organization for these options. Many server-to-server activities can fail if the collation settings are not consistent across servers.
You can specify a collation on a per-column basis, however:
CREATE TABLE TestTable (
    id int,
    GreekColCaseInsensitive nvarchar(10) COLLATE Greek_CI_AS,
    LatinColCaseSensitive nvarchar(10) COLLATE Latin1_General_CS_AS
)
Have a look at the different binary multilingual collations here. Depending on the charset you use, you should find one that fits your purpose.
If you are not able or willing to change the collation of a column you can also just specify the collation to be used in the query like:
SELECT * From TestTable
WHERE GreekColCaseInsensitive = N'test - ۓےۑ'
COLLATE latin1_general_cs_as
As jfrobishow pointed out, the use of N in front of the string you want to compare with is essential. What does it do?
It denotes that the subsequent string is in Unicode (the N actually stands for National language character set). Which means that you are passing an NCHAR, NVARCHAR or NTEXT value, as opposed to CHAR, VARCHAR or TEXT. See Article #2354 for a comparison of these data types.
You can find a quick rundown here.
I have a database in SQL Server containing a column which needs to contain Unicode data (it contains users' addresses from all over the world, e.g. القاهرة for Cairo).
This column is an nvarchar column with the database default collation (Latin1_General_CI_AS), but I've noticed that data inserted into it via SQL statements containing non-English characters displays as ?????.
The solution seems to be that I wasn't using the N prefix, e.g.:
INSERT INTO table (address) VALUES ('القاهرة')
Instead of:
INSERT INTO table (address) VALUES (n'القاهرة')
I was under the impression that Unicode would automatically be converted for nvarchar columns and I didn't need this prefix, but this appears to be incorrect.
The problem is I still have some data in this column which appears as ????? in SQL Server Management Studio and I don't know what it is!
Is the data still there but in an incorrect character encoding preventing it from displaying but still salvageable (and if so how can I recover it?), or is it gone for good?
Thanks,
Tom
To find out what SQL Server really stores, use
SELECT CONVERT(VARBINARY(MAX), 'some text')
I just tried this with umlauted characters and Arabic (copied from Wikipedia; I have no idea what it says), both as plain strings and as N'' Unicode strings.
The result is that non-Unicode Arabic strings really do end up as question marks (0x3F) in the conversion to VARCHAR.
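A sketch of that diagnostic using the Cairo example from the question (the exact bytes depend on your default collation; mine was a Latin1 one):
SELECT CONVERT(VARBINARY(MAX), 'القاهرة')   -- 0x3F3F3F3F3F3F3F: seven question marks, already lost
SELECT CONVERT(VARBINARY(MAX), N'القاهرة')  -- genuine UTF-16 bytes: the characters survived
-- If the stored column contains only 0x3F bytes, the original characters
-- were destroyed at insert time and cannot be recovered from that table.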
SSMS sometimes won't display all characters. I just tried what you had and it worked for me; copy and paste it into Word and it might display correctly.
Usually, if SSMS can't display a character, it shows boxes, not ?.
Try writing a small client that retrieves this data to a file or web page. Also check ALL your code to make sure there are no other inserts or updates that might convert the data to varchar before storing it in the tables.
I want to add "G:tech work" to my table column, but my system says special characters are not allowed.
We are using SQL Server. Could you please help me with how to insert this word into the DB, and explain why it is not accepting special characters?
In order to support special characters and multiple languages you should use nvarchar and nchar, at least in MSSQL.
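A minimal sketch (the table and column names are invented):
CREATE TABLE dbo.Notes (Note NVARCHAR(100))
INSERT INTO dbo.Notes (Note) VALUES (N'G:tech work')  -- the N prefix keeps the literal Unicode
SELECT Note FROM dbo.Notes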