We are planning to migrate a Sybase ASE database to Azure SQL Managed Instance. Our source Sybase database uses a binary collation, as shown below:
Character Set = 1, iso_1
ISO 8859-1 (Latin-1) - Western European 8-bit character set.
Sort Order = 50, bin_iso_1
Binary ordering, for the ISO 8859/1 or Latin-1 character set (iso_1).
However, there are nine candidate binary collations in SQL Server, as listed below:
Latin1_General_BIN
Latin1_General_BIN2
Latin1_General_100_BIN
Latin1_General_100_BIN2
Latin1_General_100_BIN2_UTF8
SQL_Latin1_General_CP437_BIN
SQL_Latin1_General_CP437_BIN2
SQL_Latin1_General_CP850_BIN
SQL_Latin1_General_CP850_BIN2
Could anyone please advise which SQL Server collation is suitable for the above source Sybase collation?
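One way to narrow the choice is to compare the code pages behind each candidate (a sketch only, to be run on the target instance): the Latin1_General binary collations use Windows code page 1252, which is the closest match to ISO 8859-1, while the SQL_Latin1_General_CP437/CP850 ones use OEM code pages.

-- List the candidate binary collations with their non-Unicode code pages.
-- Code page 1252 (Windows Latin-1) aligns most closely with Sybase iso_1 (ISO 8859-1).
SELECT name, COLLATIONPROPERTY(name, 'CodePage') AS CodePage, description
FROM sys.fn_helpcollations()
WHERE name LIKE '%Latin1%BIN%'   -- binary collations only
ORDER BY CodePage, name;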
Related
I'm trying to figure out whether my SQL database setup is ready to store any language in the world (including Japanese). Having read a lot of documentation (including Microsoft's own), I'm still unsure whether further specification of collation is needed.
Database: SQL Server Standard 2016
Collation: SQL_Latin1_General_CP1_CI_AS
Question: I have a table with a MovieTitle column, defined as nvarchar(2048). Will this be able to store movie titles in any language in the world? According to the documentation, it seems that any nvarchar column stores Unicode (UTF-16), regardless of version.
I'm asking because I recently searched for
WHERE MovieTitle = ''
and it returned several results with different Arabic titles.
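For reference, a minimal sketch (the table and titles below are made up) showing that an nvarchar column stores titles in any language, as long as string literals carry the N prefix:

-- Hypothetical demo table; NVARCHAR stores UTF-16, so any language fits.
CREATE TABLE dbo.MovieDemo (MovieTitle NVARCHAR(2048));

INSERT INTO dbo.MovieDemo (MovieTitle)
VALUES (N'七人の侍'), (N'الرسالة'), (N'Amélie');

-- The N prefix matters in comparisons too; without it the literal is first
-- converted to the database's default code page.
SELECT MovieTitle FROM dbo.MovieDemo WHERE MovieTitle = N'الرسالة';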
I am trying to get my head around Unicode and collations, and how to use collations properly in MS SQL Server 2014.
Microsoft states:
"Windows Unicode-only collations can only be used with the COLLATE clause to apply collations to the nchar, nvarchar, and ntext data types on column level and expression-level data. They cannot be used with the COLLATE clause to change the collation of a database or server instance."
What are Windows Unicode-only collations? I want to convert my database to support Unicode, so I now use only nvarchar, nchar, and ntext. I ran SELECT * FROM sys.fn_helpcollations() and got a list of collations, but none of them is described as a Unicode-only collation. That's where I get confused: if there is such a thing as a Unicode-only collation, as Microsoft states, how can I find it, and what is the logic behind it?
Use this query to get the collations whose code page is 0; those should be the Unicode-only collations.
SELECT name, COLLATIONPROPERTY(name, 'CodePage') AS CodePage, description
FROM sys.fn_helpcollations()
WHERE COLLATIONPROPERTY(name, 'CodePage') = 0  -- 0 = no non-Unicode code page
ORDER BY name;
GO
I'm running SQL Server 2008 R2 Standard Edition on an RDS instance, and I need to change the server's collation. How can I do that?
Based on the Amazon RDS documentation:
Amazon RDS creates a default server collation for character sets when
a SQL Server DB instance is created. This default server collation is
currently English (United States), or more precisely,
SQL_Latin1_General_CP1_CI_AS.
You can change the default collation at the database, table, or column level by overriding the collation when creating a new database or database object. For example, you can change from the default collation SQL_Latin1_General_CP1_CI_AS to Japanese_CI_AS for Japanese collation support. Even arguments in a query can be type-cast to use a different collation if necessary.
So change to the desired collation at the database level:

ALTER DATABASE db_name
COLLATE collate_name;

or at the column level:

ALTER TABLE dbo.table_name
ALTER COLUMN col_name data_type COLLATE collate_name;
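For example (the table and column names here are hypothetical), a column-level switch to the Japanese_CI_AS collation mentioned in the documentation, plus an expression-level override for a single comparison:

-- Hypothetical column-level change to a Japanese collation.
ALTER TABLE dbo.Movies
ALTER COLUMN Title NVARCHAR(200) COLLATE Japanese_CI_AS;

-- Expression-level override: apply the collation only for this comparison.
SELECT Title
FROM dbo.Movies
WHERE Title = N'七人の侍' COLLATE Japanese_CI_AS;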
The problem is the following: I have a column of type nvarchar whose collation is set to Vietnamese_CI_AS. I insert some data containing Vietnamese letters (using SQL Server Management Studio), and then it is not displayed correctly in SQL Server Management Studio or in the application.
Data: (screenshot)
Result: (screenshot)
Any suggestions?
You must always prefix the string values with N when inserting them; N stands for National language character set and marks the literal as Unicode (nvarchar).
Have a look at this post to get a better understanding.
What is the difference between varchar and nvarchar?
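A minimal sketch of the difference, assuming a hypothetical table with the Vietnamese_CI_AS nvarchar column from the question and a database whose default collation is not Vietnamese:

-- Hypothetical table matching the question's setup.
CREATE TABLE dbo.People (FullName NVARCHAR(100) COLLATE Vietnamese_CI_AS);

-- Without N the literal is varchar in the database's default code page,
-- so characters it cannot represent are replaced (typically with '?').
INSERT INTO dbo.People (FullName) VALUES ('Đặng Thái Sơn');

-- With N the literal is nvarchar (Unicode) and is stored correctly.
INSERT INTO dbo.People (FullName) VALUES (N'Đặng Thái Sơn');

SELECT FullName FROM dbo.People;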
I need to upload some data from an Oracle table to a SQL Server table. The data will be uploaded to SQL Server by a Java process utilising JDBC facilities.
Is there any benefit in creating the SQL Server columns as nvarchar instead of varchar?
Google suggests that nvarchar is used when Unicode characters are involved, but I am wondering whether nvarchar provides any benefit in this situation (i.e. when the source data for the SQL Server table comes from an Oracle database running in a Unix environment).
Thanks in advance
As you have found out, nvarchar stores Unicode characters, the same as NVARCHAR2 in Oracle. It comes down to whether your source data is Unicode, or whether you anticipate having to store Unicode values in the future (e.g. internationalized software).
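As a rough sketch (the table below is made up), the target columns might be declared like this, using nvarchar only where the Oracle source can contain non-Latin characters:

-- Hypothetical target table for the JDBC load.
CREATE TABLE dbo.OracleImport (
    CustomerName NVARCHAR(100),  -- may contain arbitrary Unicode data from the source
    CountryCode  VARCHAR(2)      -- ASCII-only codes, varchar is sufficient
);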