nchar, nvarchar in Postgres - sql-server

I have read many old postings about converting nchar and nvarchar to some Postgres data type. SQL Server uses UTF-16 (Unicode) for nchar and nvarchar, and according to:
How can I store UTF-16 characters in a Postgres database?
the conversion is risky business. When creating a Postgres database, the encoding can be specified as 'UTF8', so this could be an approximate workaround (see the sketch at the end of this question). However, for the latest versions of Postgres:
Does Postgres now support UTF-16 or something similar?
If so, from which Postgres version?
Thanks in advance
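For reference, here is roughly what I mean by the UTF-8 workaround; the database, table and column names are only placeholders:

-- In Postgres the encoding is fixed per database, not per table
CREATE DATABASE mydb ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8' TEMPLATE template0;

-- nchar/nvarchar columns then become plain character types; UTF-8 covers the same code points
CREATE TABLE customers (
    code char(2),        -- roughly NCHAR(2)
    name varchar(100)    -- roughly NVARCHAR(100)
);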

Related

MSSQL & SQL data type alternatives?

I have to work on a project connecting to a SQL Server DB, while working with PHP and the Laravel framework.
My issue is with the data types and whether I would be able to change them into fully functional and more 'conventional' SQL data types. So let's take NVARCHAR, for example: would I be able to change it into a normal VARCHAR?
The types I have are:
NCHAR
NVARCHAR
GEOGRAPHY
I've read over here that:
Laravel uses db-library (if it's available) to connect to Sql Server
which cannot receive unicode data from MSSQL. (1,2)
Is there anyone in the crowd who works with Laravel and has performed such a task?
You can use the following mapping I found from MSSQL data types to MySQL data types (a DDL sketch follows after the list):
NCHAR => CHAR/LONGTEXT
NVARCHAR => VARCHAR/MEDIUMTEXT/LONGTEXT
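For illustration, a minimal sketch of that mapping as DDL; the table and column names are placeholders, and utf8mb4 is assumed so that Unicode data is preserved on the MySQL side:

-- SQL Server source:
--   CREATE TABLE places (code NCHAR(2), name NVARCHAR(200));
-- MySQL equivalent:
CREATE TABLE places (
    code CHAR(2) CHARACTER SET utf8mb4,       -- NCHAR    => CHAR
    name VARCHAR(200) CHARACTER SET utf8mb4   -- NVARCHAR => VARCHAR (or MEDIUMTEXT/LONGTEXT for long values)
) ENGINE=InnoDB;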
Still couldn't find a solution for the GEOGRAPHY type. I'll keep you posted.
Found this on GEOGRAPHY but it clearly doesn't mention a counterpart to it.

When I imported a tinyint column from SQL Server to Oracle 10g, why did I get negative values?

I recently encountered a problem where we were running a data migration script to move data from SQL Server to Oracle 10g through an Oracle DBLink. Everything worked fine until we ran the script in our production Oracle environment. For certain columns defined as tinyint in SQL Server, we found that values above 127 in the SQL Server database were now negative values (256 less than the original). Why did the script work in the development and test databases, but not in production?
I'm asking and answering my own question because Google and StackOverflow were unable to help me with this one, at least with the search terms I was using. As we started researching, we found that SQL Server treats tinyint as an unsigned byte (0 to 255), while Oracle treats it as a signed byte (-128 to 127). But we were importing into a NUMBER(3) column, which is appropriate. The guy who wrote the data migration script needed to use Oracle's to_number function to read the SQL Server tinyint columns for some reason. So this query returned some curious characters in the second column if you ran it on our dev and test environments, but it returned the same negative number in both columns in our production environment.
SELECT to_number("SomeTinyIntColumn"), "SomeTinyIntColumn"
FROM MySQLServerDBLink#mydomain.com
We eventually discovered that the reason it worked in the dev and test environments was because the character set was UTF-8 there, but in production it was a Western European 8-bit character set:
SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
-- Dev and Test: AL32UTF8
-- Prod: WE8ISO8859P1
So it seems that reading a SQL Server tinyint column as a single UTF-8 character through a DBLink and converting that to an Oracle NUMBER(3) column works, provided you are using a UTF-8 character set for your Oracle database. It would have been nice if the DBLink had handled the conversion itself (making the to_number conversion unnecessary), but it seems it doesn't really know what to do with a SQL Server tinyint.
I hope this helps someone else someday!
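For clarity, the wraparound described above is just the unsigned byte being reinterpreted as a signed one; a quick check of the arithmetic (not from the original migration script):

-- Any tinyint value v above 127 comes back as v - 256 when read as a signed byte
SELECT 200 AS original_tinyint, 200 - 256 AS signed_interpretation FROM dual;
-- original_tinyint = 200, signed_interpretation = -56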

nvarchar & varchar with Oracle & SQLServer

I need to upload some data from an Oracle table to a SQL Server table. The data will be uploaded to SQL Server by a Java process using JDBC.
Is there any benefit in creating the SQL server columns using nvarchar instead of varchar?
Google suggests that nvarchar is used when Unicode characters are involved, but I am wondering whether nvarchar provides any benefit in this situation (i.e. when the source data for the SQL Server table comes from an Oracle database running in a Unix environment)?
Thanks in advance
As you have found out, nvarchar stores Unicode characters, the same as nvarchar2 in Oracle. It comes down to whether your source data is Unicode, or whether you anticipate having to store Unicode values in the future (e.g. internationalized software).
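A small sketch of the practical difference on the SQL Server side; the table and column names are placeholders, and a Western-European default collation is assumed:

CREATE TABLE names_demo (
    name_varchar  VARCHAR(50),    -- stored in the database's code page
    name_nvarchar NVARCHAR(50)    -- stored as UTF-16 (Unicode)
);
INSERT INTO names_demo VALUES (N'日本語', N'日本語');
SELECT name_varchar, name_nvarchar FROM names_demo;
-- name_varchar comes back as '???' because those characters don't exist in the code page;
-- name_nvarchar preserves the original characters.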

Storing unicode strings to SQL Server via ActiveRecord

I am using Castle ActiveRecord as my ORM. When I try to store Unicode strings, I get question marks instead.
Saving Unicode strings worked perfectly when I was using MySQL, but when I recently switched to SQL Server it broke. How should I go about fixing this?
You're most likely using the incorrect SQL Server data type. varchar is meant for plain (non-Unicode) characters, while nvarchar is meant for Unicode characters. The same applies to char & nchar, and to text & ntext.
MSDN for SQL Server data type
MSDN for SQL Server Unicode data
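If the schema already exists with the non-Unicode types, a minimal sketch of the fix (the table and column names are placeholders):

ALTER TABLE Posts ALTER COLUMN Title NVARCHAR(255) NULL;
ALTER TABLE Posts ALTER COLUMN Body NVARCHAR(MAX) NULL;   -- NVARCHAR(MAX) is the usual replacement for NTEXT
-- Also remember to write Unicode literals with the N prefix in ad-hoc SQL: N'...'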

How do I get SQL Server 2005 data stored as windows-1252 as UTF-8?

I have a client database with English and French data in windows-1252 encoding. I need to fetch this data as part of an AJAX call and send it in UTF-8 format.
Is there a way I can pass the data through a stored proc to perform this conversion?
My web app cannot be altered to perform this conversion itself.
Microsoft has published a UTF-8 CLR UDT for SQL Server 2008 that can be installed on SQL Server 2005. See here: msdn.microsoft.com/en-us/library/ms160893.aspx.
Try casting to nvarchar, or even better, use nvarchar columns.
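A minimal sketch of the cast suggestion inside a stored procedure or query (the table and column names are placeholders):

SELECT CAST(description AS NVARCHAR(MAX)) AS description
FROM dbo.Products;
-- Returning the data as nvarchar hands the client a Unicode (UTF-16) result set,
-- which the web layer can then serialize as UTF-8 for the AJAX response.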
