Missing nvarchar columns when reading SQL Server database table from Oracle - sql-server

I have a SQL Server database with a table that has a column of nvarchar(4000) data type. When I try to read the data from Oracle through a dblink, I don't see the nvarchar(4000) column. All the other columns' data is displayed properly.
Can anyone help me to find the issue here and how to fix it?

Appendix A-1 ...
ODBC              Oracle    Comment
SQL_WCHAR         NCHAR     -
SQL_WVARCHAR      NVARCHAR  -
SQL_WLONGVARCHAR  LONG      if Oracle DB character set = Unicode;
                            otherwise it is not supported
Commonly, nvarchar(max) is mapped to SQL_WLONGVARCHAR, and that data type can only be mapped to Oracle if the Oracle database character set is Unicode.
To check the database character set, execute:
select * from nls_database_parameters;
and look at NLS_CHARACTERSET.
UPDATE
NLS_CHARACTERSET needs to be a Unicode character set, for example AL32UTF8. (Do this only if you know what you are doing, or ask your DBA to do it.)
The NCHAR character set isn't used, as the mapping is to Oracle LONG, which uses the normal database character set.
A second solution would be to create, on the SQL Server side, a view that splits the nvarchar(max) into several nvarchar(xxx) columns, then select from the view and concatenate the content again in Oracle. (If changing the character set to Unicode is a problem, this approach is the best way to go.)
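A minimal sketch of that view approach, assuming a hypothetical table dbo.Docs with an nvarchar(max) column Body; each slice stays within nvarchar(4000), which maps cleanly through the gateway:
-- SQL Server side (hypothetical names): expose the long column as
-- fixed-size nvarchar(4000) slices that the gateway can map.
CREATE VIEW dbo.Docs_Split AS
SELECT Id,
       CAST(SUBSTRING(Body,    1, 4000) AS nvarchar(4000)) AS Body1,
       CAST(SUBSTRING(Body, 4001, 4000) AS nvarchar(4000)) AS Body2,
       CAST(SUBSTRING(Body, 8001, 4000) AS nvarchar(4000)) AS Body3
FROM dbo.Docs;
Then, on the Oracle side, reassemble the slices through the dblink (link name assumed):
-- Oracle side: concatenate the pieces back together.
SELECT "Id", "Body1" || "Body2" || "Body3" AS body
FROM "Docs_Split"@mssql_link;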

Related

DB collation VS Column collation when INSERTing

I've created 2 demo DBs.
Server Collation - Hebrew_CI_AS
DB1 Collation - Hebrew_CI_AS
DB2 Collation - Latin1_General_CS_AS.
In DB2 I have one column with Hebrew_CI_AS collation. I'm trying to insert Hebrew text into that column. The data type is nvarchar(250).
This is the sample script:
INSERT INTO [Table] (HebCol)
VALUES('1בדיקה')
When I run this on DB1, everything works fine.
On DB2, although the column has a Hebrew collation, I get question marks instead of the Hebrew text.
Why is the result different if the collation is identical?
P.S: I cannot add N before the text. In the real world an app is doing the inserts.
When using literal strings, the collation used is that of the database, not the destination column. As the collation of the database you are inserting into is Latin1_General_CS_AS, most of the characters in the literal string '1בדיקה' are outside that collation's code page; you therefore get ? for those characters, as they are unknown.
As such, there are only 2 solutions to stop the ? appearing in the column:
1. Fix your application and define your literal string(s) as nvarchar, not varchar; you are, after all, storing an nvarchar, so it makes sense to pass a literal nvarchar.
2. Change the collation of your database to be the same as your other database, Hebrew_CI_AS.
Technically there is a 3rd: use a UTF-8 collation if you are on SQL Server 2019, but such collations come with caveats that I don't think are in scope of this question.
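For completeness, a small sketch of the two literal forms side by side, using the table from the question:
-- varchar literal: interpreted in the database's code page first,
-- so in a Latin1_General_CS_AS database the Hebrew characters become ?
INSERT INTO [Table] (HebCol) VALUES ('1בדיקה');
-- nvarchar literal: stays Unicode end to end, Hebrew is preserved
INSERT INTO [Table] (HebCol) VALUES (N'1בדיקה');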

Characters appearing incorrectly even with Unicode source and destination (SSIS)

I am having a code page Unicode/non-Unicode problem and need expertise to understand it.
In SSIS I am reading data in from a UTF-8 encoded text file. The data types are all DT_WSTR (Unicode string). The destination is NVARCHAR, which is also Unicode.
Non-standard characters such as Ú are not being encoded correctly (appearing as a black-box question mark).
If the character appears correctly in the input file, the source is set to DT_WSTR, and the destination is nvarchar, why is the character not rendering correctly?
I have tried setting the code page of the source column to 65001, but in SSIS it is only possible to change the code page on a STR (non-Unicode) type.
I'd appreciate any help in understanding why all-Unicode fields still can't store a Unicode value correctly.
Update from the OP comments
It seems my output is OK if I use Unicode types end to end (input is DT_WSTR, the destination column is nvarchar, and when extracting again to text the output column is DT_WSTR). The only issue is SQL Server Management Studio, which does not seem able to render Unicode characters correctly in the results of a query when output is set to grid or text. This is a red herring, and the process overall works without issue if it is ignored.
Trying to figure out the issue
There is no problem importing Unicode characters from flat files to a SQL Server destination; the only things you have to do are set the flat file encoding to Unicode and make the result columns NVARCHAR. Based on your question, it looks like you have met those requirements, so I can say that:
The Unicode characters are imported successfully into SQL Server, but for some reason SQL Server Management Studio cannot show Unicode characters in the grid results. To check that the data is imported correctly, change the result view to Results To Text:
Go to Tools >> Options >> Query Results >> Results To Text
In the second reference link I provided, they mention that:
If you use SSMS for your queries, change the output type from "Grid" to "Text", because depending on the font the grid can't show Unicode.
Or you can try changing the grid results font (on my machine I use the Tahoma font and it shows Unicode characters normally).
Experiments
You can perform the following test (taken from the links below)
SET NOCOUNT ON;
CREATE TABLE #test
( id int IDENTITY(1, 2) NOT NULL PRIMARY KEY
,Uni nvarchar(20) NULL);
INSERT INTO #test (Uni) VALUES (N'DE: äöüßÖÜÄ');
INSERT INTO #test (Uni) VALUES (N'PL: śćźłę');
INSERT INTO #test (Uni) VALUES (N'JAP: 言も言わずに');
INSERT INTO #test (Uni) VALUES (N'CHN: 玉王瓜瓦甘生用田由疋');
SELECT * FROM #test;
GO
DROP TABLE #test;
Run the query above using both the Results to Grid and Results to Text options.
References
SQL Server 2012 not showing unicode character in results
sql server 2008 not showing and inserting unicode characters!
Import UTF-8 Unicode Special Characters with SQL Server Integration Services
Microsoft SQL Server Management Studio - query result as text

2 different collations conflict when merging tables with SQL Server?

I have DB1, which has a Hebrew collation.
I also have DB2, which has a Latin general collation.
I was asked to merge a table (write a query) between DB1.dbo.tbl1 and DB2.dbo.tbl2.
I could write in the query:
insert into ...SELECT Col1 COLLATE Latin1_General_CI_AS...
But I'm sick of doing it.
I want to convert both DBs/tables to the same collation so I don't have to write COLLATE every time.
The question is -
Should I convert Latin -> Hebrew or Hebrew -> Latin?
We need to store everything from everywhere (and all our text columns are nvarchar(x)).
And if so, how do I do it?
If you are using Unicode data types - nvarchar(x) - in the resulting database, then you can omit COLLATE in the INSERT. SQL Server will convert data from your source collation to Unicode automatically. So you should not convert anything if you are inserting into an nvarchar column.
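A minimal sketch with the names from the question; since tbl1.Col1 is assumed to be nvarchar, the implicit conversion makes the COLLATE clause unnecessary:
-- No COLLATE needed: the source text is converted to Unicode on insert.
INSERT INTO DB1.dbo.tbl1 (Col1)
SELECT Col1
FROM DB2.dbo.tbl2;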

SQL Server Text Datatype Maxlength = 65,535?

Software I'm working with uses a text field to store XML. From my searches online, the text datatype is supposed to hold 2^31 - 1 characters. Currently SQL Server is truncating the XML at 65,535 characters every time. I know this is caused by SQL Server, because if I add a 65,536th character to the column directly in Management Studio, it states that it will not update because characters will be truncated.
Is the max length really 65,535 or could this be because the database was designed in an earlier version of SQL Server (2000) and it's using the legacy text datatype instead of 2005's?
If this is the case, will altering the datatype to Text in SQL Server 2005 fix this issue?
That is a limitation of SSMS, not of the text field; but you should use varchar(max) anyway, since text is deprecated.
Here is also a quick test
create table TestLen (bla text)
insert into TestLen values (replicate(convert(varchar(max), 'a'), 100000))
select datalength(bla)
from TestLen
Returns 100000 for me
SQL Server 2000 should allow up to 2^31 - 1 characters (non-Unicode) in a text field, which is over 2 billion. I don't know what's causing this limitation, but you might want to try using varchar(max) or nvarchar(max). These store as many characters but also allow the regular string T-SQL functions (LEN, SUBSTRING, REPLACE, RTRIM, ...).
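A quick sketch of that difference, reusing the TestLen table from the test above (TestLenMax is a hypothetical companion table):
-- SELECT LEN(bla) FROM TestLen;  -- fails: LEN does not accept the text type
SELECT LEN(CAST(bla AS varchar(max))) FROM TestLen;  -- works after conversion
-- With varchar(max) the string functions apply directly:
CREATE TABLE TestLenMax (bla varchar(max));
INSERT INTO TestLenMax VALUES (REPLICATE(CONVERT(varchar(max), 'a'), 100000));
SELECT LEN(bla), SUBSTRING(bla, 1, 10) FROM TestLenMax;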
If you're able to convert the column, you might as well, since the text data type will be removed in a future version of SQL Server. See here.
The recommendation is to use varchar(MAX) or nvarchar(MAX). In your case, you could also use the XML data type, but that may tie you to certain database engines (if that's a consideration).
You should have a look at:
XML Support in Microsoft SQL Server 2005
Beginning SQL Server 2005 XML Programming
So I would rather use the data type appropriate for the job, not make a data type from a previous version fit your use.
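If the XML data type fits, a minimal sketch (hypothetical table name; the xml type stores up to 2 GB and validates well-formedness):
CREATE TABLE DocStore (Id int IDENTITY PRIMARY KEY, Doc xml NOT NULL);
INSERT INTO DocStore (Doc) VALUES (N'<root><item id="1">payload</item></root>');
-- XQuery methods work directly on the column:
SELECT Doc.value('(/root/item/@id)[1]', 'int') AS ItemId FROM DocStore;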
Here's a little script I wrote for getting all the data out in chunks:
DECLARE @data NVARCHAR (MAX) = N'huge data';
DECLARE @readSentence NVARCHAR (MAX) = N'';
DECLARE @dataLength INT = ( SELECT LEN (@data));
DECLARE @currIndex INT = 0;
-- read the value back in 65,535-character chunks until everything is consumed
WHILE @data <> @readSentence
BEGIN
DECLARE @temp NVARCHAR (MAX) = N'';
SET @temp = ( SELECT SUBSTRING (@data, @currIndex, 65535));
SELECT @temp;
SET @readSentence += @temp;
SET @currIndex += 65535;
END;

How to convert Chinese characters to AL16UTF16 or WE8ISO8859P1?

I have inserted some Chinese characters into the database. (The column name is NAME, the data type is VARCHAR2.)
My project name is 中文版测试 and I need to select the project by this name.
But.
In the Oracle database, 中文版测试 is stored as ÖÐÎÄ°æ²âÊÔ (if I understand right, my database character set is WE8ISO8859P1).
I want to convert these characters from the database (ÖÐÎÄ°æ²âÊÔ) back to the Chinese characters (中文版测试), or to comparable values.
I tried this:
select DIRNAME from MILLENNIUM.PROJECTINFO where UPPER(convert(NAME, 'AL32UTF8', 'we8iso8859p1')) = UPPER(convert('中文版测试', 'WE8MSWIN1252', 'AL32UTF8'));
I need to compare values from oracle with the name of the project.
Oracle settings:
NLS_CHARACTERSET WE8ISO8859P1 0
NLS_NCHAR_CHARACTERSET AL16UTF16 0
As Michael O'Neill already pointed out, it is not possible to store Chinese characters in the character set WE8ISO8859P1. All unsupported characters are automatically replaced by ¿ (or some other placeholder).
BTW, WE8ISO8859P1 is different from WE8MSWIN1252 (see What is the exact difference between Windows-1252(1/3/4) and ISO-8859-1?), so your conversion does not work anyway.
The solution is to change the data type of column NAME to NVARCHAR2 or to migrate your database to UTF-8; see Character Set Migration and the Database Migration Assistant for Unicode Guide. In any case you should consider your data lost, or rather corrupted.
However, if your client application was configured wrongly, then in certain circumstances it is possible to insert unsupported characters; see If we have US7ASCII characterset why does it let us store non-ascii characters?.
In such case you can try to repair your data as this:
ALTER TABLE PROJECTINFO ADD NAME_CN NVARCHAR2(100);
UPDATE PROJECTINFO SET NAME_CN = UTL_I18N.RAW_TO_NCHAR(UTL_I18N.STRING_TO_RAW(NAME), 'ZHS16CGB231280');
ALTER TABLE PROJECTINFO DROP COLUMN NAME;
ALTER TABLE PROJECTINFO RENAME COLUMN NAME_CN TO NAME;
select DIRNAME from MILLENNIUM.PROJECTINFO where NAME = '中文版测试';
but it may not work for all of your data.
Hence a (not recommended) workaround for your problem could be
select DIRNAME
from MILLENNIUM.PROJECTINFO
where UTL_I18N.RAW_TO_NCHAR(UTL_I18N.STRING_TO_RAW(NAME), 'ZHS16CGB231280') = '中文版测试';
You cannot take Chinese characters, insert them into a column that is bound by the WE8ISO8859P1 character set, and then ever select them again as Chinese characters. You lost information on the insert, and that lost information cannot be reconstituted.
In your case, if the NAME column were defined as NVARCHAR2, you could do an AL16UTF16-to-AL16UTF16 comparison in a subsequent SELECT. Or, even better, not need to convert and compare with AL16UTF16 at all, if your client tool is up to the task.
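Assuming NAME had been migrated to NVARCHAR2 as suggested above, the lookup becomes a plain comparison; the N'' literal keeps the Chinese characters in the national character set:
-- Sketch: works only after NAME is NVARCHAR2 (AL16UTF16)
SELECT DIRNAME
FROM MILLENNIUM.PROJECTINFO
WHERE NAME = N'中文版测试';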
