Writing Unicode characters to a database using ODBC in an MFC application

I have connected to a database using ODBC in a Unicode MFC application, and I am filling it with some Unicode data using CDatabase::ExecuteSQL(CStringW ...), like below:
CStringW sSql;
sSql.Format(L"INSERT INTO Reports ( %s, '%s')", sField1, sValue1);
m_db.ExecuteSQL(sSql);
But what actually gets written into the database is a string of ? characters instead of the Unicode ones.
Is there any solution to this?
Regards

Make sure your database field is of type NVARCHAR(x).
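For completeness, a minimal sketch of the corrected insert (assuming a SQL Server backend and the names from the question): besides the NVARCHAR column, the string literal needs the N prefix, otherwise the server first converts the literal through its non-Unicode code page, which is a classic source of ? characters.
CStringW sSql;
// N'...' tells the server to parse the literal as Unicode
sSql.Format(L"INSERT INTO Reports (%s) VALUES (N'%s')", sField1, sValue1);
m_db.ExecuteSQL(sSql);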

Related

Configure charset for ODBC Driver 17 for SQL Server

I'm running a Windows application on Linux under Wine that accesses a SQL Server using the ODBC Driver 17 for SQL Server for Linux.
It runs fine, except that varchar values containing non-ASCII characters are displayed incorrectly. The nvarchar fields (Unicode strings) have no problem.
Example:
select rtrim('Presentación ')
Returns: PresentaciÃ³n
My database has the encoding for varchars defined as iso8859-1, and Wine seems to use the cp1252 code page.
My guess is that the ODBC driver for Linux retrieves the data correctly and transforms it to UTF-8, which works fine (I can see the values correctly if I run my queries directly through isql), but when those strings are passed to my application under Wine they must be treated as cp1252, and that's when I see them incorrectly.
Has anyone had the same problem? What could I try?
Thank you.
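That guess is easy to reproduce outside the driver. Here is a small self-contained sketch (plain Win32; the hard-coded bytes and buffer sizes are just for illustration) showing how UTF-8 bytes decoded as cp1252 produce exactly this kind of corruption:
#include <windows.h>
#include <stdio.h>

int main()
{
    // UTF-8 bytes for "Presentación": 'ó' (U+00F3) encodes as 0xC3 0xB3
    const char utf8[] = "Presentaci\xC3\xB3n";
    wchar_t wide[64];

    // Decoding the buffer as cp1252 turns 0xC3 0xB3 into "Ã³"
    MultiByteToWideChar(1252, 0, utf8, -1, wide, 64);
    wprintf(L"%s\n", wide);   // prints "PresentaciÃ³n"

    // Decoding it as UTF-8 recovers the original string
    MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, 64);
    wprintf(L"%s\n", wide);   // prints "Presentación"
    return 0;
}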

Unable to store special characters in Oracle 11g Database

I am unable to store special characters like ÐǶĄ§å in an Oracle 11g database; the stored information shows up as ??????. I have used UTF-8 encoding throughout, and I also checked the database's supported character sets using
select * from v$nls_parameters where parameter like '%CHARACTERSET%';
It gives the output:
NLS_CHARACTERSET AL32UTF8
NLS_NCHAR_CHARACTERSET AL16UTF16
Is there any way of storing those special characters in the database?
Any help would be highly appreciated.
Convert them to binary and then store them in a BFILE.

Sybase ASE 12.5 database: Arabic data shown in Latin letters

Good day,
I have a Sybase ASE 12.5 database on a Windows NT server.
The database's default character set is CP850.
I'm trying to connect to it using "TOAD for Sybase", which is on my Windows 7 machine.
Whatever character set I choose for TOAD (utf8, cp1256, ...), the data is shown in Latin letters instead of Arabic.
I tried disabling the server character set conversion and disabling the client-side conversion, but still no luck.
Any ideas how to solve this?
CP850 is the character set for Western Europe, so that would explain the Latin letters. If the character set used by the client does not match what is used on the server, it defaults to English.
You need to change the character set of the server to match what you wish to use on the client, or install the UTF character set on the server to allow Unicode use.
The Sybase ASE documentation explains the details of character sets.
The problem was in the server itself; it was corrupted during cloning.
Thanks for all the answers.

Incorrect encoding using JRuby with DataMapper

I'm trying to get data from a SQL Server 2008 R2 instance using JRuby and DataMapper.
The only problem I've hit so far is getting the character encoding right in JRuby.
The database uses the Polish_CI_AS collation, and the test field is populated with: "ą ę ś ć".
Fetching that field from within JRuby results in: "\uFFFD \uFFFD \uFFFD \uFFFD", which is the replacement character a UTF-8 decoder substitutes for bytes it cannot decode.
I've tried setting the -E option to windows-1250; it changes the characters displayed, but they come out just as wrong as with UTF-8. I also tried including # encoding: Windows-1250, but it doesn't help either.
I'm pretty sure it has something to do with DataMapper or the DB connection, but JDBC does not (AFAIK) support encoding variables.
UPDATE
My connection string: DataMapper.setup(:default, 'sqlserver://servername/database;instance=InstanceName;domain=DOMAIN')
The connection works well with the MS JDBC driver; DataMapper uses jTDS, which uses UTF-8 encoding by default.
I checked the jTDS documentation and found that I needed to add the charset=cp1250 property at the end of my connection string. It all works well now.
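For reference, combining that property with the setup line above gives something like this (same placeholder server, instance, and domain names as before):
DataMapper.setup(:default, 'sqlserver://servername/database;instance=InstanceName;domain=DOMAIN;charset=cp1250')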

SQL Server 2000 charset issues

Once again with the charset issues when talking to DBs :)
I have two environments running Zend Server. Both of these communicate with a SQL Server 2000 using the mssql extension. Neither has a charset configured in the extension's settings. For one it works, and for the other it returns data in the wrong encoding.
The problem became noticeable when this data was being inserted into a MySQL database, which failed with SQLSTATE[HY000]: General error: 1366 Incorrect string value: '\xF6m' for column 'cust_lastname' at row 1.
I tried using SET NAMES utf8 to get the SQL Server connection to return the correct data, but it complains that NAMES is not a recognized SET statement. Looking around, most people even recommend using this, but it doesn't seem to be part of SQL Server 2000 :)
So, what should I do? How do I, WITHOUT fiddling with the SQL Server database/tables, tell it to send me the data in UTF-8 encoded format?
EDIT:
Some more info...
SQL Server uses the Finnish_Swedish_CI_AS collation
MySQL has every table in UTF-8 format and uses utf8_unicode_ci
I didn't find a good solution and ended up converting to and from UTF-8 in my application. If this is encapsulated within a class, it doesn't riddle the code. But a way to actually tell SQL Server which encoding to use during communication would be better.
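For illustration, the same conversion sketched in C++ (the real implementation here would live in the PHP application layer; code page 1252 is an assumption based on the Finnish_Swedish_CI_AS collation, under which the '\xF6' from the error message is 'ö'):
#include <windows.h>
#include <string>

// Re-encode cp1252 bytes coming from SQL Server 2000 into UTF-8 before
// handing them to MySQL; after conversion, "\xF6m" ("öm") becomes the
// valid UTF-8 sequence "\xC3\xB6m". Fixed buffer sizes for brevity.
std::string Cp1252ToUtf8(const char* src)
{
    wchar_t wide[512];
    char utf8[1024];
    MultiByteToWideChar(1252, 0, src, -1, wide, 512);                   // cp1252 -> UTF-16
    WideCharToMultiByte(CP_UTF8, 0, wide, -1, utf8, 1024, NULL, NULL);  // UTF-16 -> UTF-8
    return utf8;
}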
