We're running SQL 6.5 through ADO and we have the oddest problem.
This statement will start generating deadlocks:
insert clinical_notes ( NOTE_ID, CLIENT, MBR_ID, EPISODE, NOTE_DATE_TIME,
NOTE_TEXT, DEI, CARE_MGR, RELATED_EVT_ID, SERIES, EAP_CASE, TRIAGE, CATEGORY,
APPOINTMENT, PROVIDER_ID, PROVIDER_NAME )
VALUES ( 'NTPR3178042', 'HUMANA/PR', '999999999_001', 'EPPR915347',
'03-28-2011 11:25', 'We use á, é, í, ó, ú and ü (this is the least one we
use, but there''s a few words with it, like the city: Mayagüez).', 'APK', 'APK',
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
The trigger is the characters ú and ü appearing in the NOTE_TEXT column.
NOTE_TEXT is a text column.
There are indexes on
UNC_not_id
NT_CT_MBR_NDX
NT_REL_EVT_NDX
NT_SERIES_NDX
idx_clinical_notes_date_time
nt_ep_idx
NOTE_ID is the primary key.
What happens is that after we issue this statement, if we issue an identical one with a new NOTE_ID value, we receive the deadlock.
As mentioned, this only happens when ú or ü is in NOTE_TEXT.
This is a test server and there is generally only one session accessing this table when the error occurs.
I'm sure it has something to do with character sets and such, but for the life of me I can't work it out.
Is the column (var)char-based or n(var)char-based? Are the values Unicode code points above 255, or ASCII/extended-ASCII values of 255 or below (ú and ü are 250 and 252)?
Try changing the column to a binary collation, just to see if that helps (it may shed light on the issue). I do NOT know if this works in SQL 2000 (though I can check on Monday), but you can try this to find what collations are available on your server:
SELECT * FROM ::fn_helpcollations()
Latin1_General_BIN should be in there somewhere.
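To narrow that list down to just the binary collations, you can filter the function's output (this is a sketch using the `::fn_helpcollations()` syntax shown above; the result set has `name` and `description` columns):

```sql
-- List only binary collations; the Latin1_General binary ones
-- should show up here if they exist on your server.
SELECT name, description
FROM ::fn_helpcollations()
WHERE name LIKE '%BIN%'
ORDER BY name;
```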
Assuming you find a collation to try, you change the collation like so:
ALTER TABLE TableName ALTER COLUMN ColumnName varchar(8000) NOT NULL COLLATE Collation_Name_Here
Script out your table to learn the collation it's using now so you can set it back if that doesn't work or causes problems. Or use a backup. :)
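If you'd rather not script out the whole table, you can also read the current collation straight from the metadata views (a sketch; the table name is taken from the question, and INFORMATION_SCHEMA.COLUMNS is standard):

```sql
-- Record each character column's current collation so you can restore it later
SELECT COLUMN_NAME, DATA_TYPE, COLLATION_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'clinical_notes'
  AND COLLATION_NAME IS NOT NULL;
```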
One additional note is that if you're using unicode you do need an N before literal strings, for example:
SELECT N'String'
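A quick way to see why the prefix matters: without N the literal is a varchar and gets interpreted through the database's default code page before it ever reaches the column; with N it is an nvarchar and stays Unicode. This is only a sketch, and the exact output of the unprefixed literal depends on your server's default collation:

```sql
-- Without N: varchar literal, converted via the default code page first.
-- With N: nvarchar literal, stored as Unicode unchanged.
SELECT 'Mayagüez'  AS NoPrefix,
       N'Mayagüez' AS WithPrefix;
```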
Related
I am trying to create a database containing Kurdish Sorani Letters.
My database fields have to be varchar because the project was started that way.
First I created the database with the Arabic_CI_AS collation.
I can store all Arabic letters in the varchar fields, but when it comes to Kurdish letters, for example
ڕۆ, these special letters show as ?? in the table after entering the data. I think my collation is wrong. Does anybody have an idea for a collation?
With that collation, no, you need to use nvarchar and always prefix such strings with the N prefix:
CREATE TABLE dbo.floo
(
UseNPrefix bit,
a varchar(32) collate Arabic_CI_AS,
b nvarchar(32) collate Arabic_CI_AS
);
INSERT dbo.floo(UseNPrefix,a,b) VALUES(0,'ڕۆ','ڕۆ');
INSERT dbo.floo(UseNPrefix,a,b) VALUES(1,N'ڕۆ',N'ڕۆ');
SELECT * FROM dbo.floo;
Output:
UseNPrefix | a  | b
-----------+----+----
False      | ?? | ??
True       | ?? | ڕۆ
Example db<>fiddle
In SQL Server 2019, you can use a different SC + UTF-8 collation with varchar, but you will still need to prefix string literals with N to prevent data from being lost:
CREATE TABLE dbo.floo
(
UseNPrefix bit,
a varchar(32) collate Arabic_100_CI_AS_KS_SC_UTF8,
b nvarchar(32) collate Arabic_100_CI_AS_KS_SC_UTF8
);
INSERT dbo.floo(UseNPrefix,a,b) VALUES(0,'ڕۆ','ڕۆ');
INSERT dbo.floo(UseNPrefix,a,b) VALUES(1,N'ڕۆ',N'ڕۆ');
SELECT * FROM dbo.floo;
Output:
UseNPrefix | a  | b
-----------+----+----
False      | ?? | ??
True       | ڕۆ | ڕۆ
Example db<>fiddle
Basically, even if you are on SQL Server 2019, your requirements of "I need to store Sorani" and "I can't change the table" are incompatible. You will need to either change the data type of the column or at least change the collation, and you will need to adjust any code that expects to pass this data to SQL Server without an N prefix on strings.
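If changing the table does turn out to be an option, a minimal sketch of the two routes (table and column names here are hypothetical) would be:

```sql
-- Option 1 (any supported version): widen the column to nvarchar
ALTER TABLE dbo.YourTable ALTER COLUMN YourColumn nvarchar(32);

-- Option 2 (SQL Server 2019+): keep varchar but switch to a UTF-8 collation;
-- note that UTF-8 can need more bytes per character, so size accordingly.
ALTER TABLE dbo.YourTable
    ALTER COLUMN YourColumn varchar(64) COLLATE Arabic_100_CI_AS_KS_SC_UTF8;
```

Either way, the N prefix on string literals is still required for the data to survive the trip from client to server.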
Using Microsoft SQL Server Management Studio. I am having an issue with inserting some characters into a database.
I read that you have to add a collate to the column to allow for some characters.
I need some characters from the Czech language, so I added the Czech collation (Czech_100_CI_AS) to the column, but then some French characters were removed and can no longer be entered.
Can I not have multiple collations on a column? That seems like it would be a weird limitation.
I tried this with a comma, but it gives an error on the comma:
ALTER TABLE dbo.TestingNames
ALTER COLUMN NameTesting VARCHAR(50) COLLATE Czech_100_CI_AS, French_CS_AS
Edit:
Ah, I misunderstood what collate was meant to do. I didn't realize it selected a code page; I thought it was just an include.
Thanks, changing it to nvarchar seems to have worked :) I actually thought I was already using nvarchar /facepalm. Thank you for pointing that out to me.
I was a little surprised that I was able to store a Ukrainian string in a varchar column.
My table is:
create table delete_collation
(
text1 varchar(100) collate SQL_Ukrainian_CP1251_CI_AS
)
and using this query I am able to insert:
insert into delete_collation
values(N'використовується для вирішення квитки')
but when I remove the 'N' it shows ?????? in the select statement.
Is this okay, or am I missing something in my understanding of Unicode and non-Unicode with collate?
From MSDN:
Prefix Unicode character string constants with the letter N. Without
the N prefix, the string is converted to the default code page of the
database. This default code page may not recognize certain characters.
UPDATE:
Please see similar questions:
What is the meaning of the prefix N in T-SQL statements?
Cyrillic symbols in SQL code are not correctly after insert
sql server 2012 express do not understand Russian letters
To expand on MegaTron's answer:
Using collate SQL_Ukrainian_CP1251_CI_AS, SQL server is able to store ukrainian characters in a varchar column by using CodePage 1251.
However, when you specify a string without the N prefix, that string will be converted to the default non-unicode codepage before it is sent to the database, and that is why you see ??????.
So it is completely fine to use varchar and collate as you do, but you must always include the N prefix when sending strings to the database, to avoid the intermediate conversion to default (non-ukrainian) codepage.
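You can watch the two conversion paths side by side against the table from the question (a sketch; the first insert's result assumes, as in the question, that the database's default code page is not Cyrillic):

```sql
-- Without N: the literal goes through the default code page first
-- and the Cyrillic characters are lost before reaching the column.
INSERT INTO delete_collation VALUES ('використовується');

-- With N: the literal stays Unicode until it lands in the CP1251
-- column, whose code page can represent it, so it survives.
INSERT INTO delete_collation VALUES (N'використовується');

SELECT text1 FROM delete_collation;
```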
I want to use a non-European language collation on every column of a specific table. So:
ALTER TABLE dbo.MyTable
ALTER COLUMN MyColumn varchar(10) COLLATE Latin1_General_CI_AS NOT NULL;
But the Ç and Ü characters still stand in the cells.
I wish to change them to C and U.
How can I do this?
Latin-1 contains Ç (199) and Ü (220). Even if it didn't, it would still contain characters with value 199 and 220 - so you'd just see different tokens instead.
Frankly, if your data is now or could ever be non-ASCII, you'd do well to consider nvarchar(...) instead of varchar(...)
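Changing the collation never rewrites the stored characters. If the goal really is to fold Ç into C and Ü into U, an explicit replace is one straightforward sketch (table and column names taken from the question):

```sql
-- Explicitly fold the two accented characters mentioned in the question
UPDATE dbo.MyTable
SET MyColumn = REPLACE(REPLACE(MyColumn, 'Ç', 'C'), 'Ü', 'U');
```

Note that REPLACE is collation-aware: under a case-insensitive collation like Latin1_General_CI_AS this would also fold lowercase ç and ü, so append an explicit case-sensitive COLLATE clause if case must be preserved.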
I need to insert Chinese characters into my database but it always shows ????.
Example:
Insert this record.
微波室外单元-Apple
Then it became ???
Result:
??????-Apple
I really need help. Thanks in advance.
I am using MSSQL Server 2008
Make sure you specify a unicode string with a capital N when you insert like:
INSERT INTO Table1 (Col1) SELECT N'微波室外单元-Apple' AS [Col1]
and that Table1 (Col1) is an NVARCHAR data type.
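That is, the target table would need to look something like this (a hypothetical sketch; the table and column names are taken from the INSERT above):

```sql
-- Col1 must be a Unicode type (nvarchar) to hold Chinese text
CREATE TABLE Table1 (Col1 nvarchar(100));

-- The N prefix keeps the literal Unicode on the way in
INSERT INTO Table1 (Col1) SELECT N'微波室外单元-Apple' AS [Col1];
```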
Make sure the column you're inserting to is nchar, nvarchar, or ntext. If you insert a Unicode string into an ANSI column, you really will get question marks in the data.
Also, be careful to check that when you pull the data back out you're not just seeing a client display problem but are actually getting the question marks back:
SELECT Unicode(YourColumn), YourColumn FROM YourTable
Note that the Unicode function returns the code of only the first character in the string.
Once you've determined whether the column is really storing the data correctly, post back and we'll help you more.
Try adding the appropriate languages to your Windows locale settings. You'll have to make sure your development machine is set to display non-Unicode characters in the appropriate language.
And of course you need to use nvarchar for foreign-language fields.
Make sure that you have set the database's encoding to one that supports these characters. UTF-8 is the de facto standard encoding, as it is ASCII-compatible and supports all Unicode code points (up to U+10FFFF).
SELECT 'UPDATE table SET msg = UNISTR(''' || ASCIISTR(msg) || ''') WHERE id = ''' || id || ''';'
FROM table
WHERE id = '123344556';