'LIKE' keyword is not working in sql server 2008 - sql-server

I have bulk data in a SQL Server table. One of the fields contains the following data:
'(اے انسان!) کیا تو نہیں جانتا)'
Tried:
SELECT * from Ayyat where Data like '%انسان%' ;
but it returns no results.

Please prefix the string with N if the text is not English:
SELECT * from Ayyat where Data like N'%انسان%' ;
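The reason the non-prefixed literal fails can be seen outside SQL Server: without N'...', the literal is coerced through the database's default code page, and characters outside that code page collapse to '?'. A small Python sketch of that coercion (a Latin code page, cp1252, is assumed here for illustration):

```python
# Model of what happens to a non-N'...' literal under a Latin collation:
# the Unicode text is forced through a single-byte code page (cp1252 here),
# and Arabic letters, having no mapping, are replaced with '?'.
literal = "انسان"
coerced = literal.encode("cp1252", errors="replace").decode("cp1252")
print(coerced)  # '?????' - the pattern can no longer match any Arabic text
```

So the LIKE pattern silently becomes '%?????%', which explains the empty result set.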

If you're storing Urdu, Arabic, or any other non-English language in your database, the column has to be able to hold Unicode. Note that the commonly quoted conversion commands
ALTER DATABASE database_name CHARACTER SET utf8 COLLATE utf8_general_ci
ALTER TABLE table_name CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci
are MySQL syntax and do not exist in SQL Server. In SQL Server 2008, change the column to a Unicode type instead:
ALTER TABLE Ayyat ALTER COLUMN Data nvarchar(4000);
After that, the normal query works, as long as the literal carries the N prefix:
SELECT * FROM Ayyat WHERE Data LIKE N'%انسان%'
Note: if you leave the column as varchar under a non-Arabic collation, characters outside that collation's code page are converted to question marks at insert time, and no later query can recover them.

Related

Kurdish Sorani Letters sql server

I am trying to create a database containing Kurdish Sorani letters.
My database fields have to be varchar, because the project was started that way.
First I created the database with the Arabic_CI_AS collation.
I can store all Arabic letters in varchar fields, but Kurdish letters, for example
ڕۆ, show up as ?? in the table after entering data. I think my collation is wrong. Does anybody have an idea for a collation?
With that collation, no, you need to use nvarchar and always prefix such strings with the N prefix:
CREATE TABLE dbo.floo
(
UseNPrefix bit,
a varchar(32) collate Arabic_CI_AS,
b nvarchar(32) collate Arabic_CI_AS
);
INSERT dbo.floo(UseNPrefix,a,b) VALUES(0,'ڕۆ','ڕۆ');
INSERT dbo.floo(UseNPrefix,a,b) VALUES(1,N'ڕۆ',N'ڕۆ');
SELECT * FROM dbo.floo;
Output:
UseNPrefix  a   b
----------  --  --
False       ??  ??
True        ??  ڕۆ
Example db<>fiddle
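The ?? in column a can be reproduced outside SQL Server: the Arabic_CI_AS collation stores varchar data in code page 1256 (Windows Arabic), which has no slots for the Kurdish letters ڕ (U+0695) and ۆ (U+06C6). A small Python illustration of that code-page round trip:

```python
# cp1256 is the Windows Arabic code page that Arabic_* collations use for
# varchar data; it lacks the Kurdish-specific letters, so they degrade to '?'.
kurdish = "ڕۆ"
stored = kurdish.encode("cp1256", errors="replace").decode("cp1256")
print(stored)  # '??'
```

This is exactly the lossy conversion the varchar column performs on insert, which is why no collation choice can bring the data back afterwards.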
In SQL Server 2019, you can use a different SC + UTF-8 collation with varchar, but you will still need to prefix string literals with N to prevent data from being lost:
CREATE TABLE dbo.floo
(
UseNPrefix bit,
a varchar(32) collate Arabic_100_CI_AS_KS_SC_UTF8,
b nvarchar(32) collate Arabic_100_CI_AS_KS_SC_UTF8
);
INSERT dbo.floo(UseNPrefix,a,b) VALUES(0,'ڕۆ','ڕۆ');
INSERT dbo.floo(UseNPrefix,a,b) VALUES(1,N'ڕۆ',N'ڕۆ');
SELECT * FROM dbo.floo;
Output:
UseNPrefix  a   b
----------  --  --
False       ??  ??
True        ڕۆ  ڕۆ
Example db<>fiddle
Basically, even if you are on SQL Server 2019, your requirements of "I need to store Sorani" and "I can't change the table" are incompatible. You will need to either change the data type of the column or at least change the collation, and you will need to adjust any code that expects to pass this data to SQL Server without an N prefix on strings.

How to identify the phonetic alphabets in MS SQL database tables

We inserted more than 100,000 records through the Import Flat File functionality in SQL Server Management Studio. The import completed successfully.
But some of the column values contained characters like é and ö.
These got converted while being stored in the SQL column, for all of the characters above (ö, é).
Moreover, the SQL statement below gives no results:
select * from Temp where column1 like '%%'
The data with these characters is displayed in the tables with a symbol (a question mark in a diamond).
Please help: how can I insert the data keeping the phonetic symbols intact?
Your data contains characters like é and ö, but when you look in the database, '?' is stored instead, right?
The encoding used for the column or the import probably does not support those characters. Note that "character set: utf8 / collation: utf8_general_ci" is MySQL configuration and does not apply to SQL Server. In SQL Server, store such data in an nvarchar column (or use a UTF-8 collation on SQL Server 2019+), and make sure the flat-file import is told the file's actual encoding.
Hope this helps, my friend :))
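Two different failure modes can produce these symptoms, and it helps to separate them. A Python sketch (assuming cp1252 as the Latin code page the import used):

```python
# é and ö do exist in the Latin code page (cp1252), so a plain code-page
# round trip keeps them intact - the column itself is not the problem:
ok = "éö".encode("cp1252").decode("cp1252")
print(ok)  # 'éö'

# The "question mark in a diamond" (U+FFFD) appears when bytes written in
# one encoding are decoded as another - e.g. cp1252 bytes read as UTF-8:
raw = "éö".encode("cp1252")                     # b'\xe9\xf6'
broken = raw.decode("utf-8", errors="replace")  # two replacement characters
print(broken)
```

This suggests the flat file's declared encoding did not match its real encoding during import; re-importing with the correct encoding selected should keep é and ö intact.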
SELECT *
FROM Temp
WHERE SOUNDEX(column1) LIKE SOUNDEX('A')

Missing nvarchar columns when reading SQL Server database table from Oracle

I have a SQL Server database with a table that has a column of nvarchar(4000) data type. When I try to read the data from Oracle through a dblink, I don't see the nvarchar(4000) column. All the other column's data is displayed properly.
Can anyone help me to find the issue here and how to fix it?
Appendix A-1 ...
ODBC              Oracle    Comment
SQL_WCHAR         NCHAR     -
SQL_WVARCHAR      NVARCHAR  -
SQL_WLONGVARCHAR  LONG      only if the Oracle DB character set is Unicode; otherwise not supported
Commonly nvarchar(max) is mapped to SQL_WLONGVARCHAR and this data type can only be mapped to Oracle if the Oracle database character set is unicode.
To check the database character set, please execute:
select * from nls_parameters;
and have a look at: NLS_CHARACTERSET
UPDATE
NLS_CHARACTERSET needs to be a Unicode character set - for example AL32UTF8. (Change this only if you know what you are doing, or ask your DBA to do it.)
The NCHAR character set isn't used here, because the mapping is to Oracle LONG, which uses the normal database character set.
A second solution would be to create, on the SQL Server side, a view that splits the nvarchar(max) into several nvarchar(xxx) columns, then select from that view and concatenate the content again in Oracle. (If changing the character set to Unicode is a problem for you, this approach is the best way to go.)
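The split-view workaround relies on nothing more than fixed-size slicing plus re-concatenation (the SQL Server view would do the slicing with SUBSTRING, and Oracle would rejoin the pieces with ||). A minimal Python sketch of the idea, with a hypothetical 2000-character slice size:

```python
# Sketch of the view workaround: slice a long value into fixed-size pieces
# (what the view's SUBSTRING columns would return), then concatenate them
# back on the consumer side (what Oracle's || operator would do).
def split_chunks(value, size=2000):
    return [value[i:i + size] for i in range(0, len(value), size)]

long_text = "x" * 4500
pieces = split_chunks(long_text)
print([len(p) for p in pieces])  # [2000, 2000, 500]
rejoined = "".join(pieces)       # lossless round trip
```

Because each slice fits in a plain nvarchar(xxx), every piece maps cleanly through the ODBC gateway, and nothing is lost in the rejoin.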

Inserting Unicode character using asp.net mvc

I have a database field of nvarchar(30). I am using ASP.NET MVC. When I insert the record in Unicode, I get ?????.
Can anyone tell me how I can convert a string to Unicode and insert it into the database?
I am using SQL Server 2008 R2.
Try to change your database collation to Latin1_General_BIN2.
http://msdn.microsoft.com/en-us/library/ms175835.aspx
Make sure:
You use N' at the start of string literals containing such strings, e.g. N'enović'
If you want to query and ignore accents, then you can add a COLLATE clause to your select. E.g.:
SELECT * FROM Account
WHERE Name = 'enovic' COLLATE Latin1_General_CI_AI
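What an accent-insensitive (_AI) collation does can be approximated in Python with Unicode normalization: decompose each character, drop the combining marks, then compare. This is a sketch of the idea, not SQL Server's actual comparison algorithm:

```python
import unicodedata

def strip_accents(text):
    # NFD splits 'ć' into 'c' + a combining acute accent; dropping the
    # combining marks leaves only base letters, mimicking an _AI match.
    return "".join(ch for ch in unicodedata.normalize("NFD", text)
                   if not unicodedata.combining(ch))

print(strip_accents("enović"))  # 'enovic'
```

So under the COLLATE Latin1_General_CI_AI comparison above, 'enović' and 'enovic' compare as equal.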

2 different collations conflict when merging tables with Sql Server?

I have DB1, which has a Hebrew collation.
I also have DB2, which has a Latin general collation.
I was asked to merge a table (write a query) between DB1.dbo.tbl1 and DB2.dbo.tbl2
I could write in the query
insert into ...SELECT Col1 COLLATE Latin1_General_CI_AS...
But I'm tired of doing that.
I want to give both databases/tables the same collation so I don't have to write COLLATE every time...
The question is -
Should I convert Latin -> Hebrew or Hebrew -> Latin?
We need to store text from every language (and all our text columns are nvarchar(x)).
And if so, how do I do it?
If the destination columns use Unicode data types - nvarchar(x) - then you can omit COLLATE in the INSERT. SQL Server converts the data from your source collation to Unicode automatically, so you don't need to convert anything when inserting into an nvarchar column.
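The reason nvarchar targets make the explicit COLLATE unnecessary is that both source code pages decode losslessly into one common Unicode representation. A Python sketch of the same idea (cp1255 and cp1252 assumed as the Hebrew and Latin code pages):

```python
# Hebrew (cp1255) and Latin (cp1252) varchar data live in incompatible
# single-byte code pages, but each decodes losslessly into Unicode,
# which is what an nvarchar column stores.
hebrew_bytes = "שלום".encode("cp1255")  # Hebrew code page
latin_bytes = "café".encode("cp1252")   # Latin code page

merged = [hebrew_bytes.decode("cp1255"), latin_bytes.decode("cp1252")]
print(merged)  # both strings intact in one Unicode container
```

The conversion to the common representation is exactly what SQL Server does implicitly when inserting into nvarchar, so no per-query COLLATE is needed.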
