Cyrillic characters in a LINQ to SQL database

I have a comma-separated file with Cyrillic characters. When I read from it using StreamReader, the characters are OK. I then write the data to a SQL Server database using LINQ to SQL, but the column with Cyrillic text ends up as ??????. The column is of type nvarchar. Has anybody had the same problem?

I think it is an encoding problem; maybe the database is not configured with a character set that supports Cyrillic.
Take a look here: http://dev.mysql.com/doc/refman/5.0/en/charset-cyrillic-sets.html

Try Encoding.UTF8 when constructing the reader:
var reader = new StreamReader("THE PATH", Encoding.UTF8);

You may have to change the collation of your database:
http://msdn.microsoft.com/en-us/library/ms180175.aspx
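Even with the right collation, the usual culprit is an implicit conversion to varchar somewhere in the pipeline. A minimal T-SQL sketch (table and column names are hypothetical) showing the difference:

-- Hypothetical table: one varchar column, one nvarchar column
CREATE TABLE #Names (
    NameVarchar  VARCHAR(50),   -- stored in the column collation's code page
    NameNvarchar NVARCHAR(50)   -- stored as Unicode
);

-- Without the N prefix the literal is varchar; under a non-Cyrillic
-- default collation the Cyrillic characters degrade to '?'.
INSERT INTO #Names VALUES ('Привет', N'Привет');

SELECT NameVarchar, NameNvarchar FROM #Names;
-- NameVarchar: ??????    NameNvarchar: Привет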

Related

My sql file contains nonsense content

I have an SQL file that was being used with SQL Server. My problem is that when I open the file, it contains nothing but nonsense, like this:
Iک}d9ے(csيIط)ص\‡)dٹ2_ئO§(µïu.=‹i’BWىأï¹1³ف™hىbژ/‚w&ح‚îِ†“zْàˆIQT)”edC¦e³´
¹ط!‎إ-Œ¤{^#.œ£HE÷ھ¸ٌqqœoة_`T‍kذم.°t/قژط½¾إé„t¾%م¶كىظq،f29ؤ
ھَl¤iٌ¾gi›¶¯ک3»«sـ°“c^r#^ـآg›çAچ/إ/±aNüط
Could you help me figure out how to fix it, please?
It might be binary data.
Check if there is any BLOB column.
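If you can get at the source database, a hedged way to check for binary columns, whose contents would look exactly like this when dumped as text, is a standard metadata query:

-- List columns whose types hold raw binary data (candidate BLOBs)
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('binary', 'varbinary', 'image');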

Convert varchar to hexadecimal in sql server

I want a function to convert a varchar to hexadecimal in SQL Server. Please help me.
P.S. I will use unhex() in Hive to try to get the original value back. This is because my data contains special characters and backslashes, and the Hive external table does not recognise them.
You can try the CONVERT function:
SELECT CONVERT(VARBINARY(MAX), 'myText')
Output:
0x6D7954657874
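If you need the hex digits as text (for example to feed Hive's unhex()), a further sketch: CONVERT with style 2 renders the binary as a hex string without the 0x prefix (style 1 keeps it), and the round trip back works the same way:

-- Binary to hex string (style 2 drops the '0x' prefix; style 1 keeps it)
SELECT CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), 'myText'), 2);
-- Output: 6D7954657874

-- And back: hex string to binary, then binary to the original varchar
SELECT CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), '6D7954657874', 2));
-- Output: myText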
You can use:
select try_convert(varbinary, varcharcolumn)
Use try_convert to get NULL instead of an error when a conversion fails. Also, varbinary(n) is a better alternative to varbinary(max) where the maximum length is known, since varbinary(max) allows values up to 2 GB, which might be excessive.
Hope this helps!

_x000D_ appearing when importing into SQL

I am importing some Excel spreadsheets into a MS SQL Server. I load the spreadsheets, cleanse the data and then export it to SQL using Alteryx. Some files have text columns where the cells span multiple lines (i.e. with new line characters, like when you press ALT + ENTER in Excel). When I export the tables to SQL and then query the table, I see lots of '_x000D_' which are not in the original file.
Is it some kind of newline character encoding? How do I get rid of it?
I haven't been able to replicate the error. The original file contains some letters with accents (à, á, etc.); I created multi-line spreadsheets with accented letters, but I managed to export these to SQL just fine, with no '_x000D_'.
If these were CSV files I would think of character encoding, but Excel spreadsheets? Any ideas? Thanks!
I know this is old, but: if you're using Alteryx, just run it through the "Data Cleansing" tool as the last thing prior to your export to SQL. For the field in question, tell the tool to remove new lines by checking the appropriate checkbox.
If that still doesn't work... _x000D_ is the XML escape for the carriage-return character, ASCII 13 (hex D = decimal 13). So try running your data through a regular Formula tool, and for the [field] in question, use the expression Replace([field], CharFromInt(13), ""), which removes that character by replacing it with the empty string.
This worked for me:
REGEX_REPLACE([field],"_x000D_","")
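If the literal '_x000D_' text has already landed in the table, a hedged clean-up on the SQL side (table and column names here are hypothetical) is a plain REPLACE, covering both the escaped text and any real carriage returns:

-- Strip the literal XML escape and any actual CR characters
UPDATE MyImportedTable
SET MyTextColumn = REPLACE(REPLACE(MyTextColumn, '_x000D_', ''), CHAR(13), '');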

How to read Arabic characters from varchar datatype?

I have an old system that uses the varchar datatype in its database to store Arabic names. The names appear in the database like this:
"ãíÓÇÁ ÇáãÈíÖíä"
Now I am building a new system using VB.NET, how can I read these names to appear in Arabic characters?
I should also point out that even though the old system stores the data as shown above, it displays the characters correctly.
How can I display them properly in the new system and in SQL Server Management Studio?
Have you tried nvarchar? You may find some useful information at the link below:
When must we use NVARCHAR/NCHAR instead of VARCHAR/CHAR in SQL Server?
I faced the same problem, and I solved it in two steps:
1. Change the datatype of the column in the DB to nvarchar.
2. Use encoding to convert the existing data to Arabic.
I used the following function (requires using System.Text;):
private string GetDataWithArabic(string srcData)
{
    // Recover the raw bytes: the old system stored ANSI bytes that were
    // later misread as Latin-1, and iso-8859-1 maps characters 0-255
    // straight back to their original byte values.
    Encoding iso = Encoding.GetEncoding("iso-8859-1");
    byte[] originalBytes = iso.GetBytes(srcData);

    // Decode those bytes with the machine's ANSI code page. This assumes
    // that code page is Arabic (windows-1256); to be explicit, use
    // Encoding.GetEncoding("windows-1256") instead of Encoding.Default.
    Encoding ansi = Encoding.Default;
    return ansi.GetString(originalBytes);
}
But make sure you apply this method to the data only once, because applying it twice will corrupt the data.
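Alternatively, the same re-interpretation can be done on the server side. A hedged T-SQL sketch (table and column names are hypothetical, and it assumes the stored bytes really are windows-1256): cast to varbinary to keep the raw bytes, relabel them with an Arabic collation, then widen to nvarchar:

SELECT CAST(
           CAST(CAST(NameColumn AS VARBINARY(200)) AS VARCHAR(200))
           COLLATE Arabic_CI_AS          -- reinterpret the bytes as code page 1256
       AS NVARCHAR(200)) AS FixedName
FROM OldTable;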
I think your answer is here: "storing and retrieving non english characters" http://aalamrangi.wordpress.com/2012/05/13/storing-and-retrieving-non-english-unicode-characters-hindi-czech-arabic-etc-in-sql-server/

How to export data from an ancient SQL Anywhere?

I'm tasked with exporting data from an old application that uses SQL Anywhere, apparently version 5, maybe 5.6. I have never worked with this database before, so I'm not sure where to start. Does anybody have a hint?
I'd like to export it in more or less any text representation that I can then work with. Thanks.
I ended up exporting the data by using isql and these commands (where #{table} is each of the tables, a list I built manually):
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.csv" FORMAT ASCII DELIMITED BY ',' QUOTE '"' ALL;
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.txt" FORMAT TEXT;
I used the CSV to import the data itself and the TXT to pick up the field names (parsing only the first line). The TXT can become rather huge if you have a lot of data.
Have a read: http://www.lansa.com/support/tips/t0220.htm
