OK, I have two machines. When I open SSMS on the first machine and write queries in a .sql file, Cyrillic works. When I transfer the same .sql file to the other machine, the Cyrillic text looks like "Âàëåðè". If this is an encoding problem, how do I configure both machines to use the same encoding? How do I fix it?
Did you look (in SSMS) under Tools > Options > Environment > International Settings to see whether there are differences there? If the machines have different Windows-level options enabled this won't help you, since that's a Windows setting, but you can change the SSMS setting here to follow Windows: choose "Same as Microsoft Windows". I'm looking at this in SQL Server 2014, but I'm fairly sure it's in the same place going back a few editions.
Try saving your file as Unicode:
"Save As" -> "Save with Encoding..." -> "Unicode - Codepage 1200"
I'm running SQL Server 2016 SP2 on my machine.
Every BULK INSERT I try gives me the error:
The code page 65001 is not supported by the server.
Even when I try to BULK INSERT from a file that doesn't exist, I get the same error.
The source file is encoded as UCS-2 LE BOM.
The same issue happened in my environment, even when the file didn't exist! I found that Panagiotis Kanavos' comment and the linked answer solved my case.
You may try unchecking the following option:
Control Panel -> Region -> Administrative -> Change system locale...
"Beta: Use Unicode UTF-8 for worldwide language support"
My environment:
Windows 10, SQL Server 2016 SP2 (13.0.5102.14)
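If you'd rather not change the Windows locale, another workaround that may fit a UCS-2 LE BOM source file is to let BULK INSERT read it as UTF-16 instead of asking for code page 65001. A minimal sketch; the table name, file path, and terminators below are placeholders:

    -- DATAFILETYPE = 'widechar' tells BULK INSERT the data file is UTF-16 (UCS-2),
    -- so the unsupported CODEPAGE = '65001' option is not needed at all.
    BULK INSERT dbo.TargetTable
    FROM 'C:\data\import.txt'
    WITH (
        DATAFILETYPE = 'widechar',  -- UTF-16 / UCS-2 source file
        FIELDTERMINATOR = '\t',     -- adjust to your file's delimiter
        ROWTERMINATOR = '\n',
        FIRSTROW = 1
    );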
I'm trying to insert a new row containing text with special characters, such as an em dash (—), into a database. When I do this manually in SSMS it works fine, but when I commit the script in my version control tool (GitHub Desktop), these symbols show up as �. In Visual Studio the special characters also display normally. What should I do so that I can add the script properly and so that it could be executed against any SQL Server 2016 database?
How my changes appear in GitHub Desktop:
It turns out the behavior was caused by the encoding of the *.sql file containing my script with special characters. The file was saved as UTF-8, while it should be saved as UTF-8 with BOM for those characters to be displayed correctly.
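Separately from the file encoding, it is worth double-checking that Unicode literals in the script carry the N prefix, so the dash also survives execution against databases with different default code pages. A minimal sketch with made-up table and column names:

    -- Hypothetical table and column; the N prefix makes the literal nvarchar,
    -- so the em dash is not squeezed through the database's varchar code page.
    INSERT INTO dbo.Notes (NoteText)
    VALUES (N'Pages 10—20');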
When I save .sql files from SQL Server Management Studio into a local Windows folder, they seem to include some binary characters that make AccuRev comparisons impossible. I looked for relevant save options but couldn't find any. Any suggestions, please?
If you can't tell AccuRev to handle these as UTF-8 files (this sucks; these days, all software should really know about UTF-8 and handle it correctly!), then you might need to do something in SQL Server Management Studio instead.
When you have a SQL statement open and you click on "File > Save", in the "Save" dialog, there is a little down-arrow to the right of the Save button:
If you click that (instead of just clicking the button itself), you can select "Save with Encoding", which lets you pick the encoding to use for your files. Pick something like Windows-1252 (Western European); that will not put any byte-order-mark (BOM) bytes at the start of the file.
AccuRev does handle UTF-8 character encoding. However, older versions may not have that capability.
Make sure that the file is being saved using UTF-8. Anything else will have binary content and should be typed as such.
When you save SQL files from MS SQL Server Management Studio as Unicode (the default), it puts an FF FE byte-order mark at the front of the file, which forces some programs to treat it as binary. Saving as ANSI solved it for me: choose "Save as ANSI Text".
I have an MS SQL database that contains Unicode (UTF-8) data. My workstation runs Linux (currently Ubuntu), and while looking for a tool to work with the MSSQL database I found SQSH.
The problem is that when I select data in the sqsh console I get gibberish instead of Unicode characters. Using the switch "-J utf8" or "-J utf-8" didn't change anything.
The question is: how do I set up sqsh to work with UTF-8 data?
If it is not possible, do you know any alternative Linux tools for working with MSSQL databases filled with UTF-8 data? I need to execute all kinds of T-SQL, run previously prepared SQL script files, and pipe out results for later processing. A good open-source GUI would also work; I'm not limited to shell clients.
Are you using FreeTDS with sqsh? If you are, edit your freetds.conf to set the charset.
http://www.freetds.org/userguide/localization.htm
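As a rough sketch, the relevant freetds.conf entry could look like the following; the entry name, host, and TDS version are placeholders, and the line that matters here is client charset:

    # placeholder server entry; adjust the name, host, and TDS version to your setup
    [mymssql]
        host = sqlserver.example.com
        port = 1433
        tds version = 7.4
        # have FreeTDS convert between the server's charset and UTF-8 on the client
        client charset = UTF-8

With that in place, connecting through this entry (e.g. sqsh -S mymssql) should send and receive UTF-8.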
Use Azure Data Studio to avoid this kind of data troubleshooting issue; it is a great SSMS alternative for Linux.
If you need a command-line tool, I suggest using the official sqlcmd from mssql-tools.
It is available for all major Linux distributions, including Ubuntu.
Connecting with sqlcmd
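For example, assuming placeholder server, database, and credentials:

    # run an ad-hoc query
    sqlcmd -S myserver.example.com -d MyDatabase -U myuser -P 'MyPassword' -Q "SELECT @@VERSION"

    # run a previously prepared script file and write the results to a file
    sqlcmd -S myserver.example.com -d MyDatabase -U myuser -P 'MyPassword' -i script.sql -o results.txt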
Another shell tool is mssql-cli.
Features
mssql-cli is a new and interactive command-line tool that provides the following key enhancements over sqlcmd in the terminal environment:
T-SQL IntelliSense
Syntax highlighting
Pretty formatting for query results, including Vertical Format
Multi-line edit mode
Configuration file support
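Installation and connection look roughly like this; the server, database, and user name are placeholders, and the password is prompted interactively:

    # mssql-cli is distributed as a Python package
    pip install mssql-cli

    # connect to a server and database
    mssql-cli -S myserver.example.com -d MyDatabase -U myuser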
I had the same problem, and it turned out to have nothing to do with character encoding: the problem was that there were control (unprintable) characters in the script.
I removed them from the SQL script and everything works fine.
When I go to File > Open > File and select a .sql script, or even when I drag a .sql file into the SQL Management Studio Express window, it opens the script in Notepad which is totally useless when I want to run the script.
Since this is on an external server (Windows 2003 Server), I end up having to disconnect from RDP, disable the local clipboard, re-connect and then copy-paste the script's contents from Textpad in order to run it.
I've checked the options menus but can't see anything relating to Notepad, not even in the "external tools" section. Any ideas why it would be doing this?
Please note: I have checked the file association for SQL scripts and it is set to SQL Management Studio Express.
Ran into this this morning. Turned out to be an encoding issue for me. I opened the script up in UltraEdit and I noticed that it was showing the encoding to be U-DOS instead of DOS. I ran the Unicode to ASCII conversion (also in UltraEdit), saved the file, and now Management Studio is opening up the files correctly.
I encountered this too - thanks NFrank for spotting the issue:
This was caused by opening the script in TextPad and accidentally saving as Unicode. The issue is not related to file associations.
The solution: open the file in Notepad (or TextPad), choose Save As..., and select Encoding: ANSI.
UPDATE:
In SQL Management Studio:
Go to File > Open > File.
Highlight a SQL file.
Click the down arrow on the Open button.
Select Open With...
Select SQL Query Editor.
Press the Set as Default button.
The first thing I'd check is to see if the application associated with SQL files on that box is Notepad.