While saving .sql files from SQL Server Management Studio into my local Windows folder, it seems to be including some binary characters, making AccuRev comparisons impossible. I looked for possible save options and couldn't find any. Any suggestions, please?
If you can't tell AccuRev to handle these files as UTF-8 (this sucks - these days, all software should really know about UTF-8 and handle it correctly!), then you might need to do something in SQL Server Management Studio instead.
When you have a SQL statement open and you click on "File > Save", in the "Save" dialog, there is a little down-arrow to the right of the Save button:
If you click that (instead of just clicking the button itself), you can select "Save with Encoding", which allows you to pick what encoding to use for your files - pick something like Windows-1252 Western European, which puts no byte-order mark (BOM) bytes at the start:
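If you already have a folder full of .sql files saved with a BOM, re-saving them one by one gets old fast. Here is a minimal Python sketch that batch-converts them to Windows-1252 (the folder name "scripts" is my assumption; adjust as needed):

import pathlib

for path in pathlib.Path("scripts").glob("*.sql"):  # hypothetical folder
    raw = path.read_bytes()
    if raw.startswith(b"\xff\xfe"):           # UTF-16 LE BOM (SSMS "Unicode")
        text = raw.decode("utf-16")
    elif raw.startswith(b"\xef\xbb\xbf"):     # UTF-8 BOM
        text = raw.decode("utf-8-sig")
    else:
        continue  # no BOM - leave the file alone
    # Characters outside cp1252 will raise UnicodeEncodeError here,
    # which is better than silently corrupting them.
    path.write_bytes(text.encode("cp1252"))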
AccuRev does handle UTF-8 character encoding. However, older versions may not have that capability.
Make sure that the file is being saved using UTF-8. Anything else will have binary content and should be typed as such.
When you export .sql files from MS SQL Server Management Studio in Unicode (the default), it puts an FF FE byte-order mark (the UTF-16 LE BOM) at the front of the file, which forces some programs to treat it as binary. Exporting as ANSI solved it: choose "Save as ANSI Text".
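If you want to verify which byte-order mark (if any) a file actually starts with, the first few bytes tell you. A quick Python sketch (the file name is hypothetical):

BOMS = {
    b"\xef\xbb\xbf": "UTF-8",
    b"\xff\xfe": "UTF-16 LE",  # what SSMS writes for "Unicode"
    b"\xfe\xff": "UTF-16 BE",
}

with open("script.sql", "rb") as f:
    head = f.read(3)

for bom, name in BOMS.items():
    if head.startswith(bom):
        print("BOM found:", name)
        break
else:
    print("no BOM at the start of the file")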
I'm trying to insert a new row into a database, containing text with special characters like the long dash (—). When I do this manually in SSMS it works fine, but when I commit the script to my version control tool (GitHub Desktop), these symbols show up as �. In Visual Studio the special characters show up normally as well. What should I do so that I can add the script properly and have it execute against any SQL Server 2016 database?
How my changes appear in Github Desktop:
It turned out the behavior was caused by the encoding of the *.sql file holding my script with special characters. The file was saved as UTF-8 (without a BOM), while it should be saved as UTF-8 with BOM for those characters to be displayed correctly.
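Re-saving the file with a BOM is easy to script if you don't want to rely on an editor. A minimal Python sketch (the file name is my assumption) using the "utf-8-sig" codec, which writes the EF BB BF signature:

with open("script.sql", "rb") as f:
    text = f.read().decode("utf-8-sig")  # tolerates an existing BOM

with open("script.sql", "wb") as f:
    f.write(text.encode("utf-8-sig"))    # prepends the EF BB BF BOM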
OK, I have two machines. When I open SSMS and write queries in a .sql file, Cyrillic works. When I transfer the same .sql file to the other machine, the Cyrillic looks like "Âàëåðè". If this problem is related to encoding, how do I configure the encoding on both machines to be the same? How do I fix it?
Did you look (in SSMS) under Tools > Options > Environment > International Settings to see if there are differences there? If the machines have different Windows options enabled this won't help you, as that's a Windows setting, but you can change the SSMS setting here to use the Windows setting: choose "Same as Microsoft Windows". I'm looking at this in SQL 2014, but I'm fairly sure it's in the same place going back a few editions...
Try saving your file as Unicode:
"Save as" -> Save with Encoding -> Unicode 1200
I have a DACPAC file that was built in Visual Studio 2013, for an SSDT project. This SSDT project defines a post-deploy script designed to merge some static data into the published tables, and one piece of data contains a copyright symbol.
Now, when I publish the database through Visual Studio, the copyright symbol is preserved, and merged correctly into the target table. When I publish the same database (with the same dacpac and publish profile) using MSDeploy, the copyright symbol is merged into the target database as a "?" symbol. Likewise, when I use Action:Script instead of Action:Publish, the generated SQL script contains a "?" rather than the copyright symbol.
It seems as if the script Visual Studio is generating is UTF8 encoded, but the script that gets baked into the dacpac loses the UTF8 encoding. Does anyone have any ideas of how to work around this issue?
I had the same issue. Open the file in Notepad and "Save As" with UTF-8 encoding into the same folder, replacing the old one. Then publish again. It should work.
Do you prefix the string literal with N to denote that it contains a Unicode string? Is your column defined as nchar or nvarchar? The process of creating the dacpac may perform a conversion based on your data being declared as a non-Unicode string, and it is not surprising that the copyright character would not survive such a conversion.
See https://msdn.microsoft.com/en-us/library/ms179899.aspx for details about unicode vs. character strings.
Finally I got the answer: I need to put the character N before the Unicode string inside the INSERT statement.
INSERT INTO [Library] ([DisplayNameCN])
VALUES (N'鴨脷洲公共圖書館')
I am facing an issue with SSIS where a customer wants a file (previously delivered in UTF-8) to be delivered in ANSI-1252. No big deal, I thought: change the file connection manager and done... unfortunately it wasn't that simple. I've been stuck on this for a day and I'm clueless about what to try next.
The package itself:
IN - an OLE DB source with a query. The source database fields are NVARCHAR.
Next I created a Data Conversion block where I convert the incoming DT_WSTR to DT_STR using the 1252 codepage.
After that is an outbound flat file destination. The flat file connection is tab-delimited and uses codepage 1252. I have mapped the converted columns to the columns used in this flat file. Below are some screenshots of the connection manager and destination block.
Now, when I create a new .txt file from Explorer, it is ANSI (as detected by Notepad++).
When the package runs, the file becomes UTF-8 without BOM.
I have tried experimenting with the overwrite checkbox, as suggested in "SSIS - Flat file always ANSI never UTF-8 encoded", as well as rebuilding the project from scratch and experimenting with the data conversion.
Does anyone have a suggestion on what I am missing here? The strange thing is that we previously built a different package with exactly the same blocks, and it does output an ANSI file (I checked that package from top to bottom). However, we are getting mixed results on different machines: some machines produce an ANSI file, others the UTF-8 file.
Is this solved already? My idea is to delete the whole Data Flow Task and re-create it; I suppose stale metadata is stuck there and gets applied at each execution.
I believe you don't need to change anything in your SSIS package; just check your editor's settings (Notepad++). Go to Settings --> Preferences --> New Document.
You need to uncheck the 'Apply to opened ANSI files' checkbox.
Kindly check and let me know if it works for you.
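Since editors have to guess the encoding of a BOM-less file and can guess wrong, it may be worth checking the raw bytes yourself. A minimal Python sketch (the output file name is hypothetical); note that a file containing only ASCII characters is byte-for-byte identical in ANSI and UTF-8, which could explain the mixed results on different machines:

raw = open("output.txt", "rb").read()

if raw.startswith(b"\xef\xbb\xbf"):
    print("UTF-8 with BOM")
elif all(b < 0x80 for b in raw):
    print("pure ASCII - 'ANSI' vs 'UTF-8' is just the editor's guess")
else:
    try:
        raw.decode("utf-8")
        print("valid UTF-8 without BOM")
    except UnicodeDecodeError:
        print("not UTF-8 - likely ANSI (e.g. cp1252)")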
I have an MS SQL database which contains Unicode (UTF-8) data. My workstation is Linux (currently Ubuntu), and looking for a tool to work with the MSSQL database I found SQSH.
The problem is that when I select data in the sqsh console, I get gibberish instead of the Unicode characters. Using the switch "-J utf8" or "-J utf-8" didn't change anything.
The question is: how do I set up sqsh to work with UTF-8 data?
If that is not possible, do you know any alternative tools usable from Linux for working with MSSQL databases filled with UTF-8 data? I need to execute all kinds of T-SQL, run previously prepared SQL script files, and pipe out results for processing afterwards. A good open-source GUI could also work; I'm not limited to shell clients.
Are you using FreeTDS with sqsh? If you are, edit your freetds.conf to set the charset.
http://www.freetds.org/userguide/localization.htm
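A minimal sketch of what the relevant entry might look like (the server name and TDS version are my assumptions; "client charset" is the setting described in the FreeTDS user guide):

[myserver]
    host = sqlserver.example.com
    port = 1433
    tds version = 7.4
    client charset = UTF-8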
Use Azure Data Studio to avoid this kind of data troubleshooting issue. It is a great SSMS alternative for Linux.
If you need a command line tool, I suggest using the official sqlcmd from mssql-tools. It is available for all major Linux distributions, including Ubuntu.
Connecting with sqlcmd
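For example, running a prepared script file and piping the results out to a file (the server, user, database, and file names here are hypothetical):

sqlcmd -S sqlserver.example.com -U myuser -d mydb -i script.sql -o results.txt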
Another shell tool is mssql-cli; see the example after the feature list below.
Features
Mssql-cli is a new, interactive command line tool that provides the following key enhancements over sqlcmd in the terminal environment:
T-SQL IntelliSense
Syntax highlighting
Pretty formatting for query results, including Vertical Format
Multi-line edit mode
Configuration file support
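A minimal sketch of getting started with it (assuming pip is available; the server and user names are hypothetical):

pip install mssql-cli
mssql-cli -S sqlserver.example.com -U myuser -d mydb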
I had the same problem, and it turned out to have nothing to do with character encoding; the problem was that there were control characters (unprintable characters) in the script.
I removed them from the SQL script and everything worked fine.
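If you want to strip such characters automatically, here is a minimal Python sketch (the file names are my assumptions); it keeps tabs, carriage returns, and newlines, which are legitimate in a script, and drops the other control characters:

text = open("script.sql", encoding="utf-8", errors="replace").read()

# Keep \t, \r, \n; drop everything else below 0x20 plus DEL (0x7f).
cleaned = "".join(
    ch for ch in text
    if ch in "\t\r\n" or not (ord(ch) < 0x20 or ord(ch) == 0x7f)
)

open("script_clean.sql", "w", encoding="utf-8").write(cleaned)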