I'm trying to upload files containing special characters on our platform via the exec command, but the characters are always misinterpreted and it fails.
For example, if I try to upload a mémo.txt file I get the following error:
/bin/cp: cannot create regular file `/path/to/dir/m\351mo.txt': No such file or directory
UTF-8 is correctly configured on the system, and if I run the command in the shell it works fine.
Here is the Tcl code:
exec /bin/cp $tmp_filename $dest_path
How can I make it work?
The core of the problem is what encoding is being used to communicate with the operating system. For exec and filenames, that encoding is whatever is returned by the encoding system command (Tcl has a pretty good guess at what the correct value for that is when the Tcl library starts up, but very occasionally gets it wrong). On my computer, that command returns utf-8 which says (correctly!) that strings passed to (and received from) the OS are UTF-8.
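You can see what is at stake at the byte level with a hedged Python illustration (note that the \351 in your error message is octal for the byte 0xE9, i.e. the ISO 8859-1 encoding of é):
# Hedged sketch: the same filename produces different bytes at the
# syscall boundary depending on which system encoding is in effect.
name = "mémo.txt"
print(name.encode("utf-8").hex(" "))       # 6d c3 a9 6d 6f 2e 74 78 74
print(name.encode("iso-8859-1").hex(" "))  # 6d e9 6d 6f 2e 74 78 74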
You should be able to use the file copy command instead of doing exec /bin/cp, which will be helpful here as it has fewer layers of trickiness (it avoids going through an external program, which can impose its own problems). We'll assume that's being done:
set tmp_filename "foobar.txt"; # <<< fill in the right value, of course
set dest_path "/path/to/dir/mémo.txt"
file copy $tmp_filename $dest_path
If that fails, we need to work out why. The most likely problems relate to the encoding, though, and these can go wrong in multiple ways that interact horribly. Alas, the details matter. In particular, the encoding for a path depends on the actual filesystem (it's formally a parameter when the filesystem is created) and can vary on Unix between parts of a path when you have a mount within another mount.
If the worst comes to the worst, you can put Tcl into ISO 8859-1 mode and then do all the encoding yourself (as ISO 8859-1 is the “just use the bytes I tell you” encoding); encoding convertto is also useful in this case. Be aware that this can generate filenames that cause trouble for other programs, but at least it lets you get at the file.
encoding system iso8859-1
file copy $tmp_filename [encoding convertto utf-8 $dest_path]
Care might be needed to convert different parts of the path correctly in this case: you're taking full responsibility for what's going on.
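For comparison, other languages expose the same escape hatch; here is a hedged Python sketch of the “just bytes” approach on Unix:
# Hedged sketch: bytes paths bypass filename decoding entirely,
# much like Tcl's ISO 8859-1 mode above.
import os
for raw_name in os.listdir(b"/path/to/dir"):  # bytes in, bytes out
    print(raw_name)  # e.g. b'm\xc3\xa9mo.txt' on a UTF-8 filesystem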
If you're on Windows, please just let Tcl handle the details. Tcl uses the Wide (Unicode) Windows API directly so you can pretend that none of these problems exist. (There are other problems instead.)
On macOS, please leave encoding system alone as it is correct. Macs have a very opinionated approach to encodings.
I already tried the file copy command but it says error copying
"/tmp/file7k5kqg" to "/path/to/dir/mémo.txt": no such file or
directory
My reading of your problem is that, for some reason, your Tcl is set to iso8859-1 ([encoding system]), while the executing environment (shell) is set to utf-8. This explains why Donal's suggestion works for you:
encoding system iso8859-1
file copy $tmp_filename [encoding convertto utf-8 $dest_path]
This will safely pass a UTF-8 encoded byte array down to any syscall: é, i.e. \u00e9, goes out as \xc3\xa9. Watch:
% binary encode hex [encoding convertto utf-8 é]
c3a9
% encoding system iso8859-1; exec xxd << [encoding convertto utf-8 é]
00000000: c3a9 ..
This is equivalent to [encoding system] also being set to utf-8 (as is to be expected in an otherwise utf-8 environment):
% encoding system
utf-8
% exec xxd << é
00000000: c3a9 ..
What you are experiencing (without any intervention) seems to be a re-coding of the Tcl internal encoding to iso8859-1 on the way out from Tcl (because of [encoding system], as Donal describes), and a follow-up (and faulty) re-coding of this iso8859-1 value into the utf-8 environment.
Watch the difference (\xe9 vs. \xc3\xa9):
% encoding system iso8859-1
% encoding system
iso8859-1
% exec xxd << é
00000000: e9
The problem, it then seems, is that \xe9 gets interpreted in your otherwise utf-8 environment, like:
$ locale
LANG="de_AT.UTF-8"
...
$ echo -ne '\xe9'
?
$ touch `echo -ne 'm\xe9mo.txt'`
touch: m?mo.txt: Illegal byte sequence
$ touch mémo.txt
$ ls mémo.txt
mémo.txt
$ cp `echo -ne 'm\xe9mo.txt'` b.txt
cp: m?mo.txt: No such file or directory
But:
$ cp `echo -ne 'm\xc3\xa9mo.txt'` b.txt
$ ls b.txt
b.txt
Your options:
(1) You need to find out why Tcl picks up iso8859-1, to begin with. How did you obtain your installation? Self-compiled? What are the details (version)?
(2) You may proceed as Donal suggests, or alternatively, set encoding system utf-8 explicitly.
encoding system utf-8
file copy $tmp_filename $dest_path
I am trying to execute a query by reading the content of a SQL script file, assigning it to a variable, and then executing that content. I then get an error saying Could not find stored procedure 'ÿþ'. Please help me understand the issue. Thank you.
Info:
SQL Server 2014
SSMS Version - 12.0.4100.1
ÿþ is one way to interpret the two bytes of the UTF-16 byte order mark, which are \xFF and \xFE.
You get those two letters when you read a file that has been saved in the UTF-16 encoding with a tool that is unaware of—or, more likely, was not configured to use—Unicode.
For example, when you edit a text file with Windows Notepad and select "Unicode" as file encoding when you save it, Notepad will use UTF-16 to save the file and will mark it with the said two bytes at the start.
If whatever thing you use to read the file is unaware of the fact that the file is Unicode, then it will use the default byte encoding of your computer to decode that text file.
Now, if that default encoding happens to be Windows-1252, as in your case, then ÿþ is what you get, because in that encoding \xFF is ÿ and \xFE is þ.
Consequently, when presented with ÿþ, SQL Server thinks it must be the name of a stored procedure, because stored procedures are the only statements that you can run by mentioning just their name. And it dutifully reports that it can't find a procedure of that name.
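Both halves of this are easy to check; a hedged Python sketch (the file name is a placeholder):
# The BOM bytes decoded as Windows-1252 are exactly "ÿþ" ...
bom = b"\xff\xfe"
print(bom.decode("cp1252"))  # -> ÿþ
# ... while opening the file as UTF-16 consumes the BOM transparently.
with open("script.sql", encoding="utf-16") as f:  # placeholder name
    sql = f.read()  # BOM stripped, text decoded correctly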
I am trying to load a table from a txt file which has Chinese characters in it using the \copy command in PostgreSQL. I have a test table with only one column, Names varchar(25), in it. When I run an insert statement from psql or PgAdmin like
insert into test values ('康狀態概');
it works and inserts the value correctly. When I put the same value in a txt file (say test.txt) and try to use the copy command to load the contents of test.txt
\copy test from 'D:/database/test.dat';
It throws an error ERROR: character with byte sequence 0xa6 0x82 in encoding "GBK" has no equivalent in encoding "UTF8", on Windows only. I changed the locale and keyboard settings of the Windows server to Chinese, but it does not work either with or without the locale change.
The same exercise works fine on Linux without any changes.
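Checking my own data in Python suggests the pair from the error message is simply my UTF-8 bytes being read as if they were GBK (a hedged check using only the standard library):
# Hedged check: the 0xa6 0x82 from the error is the tail of the
# UTF-8 encoding of 概 (e6 a6 82), misread as a GBK byte pair.
data = "康狀態概".encode("utf-8")
print(data.hex(" "))                         # ... e6 a6 82
print(data.decode("gbk", errors="replace"))  # garbage, not the original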
Can someone please suggest what needs to be done here?
Thanks
I'm not certain when this first occurred.
I have a new drop-shipping affiliate website, and receive an exported copy of the product catalog from the wholesaler. I format and import this into Prestashop 1.4.4.
The front end of the website contains combinations of strange characters inside product text: Ã, Ã, ¢, â‚ etc. They appear in place of common characters like , - : etc.
These characters are present in about 40% of the database tables, not just product specific tables like ps_product_lang.
Another website thread says this same problem occurs when the database connection string uses an incorrect character encoding type.
In /config/setting.inc, there is no character encoding string mentioned, just the MySQL Engine, which is set to InnoDB, which matches what I see in PHPMyAdmin.
I exported ps_product_lang, replaced all instances of these characters with correct characters, saved the CSV file in UTF-8 format, and reimported them using PHPMyAdmin, specifying UTF-8 as the language.
However, after doing a new search in PHPMyAdmin, I now have about 10 times as many instances of these bad characters in ps_product_lang than I started with.
If the problem is as simple as specifying the correct language attribute in the database connection string, where/how do I set this, and what to?
Incidentally, I tried running this command in PHPMyAdmin mentioned in this thread, but the problem remains:
SET NAMES utf8
UPDATE: PHPMyAdmin says:
MySQL charset: UTF-8 Unicode (utf8)
This is the same character set I used in the last import file, which caused more character corruptions. UTF-8 was specified as the charset of the import file during the import process.
UPDATE2
Here is a sample:
people are truly living untetheredâ€ïâ€Â
Ã‚ï† buying and renting movies online, downloading software, and
sharing and storing files on the web.
UPDATE3
I ran an SQL command in PHPMyAdmin to display the character sets:
character_set_client utf8
character_set_connection utf8
character_set_database latin1
character_set_filesystem binary
character_set_results utf8
character_set_server latin1
character_set_system utf8
So, perhaps my database needs to be converted (or deleted and recreated) to UTF-8. Could this pose a problem if the MySQL server is latin1?
Can MySQL handle the translation of serving content as UTF8 but storing it as latin1? I don't think it can, as UTF8 is a superset of latin1. My web hosting support has not replied in 48 hours. Might be too hard for them.
If the charset of the tables matches that of their content, try using mysql_set_charset('UTF8', $link_identifier). Note that MySQL uses UTF8 to specify the UTF-8 encoding, instead of UTF-8, which is more common.
Check my other answer on a similar question too.
This is surely an encoding problem. You have a different encoding in your database and in your website, and this fact is the cause of the problem. Also, even after running that command, you have to convert the records that are already in your tables to UTF-8.
Update: Based on your last comment, the core of the problem is that you have a database and a data source (the CSV file) which use different encodings. Hence you can convert your database to UTF-8 or, at least, when you get the data from the CSV, convert it from UTF-8 to latin1.
You can do the conversion following these articles:
Convert latin1 to UTF8
http://wordpress.org/support/topic/convert-latin1-to-utf-8
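To see the mechanics of such a mismatch, here is a hedged Python illustration (Windows-1252 is assumed as the single-byte side, which matches the â€œ-style debris above):
# A UTF-8 left quote read as Windows-1252 becomes "â€œ";
# re-encoding and decoding reverses the damage.
broken = "â€œ"
print(broken.encode("cp1252").decode("utf-8"))  # -> "“"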
This appears to be a UTF-8 encoding issue that may have been caused by a double-UTF8-encoding of the database file contents.
This situation could happen due to factors such as the character set that was or was not selected (for instance when a database backup file was created) and the file format and encoding the database file was saved with.
I have seen these strange UTF-8 characters in the following scenario (the description may not be entirely accurate as I no longer have access to the database in question):
As I recall, the database and tables had a "utf8_general_ci" collation.
Backup is made of the database.
Backup file is opened on Windows in UNIX file format and with ANSI encoding.
Database is restored on a new MySQL server by copy-pasting the contents from the database backup file into phpMyAdmin.
Looking into the file contents:
Opening the SQL backup file in a text editor shows that it contains strange characters such as "sÃ¥". On a side note, you may get different results if you open the same file in another editor. I used TextPad here, but opening the same file in SublimeText showed "så", because SublimeText correctly UTF8-decoded the file -- still, this is a bit confusing when you start trying to fix the issue in PHP, because you don't see the right data in SublimeText at first. In any case, that can be resolved by taking note of which encoding your text editor uses when presenting the file contents.
The strange characters are double-encoded UTF-8 characters, so in my case the first "Ã" part equals "Ã" and "Â¥" = "¥" (this is my first "encoding"). The "Ã¥" characters equal the UTF-8 encoding of "å" (this is my second encoding).
So, the issue is that "false" (UTF8-encoded twice) utf-8 needs to be converted back into "correct" utf-8 (only UTF8-encoded once).
Trying to fix this in PHP turns out to be a bit challenging:
utf8_decode() is not able to process the characters.
// Fails silently (as in - nothing is output)
$str = "så";
$str = utf8_decode($str);
printf("\n%s", $str);
$str = utf8_decode($str);
printf("\n%s", $str);
iconv() fails with "Notice: iconv(): Detected an illegal character in input string".
echo iconv("UTF-8", "ISO-8859-1", "så");
Another otherwise fine solution fails silently too in this scenario:
$str = "så";
echo html_entity_decode(htmlentities($str, ENT_QUOTES, 'UTF-8'), ENT_QUOTES , 'ISO-8859-15');
mb_convert_encoding() fails silently too:
$str = "så";
echo mb_convert_encoding($str, 'ISO-8859-15', 'UTF-8');
// (No output)
Trying to fix the encoding in MySQL by converting the database character set and collation to UTF-8 was unsuccessful:
ALTER DATABASE myDatabase CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE myTable CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
I see a couple of ways to resolve this issue.
The first is to make a backup with correct encoding (the encoding needs to match the actual database and table encoding). You can verify the encoding by simply opening the resulting SQL file in a text editor.
The other is to replace double-UTF8-encoded characters with single-UTF8-encoded characters. This can be done manually in a text editor. To assist in this process, you can manually pick incorrect characters from Try UTF-8 Encoding Debugging Chart (it may be a matter of replacing 5-10 errors).
Finally, a script can assist in the process:
$str = "så";
// The two arrays can also be generated by double-encoding values in the first array and single-encoding values in the second array.
$str = str_replace(["Ã","Â¥"], ["Ã","¥"], $str);
$str = utf8_decode($str);
echo $str;
// Output: "så" (correct)
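For comparison, the same two-round repair in Python, as a hedged sketch (it assumes every byte survived intact; use "cp1252" instead of "latin-1" if the text passed through a Windows-1252 layer):
def fix_double_utf8(s: str) -> str:
    # One pass per spurious encoding round: reinterpret the code
    # points as raw bytes, then decode those bytes as UTF-8.
    for _ in range(2):
        s = s.encode("latin-1").decode("utf-8")
    return s

print(fix_double_utf8("s\u00c3\u0083\u00c2\u00a5"))  # -> "så"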
I encountered quite a similar problem today: mysqldump dumped my UTF-8 database with the UTF-8 diacritic characters encoded as two latin1 characters each, although the file itself is regular UTF-8.
For example : "é" was encoded as two characters "é". These two characters correspond to the utf8 two bytes encoding of the letter but it should be interpreted as a single character.
To solve the problem and correctly import the database on another server, I had to convert the file using the ftfy ("Fixes Text For You") Python library (https://github.com/LuminosoInsight/python-ftfy). The library does exactly what I expect: transform badly encoded UTF-8 into correctly encoded UTF-8.
For example: the latin1 combination "é" is turned into an "é".
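For one-off strings, ftfy's fix_text function performs the same repair:
import ftfy
print(ftfy.fix_text("é"))  # -> "é"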
ftfy comes with a command line script, but it transforms the file in such a way that it can no longer be imported back into MySQL.
I wrote a Python 3 script to do the trick:
#!/usr/bin/python3
# coding: utf-8
import ftfy

# Open input and output explicitly as UTF-8 and let context managers
# close both files when done.
with open('mysql.utf8.bad.dump', 'r', encoding='utf-8') as input_file, \
     open('mysql.utf8.good.dump', 'w', encoding='utf-8') as output_file:
    # Create the fixed output stream; only the encoding repair is
    # enabled, every other ftfy fixer is switched off.
    stream = ftfy.fix_file(
        input_file,
        encoding=None,
        fix_entities='auto',
        remove_terminal_escapes=False,
        fix_encoding=True,
        fix_latin_ligatures=False,
        fix_character_width=False,
        uncurl_quotes=False,
        fix_line_breaks=False,
        fix_surrogates=False,
        remove_control_chars=False,
        remove_bom=False,
        normalization='NFC'
    )
    # fix_file yields the fixed text line by line.
    for line in stream:
        output_file.write(line)
Apply these two things.
You need to set the character set of your database to be utf8.
You need to call mysql_set_charset('utf8') in the file where you make the connection with the database; right after selecting the database with mysql_select_db, call mysql_set_charset. That will allow you to add and retrieve data properly in whatever language.
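For comparison, the same idea in Python, as a hedged sketch using the PyMySQL driver (the connection details are placeholders):
import pymysql

# Declaring the charset up front plays the role of mysql_set_charset().
conn = pymysql.connect(
    host="localhost", user="user", password="secret",  # placeholders
    database="mydb", charset="utf8mb4",
)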
The error usually gets introduced during creation of the CSV. Try using Linux to save the CSV as Text CSV. LibreOffice on Ubuntu can enforce the encoding to be UTF-8; that worked for me.
I wasted a lot of time trying this on Mac OS. Linux is the key. I've tested on Ubuntu.
Good Luck
I am looking for either a NAnt Task for SQL Server bcp, or the file format for bcp native output.
I suppose I could build a NAntContrib Task for bcp, but I don't have time at the moment (Do we ever?).
Has anybody strolled this path before? Advice?
Thanks - Jon
bcp doesn't have a native file output format as such, but it can produce a binary file where the fields are prefixed by a 1-4 byte header that contains the length of the field. The prefix length and the row/column delimiter formats are specified in the format file (format described here).
If using prefixed files, SQL Server bcp uses a -1 length in the header to denote nulls.
'Native' in bcp-speak refers to binary representations of the column data. This question has some discussion of these formats.
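A hedged sketch of reading one length-prefixed field, per the description above (the prefix length and column types come from the format file; little-endian is assumed):
import struct

def read_field(buf: bytes, offset: int, prefix_len: int):
    # Read the signed little-endian length prefix (1, 2 or 4 bytes).
    fmt = {1: "<b", 2: "<h", 4: "<i"}[prefix_len]
    (length,) = struct.unpack_from(fmt, buf, offset)
    offset += prefix_len
    if length == -1:          # -1 in the prefix denotes NULL
        return None, offset
    return buf[offset:offset + length], offset + length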
You could always use NAnt's <exec> task to drive bcp...