With libcurl, after a GET request, I get the raw JSON response as a C string (char *data) which, when printed to the screen with printf, gives me the following output:
screen output:
{"name":"\u0391\u03a0\u039f\u03a4\u0395\u039b\u0395\u03a3\u039c\u0391\u03a4\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391"}
Apparently the data stored in char *data is, in C string notation,
"{\"name\":\"\\u0391\\u03a0\\u039f\\u03a4\\u0395\\u039b\\u0395\\u03a3\\u039c\\u0391\\u03a4\\u0399\\u039a\\u039f\\u03a4\\u0397\\u03a4\\u0391\"}"
I copied the \uXXXX sequences from the screen output and tested in C code that printf("\u0391\u03a0\u039f\u03a4\u0395\u039b\u0395\u03a3\u039c\u0391\u03a4\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391"); gives me the expected ΑΠΟΤΕΛΕΣΜΑΤΙΚΟΤΗΤΑ on screen.
How do I convert the GET response C string to a C string that, when printed to the screen with printf, has the following output?
screen output:
{"name":"ΑΠΟΤΕΛΕΣΜΑΤΙΚΟΤΗΤΑ"}
In essence I want to convert the memory data from a c string like
"{\"name\":\"\\u0391\\u03a0\\u039f\\u03a4\\u0395\\u039b\\u0395\\u03a3\\u039c\\u0391\\u03a4\\u0399\\u039a\\u039f\\u03a4\\u0397\\u03a4\\u0391\"}"
to a c string like
"{\"name\":\"\u0391\u03a0\u039f\u03a4\u0395\u039b\u0395\u03a3\u039c\u0391\u03a4\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391\"}"
I can guess it has to do with character encoding, but I can't find a C library to do the conversion. After an hour of googling I gave up. So can anyone point me to a conversion library, please?
(I'm using gcc compiler on ubuntu)
You may want to try locale.h.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
int main(void) {
setlocale(LC_ALL,"");
printf("\u0391\u03a0\u039f\u03a4\u0395\u039b\u0395\u03a3\u039c\u0391\u03a4\u0399\u039a\u039f\u03a4\u0397\u03a4\u0391");
return 0;
}
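Note that setlocale() only affects how already-decoded bytes are printed. If the six-character sequences like \u0391 arrive literally in the response body at runtime (as the copied C string above suggests), they still have to be decoded; a real JSON parser such as jansson or cJSON will do this for you, but a minimal hand-rolled sketch might look like the following. This is an assumption-laden sketch: BMP-only (no surrogate pairs), minimal validation, and decode_unicode_escapes / utf8_encode are names I made up for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Encode one Unicode code point (BMP only) as UTF-8; returns bytes written. */
static size_t utf8_encode(unsigned cp, char *out)
{
    if (cp < 0x80) { out[0] = (char)cp; return 1; }
    if (cp < 0x800) {
        out[0] = (char)(0xC0 | (cp >> 6));
        out[1] = (char)(0x80 | (cp & 0x3F));
        return 2;
    }
    out[0] = (char)(0xE0 | (cp >> 12));
    out[1] = (char)(0x80 | ((cp >> 6) & 0x3F));
    out[2] = (char)(0x80 | (cp & 0x3F));
    return 3;
}

/* Replace every literal \uXXXX escape in 'in' with its UTF-8 bytes.
   Returns a malloc'd string the caller must free. */
char *decode_unicode_escapes(const char *in)
{
    /* worst case: a 6-char escape shrinks to at most 3 bytes, plain bytes copy 1:1 */
    char *out = malloc(strlen(in) * 3 + 1);
    char *p = out;
    while (*in) {
        unsigned cp;
        if (in[0] == '\\' && in[1] == 'u' && sscanf(in + 2, "%4x", &cp) == 1) {
            p += utf8_encode(cp, p);
            in += 6; /* skip the whole \uXXXX sequence */
        } else {
            *p++ = *in++;
        }
    }
    *p = '\0';
    return out;
}
```

Running the libcurl buffer through decode_unicode_escapes and printf-ing the result on a UTF-8 terminal (with the setlocale call above) should then show the Greek text directly.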
Another day, another problem with strings in C. Let's say I have a text file named fileR.txt and I want to print its contents. The file goes like this:
Letter á
Letter b
Letter c
Letter ê
I would like to read it and show it on the screen, so I tried the following code:
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
int main()
{
FILE *pF;
char line[512]; // Current line
setlocale(LC_ALL, "");
pF = fopen("Aulas\\source\\fileR.txt", "r");
if (pF == NULL)
return EXIT_FAILURE;
while (fgets(line, 512, pF) != NULL)
{
fputs(line, stdout);
}
fclose(pF);
return 0;
}
And the output was:
Letter á
Letter b
Letter c
Letter ê
I then attempted to use wchar_t to do it:
#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
#include <wchar.h>
int main()
{
FILE *pF;
wchar_t line[512]; // Current line
setlocale(LC_ALL, "");
pF = fopen("Aulas\\source\\fileR.txt", "r");
if (pF == NULL)
return EXIT_FAILURE;
while (fgetws(line, 512, pF) != NULL)
{
fputws(line, stdout);
}
fclose(pF);
return 0;
}
The output was even worse:
Letter ÃLetter b
Letter c
Letter Ã
I have seen people suggesting the use of an unsigned char array, but that simply results in an error, as the stdio functions made for input and output take plain char arrays; and even if I were to write my own function to print an array of unsigned chars, I would not know how to read something from a file as unsigned.
So, how can I read and print a file with accented characters in C?
The problem you are having is not in your code, it's in your expectations. A text character is really just a value that has been associated with some form of glyph (symbol). There are different schemes for making this association, generally referred to as encodings. One early and still common encoding is known as ASCII (American Standard Code for Information Interchange). As the name implies, it is American-English centric. Originally this was a 7-bit encoding (128 values), but it was later extended to include other symbols using 8 bits. Other encodings were developed for other languages. This was non-optimal. The Unicode standard was developed to address this. It's a relatively complicated standard designed to include any symbol one might want to encode. Unicode has various schemes that trade off data size for character size, for example UTF-7, UTF-8, UTF-16 and UTF-32. Because of this there will not necessarily be a one-to-one relationship between a byte and a character.
So different character representations have different values, and those values can be greater than a single byte. The next problem is that to display the associated glyphs you need a system that correctly maps the value to the glyph and is able to display said glyph. A lot of "terminal" applications don't support Unicode by default; they use ASCII or Extended ASCII, and it looks like that may be what you are using. The terminal is making the assumption that each byte it needs to display corresponds to a single character (which, as discussed, isn't necessarily true in Unicode).
One thing to try is to redirect your output to a file and use a Unicode-aware editor (like Notepad++) to view the file using a UTF-8 (for example) encoding. You can also hex dump the input file to see how it has been encoded. Sometimes Unicode files are written with a BOM (Byte Order Mark) to help identify the Unicode encoding and byte order in play.
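As a sketch of the hex-dump idea (hex_dump is a made-up helper, not a standard function):

```c
#include <stdio.h>
#include <string.h>

/* Write the bytes of s into out as space-separated hex pairs, e.g. "C3 A1 ".
   The caller must supply a buffer big enough for 3 chars per input byte. */
void hex_dump(const char *s, char *out)
{
    out[0] = '\0';
    for (const unsigned char *p = (const unsigned char *)s; *p != '\0'; p++)
        sprintf(out + strlen(out), "%02X ", *p);
}
```

Running it on a line containing "á" from a UTF-8 file would show the two bytes C3 A1, while a Latin-1 file would show the single byte E1, immediately telling you which encoding you are dealing with.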
I'm trying to make a simple card game, but after switching from Win 7 to Xubuntu 14.04 even the simplest things do not work anymore. I tried this for 3 days and still can't solve it.
What happens is that the console is giving me 3 diamond question marks, for the code below.
#include <stdio.h>
#include <stdlib.h>
#define herz "\xe2\x99\xa5"
#define karo "\xe2\x99\xa6"
#define kreuz "\xe2\x99\xa3"
#define pik "\xe2\x99\xa0"
int main()
{
char ch = '0';
printf("%c%c%c%c",herz,karo,kreuz,pik);
return 0;
}
I tried this with the Code::Blocks console and the Xubuntu one.
(xterm -T $TITLE -e and xfce4-terminal -T $TITLE -x)
Console LANG is en_US.UTF-8.
I tried several fonts and it didn't change a thing. I can type in special characters manually in the console but when C tries to print them it does not work.
%c is used to print single characters. Since you are trying to print strings (each suit macro expands to a multi-byte string literal), use %s instead. Your print statement will be
printf("%s%s%s%s", herz, karo, kreuz, pik);
You have defined literal constant character strings where you need literal constant wide characters. Note that the code point for ♥ is U+2665; the \xe2\x99\xa5 in your macro is its UTF-8 byte sequence, not its code point:
const wchar_t herz = L'\u2665';
You then need to print using wprintf() with the %lc format specifier and a wide format string:
wprintf(L"%lc%lc%lc%lc", herz, karo, kreuz, pik);
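For completeness, since the original macros are already UTF-8 byte strings, a narrow-string version needs no wide characters at all. This sketch assumes a UTF-8 terminal like the poster's en_US.UTF-8 one; suit_line is an illustrative helper name, and adjacent string literals concatenate into one UTF-8 string:

```c
#include <stdio.h>

/* The suit macros are UTF-8 byte sequences, i.e. multi-byte strings, not single chars. */
#define HERZ  "\xe2\x99\xa5"  /* U+2665 black heart */
#define KARO  "\xe2\x99\xa6"  /* U+2666 black diamond */
#define KREUZ "\xe2\x99\xa3"  /* U+2663 black club */
#define PIK   "\xe2\x99\xa0"  /* U+2660 black spade */

/* Adjacent string literals concatenate at compile time into one UTF-8 string. */
const char *suit_line(void)
{
    return HERZ " " KARO " " KREUZ " " PIK;
}
```

Printing it with printf("%s\n", suit_line()); on a UTF-8 terminal then shows ♥ ♦ ♣ ♠.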
I am trying to write a C program to read a CSV file, calculate something, and print a line to the screen. However, the values I am storing in my array do not seem to match up with my input file.
For 1,2,2,3
I get an average of 50.0000000 printed to the screen. Can anyone offer some advice? Thank you.
#include <stdio.h>
#include <string.h>
int main (void) {
...
fclose(input);
}
*p is a character, so you are putting ASCII codes into data. You want the values these characters represent, or you can (as your later use of atof suggests) declare data to be an array of strings.
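To illustrate (digit_value is just an illustrative name): the character '1' is stored as code 49, not the number 1, and the average of the codes for '1', '2', '2', '3' is (49+50+50+51)/4 = 50, which matches the 50.000000 the asker saw. Subtracting '0' converts a digit character to its numeric value:

```c
/* A digit character's code minus '0' gives its numeric value,
   since '0'..'9' are guaranteed contiguous in C. */
int digit_value(char c)
{
    return c - '0';
}
```

So digit_value('3') is 3, while using '3' directly in arithmetic would contribute 51.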
Edit:
I can only use stdio.h and stdlib.h
I would like to iterate through a char array filled with chars.
However, characters like ä and ö take up twice the space and use two array elements.
This is where my problem lies: I don't know how to access those special characters.
In my example the character "ä" would use hmm[0] and hmm[1].
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
char* hmm = "äö";
printf("%c\n", hmm[0]); //i want to print "ä"
printf("%zu\n", strlen(hmm));
return 0;
}
Thanks, I tried to run my attached code in Eclipse, and there it works. I assume that's because it uses 64 bits and the "ä" has enough space to fit. strlen confirms that each "ä" is only counted as one element.
So I guess I could somehow tell it to allocate more space for each char (so "ä" can fit)?
#include <stdio.h>
#include <stdlib.h>
int main()
{
char* hmm = "äüö";
printf("%c\n", hmm[0]);
printf("%c\n", hmm[1]);
printf("%c\n", hmm[2]);
return 0;
}
A char always uses one byte.
In your case you think that "ä" is one char: wrong.
Open your .c source code with a hexadecimal viewer and you will see that ä uses two chars, because the file is encoded in UTF-8.
Now the question is: do you want to use wide characters?
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>
#include <locale.h>
int main()
{
const wchar_t hmm[] = L"äö";
setlocale(LC_ALL, "");
wprintf(L"%ls\n", hmm);
wprintf(L"%lc\n", hmm[0]);
wprintf(L"%zu\n", wcslen(hmm));
return 0;
}
Your data is in a multi-byte encoding. Therefore, you need to use multibyte character handling techniques to divvy up the string. For example:
#include <stdio.h>
#include <string.h>
#include <locale.h>
int main(void)
{
char* hmm = "äö";
int off = 0;
int len;
int max = strlen(hmm);
setlocale(LC_ALL, "");
printf("<<%s>>\n", hmm);
printf("%zu\n", strlen(hmm));
while (hmm[off] != '\0' && (len = mblen(&hmm[off], max - off)) > 0)
{
printf("<<%.*s>>\n", len, &hmm[off]);
off += len;
}
return 0;
}
On my Mac, it produced:
<<äö>>
4
<<ä>>
<<ö>>
The call to setlocale() was crucial; without that, the program runs in the "C" locale instead of my en_US.UTF-8 locale, and mblen() mishandled things:
<<äö>>
4
<<?>>
<<?>>
<<?>>
<<?>>
The question marks appear because the bytes being printed are invalid single bytes as far as the UTF-8 terminal is concerned.
You can also use wide characters and wide-character printing, as shown in benjarobin's answer.
Sorry to drag this on, though I think it's important to highlight some issues. As I understand it, OS X can have UTF-8 as the default OS code page, so this answer is mostly in regard to Windows, which under the hood uses UTF-16 and whose default ANSI code page (ACP) depends on the specified OS region.
Firstly, you can open Character Map and find that äö both reside in code page 1252 (Western), so this is not an MBCS issue. The only way it could be an MBCS issue is if you saved the file using an MBCS encoding (Shift-JIS, Big5, Korean, GBK).
The answer of using
setlocale(LC_ALL, "")
does not give insight into why äö was rendered incorrectly in the Command Prompt window.
Command Prompt uses its own code pages, namely OEM code pages; there is a reference listing the available OEM code pages with their character maps.
Going into Command Prompt and typing the command chcp will reveal the current OEM code page that Command Prompt is using.
Following the Microsoft documentation for setlocale(LC_ALL, ""), it details the following behavior.
setlocale( LC_ALL, "" );
Sets the locale to the default, which is the user-default ANSI code page obtained from the operating system.
You can do this manually, by using chcp and passing your required code page, then run your application and it should output the text perfectly fine.
If it were a multi-byte character set problem, there would be a whole list of other issues:
Under MBCS, characters are encoded in either one or two bytes. In two-byte characters, the first, or "lead-byte," signals that both it and the following byte are to be interpreted as one character. The first byte comes from a range of codes reserved for use as lead bytes. Which ranges of bytes can be lead bytes depends on the code page in use. For example, Japanese code page 932 uses the range 0x81 through 0x9F as lead bytes, but Korean code page 949 uses a different range.
Looking at the situation, and the fact that the length was 4 instead of 2, I would say the file has been saved as UTF-8. (It could in fact have been saved as UTF-16, though you would have run into problems with the compiler sooner rather than later.) You're using characters outside the ASCII range of 0 to 127, so UTF-8 encodes each of these code points as two bytes. Your compiler opens the file assuming it uses your default OS code page or ANSI, and when parsing your string it interprets it as an ANSI string where 1 byte = 1 character.
To solve the issue under Windows, convert the UTF-8 string to UTF-16 and print it with wprintf. Currently there is no native UTF-8 support in the ASCII/MBCS stdio functions.
For Mac OS X, which has a default OS code page of UTF-8, I would recommend following Jonathan Leffler's solution to the problem, because it is more elegant. If you port it to Windows later, though, you will find you need to convert the string from UTF-8 to UTF-16 using the example below.
In either solution you will still need to change the Command Prompt code page to your operating system code page to print the characters above ASCII correctly.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <Windows.h>
#include <locale.h>
// File saved as UTF-8, with characters outside the ASCII range
int main()
{
// Set the locale to the default OS code page
setlocale(LC_ALL, "");
// äö reside outside the ASCII range, in the Western Latin-1 block of Unicode,
// so each code point takes a lead byte plus a trail byte when saved as UTF-8
const char* hmm = "äö";
printf("UTF-8 file string using Windows 1252 code page read as:%s\n", hmm);
printf("Length:%u\n", (unsigned)strlen(hmm));
// Convert the UTF-8 string to a wide-character (UTF-16) string
int nLen = MultiByteToWideChar(CP_UTF8, 0, hmm, -1, NULL, 0);
LPWSTR lpszW = new WCHAR[nLen];
MultiByteToWideChar(CP_UTF8, 0, hmm, -1, lpszW, nLen);
// Print it (%ls means a wide string in both printf families)
wprintf(L"wprintf wide character of UTF-8 string: %ls\n", lpszW);
// Free the memory
delete[] lpszW;
int c = getchar();
return 0;
}
UTF-8 file string using Windows 1252 code page read as:Ã¤Ã¶
Length:4
wprintf wide character of UTF-8 string: äö
I would check your Command Prompt font/code page to make sure that it can display your OS single-byte encoding. Note that Command Prompt has its own code page, which differs from your text editor's.