How can I print card suit characters in a C Win32 console application?

I have seen a few questions on how to print these characters, but none of the methods appear to be working. I suspect it is because I am making a Win32 console application, based on some of the comments I read.
Here is an example of what I have tried in my code currently. It only prints question mark boxes, or if I change it around I get question marks or random symbols.
I have tried defining these at the top.
#define SPADE '\x06'
#define CLUB '\x05'
#define HEART '\x03'
#define DIAMOND '\x04'
Inside the function, these are some of the things I've tried. I have left the S, D, H, C fallbacks in case I can't figure it out.
printf("%lc", SPADE);
//printf("♠");
//printf("S");
printf("%lc", HEART);
//printf("♥");
//printf("H");
printf("%lc", DIAMOND);
//printf("♦");
//printf("D");
printf("%lc", CLUB);
//printf("♣");
//printf("C");

On Windows you need UTF-16 wchar_t strings and the wide-character functions. WriteConsoleW writes UTF-16 directly to the console, bypassing the console code page:
#include <windows.h>
#include <wchar.h> /* wcslen */

int main(void)
{
    DWORD n;
    HANDLE hout = GetStdHandle(STD_OUTPUT_HANDLE);
    const wchar_t *buf = L"♠♥♦♣\n";
    WriteConsoleW(hout, buf, (DWORD)wcslen(buf), &n, 0);
    return 0;
}
The following code will compile with Visual Studio:
#include <stdio.h>
#include <io.h>    /* for _setmode */
#include <fcntl.h> /* for _O_U16TEXT */

int main(void)
{
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"♠♥♦♣\n");
    return 0;
}
After setting the mode to UTF-16, you have to call _setmode(_fileno(stdout), _O_TEXT) if you wish to use printf again.
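For example, a minimal sketch of that round trip (assuming the MSVC CRT, where _setmode is documented to allow switching the stream mode back):

#include <stdio.h>
#include <io.h>
#include <fcntl.h>

int main(void)
{
    /* UTF-16 mode: wide output only. */
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"♠♥♦♣\n");

    /* Switch back to text mode before using narrow printf again. */
    _setmode(_fileno(stdout), _O_TEXT);
    printf("done\n");
    return 0;
}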

Related

How to use Unicode block elements in C?

I want to use Unicode block elements in my C program and display them in the console, like ▇, ░ and so on. However, whenever I try to use the escape sequence for a Unicode character, I only get weird letters:
printf("\u259A"); // 259A is the code point for ▚
Output: ÔûÜ
I looked up how to include Unicode characters and then tried to use wchar_t:
#include <locale.h>
#include <stdio.h>

int main(int argc, char const *argv[]) {
    setlocale(LC_ALL, "");
    wchar_t c = "\u259A";
    printf("%c", c);
    return 0;
}
but that only gave me ☺ as the output instead of ▚. Removing setlocale() would give me a blank output. I don't know what to do from this point on. The only thing I saw was printf("\xB2");, which gave you ▓. But I don't understand where the B2 comes from or what it stands for.
So what worked for me was the following:
#include <stdio.h>
#include <fcntl.h>
#include <io.h>

int main(int argc, char const *argv[]) {
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"\x2590 \x2554 \x258c \x2592"); // Output: ▐ ╔ ▌ ▒
    return 0;
}
The _setmode() call puts the console stream into UTF-16 text mode. wprintf() can then print wide (Unicode) characters, and the L"" prefix tells the compiler that the string literal is a wide string. (The printf("\xB2") trick works for a different reason: 0xB2 is the code for ▓ in the console's default OEM code page 437.) Thanks to everyone for their time and answers!
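Applying the same approach to the ▚ the question started with, a minimal sketch (using the universal character name \u259A instead of a pasted glyph):

#include <stdio.h>
#include <io.h>
#include <fcntl.h>

int main(void)
{
    _setmode(_fileno(stdout), _O_U16TEXT);
    /* U+259A QUADRANT UPPER LEFT AND LOWER RIGHT: ▚ */
    wprintf(L"\u259A\n");
    return 0;
}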

Output angle symbol in C

I want to create a console application for phasor* addition in C.
I want C to display the ∠ symbol, but I don't know how to make C display Unicode.
I simply tried printf("∠"); but it printed a question mark in a box.
I'm using MinGW.
Thanks in advance.
*A phasor is a concept from AC electrical theory.
To print the angle symbol, use "\u2220". If you are using C99:
#include <stdio.h>

int main(void)
{
    puts("\u2220");
}
As r3mainer pointed out in Printing a Unicode Symbol in C, the code below works with C89:
#include <stdio.h>
#include <wchar.h>
#include <locale.h>

int main(void) {
    setlocale(LC_CTYPE, "");
    wchar_t phasor = 0x2220;
    wprintf(L"%lc\n", phasor);
}
Check whether your platform's wchar_t encoding is UTF-16 or UTF-32.
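A small sketch for that check (sizeof(wchar_t) is 2 on Windows, i.e. UTF-16 code units, and typically 4 elsewhere, i.e. UTF-32; the standard macro __STDC_ISO_10646__ is defined when wchar_t values are Unicode code points directly):

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* 2 -> UTF-16 code units (Windows); 4 -> UTF-32 (most Unix systems) */
    printf("sizeof(wchar_t) = %u\n", (unsigned)sizeof(wchar_t));
#ifdef __STDC_ISO_10646__
    printf("wchar_t holds Unicode code points directly\n");
#else
    printf("wchar_t encoding is implementation-defined\n");
#endif
    return 0;
}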

How to change console program for unicode support in windows?

The following program can be compiled with MSVC or MinGW. However, the MinGW version cannot display the Unicode string correctly. Why? How can I fix that?
Code:
#include <stdio.h>
#include <windows.h>
#include <io.h>
#include <fcntl.h>

int wmain(void)
{
    _setmode(_fileno(stdout), _O_U16TEXT);
    _putws(L"哈哈哈");
    system("pause");
    return 0;
}
Mingw64 Compile Command:
i686-w64-mingw32-gcc -mconsole -municode play.c
MSVC compiled: (screenshot omitted; the string displays correctly.)
MinGW compiled: (screenshot omitted; the string does not display correctly.)
Edit:
After some testing, the problem does not seem to be caused by MinGW. If I run the program directly by double-clicking the app, the Unicode string is not displayed correctly either, even though the code page is the same, 437.
It turns out the problem is related to the console font, not the compiler. See the demo code below for changing the console font.
This is happening because of the missing #define UNICODE and #define _UNICODE. Try adding them before the other headers. The _UNICODE symbol is used with headers such as tchar.h to direct standard C functions such as printf() and fopen() to their Unicode versions.
Please note: the -municode option is still required when linking if Unicode mode is used.
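A minimal sketch of what that suggestion looks like applied to the program above (the defines must precede the headers):

#define UNICODE
#define _UNICODE
#include <stdio.h>
#include <windows.h>
#include <io.h>
#include <fcntl.h>

int wmain(void)
{
    _setmode(_fileno(stdout), _O_U16TEXT);
    _putws(L"哈哈哈");
    return 0;
}

Compiled as before with: i686-w64-mingw32-gcc -mconsole -municode play.c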
After doing some research, it turns out the default console font does not support Chinese glyphs. One can change the console font using the SetCurrentConsoleFontEx function.
Demo Code:
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#endif
#include <stdio.h>
#include <io.h>
#include <fcntl.h>
#include <windows.h>

#define FF_SIMHEI 54

int main(int argc, char const *argv[])
{
    CONSOLE_FONT_INFOEX cfi = {0};
    cfi.cbSize = sizeof(CONSOLE_FONT_INFOEX);
    cfi.nFont = 0;
    cfi.dwFontSize.X = 8;
    cfi.dwFontSize.Y = 16;
    cfi.FontFamily = FF_SIMHEI;
    cfi.FontWeight = FW_NORMAL;
    wcscpy(cfi.FaceName, L"SimHei");
    SetCurrentConsoleFontEx(GetStdHandle(STD_OUTPUT_HANDLE), FALSE, &cfi);

    /* UTF-8 string */
    SetConsoleOutputCP(CP_UTF8); /* Thanks to Eryk Sun for the note: remove this line on Windows 7 or 8 */
    puts(u8"UTF-8你好");

    /* UTF-16 string */
    _setmode(_fileno(stdout), _O_U16TEXT);
    _putws(L"UTF-16你好");

    system("pause");
    return 0;
}
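One caveat: the font change applies to the console window itself, so it outlives the program. A hedged sketch of saving and restoring the previous font with GetCurrentConsoleFontEx, which mirrors the setter's signature (the _WIN32_WINNT define is an assumption for MinGW builds, where the *FontEx types are guarded by it):

#define _WIN32_WINNT 0x0600 /* assumed: exposes CONSOLE_FONT_INFOEX on MinGW */
#include <windows.h>
#include <wchar.h>

int main(void)
{
    HANDLE hout = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_FONT_INFOEX saved = { sizeof saved };
    CONSOLE_FONT_INFOEX cfi = { sizeof cfi };

    GetCurrentConsoleFontEx(hout, FALSE, &saved); /* remember current font */

    cfi.dwFontSize.Y = 16;
    cfi.FontWeight = FW_NORMAL;
    wcscpy(cfi.FaceName, L"SimHei");
    SetCurrentConsoleFontEx(hout, FALSE, &cfi);

    /* ... print CJK text here ... */

    SetCurrentConsoleFontEx(hout, FALSE, &saved); /* put the old font back */
    return 0;
}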

vswprintf keeps prefixing a Byte Order Mark character

I am still a rookie with C, and even newer to wide chars in C.
The code below should show
4 points to Smurfs
but it shows the same text with an extra leading character that renders as a space:
 4 points to Smurfs
In gdb I see this:
(gdb) p buffer
$1 = L" 4 points to Smurfs",
But when I copy paste from the console, the spaces are magically gone:
(gdb) p buffer
$1 = L"4 points to Smurfs",
Also, buffer[0] contains this according to gdb:
65279 L' '
Apparently the character in question, 65279, is the Unicode character ZERO WIDTH NO-BREAK SPACE (U+FEFF). I retyped the code, making sure I did not enter this. I don't know where it comes from. I also opened the code in Notepad per https://stackoverflow.com/a/9691839/7602 and there are no extra characters there.
I wouldn't care if ncurses would stop showing this as a space.
Code (heavily cut down):
#include <time.h>
#include <stdio.h>
#include <errno.h>
#include <wchar.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <locale.h>
#include <stdarg.h> /* va_list, va_start, va_end */
#define NCURSES_WIDECHAR 1
#include <ncursesw/ncurses.h>
#include "types.h"
#include "defines.h"
#include "externs.h"

WINDOW *term;

/* row column color n arguments */
void rccn(int row, int col, const wchar_t *fmt, ...)
{
    wchar_t buffer[80];
    int size;
    va_list args;

    va_start(args, fmt);
    size = vswprintf(buffer, 80, fmt, args);
    va_end(args);

    if (size >= 80) {
        mvaddwstr(row, col, L"Possible hacker detected!");
    } else {
        mvaddwstr(row, col, buffer);
    }
}

int main(void)
{
    int ch;

    setlocale(LC_ALL, "");
    term = initscr();
    rccn(1, 1, L"%i points to %ls", 4, L"Smurfs");
    ch = getch();
    return EXIT_SUCCESS;
}
The problem goes 'away' with
rccn(1, 1, L"%i points to %ls", 4, L"Smurfs" + 1);
as if the wide encoding of the string constant adds that character in front.
Found it. I had followed a tutorial that advised adding this compiler flag:
-fwide-exec-charset=utf-32
My code was not running on Cygwin at all, and I read that Windows is UTF-16 centered, so I removed that compiler flag and it started working on Cygwin. Then, out of curiosity, I removed the flag on Raspbian, and it now works as expected there as well: no more byte order marks. (Presumably the endianness-unspecified "utf-32" charset makes the compiler prepend a BOM to every wide string literal; an explicit -fwide-exec-charset=UTF-32LE should avoid that, but the platform default is the safer choice.)
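For completeness, if you ever hit the same symptom without control over the build flags, a defensive check inside rccn() would also hide it. A sketch against the buffer from the code above (U+FEFF at index 0 is the tell-tale):

/* In rccn(), skip a byte order mark that a mis-configured
   wide exec charset may have baked into the wide literal. */
wchar_t *out = buffer;
if (out[0] == 0xFEFF)
    out++;
mvaddwstr(row, col, out);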

Printing a unicode box in C

I'm trying to print this medium-shade Unicode box in C: ▒
(I'm doing the exercises in K&R and got sidetracked on the one about making a histogram...). I know my Unix terminal (Mac OS X) can display the box, because I saved a text file containing the box, ran cat textfilewithblock, and it printed the block.
So far I initially tried:
#include <stdio.h>
#include <wchar.h>

int main(void) {
    wprintf(L"▒\n");
    return 0;
}
and nothing printed
iMac-2$ ./a.out
iMac-2:clang vik$
I did a search and found this: unicode hello world for C?
It seems I still have to set a locale (even though the executing environment is UTF-8? I'm still trying to figure out why this step is necessary). But anyway, it works! (After a bit of a struggle, I finally realized that the proper string is en_US.UTF-8 rather than en_US.utf8, which I had read somewhere...)
#include <stdio.h>
#include <wchar.h>
#include <locale.h>

int main(void) {
    setlocale(LC_ALL, "en_US.UTF-8");
    wprintf(L"▒\n");
    return 0;
}
Output is as follows:
iMac-2$ ./a.out
▒
iMac-2$
But when I try the following code, putting in the UTF-8 hex for the box, 0xe29692 (which I got from http://www.utf8-chartable.de/unicode-utf8-table.pl?start=9472&unicodeinhtml=dec ), rather than pasting in the box itself, it doesn't work again.
#include <stdio.h>
#include <wchar.h>
#include <locale.h>

int main(void) {
    setlocale(LC_ALL, "en_US.UTF-8");
    wchar_t box = 0xe29692;
    wprintf(L"%lc\n", box);
    return 0;
}
I'm clearly missing something but can't quite figure out what it is.
The Unicode value of the MEDIUM SHADE code point is not 0xe29692; it is 0x2592. <E2> <96> <92> is the 3-byte encoding of this code point in UTF-8.
You can print this thing either using the wide char APIs:
#include <stdio.h>
#include <wchar.h>
#include <locale.h>

int main(void) {
    setlocale(LC_ALL, "en_US.UTF-8");
    wchar_t box = 0x2592;
    wprintf(L"%lc\n", box); // or simply printf("%lc\n", box);
    return 0;
}
Or simply by printing the UTF-8 encoding directly:
#include <stdio.h>

int main(void) {
    printf("\xE2\x96\x92\n");
    return 0;
}
Or if your text editor encodes the source file in UTF-8:
#include <stdio.h>

int main(void) {
    printf("▒\n");
    return 0;
}
But be aware that this will not work: putchar('▒'); — putchar() writes a single byte, while '▒' is a multi-byte character constant whose value is implementation-defined.
Also, for full Unicode support and a few more goodies, I recommend iTerm2 on macOS.
The box character is U+2592, which translates to 0xE2 0x96 0x92 in UTF-8. This adaptation of your third program mostly works for me:
#include <stdio.h>
#include <wchar.h>
#include <locale.h>

int main(void)
{
    setlocale(LC_ALL, "en_US.UTF-8");
    wchar_t box = 0xe29692;  /* wrong: the UTF-8 byte sequence, not a code point */
    wprintf(L"%lc\n", box);
    wprintf(L"\n\nX\n\n");
    box = L'\u2592';         /* 0xE2 0x96 0x92 = U+2592 */
    wprintf(L"%lc\n", box);
    wprintf(L"\n\n0x%.8X\n\n", box);
    box = 0x2592;
    wprintf(L"%lc\n", box);
    return 0;
}
The output I get is:
X
▒
0x00002592
▒
The first print operation produces nothing of use; the others work.
Testing on Mac OS X 10.10.5. I happen to be compiling with GCC 5.3.0 (which I compiled), but I got the same output with XCode 7.0.2 and clang.
