I'm using a C program to try to count the words in a text file. But when the text comes from man malloc > x, the hyphen also gets printed.
Finally, I found that the hyphen is a multi-character character.
Can anyone tell me the hyphen's ASCII code?
It's at about line 17 of the output of man malloc > x.
First of all, if you don't give us the character, we can't know what it is. A man page is always reformatted and modified to fit the display context.
I found that the hyphen is a multi-character character. Can anyone tell me the hyphen's ASCII code?
If it's a multi-character character, then it's not ASCII, it's Unicode. My guess is that it's:
‐
which is Unicode character 8208 (U+2010). Hint: in python3, run:
>>> print(ord('‐'))
8208
Now, to handle that, you need to include wchar.h, use a wchar_t * string, and count the characters using wcslen(). Since you like to read manuals:
man wcslen
As a snippet:
const wchar_t *s = L"This is a hyphen: `‐` !";
printf("%zu\n", wcslen(s));
N.B.: to avoid hyphenation of words when displaying your man page, you may want to set your COLUMNS environment variable to a very large value ;-)
N.B.2: you may also want to use nroff -mandoc /usr/share/man/man3/malloc.3 and look at the nroff options to better fit your usage and avoid hyphenation.
wprintf() takes a wchar_t string as argument and prints the string in the specified locale character encoding.
But I have noticed that when using printf() and passing it a UTF-8 string, the UTF-8 string will always be printed regardless of the specified locale character encoding (for example, if the UTF-8 string contains Arabic characters, and the locale is set to "C" (not "C.UTF-8"), then the Arabic characters will still be printed).
Am I correct that printf() doesn't care about the locale?
True, printf doesn't care about locale for C strings. If you pass it a UTF-8 string, it knows nothing about it; it just sees a sequence of bytes (hopefully terminated by an ASCII NUL). The bytes are then passed to the output as-is and are interpreted by the terminal (or whatever the output is). If the terminal is able to interpret UTF-8 sequences, it does so (if not, it tries to interpret them according to how it is configured, e.g. Latin-1 or the like), and if it is also able to print them correctly, it does so (sometimes it doesn't have the right font/glyph and prints unknown characters as ? or the like).
This is one of the big virtues (perhaps the biggest virtue) of UTF-8: it's just a string of reasonably ordinary bytes. If your code-editing environment knows how to let you type
printf("Cööl!\n");
and if your display environment (e.g. your terminal window) knows how to display it, you can just write that, and run it, and it works (as it sounds like you've discovered).
So you don't need special run-time support, you don't need special header files or libraries or anything, you don't need to write your code in some fancy new Unicodey way -- you can just keep on using ordinary C strings and printf and friends like you're used to, and it all just works.
Of course, those two if's can be big ones. If you can't figure out how to (or your code editing environment won't let you) type the characters, or if your display environment doesn't display them, you may be stuck, or you may have to do some hard work after all. (Display environments that don't properly display UTF-8 output from C programs are evidently quite common, based on the number of times the question gets asked here on SO.)
See also the "UTF-8 Everywhere" manifesto.
(Now, with all of this said, this doesn't mean that printf doesn't care about locale settings at all. There are aspects of the locale that printf may care about, and there may be character sets and encodings that printf might have to treat specially, in a locale-dependent way. But since printf doesn't have to do anything special to make UTF-8 work right, that one aspect of the locale -- although it's a biggie -- doesn't end up affecting printf at all.)
Let's consider the following simple program, which uses printf() to print a wide string if run without command-line arguments, and wprintf() otherwise:
#include <stdlib.h>
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
const wchar_t hello1[] = L"تحية طيبة";
const wchar_t hello2[] = L"Tervehdys";
int main(int argc, char *argv[])
{
    if (!setlocale(LC_ALL, ""))
        fprintf(stderr, "Warning: Current locale is not supported by the C library.\n");

    if (argc <= 1) {
        printf("printf 1: %ls\n", hello1);
        printf("printf 2: %ls\n", hello2);
    } else {
        wprintf(L"wprintf: %ls\n", hello1);
        wprintf(L"wprintf: %ls\n", hello2);
    }
    return EXIT_SUCCESS;
}
Using the GNU C library and any UTF-8 locale:
$ ./example
printf 1: تحية طيبة
printf 2: Tervehdys
$ ./example wide
wprintf: تحية طيبة
wprintf: Tervehdys
i.e. both produce the exact same output. However, if we run the example in the C/POSIX locale (that only supports ASCII), we get
$ LANG=C LC_ALL=C ./example
printf 1: printf 2: Tervehdys
i.e., the first printf() stopped at the first non-ASCII character (and that's why the second printf() printed on the same line);
$ LANG=C LC_ALL=C ./example wide
wprintf: ???? ????
wprintf: Tervehdys
i.e. wprintf() replaces wide characters that cannot be represented in the charset used by the current locale with a ?.
So, if we consider the GNU C library (which exhibits this behaviour), then we must say yes, printf cares about the locale, although it actually mostly cares about the character set used by the locale, and not the locale per se:
printf() will stop when trying to print wide strings that cannot be represented by the current character set (as defined by the locale). wprintf() will output question marks for those characters instead.
libc6 2.23-0ubuntu10 on x86-64 (amd64) does some replacements for multibyte characters in the printf format string, but multibyte characters in strings printed with %s are printed as-is. This means it is a bit complicated to say exactly what gets printed: whether printf() gives up at the first multibyte or wide character it cannot convert, or just prints it as-is.
However, wprintf() is pretty rock solid. (It too may choke if you try to print narrow strings with multibyte characters not representable in the character set used by the current locale, but for wide string stuff, it seems to work very well.)
Do note that POSIX.1 C libraries also provide iconv_open(), iconv(), and iconv_close() for converting strings, as well as mbstowcs() and wcstombs() to convert between wide and narrow/multibyte strings. You can also use asprintf() to create a dynamically allocated narrow string out of narrow and/or wide character strings (%s and %ls, respectively).
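For instance, a minimal sketch using asprintf() with %ls to build a narrow string from a wide one (asprintf() is a GNU/BSD extension, hence the feature macro):

#define _GNU_SOURCE              /* asprintf() is a GNU extension on glibc */
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");       /* use the environment's locale */
    const wchar_t *wide = L"Tervehdys";
    char *narrow = NULL;

    /* %ls converts the wide string to the current locale's multibyte encoding */
    if (asprintf(&narrow, "greeting: %ls", wide) != -1) {
        puts(narrow);
        free(narrow);
    }
    return EXIT_SUCCESS;
}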
I was writing a program to read multiple lines from a file.
The problem is I don't know the length of the lines, so I can't use fgets (because I need to give the size of the buffer) and can't use fscanf (because it stops at a space token).
I saw a solution that recommended using malloc and realloc for each character taken as input, but I think there's an easier way, and then I found someone suggesting using
fscanf(file,"%[^\n]",line);
Does anyone have a better solution, or can someone explain how the above works? (I haven't tested it.)
I use the GCC compiler, if that's needed.
You can use getline(3). It allocates memory on your behalf, which you should free when you are finished reading lines.
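A minimal sketch of that approach, reading from stdin:

#define _POSIX_C_SOURCE 200809L  /* for getline() */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *line = NULL;   /* getline() allocates and grows this buffer */
    size_t cap = 0;      /* current allocated capacity */
    ssize_t len;

    while ((len = getline(&line, &cap, stdin)) != -1)
        printf("read %zd bytes: %s", len, line);

    free(line);          /* a single free() at the end suffices */
    return 0;
}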
and then i found someone suggesting using fscanf(file,"%[^\n]",line);
That's practically an unsafe version of fgets(line, sizeof line, file). Don't do that.
If you don't know the maximum line length in advance, you have two options.
There's a LINE_MAX macro defined somewhere in the C library (AFAIK it's POSIX-only, but some implementations may have equivalents). It's a fair assumption that lines don't exceed that length.
You can go the "read and realloc" way, but you don't have to call realloc() for every character. A conventional solution to this problem is to grow the buffer exponentially, i.e. always double the allocated memory when it's exhausted, as in the sketch below.
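A sketch of that doubling strategy; read_line here is a hypothetical helper, not a standard function:

#include <stdio.h>
#include <stdlib.h>

/* Read one line of arbitrary length; the caller frees the result.
   Returns NULL on EOF (with nothing read) or on allocation failure. */
char *read_line(FILE *fp)
{
    size_t cap = 64, len = 0;
    char *buf = malloc(cap);
    if (!buf)
        return NULL;

    int c;
    while ((c = fgetc(fp)) != EOF && c != '\n') {
        if (len + 1 == cap) {              /* buffer full: double it */
            char *tmp = realloc(buf, cap *= 2);
            if (!tmp) {
                free(buf);
                return NULL;
            }
            buf = tmp;
        }
        buf[len++] = (char)c;
    }
    if (len == 0 && c == EOF) {            /* no data at all */
        free(buf);
        return NULL;
    }
    buf[len] = '\0';
    return buf;
}

int main(void)
{
    char *line;
    while ((line = read_line(stdin)) != NULL) {
        puts(line);
        free(line);
    }
    return 0;
}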
A simple format specifier for scanf or fscanf follows this prototype:
%specifier
As we know, d is the format specifier for integers. Along the same lines:
[characters] is a scanset: it matches any number of the characters specified between the brackets. A dash (-) that is not the first character may produce non-portable behavior in some library implementations.
[^characters] is a negated scanset: it matches any number of characters that are not among those specified between the brackets.
So
fscanf(file,"%[^\n]",line);
reads characters until the first occurrence of any character in the negated scanset, in this case the newline character.
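For instance, a safer variant bounds the read with a field width and then consumes the newline (the buffer size here is an arbitrary choice):

#include <stdio.h>

int main(void)
{
    char line[100];

    /* width 99 leaves room for the terminating '\0';
       %*c consumes the newline left in the stream without storing it.
       Caveat: on an empty line the scanset matches nothing, fscanf
       returns 0, and the loop stops. */
    while (fscanf(stdin, "%99[^\n]%*c", line) == 1)
        printf("got: %s\n", line);

    return 0;
}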
As others suggested, you can use getline() or fgets(); see the examples above.
The line fscanf(file,"%[^\n]",line); means that it will read anything other than \n into line. This should work on Linux and Windows, I think. But it may not work with old classic Mac OS-format files, which use \r to end a line.
I'm primarily interested in the Unix-like systems (e.g., portable POSIX) as it seems like Windows does strange things for wide characters.
Do the read and write wide character functions (like getwchar() and putwchar()) always "do the right thing", for example read from utf-8 and write to utf-8 when that is the set locale, or do I have to manually call wcrtomb() and print the string using e.g. fputs()? On my system (openSUSE 12.3) where $LANG is set to en_GB.UTF-8 they do seem to do the right thing (inspecting the output I see what looks like UTF-8 even though strings were stored using wchar_t and written using the wide character functions).
However I am unsure if this is guaranteed. For example cprogramming.com states that:
[wide characters] should not be used for output, since spurious zero
bytes and other low-ASCII characters with common meanings (such as '/'
and '\n') will likely be sprinkled throughout the data.
Which seems to indicate that outputting wide characters (presumably using the wide character output functions) can wreak havoc.
Since the C standard does not seem to mention coding at all I really have no idea who/when/how coding is applied when using wchar_t. So my question is basically if reading, writing and using wide characters exclusively is a proper thing to do when my application has no need to know about the encoding used. I only need string lengths and console widths (wcswidth()), so to me using wchar_t everywhere when dealing with text seems ideal.
The relevant text governing the behavior of the wide character stdio functions and their relationship to locale is from POSIX XSH 2.5.2 Stream Orientation and Encoding Rules:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_05_02
Basically, the wide character stdio functions always write in the encoding that's in effect (per the LC_CTYPE locale category) at the time the FILE stream becomes wide-oriented; this means the first time a wide stdio function is called on it, or fwide is used to set the orientation to wide. So as long as a proper LC_CTYPE locale is in effect matching the desired "system" encoding (e.g. UTF-8) when you start working with the stream, everything should be fine.
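For instance, a sketch that pins the orientation explicitly with fwide() before any other I/O on the stream (a UTF-8 LC_CTYPE in the environment is assumed):

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_CTYPE, "");   /* the encoding in effect now is captured... */
    fwide(stdout, 1);          /* ...when the stream becomes wide-oriented */
    wprintf(L"stdout is now wide-oriented\n");
    return 0;
}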
However, one important consideration you should not overlook is that you must not mix byte and wide oriented operations on the same FILE stream. Failure to observe this rule is not a reportable error; it simply results in undefined behavior. As a good deal of library code assumes stderr is byte oriented (and some even makes the same assumption about stdout), I would strongly discourage ever using wide-oriented functions on the standard streams. If you do, you need to be very careful about which library functions you use.
Really, I can't think of any reason at all to use wide-oriented functions. fprintf is perfectly capable of sending wide-character strings to byte-oriented FILE streams using the %ls specifier.
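A minimal sketch of that, keeping stdout byte-oriented throughout:

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");     /* %ls converts using the locale's encoding */
    const wchar_t *w = L"naïve café";
    printf("%ls\n", w);        /* stdout never becomes wide-oriented */
    return 0;
}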
So long as the locale is set correctly, there shouldn't be any issues processing UTF-8 files on a system using UTF-8, using the wide character functions. They'll be able to interpret things correctly, i.e. they'll treat a character as 1-4 bytes as necessary (in both input and output). You can test it out by something like this:
#include <stdio.h>
#include <locale.h>
#include <wchar.h>
int main(void)
{
    setlocale(LC_CTYPE, "en_GB.UTF-8");
    // setlocale(LC_CTYPE, ""); // to use the environment variable instead
    wchar_t *txt = L"£Δᗩ";
    wprintf(L"The string %ls has %zu characters\n", txt, wcslen(txt));
}
$ gcc -o loc loc.c && ./loc
The string £Δᗩ has 3 characters
If you use the standard functions (in particular character functions) on multibyte strings carelessly, things will start to break, e.g. the equivalent:
char *txt = "£Δᗩ";
printf("The string %s has %zu characters\n", txt, strlen(txt));
$ gcc -o nloc nloc.c && ./nloc
The string £Δᗩ has 7 characters
The string still prints correctly here because it's essentially just a stream of bytes, and as the system is expecting UTF-8 sequences, they're translated perfectly. Of course strlen is reporting the number of bytes in the string, 7 (plus the \0), with no understanding that a character and a byte aren't equivalent.
In this respect, because of the compatibility between ASCII and UTF-8, you can often get away with treating UTF-8 files as simply multibyte C strings, as long as you're careful.
There's a degree of flexibility as well. It's possible to convert a standard C string (as a multibyte string) to a wide character string easily:
char *stdtxt = "ASCII and UTF-8 €£¢";
wchar_t buf[100];
mbstowcs(buf, stdtxt, 20);
wprintf(L"%ls has %zu wide characters\n", buf, wcslen(buf));
Output:
ASCII and UTF-8 €£¢ has 19 wide characters
Once you've used a wide character function on a stream, its orientation is set to wide. If you later want to use byte I/O functions on it, you'll need to re-open the stream first (e.g. with freopen()). This is probably why the recommendation is not to do this on stdout. However, if you only use wide character functions on stdin and stdout (including in any code that you link to), you will not have any problems.
Don't use fputs with anything other than ASCII.
If you want to write, let's say, UTF-8, then use a function that returns the real size used by the UTF-8 string, and use fwrite to write that number of bytes, without worrying about stray '\0' bytes inside the string.
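A sketch of that idea, with wcstombs() as the function reporting the real byte size (a UTF-8 locale is assumed):

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");                    /* assume a UTF-8 locale */
    const wchar_t *w = L"héllo";
    char buf[64];

    /* wcstombs() returns the number of bytes the multibyte (here UTF-8)
       representation occupies, excluding the terminator */
    size_t nbytes = wcstombs(buf, w, sizeof buf);
    if (nbytes != (size_t)-1) {
        fwrite(buf, 1, nbytes, stdout);       /* write exactly that many bytes */
        fputc('\n', stdout);
    }
    return 0;
}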
I've been trying to use regular expressions with scanf, in order to read a string of at most n characters and discard anything else until the newline character. Any spaces should be treated as regular characters, thus included in the string to be read.
I've studied a Wikipedia article about regular expressions, yet I can't get scanf to work properly. Here is some code I've tried:
scanf("[ ]*%ns[ ]*[\n]", string);
[ ] is supposed to match the actual space character, * is supposed to mean "zero or more", n is the number of characters to read, and string is a pointer allocated with malloc.
I have tried several different combinations; however, I tend to get only the first word of a sentence read (it stops at the space character). Furthermore, * seems to discard a character instead of meaning "zero or more"...
Could anybody explain in detail how such expressions are interpreted by scanf? What is more, is it efficient to use getc repeatedly instead?
Thanks in Advance :D
The short answer: scanf does not handle regular expressions, strictly speaking.
If you want to use regular expressions in C, you could use the POSIX regex library. See the following question for a basic example of its usage: Regular expressions in C: examples?
Now if you want to do it the scanf way you could try something like
scanf("%*[ ]%ns%*[ ]\n",str);
Replace the n in %ns with the maximum number of characters to read from the input stream.
The %*[ ] part asks to read and discard any spaces: the * suppresses assignment of what the scanset matches. You could add other characters between the brackets to ignore more than just spaces.
I'm not sure the above scanf would work exactly as intended, though, since spaces also interact with the %s directive.
I would definitely go with an fgets call, then trim the surrounding whitespace with something like the following: How do I trim leading/trailing whitespace in a standard way? (see the sketch below)
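A minimal sketch of that fgets-then-trim approach (the buffer size is an arbitrary choice):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];

    if (fgets(line, sizeof line, stdin)) {
        /* strip trailing whitespace, including the newline */
        size_t len = strlen(line);
        while (len > 0 && isspace((unsigned char)line[len - 1]))
            line[--len] = '\0';

        /* skip leading whitespace */
        char *start = line;
        while (isspace((unsigned char)*start))
            start++;

        printf("trimmed: \"%s\"\n", start);
    }
    return 0;
}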
is it efficient to use getc repetitively instead?
Depends somewhat on the application, but YES, repeated getc() is efficient.
Unless I read the question wrong, %[^\n] will save everything until the newline is encountered (note: no quotes inside the scanset, and no trailing s).
I'm trying to print out a wchar_t* string.
Code goes below:
#include <stdio.h>
#include <string.h>
#include <wchar.h>
char *ascii_ = "中日友好"; //line-1
wchar_t *wchar_ = L"中日友好"; //line-2
int main()
{
    printf("ascii_: %s\n", ascii_);     //line-3
    wprintf(L"wchar_: %s\n", wchar_);   //line-4
    return 0;
}
//Output
ascii_: 中日友好
Question:
Apparently I should not assign CJK characters to a char * pointer in line-1, but I just did it, and the output of line-3 is correct. So why? How could printf() in line-3 give me the non-ASCII characters? Does it know the encoding somehow?
I assume the code in line-2 and line-4 is correct, but why didn't I get any output from line-4?
First of all, it's usually not a good idea to use non-ASCII characters in source code. What's probably happening is that the Chinese characters are being encoded as UTF-8, which is compatible with ASCII.
Now, as for why the wprintf() isn't working: this has to do with stream orientation. Each stream can only be set to either byte or wide orientation. Once set, it cannot be changed. It is set the first time the stream is used (byte-oriented here, due to the printf). After that, the wprintf will not work due to the incorrect orientation.
In other words, once you use printf() you need to keep on using printf(). Similarly, if you start with wprintf(), you need to keep using wprintf().
You cannot intermix printf() and wprintf(). (except on Windows)
EDIT:
To answer the question about why the wprintf line doesn't work even by itself: it's probably because the code is being compiled so that the UTF-8 form of 中日友好 is stored in wchar_. However, wchar_t needs a 4-byte Unicode encoding (2 bytes on Windows).
So there's two options that I can think of:
Don't bother with wchar_t, and just stick with multi-byte chars. This is the easy way, but may break if the user's system is not set to the Chinese locale.
Use wchar_t, but you will need to write the Chinese characters using Unicode escape sequences, as sketched below. This will obviously make them unreadable in the source code, but it will work on any machine that can print Chinese character fonts, regardless of the locale.
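A sketch of that second option; the escapes are the actual code points of 中日友好:

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");
    /* \u escapes for 中日友好: U+4E2D U+65E5 U+53CB U+597D */
    const wchar_t *w = L"\u4E2D\u65E5\u53CB\u597D";
    wprintf(L"%ls\n", w);
    return 0;
}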
Line 1 is not ASCII; it's whatever multibyte encoding is used by your compiler at compile time. On modern systems that's probably UTF-8. printf does not know the encoding. It's just sending bytes to stdout, and as long as the encodings match, everything is fine.
One problem you should be aware of is that lines 3 and 4 together invoke undefined behavior. You cannot mix character-based and wide-character io on the same FILE (stdout). After the first operation, the FILE has an "orientation" (either byte or wide), and after that any attempt to perform operations of the opposite orientation results in UB.
You are omitting one step and therefore thinking about it the wrong way.
You have a C file on disk, containing bytes. You have an "ASCII" string and a wide string.
The ASCII string takes the bytes exactly as they are in line 1 and outputs them.
This works as long as the encoding on the user's side is the same as the one on the programmer's side.
The wide string is first decoded from the given bytes into Unicode code points stored in the program (maybe this step goes wrong on your side). On output, they are encoded again according to the encoding on the user's side. This ensures that these characters are emitted as they are intended, not merely as they were entered.
Either your compiler assumes the wrong encoding, or your output terminal is set up the wrong way.