Wrong char output for _Generic print in C

I have a macro with _Generic for printing:

#define PRINT(data) \
    _Generic((data), \
        char: print_char)(data)

and this is the implementation of print_char:

void print_char(char data) {
    printf("%c\n", data);
}
The problem is that when I call PRINT('t'), for example, it prints 116 instead of t. The workaround I found was to add a (char) cast in the PRINT statement, like PRINT((char)'t').
The question is how can I print the char without the cast?

For historical reasons, dating back to the early days when C barely had a type system and treated more or less everything as int, a lot of things we would expect to be char are actually still int. There are many examples in the language:
Character constants like 'A' have type int.
getchar returns an int.
EOF is an int.
The ctype.h functions take int, not char.
And so on. This is a well-known language defect. C++ fixed several of these defects very early on, but C insists on remaining broken even to this day.
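A quick way to see this for yourself (a minimal check, nothing more): in C, sizeof 'A' equals sizeof(int), not 1, whereas C++ changed the constant's type to char.

#include <stdio.h>

int main(void)
{
    /* In C, character constants have type int, so these two match. */
    printf("%zu %zu\n", sizeof 'A', sizeof(int)); /* e.g. "4 4" on most platforms */
    printf("%zu\n", sizeof(char));                /* always 1 */
    return 0;
}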

In C, plain character constants such as 't' have type int.
The PRINT macro could be changed to the following:
#define PRINT(data) \
    _Generic((data), \
        int: print_char)(data)
If it is desired that PRINT should do something else for a "normal" integer value, then that would not be a suitable solution.
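Putting both answers together, a minimal runnable sketch (the main function and test values are added here for illustration): map both char and int to print_char, so character constants work without a cast, accepting the caveat above that plain integers will then also be printed as characters.

#include <stdio.h>

void print_char(char data) {
    printf("%c\n", data);
}

/* Character constants such as 't' have type int in C,
   so the int branch must also select print_char. */
#define PRINT(data) _Generic((data), \
    char: print_char,                \
    int:  print_char)(data)

int main(void)
{
    char c = 'x';
    PRINT('t'); /* selects the int branch, prints t */
    PRINT(c);   /* selects the char branch, prints x */
    return 0;
}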

Related

C define string as char

Is it possible to define a string as a char in C like this? I think C calls it a multi-character constant.
#define OK '_/'
I want C to treat '_/' as a char from now on, not a string, so this:
printf("%c", OK);
prints _/ and not /
While it is technically valid C to define OK as '_/', the value of a multi-character character constant is implementation-defined, so this is probably not something you want to do.
There is no way you will be able to print more than one character without resorting to strings.
Multi-character constants have type int and their value is not strictly defined; it is platform-dependent. So using them as normal letters is not the best idea: even though you can use them in every context like a normal char, there is no guarantee that they will be compiled as you intend (as in your example, where you get only the last character of your string).
Here is an explanation of the topic:
Multiple characters in a character constant
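A short sketch of the difference (OK_CHAR and OK_STR are hypothetical names used only for this demo); the value printed for the multi-character constant is implementation-defined, and many compilers keep just the last character, while the string version prints both characters portably:

#include <stdio.h>

#define OK_CHAR '_/' /* int with an implementation-defined value */
#define OK_STR  "_/" /* portable: a two-character string */

int main(void)
{
    printf("%c\n", OK_CHAR); /* typically prints just / */
    printf("%s\n", OK_STR);  /* prints _/ */
    return 0;
}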

integer to string converter (using macros)

I was learning the basics of macros. I defined a macro as follows:
#define INTTOSTR(int) #int
to convert an integer to a string.
Does this macro perfectly convert an integer to a string? I mean, are there situations where this macro can fail?
Can I use this macro to replace standard library functions like itoa()?
for example:
#include <stdio.h>
#include <stdlib.h> /* itoa is nonstandard; on some compilers it is declared here */

int main()
{
    int a = 56;
    char ch[] = INTTOSTR(56);
    char ch1[10];
    itoa(56, ch1, 10);
    printf("%s %s", ch, ch1);
    return 0;
}
The above program works as expected.
Interestingly, this macro can even convert a float value to a string.
for example:
INTTOSTR(53.5);
works nicely.
Until now I have been using the itoa function for converting int to string in all my projects. Can I confidently replace itoa in all my projects? I know there is less overhead in using a macro than in a function call.
Macros are expanded during (before, to be exact) compilation, so you can convert a literal number in your source code to a string, but not a number stored in a variable.
In your example, INTTOSTR(56) uses the stringification operator of the preprocessor, which eventually results in "56". If you called it on a variable, you'd get the variable name but not its content.
In C, you can use itoa, or if you would rather avoid a nonstandard function, use snprintf, for instance:
char my_str[12]; /* large enough for any 32-bit int, sign and terminator included */
snprintf(my_str, sizeof my_str, "%i", my_int);
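A small self-contained sketch of the difference (buf and n are illustrative names); the macro stringifies the token it is given at preprocessing time, while snprintf converts the value at run time:

#include <stdio.h>

#define INTTOSTR(x) #x

int main(void)
{
    int n = 56;
    char buf[12]; /* large enough for any 32-bit int */

    snprintf(buf, sizeof buf, "%d", n);
    printf("%s\n", INTTOSTR(56)); /* prints 56 - a literal stringifies fine */
    printf("%s\n", INTTOSTR(n));  /* prints n  - the token, not the value!  */
    printf("%s\n", buf);          /* prints 56 - converted at run time      */
    return 0;
}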
The problem with your macro is that you are thinking about constants, but it will break as soon as you need to use a variable holding an integer. Your macro would stringify the variable's name rather than the value it holds.
If you are fine with constants, your macro is "good"; otherwise it is broken.
Your macro does not convert integers to strings; it converts a literal into a string literal, which is something very different.
Literals are plain numbers or definitions of values in your code. When you write int x = 10;, the numeral 10 is an integer literal, while x is a variable and int is the type. const char* ten = "10"; also defines a literal, in this case a string literal with the value "10", and a variable called ten which points to the address where this literal is stored. What your macro actually does is change the way the literal is represented, before any actual compilation goes on, from an integer literal into a string literal.
So the actual change is done before any compilation, purely at the source-code level. Macros are not functions and cannot inspect memory, so your conversion would not work with variables. If you try:
int x = 10;
const char* ten = INTTOSTR(x);
You would be very puzzled to find that your variable ten actually holds the value "x". That's because x is treated as a literal token, not as a variable.
If you want to see what's going on, I recommend asking your compiler to stop after preprocessing and looking at the output before your code is actually compiled. You can do this in GCC by passing the -E flag.
PS. Regarding the apparent "success" with converting float values: it just goes to show the danger of macros, which are not type-safe. The preprocessor does not look at 53.5 as a float, but as a token made of the characters 5, 3, . and 5 in the source code.

Different behavior in Visual C++ versus gcc/clang while stringifying a parameter which contains a comma

I'm using the stringizing operator to convert a parameter which may contain a comma into a string. As I understand it, some characters cannot be stringified on their own: notably the comma (,), because it is used to delimit parameters, and the right parenthesis ()), because it marks the end of the parameter list. So I use a variadic macro to pass commas to the stringizing operator, like this:
#include <stdio.h>

#define TEST 10, 20
#define MAKE_STRING(...) #__VA_ARGS__
#define STRING(x) MAKE_STRING(x)

int main()
{
    printf("%s\n", STRING(TEST));
    return 0;
}
It works fine. But it occurred to me to wonder what would happen without the variadic macro, so I modified it to: #define MAKE_STRING(x) #x. Unexpectedly, this compiles fine in Visual C++ 2008/2010 and outputs 10, 20, while gcc/clang give the expected compilation error:
macro "MAKE_STRING" passed 2 arguments, but takes just 1
So my question: is Visual C++ doing additional work, or is the behavior undefined?
VS in general allows extra arguments to macros and then just drops them silently:
STRING(10, 20, 30) still works and prints 10. That is not exactly the case here, but it does mean VS doesn't even have the error gcc threw at you.
It's not any additional work, but "merely" a difference in substitution order.
I am not sure if this will answer your question, but I hope it will help you solve your problem. When defining a string constant in C, you should enclose it in double quotes (for spaces). Also, the # operator wraps the macro argument in double quotes, so, for example, #a becomes "a".
#include <stdio.h>

#define TEST "hello, world"
#define MAKE_STRING(x) #x

int main()
{
    int a;
    printf("%s\n", TEST);
    printf("%s\n", MAKE_STRING(a));
    return 0;
}
I compiled this code using gcc 4.7.1 and the output is:
hello, world
a
I don't know why this has upvotes, or why an answer got downvoted (so the poster deleted it), but I don't know what you expect!
#__VA_ARGS__ makes no sense otherwise; suppose I have MACRO(a,b,c), do you want "a,b,c" as the string?
http://gcc.gnu.org/onlinedocs/cpp/Variadic-Macros.html#Variadic-Macros
Read that; this became standard behaviour. Variable-length arguments in macros allow the same things they do for functions. The preprocessor operates on text!
The only special case involving # is ##, which deletes the comma before the ##__VA_ARGS__ if there are no extra arguments (thus preventing a syntax error).
NOTE:
It is really important that you decide what you expect from MACRO(a,b,c): the string "a,b,c", or "a, b, c"? If you want the string "a, b, c", WRITE THE STRING "a, b, c".
Using the # operator is great for things like:
#define REGISTER_THING(THING) core_of_program.register_thing(THING); printf("%s registered\n", #THING);
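As a side note, the reason the question needs the two-level STRING/MAKE_STRING pair is expansion order; a short sketch (DIRECT is a hypothetical one-level version added for comparison):

#include <stdio.h>

#define TEST 10, 20
#define MAKE_STRING(...) #__VA_ARGS__
#define STRING(x) MAKE_STRING(x)
#define DIRECT(...) #__VA_ARGS__

int main(void)
{
    printf("%s\n", STRING(TEST)); /* prints 10, 20 - TEST is expanded first */
    printf("%s\n", DIRECT(TEST)); /* prints TEST - no expansion before #    */
    return 0;
}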

K&R C Programming Language 1.5.1 (File Copying) [duplicate]

This question already has answers here:
Difference between int and char in getchar/fgetc and putchar/fputc? (2 answers)
Well, some months ago I read another "well-known" C book (in my language), and I never learned anything about this. The way K&R covers three chapters in 20 pages is simply amazing; of course I can't expect huge explanations, but it also raises questions.
I have a question about point 1.5.1.
The book says (page 16):
#include <stdio.h>

int main()
{
    int c; /* <-- Here is the question */
    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
[...] The type char is specifically meant for storing such character data, but any integer type can be used. We used int for a subtle but important reason.
The problem is distinguishing the end of input from valid data. The solution is that getchar returns a distinctive value when there is no more input, a value that cannot be confused with any real character. This value is called EOF, for "end of file". We must declare c to be a type big enough to hold any value that getchar returns. We can't use char since c must be big enough to hold EOF in addition to any possible char. Therefore we use int. [...]
After searching Google for another explanation:
EOF is a special macro representing End Of File (Linux: use CTRL+D on the keyboard to create this; Windows: use CTRL+Z, which may have to be at the beginning of a new line, followed by RETURN). Often EOF = -1, but it is implementation-dependent. It must be a value that is not valid for any possible character. For this reason, c is of type int (not char, as one might have expected).
So I modified the source from int to char to see what the problem with taking EOF values would be... but there is no problem. It works the same way.
I also didn't understand how getchar takes every character I write and prints everything. The int type is 4 bytes long, so it could hold 4 characters in one variable.
But I can type any number of characters, and it reads and writes everything the same way.
And with char, the same happens...
What really happens? Where are the values stored when there are more than 1-4 characters?
So I modified the source from int to char to see what the problem with taking EOF values would be... but there is no problem. It works the same way.
It happens to work the same way. It all depends on the real type of char, i.e. whether it's signed or unsigned. There's also a C FAQ about this very subject. You're more likely to see the bug if your chars are unsigned.
The bug can go undetected for a long time, however, if chars are
signed and if the input is all 7-bit characters.
EDIT
The last question is: the char type is one byte long, and int is 4 bytes long. So a char will only take one ASCII character. But if I type "stack overflow is over 1byte long", the output will be "stack overflow is over 1byte long". Where is "tack overflow is over 1byte long" stored, and how does putchar put out an entire string?
Each character will be stored in c in turn. So the first time, getchar() will return s, and putchar will send it on its way. Then t will come along, and so on. At no point will c store more than one character. So although you feed the program a large string, it deals with it by eating one character at a time.
Separating into two answers:
Why int and not char
Short and formal answer: if you want to be able to represent all real characters, plus one more non-real character (EOF), you can't use a datatype that's designed to hold only real characters.
Answer that can be understood but is not entirely accurate: the function getchar() returns the ASCII code of the character it reads, or EOF.
Because -1 cast to char compares equal to the character 255, we can't distinguish between the 255 character and EOF. That is,
char a = 255;
char b = EOF;
a == b // Evaluates to TRUE
but,
int a = 255;
int b = EOF;
a == b // Evaluates to FALSE
So using char won't allow you to distinguish between a character whose ASCII code is 255 (which could happen when reading from a file), and EOF.
How come you can use putchar() with an int
The function putchar() looks at its parameter, sees a number, goes to the ASCII table, and draws the glyph it finds. When you pass it an int, it is converted to unsigned char. If the number in the int fits in a char, all is good and nobody notices anything.
If you use char to store the result of getchar(), there are two potential problems; which one you'll meet depends on the signedness of char:
if char is unsigned, c == EOF will never be true and you'll get an infinite loop;
if char is signed, c == EOF will be true when you input some valid character. Which one depends on the charset used; in locales using ISO 8859-1 or CP852 it is 'ÿ' when EOF is -1 (the most common value). Some charsets, for instance UTF-8, don't use the value (char)EOF in valid codes, but you can rarely guarantee that your program will only run on implementations where char is signed and only be used in non-problematic locales.
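For illustration, a minimal sketch of the broken variant of the K&R loop above; depending on the signedness of char it either never terminates or stops early on a valid byte:

#include <stdio.h>

int main(void)
{
    char c; /* wrong: should be int */
    c = getchar();
    while (c != EOF) { /* unsigned char: condition always true -> infinite loop */
        putchar(c);    /* signed char: a valid 0xFF byte is mistaken for EOF    */
        c = getchar();
    }
    return 0;
}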

Implicit conversion in C?

What's going on here:
printf("result = %d\n", 1);
printf("result = %f\n", 1);
outputs:
result = 1
result = 0.000000
If I ensure the type of these arguments before trying to print them, it works fine, of course. Why is the second 1 not implicitly converted to 1.000000?
In the second case you have a mismatch between your format string and the argument type - the result is therefore undefined behavio(u)r.
The reason the 1 is not converted to 1.0 is that printf is “just” a C function with a variable number of arguments, and only the first (required) argument has a specified type (const char *). Therefore the compiler “cannot” know that it should be converting the “extra” argument—it gets passed before printf actually reads the format string and determines that it should get a floating point number.
Now, admittedly your format string is a compile-time constant and therefore the compiler could make a special case out of printf and warn you about incorrect arguments (and, as others have mentioned, some compilers do this, at least if you ask them to). But in the general case it cannot know the specific formats used by arbitrary vararg functions, and it's also possible to construct the format string in complex ways (e.g. at runtime).
To conclude, if you wish to pass a specific type as a “variable” argument, you need to cast it.
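A minimal sketch of that advice in practice: cast, or simply write a floating-point literal, so the argument matches the format:

#include <stdio.h>

int main(void)
{
    printf("result = %d\n", 1);         /* int matches %d                  */
    printf("result = %f\n", (double)1); /* cast so the argument matches %f */
    printf("result = %f\n", 1.0);       /* or just use a double literal    */
    return 0;
}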
This is undefined behavior: an int is being passed where a double is expected.
The short answer is that printf isn't really C++. printf is a C function which takes a variable argument list and applies the provided arguments to the format string based on the types specified in the format string.
If you want any sort of actual type checking, you should use streams and strings, the actual C++ alternatives to good old C-style printf.
Interesting; presumably it's fine if you put '1.0'.
I suppose printf only gets the raw arguments; it has no way of knowing what their types were. But I would have thought the compiler would have the decency to warn you.
