Can the functions in _Generic use macros defined by #define? - C

#define a printf("I am int type !!~\n");
#define b printf("I am double type !!~\n");
#define foo(x) _Generic(x, int : a, \
                           double : b)(x)

int main() {
    foo(123); // foo_int(123) // I am int type !!~
    system("pause");
    return 0;
}
Why does this give an error? The compiler tells me: should type ")".
When I change the function call in _Generic to one that is not defined by define, the compiler does not throw an error.

Can the functions in _Generic use macros defined by #define?
The identifiers inside _Generic will be expanded by the preprocessor before _Generic is evaluated. In that sense, yes, you can use macros.
Why does this give an error?
foo(123); expands to
_Generic(123, int : printf("I am int type !!~\n");, double : printf("I am double type !!~\n");)(123);
The ; before each , inside _Generic is simply invalid, and the (x) appended after _Generic(...) doesn't make sense either. The produced code does not follow the syntax of the C language.

The operands of the _Generic cases must be expressions (specifically an assignment-expression). printf("I am int type !!~\n"); is a statement, not an expression.
When you remove the semicolon, it is an expression. Then _Generic(x, int : a, double : b) would be fine, but you have (x) after it. Why?
After processing of the _Generic, the expression is basically printf("I am int type !!~\n"). That is a complete function call. Putting (x) after it to make printf("I am int type !!~\n")(x) attempts to make another function call, but that does not work since printf returns an int, not a pointer to a function. Remove (x).
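For reference, a corrected sketch of the question's code: the semicolons are removed from the macros so they expand to expressions, and the trailing (x) is dropped.

#include <stdio.h>

#define a printf("I am int type !!~\n")
#define b printf("I am double type !!~\n")
#define foo(x) _Generic((x), int : a, double : b)

int main() {
    foo(123); // prints: I am int type !!~
    return 0;
}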

Related

How to #define function-like macro of the function with variable arguments in C

I have a function declared as:
int func(int a, int b, ...);
Then I want to #define a function-like macro as:
#define TEST(A,B) func(A,B,0)
But the compiler always complains: "error: expected declaration specifiers or '...', before numeric constant".
So, how can I eliminate this error?
Make sure the function is declared before the #define statement (the declaration can also come from an included header) so the compiler knows the types of a and b. Alternatively, you can cast the arguments inside the macro, e.g. (int)(A).
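As a minimal sketch of the ordering this suggests (the body of func here is a stand-in, purely for illustration):

#include <stdio.h>

int func(int a, int b, ...);        /* declaration comes first */

#define TEST(A,B) func((A),(B),0)   /* macro defined after it */

int main() {
    printf("%d\n", TEST(1, 2));     /* expands to func(1, 2, 0) */
    return 0;
}

int func(int a, int b, ...) {       /* stand-in definition */
    return a + b;
}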

C11 _Generic() - How can I suppress gcc code evaluation (error checking) for selections that do not match the selector?

I am trying to implement a macro that copies the data in one variable to another, independent of the variables' datatype. In the solution below I am using the '_Generic' compiler feature.
P.S. - I have taken int and int * for simplicity; it could also be struct and struct *.
program 1:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define copyVar(var,newVar) _Generic((var), \
        int:   ({ memcpy(newVar, (void *)&var, sizeof(int)); }), \
        int *: ({ memcpy(newVar, (void *)var,  sizeof(int)); }), \
        default: newVar = var)

int main() {
    int data = 2;
    int *copy = (int *)malloc(sizeof(int));
    copyVar(data, copy);
    printf("copied Data=%i", *copy);
}
Program 2:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define copyVar(var,newVar) _Generic((var), \
        int:   ({ memcpy(newVar, (void *)&var, sizeof(int)); }), \
        int *: ({ memcpy(newVar, (void *)var,  sizeof(int)); }), \
        default: newVar = var)

int main() {
    int data = 2;
    int *copy = (int *)malloc(sizeof(int));
    copyVar(&data, copy);
    printf("copied Data=%i", *copy);
}
Now the problem is: program 1 compiles successfully, despite some warnings.
But while compiling program 2, gcc throws an error:
error: lvalue required as unary '&' operand
#define copyVar(var,newVar) _Generic((var),int:({memcpy(newVar,(void *)&var,sizeof(int));}),
I assume this is because the _Generic int: selection gets preprocessed with one more ampersand:
(void *)& &var
Why does gcc evaluate all selections?
Your code has various problems: you copy data into an uninitialized pointer, you have superfluous void* casts, you treat _Generic as some sort of compound statement instead of an expression, and so on.
But to answer your question, your code doesn't work because the result of &something is not an lvalue. Since the & operator needs an lvalue, you cannot do & &something. (And you cannot do &&something either, because that gets treated as the && operator by the "maximum munch" rule.)
So your code doesn't work for the same reason as this code doesn't work:
int x;
int **p = & &x;
gcc tells you that &x is not an lvalue:
lvalue required as unary '&' operand
EDIT - clarification
This _Generic macro, like any macro, works like pre-processor text replacement. So when you have this code in the macro:
_Generic((var), ...
int: ... (void *)&var
int*: ... (void *)var
It gets pre-processed as
_Generic((&data), ...
int: ... (void *)& &data
int*: ... (void *)&data
And all paths of the _Generic expression are pre-processed. _Generic itself is not part of the pre-processor, but gets evaluated later on, like any expression containing operators. The whole expression is checked for syntactic correctness, even though only one part of the expression is evaluated and executed.
The intended original use of _Generic is with function pointers, as here:
#define copyVar(var,newVar) \
    _Generic((var), \
        int:     function1, \
        int*:    function2, \
        default: function3)(&(var), &(newVar))
Here the generic expression chooses the function, and then this function is applied to whatever the arguments are.
You would have to write the three stub functions that correspond to the three different cases.
If you have them small and nice and as inline in your header file, the optimizer will usually ensure that this mechanism does not have a run time overhead.
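A minimal sketch of two such stubs, assuming copyVar always copies sizeof(int) bytes (the names function1 and function2 follow the macro above; a real default case would need more type information):

#include <string.h>

/* var was an int: src points at it, dst points at the target pointer */
static inline void function1(int *src, int **dst) {
    memcpy(*dst, src, sizeof(int));
}

/* var was an int *: src points at that pointer, dst as above */
static inline void function2(int **src, int **dst) {
    memcpy(*dst, *src, sizeof(int));
}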
This can be solved with a two-level _Generic:
#define copyVar(var,newVar) \
_Generic((var), \
int : ({ __auto_type _v = var; memcpy(newVar, (void *) _Generic((_v), int: &_v , int *: _v) , sizeof(int));}) , \
int *: ({ __auto_type _v = var; memcpy(newVar, (void *) _v , sizeof(int));}) , \
default: newVar=var \
)
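Like the original code, this sketch relies on GNU extensions (statement expressions and __auto_type). With it, both calls from the question should compile:

int data = 2;
int *copy = malloc(sizeof(int));
copyVar(data, copy);   /* program 1's call */
copyVar(&data, copy);  /* program 2's call, no more "& &" error */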

How to use one define or function to print any variable type using C?

This started as a joke type of Cython, with me making a lot of silly defines to emulate python using C. Then I realized it was actually sort of convenient for debugging and quickly hacking together programs.
I have a define that mostly works, using sizeof to distinguish types, but a 3- or 7-character char array/string + \0 will be printed as an int or a double. Is there any way around this? I was considering using a try/except-style check to subscript it and see if it's a string, but I can't implement it.
#include <stdio.h>
#define print(n) if(sizeof(n)==8) printf("%f\n",n); \
else if(sizeof(n)==4) printf("%i\n",n); \
else printf("%s\n",n);
int main()
{
    print("Hello world!") // these all work perfectly
    print(8.93)
    print(4)
    print("abc") // these are interpreted as ints or doubles
    print("1234567")
    return 0;
}
gcc has a handy built-in for you (also available with clang), which allows you to compare types directly:
int __builtin_types_compatible_p (type1, type2)
This built-in function returns 1 if the unqualified versions of the types type1 and type2
(which are types, not expressions) are compatible, 0 otherwise. The result of this built-in
function can be used in integer constant expressions.
http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
This built-in is used for type-specific dispatch and type checking in the Linux kernel, to name one example.
To get the types out of expressions, you can rely on the typeof() operator, along the lines of:
__builtin_types_compatible_p (typeof(n), int)
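The answer stops at the type check; one way to build a full dispatch from it (my own sketch, combining it with gcc's __builtin_choose_expr) is to select the format string at compile time:

#include <stdio.h>

/* typeof, __builtin_types_compatible_p and __builtin_choose_expr are
   gcc/clang extensions; the condition must be a constant expression. */
#define FMT(n) __builtin_choose_expr( \
        __builtin_types_compatible_p(typeof(n), int), "%i\n", \
        __builtin_choose_expr( \
            __builtin_types_compatible_p(typeof(n), double), "%f\n", \
            "%s\n"))

#define print(n) printf(FMT(n), (n))

int main() {
    print(4);              /* picks "%i\n" */
    print(8.93);           /* picks "%f\n" */
    print("Hello world!"); /* falls through to "%s\n" */
    return 0;
}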
Selecting an operation based on the type of the argument can be done with the _Generic construct:
#define print(X) _Generic((0, X), int:    print_int,    \
                                  double: print_double, \
                                  char *: print_string)(X)
void print_int(int x) { printf("%d\n", x); }
// etc
All _Generic does is select one of the expressions in its list - that's why the argument to the macro appears twice (once to select based on its type, once to actually apply it to whichever function/value was chosen). You can also supply a default expression. Apparently the comma expression (that (0, X)) is necessary to easily support string literals: it decays the string into a char *.
_Generic is available in compilers that support C11 (which includes GCC and Clang). There's a workaround for some C99 compilers in Jens Gustedt's P99, as well.
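Filling in the pieces elided by the "// etc" above (print_double and print_string are my guesses at the obvious counterparts), a complete version might look like:

#include <stdio.h>

void print_int(int x)       { printf("%d\n", x); }
void print_double(double x) { printf("%g\n", x); }
void print_string(char *s)  { printf("%s\n", s); }

#define print(X) _Generic((0, X), int:    print_int,    \
                                  double: print_double, \
                                  char *: print_string)(X)

int main() {
    print(4);
    print(8.93);
    print("abc");
    return 0;
}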
Modern C, AKA C11, has _Generic for type generic macros. Something like
#define P(x) printf(_Generic(x, unsigned: "%u", signed: "%d", double: "%g"), x)
should do the trick.
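For example, given that definition (note the format strings carry no newline, and an argument of a type not listed, such as a string, would be a compile-time error):

P(42);   /* uses "%d" */
P(42u);  /* uses "%u" */
P(8.93); /* uses "%g" */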
A C++ version (no C++11 needed), simply overloading a function... It's more than just a macro, obviously:
#include <cstdio>

void print_var(const char *s)
{
    printf("%s", s);
}

void print_var(unsigned int u)
{
    printf("%u", u);
}

void print_var(int d)
{
    printf("%d", d);
}

void print_var(double f)
{
    printf("%f", f);
}

#define print(v) { print_var(v); \
                   printf("\n"); }
int main()
{
    print("abc")
    print(1234)
    print(-13)
    print(3.141592654)
    print("1234567")
    print("Hello world!")
    print(8.93)
    print(4)
    if(1)
        print("this too")
    if(2)
        print("and this")
}
Yielding the output:
abc
1234
-13
3.141593
1234567
Hello world!
8.930000
4
this too
and this

Is this function macro safe?

Can you tell me if anything and what can go wrong with this C "function macro"?
#define foo(P,I,X) do { (P)[I] = X; } while(0)
My goal is that foo behaves exactly like the following function foofunc for any POD data type T (i.e. int, float*, struct my_struct { int a,b,c; }):
static inline void foofunc(T* p, size_t i, T x) { p[i] = x; }
For example this is working correctly:
int i = 0;
float p;
foo(&p,i++,42.0f);
It can handle things like &p because P is parenthesized; it increments i exactly once because I appears only once in the macro; and it requires a semicolon at the end of the line due to the do {} while(0).
Are there other situations of which I am not aware of and in which the macro foo would not behave like the function foofunc?
In C++ one could define foofunc as a template and would not need the macro. But I look for a solution which works in plain C (C99).
The fact that your macro works for arbitrary X arguments hinges on the details of operator precedence. I recommend using parentheses even if they happen not to be necessary here.
#define foo(P,I,X) do { (P)[I] = (X); } while(0)
This is a statement, not an expression, so it cannot be used everywhere foofunc(P,I,X) could be. Even if foofunc returns void, it can be used in comma expressions; foo can't. But you can easily define foo as an expression, with a cast to void if you don't want to risk using the result.
#define foo(P,I,X) ((void)((P)[I] = (X)))
With a macro instead of a function, all you lose is the error checking. For example, you can write foo(3, ptr, 42) instead of foo(ptr, 3, 42). In an implementation where size_t is smaller than ptrdiff_t, using the function may truncate I, but the macro's behavior is more intuitive. The type of X may be different from the type that P points to: an automatic conversion will take place, so in effect it is the type of P that determines which typed foofunc is equivalent.
In the important respects, the macro is safe. With appropriate parentheses, if you pass syntactically reasonable arguments, you get a well-formed expansion. Since each parameter is used exactly once, all side effects will take place. The order of evaluation between the parameters is undefined either way.
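To illustrate the statement-vs-expression difference (a sketch; expr_foo is my name for the cast-to-void version above):

#define foo(P,I,X)      do { (P)[I] = (X); } while(0)
#define expr_foo(P,I,X) ((void)((P)[I] = (X)))

int main() {
    int a[2], b[2];
    (expr_foo(a, 0, 1), expr_foo(b, 0, 2)); /* fine: comma expression   */
    /* (foo(a, 1, 3), foo(b, 1, 4)); */     /* error: statements cannot
                                               appear in an expression  */
    return 0;
}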
The do { ... } while(0) construct protects your result from any harm, your inputs P and I are protected by () and [], respectively. What is not protected, is X. So the question is, whether protection is needed for X.
Looking at the operator precedence table (http://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Operator_precedence), we see that only two operators are listed as having lower precedence than = so that the assignment could steal their argument: the throw operator (this is C++ only) and the , operator.
Now, apart from being C++ only, the throw operator is harmless here because it does not have a left-hand argument that could be stolen.
The , operator, on the other hand, would be a problem if X could contain it as a top level operator. But if you parse the statement
foo(array, index, x += y, y)
you see that the , operator would be interpreted to delimit a fourth argument, and
foo(array, index, (x += y, y))
already comes with the parentheses it requires.
To make a long story short:
Yes, your definition is safe.
However, your definition relies on the fact that it is impossible to pass stuff, more_stuff as one macro parameter without adding parentheses. I would prefer not to rely on such intricacies, and just write the obviously safe
#define foo(P, I, X) do { (P)[I] = (X); } while(0)

Declaring parameters outside the declarator

The C standard states that, for a function definition, if the declarator includes an identifier list, the types of the parameters shall be declared in a following declaration list. Apparently this makes a difference.
extern int max(int a, int b)
{
    return a > b ? a : b;
}

extern int max(a, b)
int a, b;
{
    return a > b ? a : b;
}
Here int a, b; is the declaration list for the parameters. The
difference between these two definitions is that the first form acts
as a prototype declaration that forces conversion of the arguments of
subsequent calls to the function, whereas the second form does not.
What does this mean for the programmer and does it affect the code the compiler produces?
It means that in the second case, it's the responsibility of the caller to ensure that the arguments provided are of the correct type; no implicit conversion will be provided (other than the default argument promotions). From section 6.5.2.2:
If the expression that denotes the called function has a type that does not include a
prototype, the integer promotions are performed on each argument.
...
If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the
corresponding parameters.
So calling code like this will be ok:
char x = 3;
char y = 7;
max(x, y); // Equivalent to max((int)x, (int)y)
because x and y are promoted to int before being placed on the stack.
However, code like this will not be ok:
double x = 3.0;
long y = 7;
max(x, y); // Uh-oh
x and y will be placed on the stack as double and long, but max() will attempt to read two ints, which will result in undefined behaviour (in practice, the raw bits will be reinterpreted).
This is one reason not to use the second form; the only reason it's in the standard is to provide backward compatibility with (extremely) legacy code. If you're using GCC, you can enforce this by using the -Wold-style-definition flag; I would hope that other compilers would offer something equivalent.
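If you are stuck with an old-style definition, the caller has to supply correctly typed arguments explicitly, e.g.:

double x = 3.0;
long y = 7;
max((int)x, (int)y); /* ok: both arguments really are ints now */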
