int _cdecl f (int x) { return 0; }
int _stdcall f (int y) { return 0; }
After name decoration, these will become:
_f
_f#4
These don't conflict, so is this allowed in C? If not, why not?
The keywords _cdecl and _stdcall are not part of the C language. These are Microsoft extensions which were preceded by similar Borland extensions.
In the standard C language, you can't declare a calling convention. Every function declared is, obviously, equivalent to what the MS compiler refers to as the "_cdecl" convention.
Because you're already using a platform-specific vendor extension of C, you might consider using in-line assembly to distinguish the two functions when you call them.
First off, that's not mangling, that's decoration. Mangling is something that happens with C++ compilers, because C++ was originally designed to support overloading using C-style link tools.
As to your question, you can't have two functions with the same name. For the purposes of applying that rule, the un-decorated name is used.
Why is this so? I'd imagine it is because decoration and calling conventions are not part of the C standard and are specific to each compiler. I'm pretty sure that C compilers supporting multiple calling conventions only came into being a number of years after C was invented.
Related
Defined in the C standard is an optionally-supported keyword fortran
C99 language standard, section J.5.9:
The fortran function specifier may be used in a function declaration to indicate that calls suitable for FORTRAN should be generated, or that a different representation for the external name is to be generated.
This section remains unchanged in the C11 standard.
Neither standard says anything else about this keyword. The section references section 6.7.4, Function Specifiers, which is where such a keyword would belong, but the only function specifier defined there is inline, the surrounding language is tailored to it, and how one might use fortran isn't apparent.
The keyword is contained in the "Common Extensions" section (!), so universal support is not expected, and indeed, it is not present: my copy of GCC 7.2.0 doesn't recognize it.
Since I can't seem to use it,
a) How would one use the fortran keyword in C code?
b) What compilers support/supported this keyword?
The BC4.5 compiler for DOS/Win16 supports said keyword. It changes the calling convention of the function to FORTRAN's calling convention. Use looks like this:
extern fortran int FUNCTION(int *a, int *b, int *c);
(variables are passed by reference in FORTRAN).
You can also export a function to be called from FORTRAN like so:
fortran int FUNCTION2(int *a, int *b, int *c)
{
/* your code here */
}
I actually used it to call into QBX (BASIC), which uses almost exactly the same calling convention. The pascal keyword was better for this purpose; otherwise arguments come in backwards.
I generated a hash function with gperf couple of days ago. What I saw for the hash function was alien to me. It was something like this (I don't remember the exact syntax) :
unsigned int
hash(str, size)
register char* str;
register unsigned int size;
{
//Definition
}
Now, when I tried to compile with a C++ compiler (g++) it threw errors at me for not having str and size declared. But this compiled on the C compiler (gcc). So, questions:
I thought C++ was a superset of C. If it's so, this should compile with a C++ compiler as well, right?
How does the C compiler understand the definition? str and size are undeclared when they first appear.
What is the purpose of declaring str and size after function signature but before function body rather than following the normal approach of doing it in either of the two places?
How do I get this function to compile on g++ so I can use it in my C++ code? Or should I try generating C++ code from gperf? Is that possible?
1. C++ is not a superset, although this is not standard C either.
2/3. This is a K&R function declaration. See What are the major differences between ANSI C and K&R C?
4. gperf does in fact have an option, -L, to specify the language. You can just use -L C++ to use C++.
The old C syntax for the declaration of a function's formal arguments is still supported by some compilers.
For example
int func (x)
int x;
{
}
is old style (K&R style) syntax for defining a function.
I thought C++ was a superset of C. If it's so, this should compile with a C++ compiler as well, right?
Nope! C++ is not a superset of C. This style (syntax) of function declaration/definition was once a part of C but has never been a part of C++. So it shouldn't compile with a C++ compiler.
This appears to be "old-school" C code. Declaring the types of the parameters outside of the parentheses but before the opening curly brace of the code block is a relic of the early days of C programming (I'm not sure why, but I guess it has something to do with variable management on the stack and/or compiler design).
To answer your questions:
Calling C++ a "superset" of C is somewhat a misnomer. While they share basic syntax features, and you can even make all sorts of C library calls from C++, they have striking differences with respect to type safety, warnings vs. errors (C is more permissive), and compiler/preprocessor options.
Most contemporary C compilers understand legacy code (such as this appears to be). The C compiler holds the function parameter names sort of like "placeholders" until their type can be declared immediately following the function header name.
No real "purpose" other than again, this appears to be ancient code, and the style back in the day was like this. The "normal" approach is IMO the better, more intuitive way.
My suggestion:
unsigned int hash(register char *str, register unsigned int size)
{
// Definition
}
A word of advice: consider abandoning the register keyword. It was used in old C programs as a hint that the variable should be stored in a CPU register (for speed/efficiency), but nowadays compilers are better at making that decision themselves, and modern compilers largely ignore the hint. Also, in C you cannot use the & (address-of) operator on a register variable.
I write code in C99 and compile via GCC. I would like to use function overloading for stylistic reasons (otherwise I would have to do name mangling by myself).
I have read Is there a reason that C99 doesn't support function overloading? however, I still wonder whether it can be enabled in GCC.
Can you help me at this point?
No, there is no function overloading in C99, not even in silly GCC extensions. C11 adds _Generic, but even then you still have to mangle names yourself.
void foo_int(int x);
void foo_double(double x);
#define foo(arg) _Generic((arg), int: foo_int, double: foo_double)(arg)
Whether that's better or worse, well. It's C.
In C, macros may partially replace the function overloading of other languages. As Cat Plus Plus indicates in her answer, C11 has the additional construct _Generic to program type-generic macros.
With C99 you already have possibilities to program macros that are in some sense type generic. P99 has facilities that ease the use of that, e.g to call such a macro with a different number of parameters. To distinguish which function to call according to a specific parameter A you could then use something like (sizeof(A) == sizeof(float) ? sqrtf(A) : sqrt(A)).
Gcc has extensions that allow to program such things even more comfortably, namely block expressions with ({ any code here }) and typeof to declare auxiliary variables of the same type as a macro parameter.
LLVM Clang 3.3 introduced function overloading in C. In fact, function overloading may not be as easy as you expect: it involves issues such as the function-calling convention and the ABI (Application Binary Interface). When you mix your C code with assembly language, those problems may occur.
When you work with assembly procedures, the names of the exported procedures should not be overloaded.
In LLVM Clang, you can do this with attribute (overloadable):
static void __attribute__((overloadable)) MyFunc(float x)
{
puts("This is a float function");
}
static int __attribute__((overloadable)) MyFunc(int x)
{
puts("This is an integer function");
return x;
}
When I look at man sqrt on Linux, I see 3 prototypes of the function:
double sqrt(double x);
float sqrtf(float x);
long double sqrtl(long double x);
If the compiler/library is written in C++, I understand it might be using function overloading.
If the compiler library that provides this is written in C, how does the compiler (gcc) implement this kind of thing, which looks like the function overloading that C does not support? (Or does some later standard of C, like C99, support something like this?)
What programming language is gcc implemented in?
The function names are simply chosen differently - plain sqrt for double and its friends sqrtf and sqrtl for floats and long doubles. It looks like overloading, but it isn't, because the function names are different.
In Windows this has typically been handled by having a #define that maps a generic name onto a type-specific function depending on a definition,
e.g. (following the tchar.h convention):
#ifdef UNICODE
#define _tcslen wcslen
#else
#define _tcslen strlen
#endif
In either C or C++ these are NOT the same functions.
You have a separate variant of each.
In C++ (and other languages with overloading) the names of each variant can be the same but the compiler can keep them apart by the type of arguments/return values. (In fact: Under the hood a sort of auto-generated unique name is used that is constructed from the real name and the types of the arguments. So it's not really the same name.)
In C the names are different so the distinction is clear to the programmer, compiler and linker.
I just want to know if C supports overloading, since we use system functions like printf with different numbers of arguments.
Help me out.
No, C doesn't support any form of overloading (unless you count the fact that the built-in operators are overloaded already, to be a form of overloading).
printf works using a feature called varargs. You make a call that looks like it might be overloaded:
printf("%d", 12); // int overload?
printf("%s", "hi"); // char* overload?
Actually it isn't. There is only one printf function, but the compiler uses a special calling convention to call it, where whatever arguments you provide are put in sequence on the stack[*]. printf (or vprintf) examines the format string and uses that to work out how to read those arguments back. This is why printf isn't type-safe:
char *format = "%d";
printf(format, "hi"); // undefined behaviour, no diagnostic required.
[*] the standard doesn't actually say they're passed on the stack, or mention a stack at all, but that's the natural implementation.
C does not support overloading. (Obviously, even if it did, they wouldn't use that for printf: you'd need a printf for every possible combination of types!)
printf uses varargs.
No, C does not support overloading, but it does support Variadic functions. printf is an example of Variadic functions.
It all depends on how you define "support".
Obviously, C language provides overloaded operators within the core language, since most operators in C have overloaded functionality: you can use binary + with int, long and with pointer types.
Yet at the same time C does not allow you to create your own overloaded functions, and C standard library also has to resort to differently-named functions to be used with different types (like abs, fabs, labs and so on).
In other words, C has some degree of overloading hardcoded into the core language, but neither the standard library nor the users are allowed to do their own overloading.
No, C doesn't support overloading. If you want to implement overloading similar to C++, you will have to mangle your function names manually, using some sort of consistent convention. For example:
int myModule_myFunction_add();
int myModule_myFunction_add_int(int);
int myModule_myFunction_add_char_int(char, int);
int myModule_myFunction_add_pMyStruct_int(MyStruct*, int);
There is no provision in the C standard for function overloading; proposals to add it have been rejected on the basis that many build systems have no facility to accommodate multiple functions with the same name. While C++ can work around this by e.g. having
void foo(int);
int foo(char*);
long foo(char *, char **);
compile to functions named something like v__foo_i, i__foo_pc, and l__foo_pc_ppc [compilers use different naming conventions; the C++ standard reserves identifiers containing double underscores for the implementation, precisely so that compilers can give things names like the above without conflict]. The authors of the C standard did not want to require any compilers to change naming conventions to allow for overloading, so they didn't provide for it.
It would be possible and useful for a compiler to allow overloading of static and inline functions without creating naming problems; this would in practice be just as useful as allowing overloading of externally-linkable functions since one could have a header file:
void foo_zz1(int);
int foo_zz2(char*);
long foo_zz3(char *, char **);
inline void foo(int x) { foo_zz1(x); }
inline int foo(char* st) { return foo_zz2(st); }
inline long foo(char *p1, char **p2) { return foo_zz3(p1, p2); }
I recall looking at an embedded compiler for a hybrid between C and C++ which supported the above as a non-standard extension, but I'm not positive about the details. In any case, even if some C compilers do support overloading of functions which do not have external linkage, it is not supported by standard C (as of C17), nor am I aware (unfortunately) of any active efforts to add such a feature to future C standards.
Nonetheless, GCC can be made, using macros, to support a form of overloading which is not supported directly in languages with operator overloading. GCC includes an intrinsic which will identify whether an expression can be evaluated as a compile-time constant. Using this intrinsic, one can write a macro which can evaluate an expression different ways (including by calling functions) depending upon the argument. This can be useful in some cases where a formula would evaluate as a compile-time constant if given a compile-time constant argument, but would yield a horrible mess if given a variable argument. As a simple example, suppose one wishes to bit-reverse a 32-bit value. If the value is constant, one could do that via:
#define nyb_swap(x) \
((((x) & 1)<<3) | (((x) & 2)<<1) | (((x) & 4)>>1) | (((x) & 8)>>3))
#define byte_swap(x) \
( (nyb_swap(x)<<4) | nyb_swap((x) >> 4) )
#define word_swap(x) \
( (byte_swap(x)<<24) | (byte_swap((x) >> 8)<<16) | \
(byte_swap((x) >> 16)<<8) | (byte_swap((x) >> 24)) )
And an expression like uint32_t x=word_swap(0x12345678); would simply load x with the bit-reversed constant 0x1E6A2C48. On the other hand, if the value is not a constant, the result would be horrible: an expression like uint32_t y=word_swap(x); might generate many dozens of instructions; a call to a function with a partially-unrolled loop would be almost as fast but a lot more compact. On the other hand, using a loop would prevent the result from being regarded as a compile-time constant.
Using GCC, one can define a macro which will either use the constant-yielding macro if given a constant, or call a function when given a variable:
#define wswap(x) \
(__builtin_constant_p((x)) ? word_swap((x)) : word_swap_func((x)))
This approach can't do everything type-based overloading can do, but it can do many things overloading can't.
Not directly, and this is not how printf works, but it is possible to create the equivalent of overloaded functions using macros if the types are of different sizes. The type-generic math functions in tgmath.h of the C99 standard may be implemented in that manner.
C does not support overloading, but you can approximate the functionality by writing your own set of differently-named wrapper functions (manual name mangling).
No, C does not support function overloading.
But you can get such code to compile and work if you build it with g++ (a C++ compiler) instead.