Is passing a variable number of arguments function overloading? - C

Using stdarg.h we can have a function call with a variable number of arguments. Is this also classified as function overloading?

Typically, function overloading has the implication that a different instance of a function/method is invoked depending on the given parameters. With variable arguments in C, the same function is called regardless of the parameter list. So based on that, the answer would be, "No." The function itself could of course mimic the behavior of overloading (do A if 1 argument, do B if 2 arguments, etc.), but it probably would not normally be termed "overloaded".

If you're referring to the implementation, no the compiler doesn't create overloads. Variable argument functions use va_start/va_arg/va_end to get their arguments.

Really, the answer could be "yes" or "no", depending on your definition of "function overloading".
From the compiler's perspective, there is only one function instantiated. From the user's perspective, you could call this "overloading", but it's enforced by neither the language nor the compiler.

No.
Overloading means that a different function will be called depending on the number and/or type(s) of the arguments (some languages can also use the return type). But in this case, you're calling the same function regardless of the number of arguments.
It's no more overloading than func(42) vs. func(43).
Note that C99 does have something that behaves much like a narrow form of overloading. If you have #include <tgmath.h>, then sqrt(x) will call one of three different functions (sqrtf(), sqrt(), or sqrtl()), depending on the type of x. But that's actually a "type-generic macro", not an overloaded function. C11 adds the _Generic keyword, making this facility available to user-written code. But that's not related to the OP's question.

No, this is not an example of function overloading.
True overloading implies that you have several different functions with the same name, but distinguished by their argument lists. Depending on the arguments you pass, a different function instance will be invoked.

The answer is a straight no, since C doesn't implement function overloading as such. It may allow you to alter an argument or two, but internally what takes place isn't the actual mechanism of function overloading.
When an overloaded function is called, the C++ compiler selects the
proper function by examining the number, types and order of the
arguments in the call. Function overloading is commonly used to create
several functions of the same name that perform similar tasks but on
different data types.
I don't really understand your question, but in general function overloading depends on a difference in the arguments of the function:
int add(int a, int b);
float add(float a, float b);
This is function overloading. At least one of the arguments has to change for the function to be overloaded. This is not possible in C, as C does not identify functions by the types of the parameters being passed.

Overloading means having several contracts tied to a same name.
For example:
int a(int i1, int i2);
int a(char c);
int a();
This doesn't exist in C, as a symbol has to be unique within the same scope, but it does exist in C++.
So even if it can be called in several ways, with different parameter types and numbers, the function int a(int i, ...); cannot be seen as overloading in C, as there is only one contract, which is "give me the arguments you wish and I'll find a way to handle them".
But this function can be seen as an overloading in C++ in such a case:
int a(int i, ...);
int a();

Related

Why does a function in C(or Objective C) with no listed arguments allow inputting one argument?

In C, when a function is declared like void main(); trying to pass an argument to it (as the first and only argument) doesn't cause a compilation error, and in order to prevent it, the function can be declared like void main(void);. By the way, I think this also applies to Objective C and not to C++. With Objective C I am referring to the functions outside classes. Why is this? Thanks for reaching out. I imagine it's something like how in Fortran, variables whose names start with i, j, k, l, m or n are implicitly of integer type (unless you add an implicit none).
Edit: Does Objective C allow this because of greater compatibility with C, or is it for a reason similar to C's reason for having this?
Note: I've kept the mistake in the question so that answers and comments wouldn't need to be changed.
Another note: As pointed out by #Steve Summit and #matt (here), Objective-C is a strict superset of C, which means that all C code is also valid Objective-C code and thus has to show this behavior regarding functions.
Because function prototypes were not a part of pre-standard C, functions could be declared only with empty parentheses:
extern double sin();
All existing code used that sort of notation. The standard would have failed had such code been made invalid, or made to mean “zero arguments”.
So, in standard C, a function declaration like that means “takes an undefined list of zero or more arguments”. The standard does specify that all functions with a variable argument list must have a prototype in scope, and the prototype will end with , ...). So, a function declared with an empty argument list is not a variadic function (whereas printf() is variadic).
Because the compiler is not told about the number and types of the arguments, it cannot complain when the function is called, regardless of the arguments in the call.
In early (pre-ANSI) C, a correct match of function arguments between a function's definition and its calls was not checked by the compiler.
I believe this was done for two reasons:
It made the compiler considerably simpler
C was always designed for separate compilation, and checking consistency across translation units (that is, across multiple source files) is a much harder problem.
So, in those early days, making sure that a function's call(s) matched its definition was the responsibility of the programmer, or of a separate program, lint.
The lax checking of function arguments also made varargs functions like printf possible.
At any rate, in the original C, when you wrote
extern int f();
, you were not saying "f is a function accepting no arguments and returning int". You were simply saying "f is a function returning int". You weren't saying anything about the arguments.
Basically, early C's type system didn't even have a way of recording the parameters expected by a function. And that was especially true when separate compilation came into play, because the linker resolved external symbols based pretty much on their names only.
C++ changed this, of course, by introducing function prototypes. In C++, when you say extern int f();, you are declaring a function that explicitly takes 0 arguments. (Also a scheme of "name mangling" was devised, which among other things let the linker do some consistency checking at link time.)
Now, this was all somewhat of a deficiency in old C, and the biggest change that ANSI C introduced was to adopt C++'s function prototype notation into C. It was slightly different, though: to maintain compatibility, in C saying extern int f(); had to be interpreted as meaning "function returning int and taking unspecified arguments". If you wanted to explicitly say that a function took no arguments, you had to (and still have to) say extern int f(void);.
There was also a new ... notation to explicitly mark a function as taking variable arguments, like printf, and the process of getting rid of "implicit int" in declarations was begun.
All in all it was a significant improvement, although there are still a few holes. In particular, there's still some responsibility placed on the programmer, namely to ensure that accurate function prototypes are always in scope, so that the compiler can check them. See also this question.
Two additional notes: You asked about Objective C, but I don't know anything about that language, so I can't address that point. And you said that for a function without a prototype, "trying to input an argument to it (as the first and the only argument) doesn't cause a compilation error", but in fact, you can pass any number of arguments to such a function, without error.

In C, does a caller ever treat variadics specially?

It has been my understanding that C variadic arguments are handled entirely on the callee's side, i.e. that if you called a function f with
f(1, 2, 3.0)
The compiler would generate the same code for the call, whether you had declared f as
void f(int, int, double);
or
void f(int, int, ...);
The context for this question is this issue with calling a not-truly-variadic C function from Rust with a variadic FFI definition. If variadics do not matter from the caller's perspective (aside of course from type checking), then it seems odd to me that Rust would generate different code for a call where the function had been declared as variadic.
If this is not in fact decided by the C specification, but rather ABI-dependant, I would be most interested in the answer for the System V ABI, which from what I read of it didn't seem to indicate any special handling of variadics on the caller's side.
This is a non-ABI-specific answer.
Yes, formally the caller can (and, in general case, will) treat functions with variadic arguments in a special way. This is actually the reason why from the beginning of standardized times C language required all variadic functions to be declared with prototype before the point of the call. Note that even though it was possible to safely call undeclared functions in C89/90, the permission to do so did not extend to variadic functions: those always had to be declared in advance. Otherwise, the behavior was undefined.
In a slightly different form the rule still stands in modern C. Even though post-C99 C no longer allows calling undeclared functions, it still does not require prototype declarations. Yet, variadic functions have to be declared with prototype before the point of the call. The rationale is the same: the caller has to know that it is calling a variadic function and, possibly, handle the call differently.
And historically, there were implementations that used completely different calling conventions when calling variadic functions.

In C, should I define (not declare/prototype) a function that takes no arguments with void or with an empty list?

There may or may not be a duplicate to this question; I tried to find one, but everyone's answer seemed to refer only to the declaration/prototype. They specify that a definition void foo() { } is the same as void foo(void) { }, but which way should I actually use? In C89? In C99? I believe I should start using void foo(void); for my prototype declarations, but is there any difference at all if I use void or not in the definition?
They are different: void foo(void) declares foo as a function that takes NO arguments and returns nothing,
while for void foo(), the function foo takes an UNSPECIFIED number of arguments and returns void.
You should always use the first one for standard conforming C.
They are semantically different
Given the following functions:
void f(void);
void g();
It is a compile-time error to call f with arguments:
error: too many arguments to function "f"
However, that declaration of g means it takes an unspecified number of arguments. To the compiler, this means it can take any number of arguments, from zero to some implementation-defined upper bound. The compiler will accept:
g();
g(argument);
g(argument1, argument2, ... , argumentN);
Essentially, because g did not specify its arguments, the compiler doesn't really know how many arguments g accepts. So the compiler will accept anything and emit code according to the actual usage of g. If you pass one argument, it will emit code to push one argument, call g and then pop it off the stack.
It's the difference between explicitly saying "no, I don't take any arguments" and not saying anything when questioned. Remaining silent keeps the issue ambiguous, to the point where the statement which calls g is the only concrete information the compiler has regarding which parameters the function accepts. So, it will emit machine code according to that specification.
Recommendations
which way should I actually use?
According to the SEI CERT C Coding Standard, it is recommended to explicitly specify void when a function accepts no arguments.
The article cites, as the basis of its recommendation, the C11 standard, subclause 6.11.6:
The use of function declarators with empty parentheses
(not prototype-format parameter type declarators)
is an obsolescent feature.
Declaring a function with an unspecified parameter list is classified as medium severity. Concrete examples of problems that may arise are presented. Namely:
Ambiguous Interface
Compiler will not perform checks
May hide errors
Information Outflow
Potential security flaw
Information Security has a post exploring not just the security but also the programming and software development implications of both styles.
The issue is more about quality assurance.
Old-style declarations are dangerous, not because of evil programmers,
but because of human programmers, who cannot think of everything
and must be helped by compiler warnings. That's all the point of function
prototypes, introduced in ANSI C, which include type information for
the function parameters.
I'll try to answer simply and practically.
From the practice and references I'm familiar with, C89 and C99 should treat the declaration/definition/call of functions which take no arguments and return no value identically.
In case one omits the prototype declaration (usually in the header file), the definition has to specify the number and type of arguments taken (i.e. it must take the form of prototype, explicitly void foo(void) for taking no arguments)
and should precede the actual function call in the source file (if used in the same program). I've always been advised to write prototypes and decently segmented code as part of good programming practice.
Declaration:
void foo (void); /* not void foo(), in order to conform to the prototype definition! */
Definition:
void foo (void) /*must match its prototype from the declaration !*/
{
/*code for what this function actually does*/
return;
}
Function call from within main() or another function:
...
foo();
...
Yes, there is a difference. It is better to define functions like void foo(void) {}, because it will prevent passing any arguments to the function at compile time, with an error like: too many arguments to function 'foo'
EDIT: If you want to add such compiler validation for existing code, this can probably be done by changing the prototypes in the headers, without changing the function definitions. But it looks awkward IMHO. So for newly created programs (as pointed out by skillful commentators above) it's better to make the definition and declaration match exactly; declaring and defining with empty parentheses is an ancient and bad practice.

How is the ANSI C function declaration an improvement on the old Kernigan and Ritchie style?

With regards to the ANSI C function declaration, how is this an improvement from the old K&R style? I know the differences between them, I just want to know what problems could arise from using the old style and how the new style is an improvement.
Old-style function declarations, in particular, don't allow for compile-time checking of calls.
For example:
int func(x, y)
char *x;
double y;
{
/* ... */
}
...
func(10, 20);
When the compiler sees the call, it doesn't know the types of the parameters of the function func, so it can't diagnose the error.
By contrast:
int better_func(char *x, double y) {
/* ... */
}
...
better_func(10, 20);
will result in a compiler error message (or at least a warning).
Another improvement: prototypes make it possible to have functions with parameters of type float, and of integer types narrower than int (the 3 char types and the two short types). Without a prototype, float is promoted to double, and narrow integer types are promoted to int or to unsigned int. With a prototype, a float argument is passed as a float (unless the function is variadic, like printf, in which case the old rules apply to the variadic arguments).
The C Rationale document discusses this in section 6.7.5.3, probably better than I have:
The function prototype mechanism is one of the most useful additions
to the C language. The feature, of course, has precedent in many of
the Algol-derived languages of the past 25 years. The particular form
adopted in the Standard is based in large part upon C++.
Function prototypes provide a powerful translation-time error
detection capability. In traditional C practice without prototypes, it
is extremely difficult for the translator to detect errors (wrong
number or type of arguments) in calls to functions declared in another
source file. Detection of such errors has occurred either at runtime
or through the use of auxiliary software tools.
In function calls not in the scope of a function prototype, integer
arguments have the integer promotions applied and float
arguments are widened to double. It is not possible in such a call
to pass an unconverted char or float argument. Function
prototypes give the programmer explicit control over the function
argument type conversions, so that the often inappropriate and
sometimes inefficient default widening rules for arguments can be
suppressed by the implementation.
There's more; go read it.
A non-defining function declaration in K&R looks as follows
int foo();
and introduces a function that accepts an unspecified number of arguments. The problem with such a declaration style is obvious: it specifies neither the number of parameters nor their types. There's no way for the compiler to check the correctness of the call with respect to the number of arguments or their types at the point of the call. There's no way for the compiler to perform argument type conversion or issue an error message in situations when an argument type does not match the expected parameter type.
A function declaration, which is used as a part of function definition in K&R looks as follows
int foo(a, b)
int a;
char b;
{ ...
It specifies the number of parameters, but still does not specify their types. Moreover, even though the number of parameters appears to be exposed by this declaration, it still formally declares foo the same way as int foo(); does, meaning that calling it as foo(1, 2, 3, 4, 5) still does not constitute a constraint violation.
The new style, i.e. declaration with prototype is better for obvious reasons: it exposes both the number and the types of parameters. It forces the compiler to check the validity of the call (with regard to the number and the types of parameters). And it allows the compiler to perform implicit type conversions from argument types to parameter types.
There are other, less obvious benefits provided by prototype declarations. Since the number and types of function parameters are known precisely to both the caller and the function itself, it is possible to choose the most efficient method of passing the arguments (the calling convention) at the point of the call without seeing the function definition. Without that information K&R implementations were forced to follow a single pre-determined "one size fits all" calling convention for all functions.

Why is void f(...) not allowed in C?

Why doesn't C allow a function with variable length argument list such as:
void f(...)
{
// do something...
}
I think the motivation for the requirement that varargs functions must have a named parameter is for uniformity of va_start. For ease of implementation, va_start takes the name of the last named parameter. With a typical varargs calling convention, and depending on the direction arguments are stored, va_arg will find the first vararg at address (&parameter_name) + 1 or (first_vararg_type*)(&parameter_name) - 1, plus or minus some padding to ensure alignment.
I don't think there's any particular reason why the language couldn't support varargs functions with no named parameters. There would have to be an alternative form of va_start for use in such functions, that would have to get the first vararg directly from the stack pointer (or to be pedantic the frame pointer, which is in effect the value that the stack pointer had on function entry, since the code in the function might well have moved the sp since function entry). That's possible in principle -- any implementation should have access to the stack[*] somehow, at some level -- but it might be annoying for some implementers. Once you know the varargs calling convention you can generally implement the va_ macros without any other implementation-specific knowledge, and this would require also knowing how to get at the call arguments directly. I have implemented those varargs macros before, in an emulation layer, and it would have annoyed me.
Also, there's not a lot of practical use for a varargs function with no named parameters. There's no language feature for a varargs function to determine the type and number of variable arguments, so the callee has to know the type of the first vararg anyway in order to read it. So you might as well make it a named parameter with a type. In printf and friends the value of the first parameter tells the function what the types are of the varargs, and how many of them there are.
I suppose that in theory the callee could look at some global to figure out how to read the first argument (and whether there even is one), but that's pretty nasty. I would certainly not go out of my way to support that, and adding a new version of va_start with extra implementation burden is going out of my way.
[*] or if the implementation doesn't use a stack, to whatever it uses instead to pass function arguments.
With variable-length argument list you must declare the type of the first argument - that's the syntax of the language.
void f(int k, ...)
{
/* do something */
}
will work just fine. You then have to use va_list, va_start, va_end, etc. inside the function to access individual arguments.
C does allow for variable length arguments, but you need to use va_list, va_start, va_end, etc. for it. How do you think printf and friends are implemented? That said, I would recommend against it. You can usually accomplish a similar thing more cleanly using an array or struct for the parameters.
Playing around with it, I made this implementation (it's C++, since it uses a template) that I think some people might want to consider:
#include <cstdarg>
#include <iostream>
using namespace std;

template<typename T>
void print(T first, ...)
{
    va_list vl;
    va_start(vl, first);
    T temp = first;
    do
    {
        cout << temp << endl;
    } while ((temp = va_arg(vl, T)) != 0); // stop at a 0/null sentinel
    va_end(vl);
}
It ensures you have a minimum of one argument and lets you process them all in a clean loop, but note that the argument list must be terminated with a 0 (or null) sentinel, and T must be a trivial type that can legally be passed through varargs.
There's no intrinsic reason why C can't accept void f(...). It could, but the designers of this C feature decided not to.
My speculation about their motivation is that allowing void f(...) would require more "hidden" code (which can be counted as a runtime) than not allowing it: in order to make the case f() distinguishable from f(arg) (and the others), C would have to provide a way to count how many arguments were given, and this needs more generated code (and likely a new keyword, or a special variable like, say, "nargs", to retrieve the count); C usually tries to be as minimalist as possible.
The ... allows for no arguments, i.e. for int printf(const char *format, ...); the statement
printf("foobar\n");
is valid.
If you don't mandate at least 1 parameter (which should be used to check for more parameters), there is no way for the function to "know" how it was called.
All these statements would be valid
f();
f(1, 2, 3, 4, 5);
f("foobar\n");
f(qsort);