Returning Structure from Function Implementation - c

In C and C++ we can return structures or classes from functions and methods:
class A final {
public:
    int i;
    A(int n) { this->i = n; }
};

A function(void) {
    return A(4);
}

int main(void) {
    A result = function();
    return 0;
}
What I want to know is: how is this implemented? The traditional wisdom is that the object is copied, thus incurring a cost; to mitigate this, you can pass a pointer to the structure instead and return nothing.
However, I'm not sure that this is (always) necessary. In the above, for example, the constructed structure was never used locally. Since we're returning it, couldn't the data be instead filled into the result variable one level higher directly, before the stack is popped?
The main problem with doing this, as I see it, is that the callee needs to know where the caller wants the result. Another problem arises with recursive functions: if the ultimate variable you're copying into is several stack frames higher up, it becomes much harder for the callee to locate the correct place. My guess, based on my limited compiler knowledge, is that it is instead the caller that copies the struct out of the callee's frame, right after the return.
In general, I feel like such an optimization might be tricky. For inlined functions, I expect it to happen implicitly, and if the structure is never used locally, I can't see a reason why it couldn't be implemented this way. However, for everything else, I expect the issues you run into trying to implement it are too great, and indeed in general it is just copied.
So:
inline: (happens implicitly?)
return never-used new structure: (happens?)
recursive: (any complications?)
in general: (doesn't happen?)
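To make that guess concrete, here is a rough, purely illustrative sketch of how a typical ABI lowers a by-value struct return: the caller reserves storage for the result and passes its address as a hidden argument, so the callee constructs the object directly in the caller's slot. The names and details are invented for illustration, not taken from any particular compiler.

// Hand-written sketch of what `A function(void) { return A(4); }` conceptually
// becomes under a "hidden return slot" convention.
struct A { int i; };

void function_lowered(A* result_slot) {
    result_slot->i = 4;          // construct the result directly in the caller's storage
}

int main() {
    A result;                    // caller reserves the slot for the return value
    function_lowered(&result);   // callee fills it in place; no extra copy
    return 0;
}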

In C++, compilers often use the copy elision idiom (often called "return value optimization") to avoid unnecessary copies when returning values. At least, they are often allowed to do it.
C++ Standard, section §12.8:
When certain criteria are met, an implementation is allowed to omit
the copy/move construction of a class object, even if the constructor
selected for the copy/move operation and/or the destructor for the
object have side effects. In such cases, the implementation treats the
source and target of the omitted copy/move operation as simply two
different ways of referring to the same object, and the destruction of
that object occurs at the later of the times when the two objects
would have been destroyed without the optimization.
This elision of copy/move operations, called copy elision, is
permitted in the following circumstances (which may be combined to
eliminate multiple copies)
In a return statement in a function with a class return type, when the
expression is the name of a non-volatile automatic object (other than
a function or catch-clause parameter) with the same cv-unqualified
type as the function return type, the copy/move operation can be
omitted by constructing the automatic object directly into the
function’s return value
...
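As a quick check of what the quoted passage means in practice, here is a small test program (my own sketch, not from the standard): with typical compilers, and with guaranteed elision since C++17, the copy constructor below never runs, so only "ctor" is printed for the construction.

#include <iostream>

struct Noisy {
    int i;
    Noisy(int n) : i(n) { std::cout << "ctor\n"; }
    Noisy(const Noisy& other) : i(other.i) { std::cout << "copy\n"; }
};

Noisy make() {
    return Noisy(4);             // prvalue: the copy/move is elided
}

int main() {
    Noisy result = make();       // constructed directly into `result`
    std::cout << result.i << '\n';
    return 0;
}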

Related

alloca instead of local variable in ALSA

I was using a sample C ALSA program as reference and ran along the following piece of code:
...
snd_ctl_event_t *event;
snd_ctl_event_alloca(&event);
...
Based on the ALSA source code, snd_ctl_event_alloca is a macro that calls __snd_alloca, which is a macro that finally expands to the following equivalent lines for snd_ctl_event_alloca(&event); (with some straightforward simplification):
event = (snd_ctl_event_t *) alloca(snd_ctl_event_sizeof());
memset(event, 0, snd_ctl_event_sizeof());
where snd_ctl_event_sizeof() is only implemented once in the whole library as:
size_t snd_ctl_event_sizeof()
{
return sizeof(snd_ctl_event_t);
}
So my question is, isn't this whole process equivalent to simply doing:
snd_ctl_event_t event = {0};
For reference, these are the macros:
#define snd_ctl_event_alloca(ptr) __snd_alloca(ptr, snd_ctl_event)
#define __snd_alloca(ptr,type) do { *ptr = (type##_t *) alloca(type##_sizeof()); memset(*ptr, 0, type##_sizeof()); } while (0)
Clarifications:
The first block of code above is at the start of the body of a function and not in a nested block
EDIT
As it turns out (from what I understand), doing:
snd_ctl_event_t event;
gives a "storage size of 'event' isn't known" error because snd_ctl_event_t is apparently an opaque struct that's defined privately. Therefore the only option is dynamic allocation.
Since it is an opaque structure, the purpose of all these actions is apparently to implement an opaque data type while saving all the "pros" and defeating at least some of their "cons".
One prominent problem with opaque data types is that in standard C you are essentially forced to allocate them dynamically in an opaque library function. It is not possible to implicitly declare an opaque object locally. This negatively impacts efficiency and often forces the client to implement additional resource management (i.e. remember to release the object when it is no longer needed). Exposing the exact size of the opaque object (through a function in this case) and relying on alloca to allocate storage is as close as you can get to a more efficient and fairly care-free local declaration.
If function-wide lifetime is not required, alloca can be replaced with VLAs, but the authors probably didn't want/couldn't use VLAs. (I'd say that using VLA would take one even closer to emulating a true local declaration.)
Often, in order to implement the same technique, the opaque object's size might be exposed as a compile-time constant in a header file. However, using a function has the added benefit of not having to recompile the entire project if the object size in this isolated library changes (as R. noted in the comments).
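A minimal sketch of the pattern being described, with made-up names (my_obj, my_obj_sizeof); the real ALSA macros work the same way, just with the struct definition hidden in a separate translation unit.

#include <alloca.h>   /* where alloca lives varies by platform */
#include <stddef.h>
#include <string.h>

/* --- public header: the layout stays hidden --- */
typedef struct my_obj my_obj_t;                 /* opaque type */
size_t my_obj_sizeof(void);                     /* size exposed through a function */
#define my_obj_alloca(ptr) \
    do { *(ptr) = (my_obj_t *) alloca(my_obj_sizeof()); \
         memset(*(ptr), 0, my_obj_sizeof()); } while (0)

/* --- private implementation: only this part knows the layout --- */
struct my_obj { int state; };
size_t my_obj_sizeof(void) { return sizeof(struct my_obj); }

/* --- client code --- */
int main(void)
{
    my_obj_t *obj;
    my_obj_alloca(&obj);   /* zero-initialized object on the caller's stack */
    return 0;
}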
Previous version of the answer (the points below still apply, but apparently are secondary):
It is not exactly equivalent, since alloca defies scope-based lifetime rules. Lifetime of alloca-ed memory extends to the end of the function, while lifetime of local object extends only to the end of the block. It could be a bad thing, it could be a good thing depending on how you use it.
In situations like
some_type *ptr;
if (some condition)
{
...
ptr = /* alloca one object */;
...
}
else
{
...
ptr = /* alloca another object */;
...
}
the difference in semantics can be crucial. Whether it is your case or not - I can't say from what you posted so far.
Another unrelated difference in semantics is that memset will zero out all bytes of the object, while = { 0 } is not guaranteed to zero out padding bytes (if any). It could be important if the object is then used with some binary-based APIs (like being sent to a compressed I/O stream).
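A sketch of the kind of situation where that difference can matter; the struct and its padding are hypothetical, and whether padding exists at all is implementation-defined.

#include <string.h>

struct padded {
    char c;   /* on many ABIs, padding bytes follow before `n` */
    int  n;
};

int main(void)
{
    struct padded a;
    memset(&a, 0, sizeof a);   /* every byte is zero, padding included */

    struct padded b = { 0 };   /* members are zero; padding bytes are not
                                  clearly guaranteed to be zeroed */

    /* If the raw bytes of the object are handed to a binary API (hashed,
       written to a socket or a compressed stream), only `a` is guaranteed
       to be byte-for-byte reproducible. */
    (void) a;
    (void) b;
    return 0;
}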

Error: Expected a “;” Visual Studio 2013 [duplicate]

This is not a lambda function question, I know that I can assign a lambda to a variable.
What's the point of allowing us to declare, but not define a function inside code?
For example:
#include <iostream>

int main()
{
    // This is illegal
    // int one(int bar) { return 13 + bar; }

    // This is legal, but why would I want this?
    int two(int bar);

    // This gets the job done but man it's complicated
    class three {
        int m_iBar;
    public:
        three(int bar) : m_iBar(13 + bar) {}
        operator int() { return m_iBar; }
    };

    std::cout << three(42) << '\n';
    return 0;
}
So what I want to know is why would C++ allow two which seems useless, and three which seems far more complicated, but disallow one?
EDIT:
From the answers it seems that in-code declarations may be able to prevent namespace pollution; what I was hoping to hear, though, is why the ability to declare functions has been allowed while the ability to define functions has been disallowed.
It is not obvious why one is not allowed; nested functions were proposed a long time ago in N0295 which says:
We discuss the introduction of nested functions into C++. Nested
functions are well understood and their introduction requires little
effort from either compiler vendors, programmers, or the committee.
Nested functions offer significant advantages, [...]
Obviously this proposal was rejected, but since we don't have meeting minutes available online for 1993 we don't have a possible source for the rationale for this rejection.
In fact this proposal is noted in Lambda expressions and closures for C++ as a possible alternative:
One article [Bre88] and proposal N0295 to the C++ committee [SH93] suggest adding nested functions to C++. Nested functions are similar to lambda expressions, but are defined as statements within a function body, and the resulting closure cannot be used unless that function is active. These proposals also do not include adding a new type for each lambda expression, but instead implementing them more like normal functions, including allowing a special kind of function pointer to refer to them. Both of these proposals predate the addition of templates to C++, and so do not mention the use of nested functions in combination with generic algorithms. Also, these proposals have no way to copy local variables into a closure, and so the nested functions they produce are completely unusable outside their enclosing function.
Considering we do now have lambdas we are unlikely to see nested functions since, as the paper outlines, they are alternatives for the same problem and nested functions have several limitations relative to lambdas.
As for this part of your question:
// This is legal, but why would I want this?
int two(int bar);
There are cases where this would be a useful way to call the function you want. The draft C++ standard section 3.4.1 [basic.lookup.unqual] gives us one interesting example:
namespace NS {
    class T { };
    void f(T);
    void g(T, int);
}

NS::T parm;
void g(NS::T, float);

int main() {
    f(parm);                       // OK: calls NS::f
    extern void g(NS::T, float);
    g(parm, 1);                    // OK: calls g(NS::T, float)
}
Well, the answer is "historical reasons". In C you could have function declarations at block scope, and the C++ designers did not see the benefit in removing that option.
An example usage would be:
#include <iostream>

int main()
{
    int func();
    func();
}

int func()
{
    std::cout << "Hello\n";
    return 0;
}
IMO this is a bad idea because it is easy to make a mistake by providing a declaration that does not match the function's real definition, leading to undefined behaviour which will not be diagnosed by the compiler.
In the example you give, int two(int) is being declared as an external function, with that declaration only being valid within the scope of the main function.
That's reasonable if you only wish to make the name two available within main() so as to avoid polluting the global namespace within the current compilation unit.
Example in response to comments:
main.cpp:
int main() {
int foo();
return foo();
}
foo.cpp:
int foo() {
return 0;
}
No need for header files. Compile and link with
c++ main.cpp foo.cpp
It'll compile and run, and the program will return 0 as expected.
You can do these things, largely because they're actually not all that difficult to do.
From the viewpoint of the compiler, having a function declaration inside another function is pretty trivial to implement. The compiler needs a mechanism to allow declarations inside of functions to handle other declarations (e.g., int x;) inside a function anyway.
It will typically have a general mechanism for parsing a declaration. For the guy writing the compiler, it doesn't really matter at all whether that mechanism is invoked when parsing code inside or outside of another function--it's just a declaration, so when it sees enough to know that what's there is a declaration, it invokes the part of the compiler that deals with declarations.
In fact, prohibiting these particular declarations inside a function would probably add extra complexity, because the compiler would then need an entirely gratuitous check to see if it's already looking at code inside a function definition and based on that decide whether to allow or prohibit this particular declaration.
That leaves the question of how a nested function is different. A nested function is different because of how it affects code generation. In languages that allow nested functions (e.g., Pascal) you normally expect that the code in the nested function has direct access to the variables of the function in which it's nested. For example:
int foo() {
    int x;

    int bar() {
        x = 1; // Should assign to the `x` defined in `foo`.
    }
}
Without local functions, the code to access local variables is fairly simple. In a typical implementation, when execution enters the function, some block of space for local variables is allocated on the stack. All the local variables are allocated in that single block, and each variable is treated as simply an offset from the beginning (or end) of the block. For example, let's consider a function something like this:
int f() {
int x;
int y;
x = 1;
y = x;
return y;
}
A compiler (assuming it didn't optimize away the extra code) might generate code for this roughly equivalent to this:
stack_pointer -= 2 * sizeof(int); // allocate space for local variables
x_offset = 0;
y_offset = sizeof(int);
stack_pointer[x_offset] = 1; // x = 1;
stack_pointer[y_offset] = stack_pointer[x_offset]; // y = x;
return_location = stack_pointer[y_offset]; // return y;
stack_pointer += 2 * sizeof(int);
In particular, it has one location pointing to the beginning of the block of local variables, and all access to the local variables is as offsets from that location.
With nested functions, that's no longer the case--instead, a function has access not only to its own local variables, but to the variables local to all the functions in which it's nested. Instead of just having one "stack_pointer" from which it computes an offset, it needs to walk back up the stack to find the stack_pointers local to the functions in which it's nested.
Now, in a trivial case that's not all that terrible either--if bar is nested inside of foo, then bar can just look up the stack at the previous stack pointer to access foo's variables. Right?
Wrong! Well, there are cases where this can be true, but it's not necessarily the case. In particular, bar could be recursive, in which case a given invocation of bar might have to look some nearly arbitrary number of levels back up the stack to find the variables of the surrounding function. Generally speaking, you need to do one of two things: either you put some extra data on the stack, so it can search back up the stack at run-time to find its surrounding function's stack frame, or else you effectively pass a pointer to the surrounding function's stack frame as a hidden parameter to the nested function. Oh, but there's not necessarily just one surrounding function either--if you can nest functions, you can probably nest them (more or less) arbitrarily deep, so you need to be ready to pass an arbitrary number of hidden parameters. That means you typically end up with something like a linked list of stack frames to surrounding functions, and access to variables of surrounding functions is done by walking that linked list to find its stack pointer, then accessing an offset from that stack pointer.
That, however, means that access to a "local" variable may not be a trivial matter. Finding the correct stack frame to access the variable can be non-trivial, so access to variables of surrounding functions is also (at least usually) slower than access to truly local variables. And, of course, the compiler has to generate code to find the right stack frames, access variables via any of an arbitrary number of stack frames, and so on.
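As a rough, single-level illustration of that idea (hand-written pseudocode of the transformation, not actual compiler output), the nested function can be thought of as an ordinary function that receives a hidden pointer to the enclosing function's frame:

#include <iostream>

// Stand-in for foo()'s stack frame: the locals the nested function needs.
struct foo_frame {
    int x;
};

// What a nested `bar()` doing `x = 1;` might be lowered to.
static void bar_lowered(foo_frame* enclosing) {
    enclosing->x = 1;            // access an "outer" local through the frame pointer
}

int foo() {
    foo_frame frame{};           // foo's locals
    bar_lowered(&frame);         // the hidden frame pointer is passed explicitly here
    return frame.x;
}

int main() {
    std::cout << foo() << '\n';  // prints 1
    return 0;
}

With deeper nesting or recursion, that single pointer becomes the chain of frame links described above.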
This is the complexity that C was avoiding by prohibiting nested functions. Now, it's certainly true that a current C++ compiler is a rather different sort of beast from a 1970's vintage C compiler. With things like multiple, virtual inheritance, a C++ compiler has to deal with things on this same general nature in any case (i.e., finding the location of a base-class variable in such cases can be non-trivial as well). On a percentage basis, supporting nested functions wouldn't add much complexity to a current C++ compiler (and some, such as gcc, already support them).
At the same time, it rarely adds much utility either. In particular, if you want to define something that acts like a function inside of a function, you can use a lambda expression. What this actually creates is an object (i.e., an instance of some class) that overloads the function call operator (operator()) but it still gives function-like capabilities. It makes capturing (or not) data from the surrounding context more explicit though, which allows it to use existing mechanisms rather than inventing a whole new mechanism and set of rules for its use.
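A small sketch of that equivalence; the actual closure type generated for a lambda is unnamed and compiler-generated, and AddBase here is just a hand-written analogue:

#include <iostream>

int main() {
    int base = 13;

    // Lambda: function-like, defined inside a function, capturing `base`.
    auto add_base = [base](int bar) { return base + bar; };

    // Roughly equivalent hand-written closure type.
    class AddBase {
        int m_base;
    public:
        explicit AddBase(int base) : m_base(base) {}
        int operator()(int bar) const { return m_base + bar; }
    };
    AddBase add_base2(13);

    std::cout << add_base(42) << ' ' << add_base2(42) << '\n';  // 55 55
    return 0;
}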
Bottom line: even though it might initially seem like nested declarations are hard and nested functions are trivial, more or less the opposite is true: nested functions are actually much more complex to support than nested declarations.
The first one is a function definition, and it is not allowed. Obviously: what would be the use of putting the definition of a function inside another function?
But the other two are just declarations. Imagine you need to call the function int two(int bar); inside the main function, but it is defined below main(). Declaring it inside the function lets you call it from there.
The same applies to the third: declaring a class inside the function allows you to use that class inside the function without providing an appropriate header or reference.
int main()
{
    // This is legal, but why would I want this?
    int two(int bar);
    // Call two
    int x = two(7);

    class three {
        int m_iBar;
    public:
        three(int bar) : m_iBar(13 + bar) {}
        operator int() { return m_iBar; }
    };
    // Use class
    three *threeObj = new three(x);
    delete threeObj;
    return 0;
}
This language feature was inherited from C, where it served some purpose in C's early days (function declaration scoping maybe?).
I don't know if this feature is used much by modern C programmers and I sincerely doubt it.
So, to sum up the answer:
there is no purpose for this feature in modern C++ (that I know of, at least), it is here because of C++-to-C backward compatibility (I suppose :) ).
Thanks to the comment below:
Function prototype is scoped to the function it is declared in, so one can have a tidier global namespace - by referring to external functions/symbols without #include.
Actually, there is one use case which is conceivably useful. If you want to make sure that a certain function is called (and your code compiles), no matter what the surrounding code declares, you can open your own block and declare the function prototype in it. (The inspiration is originally from Johannes Schaub, https://stackoverflow.com/a/929902/3150802, via TeKa, https://stackoverflow.com/a/8821992/3150802).
This may be particularly useful if you have to include headers which you don't control, or if you have a multi-line macro which may be used in unknown code.
The key is that a local declaration supersedes previous declarations in the innermost enclosing block. While that can introduce subtle bugs (and, I think, is forbidden in C#), it can be used consciously. Consider:
// somebody's header
void f();
// your code
{ int i;
int f(); // your different f()!
i = f();
// ...
}
Linking may be interesting because chances are the headers belong to a library, but I guess you can adjust the linker arguments so that f() is resolved to your function by the time that library is considered. Or you tell it to ignore duplicate symbols. Or you don't link against the library.
This is not an answer to the OP question, but rather a reply to several comments.
I disagree with these points in the comments and answers: (1) that nested declarations are allegedly harmless, and (2) that nested definitions are useless.
(1) The prime counterexample for the alleged harmlessness of nested function declarations is the infamous Most Vexing Parse. IMO the spread of confusion caused by it is enough to warrant an extra rule forbidding nested declarations.
(2) The first counterexample to the alleged uselessness of nested function definitions is the frequent need to perform the same operation in several places inside exactly one function. There is an obvious workaround for this:
private:
inline void bar(int abc)
{
// Do the repeating operation
}
public:
void foo()
{
int a, b, c;
bar(a);
bar(b);
bar(c);
}
However, this solution often enough contaminates the class definition with numerous private functions, each of which is used in exactly one caller. A nested function definition would be much cleaner.
Specifically answering this question:
From the answers it seems that in-code declarations may be able to prevent namespace pollution; what I was hoping to hear, though, is why the ability to declare functions has been allowed while the ability to define functions has been disallowed.
Because consider this code:
int main()
{
int foo() {
// Do something
return 0;
}
return 0;
}
Questions for language designers:
Should foo() be available to other functions?
If so, what should be its name? int main(void)::foo()?
(Note that 2 would not be possible in C, the originator of C++)
If we want a local function, we already have a way - make it a static member of a locally-defined class. So should we add another syntactic method of achieving the same result? Why do that? Wouldn't it increase the maintenance burden of C++ compiler developers?
And so on...
Just wanted to point out that the GCC compiler allows you to define functions inside functions (nested functions, as a C-only extension). Read more about it here. Also, with the introduction of lambdas to C++, this question is a bit obsolete now.
I have found the ability to declare function prototypes inside other functions useful in the following case:
#include <cstdlib>

void do_something(int&);

int main() {
    int my_number = 10 * 10 * 10;
    do_something(my_number);
    return 0;
}

void do_something(int& num) {
    void do_something_helper(int&); // declare helper here
    do_something_helper(num);
    // Do something else
}

void do_something_helper(int& num) {
    num += std::abs(num - 1337);
}
What do we have here? Basically, you have a function that is supposed to be called from main, so what you do is that you forward declare it like normal. But then you realize, this function also needs another function to help it with what it's doing. So rather than declaring that helper function above main, you declare it inside the function that needs it and then it can be called from that function and that function only.
My point is, declaring function headers inside functions can be an indirect method of function encapsulation, which allows a function to hide some parts of what it's doing by delegating to some other function that only it is aware of, almost giving an illusion of a nested function.
Nested function declarations are allowed probably for
1. Forward references
2. To be able to declare a pointer to function(s) and pass around other function(s) in a limited scope.
Nested function definitions are not allowed probably due to issues like
1. Optimization
2. Recursion (enclosing and nested defined function(s))
3. Re-entrancy
4. Concurrency and other multithread access issues.
From my limited understanding :)

return a static structure in a function

C89
gcc (GCC) 4.7.2
Hello,
I am maintaining someone's software and I found this function that returns the address of a static structure. This should be OK, as static means the structure effectively behaves like a global, so the address of the structure remains valid until the program terminates.
DRIVER_API(driver_t*) driver_instance_get(void)
{
static struct tag_driver driver = {
/* Elements initialized here */
};
return &driver;
}
Used like this:
driver_t *driver = NULL;
driver = driver_instance_get();
The driver variable is used throughout the program until it terminates.
some questions:
Is it good practice to do like this?
Is there any difference to declaring it static outside the function at file level?
Why not pass a memory pool into the function and allocate the structure from it, so that the structure is allocated on the heap?
Many thanks for any suggestions,
Generally, no. It makes the function non-reentrant. It can be used with restraint in situations when the code author really knows what they are doing.
Declaring it outside would pollute the file-level namespace with the struct object's name. Since direct access to the object is not needed anywhere else, it makes more sense to declare it inside the function. There's no other difference.
Allocate on the heap? Performance would suffer. Memory fragmentation would occur. And the caller will be burdened with the task of explicitly freeing the memory. Forcing the user to use dynamic memory when it can be avoided is generally not a good practice.
A better idea for a reentrant implementation would be to pass a pointer to the destination struct from the outside. That way the caller has the full freedom of allocating the recipient memory in any way they see fit.
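A minimal sketch of that reentrant alternative, using a made-up struct since the real tag_driver members aren't shown in the question:

#include <string.h>

/* Hypothetical stand-in for struct tag_driver. */
struct demo_driver {
    int id;
    const char *name;
};

/* The caller supplies the storage; the function only initializes it. */
void demo_driver_init(struct demo_driver *out)
{
    memset(out, 0, sizeof *out);
    out->id = 1;
    out->name = "demo";
}

int main(void)
{
    struct demo_driver d;    /* lives wherever the caller wants */
    demo_driver_init(&d);
    return 0;
}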
Of course, what you see here can simply be a C implementation of a singleton-like idiom (and most likely it is, judging by the function's name). This means that the function is supposed to return the same pointer every time, i.e. all callers are supposed to see and share the same struct object through the returned pointer. And, possibly, they might even expect to modify the same object (assuming no concurrency). In that case what you see here is a function-wrapped implementation of a global variable. So, changing anything here in that case would actually defeat the purpose.
As long as you realize that any code that modifies the pointer returned by the function is modifying the same variable as any other code that got the same pointer is referring to, it isn't a huge problem. That 'as long as' can be a fairly important issue, but it works. It usually isn't the best practice — for example, the C functions such as asctime() that return a pointer to a single static variable are not as easy to use as those that put their result into a user-provided variable — especially in threaded code (the function is not reentrant). However, in this context, it looks like you're achieving a Singleton Pattern; you probably only want one copy of 'the driver', so it looks reasonable to me — but we'd need a lot more information about the use cases before pontificating 'this is diabolically wrong'.
There's not really much difference between a function static and a file static variable here. The difference is in the implementation code (a file static variable can be accessed by any code in the file; the function static variable can only be accessed in the one function) and not in the consumer code.
'Memory pool' is not a standard C concept. It would probably be better, in general, to pass in the structure to be initialized by the called function, but it depends on context. As it stands, for the purpose for which it appears to be designed, it is OK.
NB: The code would be better written as:
driver_t *driver = driver_instance_get();
The optimizer will probably optimize the code to that anyway, but there's no point in assigning NULL and then reassigning immediately.

C function parameter optimization: (MyStruct const * const myStruct) vs. (MyStruct const myStruct)

Example available at ideone.com:
int passByConstPointerConst(MyStruct const * const myStruct)
int passByValueConst (MyStruct const myStruct)
Would you expect a compiler to optimize the two functions above such that neither one would actually copy the contents of the passed MyStruct?
I do understand that many optimization questions are specific to individual compilers and optimization settings, but I can't be designing for a single compiler. Instead, I would like to have a general expectation as to whether or not I need to be passing pointers to avoid copying. It just seems like using const and allowing the compiler to handle the optimization (after I configure it) should be a better choice and would result in more legible and less error prone code.
In the case of the example at ideone.com, the compiler clearly is still copying the data to a new location.
In the first case (passing a const pointer to const) no copying occurs.
In the second case, copying does occur and I would not expect that to be optimized out if for no other reason because the address of the object is taken and then passed through an ellipsis into a function and from the point of view of the compiler, who knows what the function does with that pointer?
More generally speaking, I don't think changing call-by-value into call-by-reference is something compilers do. If you want copy by reference, implement it yourself.
Is it theoretically possible that a compiler could detect that it could just convert the function to be pass-by-reference? Yes; nothing in the C standard says it cannot.
Why are you worrying about this? If you are concerned about performance, has profiling shown copy-by-value to be a significant bottleneck in your software?
This topic is addressed by the comp.lang.c FAQ:
http://c-faq.com/struct/passret.html
When large structures are passed by value, this is commonly optimized by actually passing the address of the object rather than a copy. The callee then determines whether a copy needs to be made, or whether it can simply work with the original object.
The const qualifier on the parameter makes no difference. It is not part of the type; it is simply ignored. That is to say, these two function declarations are equivalent:
int foo(int);
int foo(const int);
It's possible for the declaration to omit the const, but for the definition to have it and vice versa. The optimization of the call cannot hinge on this const in the declaration. That const is not what creates the semantics that the object is passed by value and hence the original cannot be modified.
The optimization has to preserve the semantics; it has to look as if a copy of the object was really passed.
There are two ways you can tell that a copy was not passed: one is that a modification to the apparent copy affects the original. The other way is to compare addresses. For instance:
int compare(struct foo *ptr, struct foo copy);
Inside compare we can take the address of copy and see whether it is equal to ptr. If the optimization takes place even though we have done this, then it reveals itself to us.
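A concrete version of that test (my own sketch): if the implementation silently skipped the copy, the comparison below could come out true, which the language does not allow.

#include <stdio.h>

struct foo { int value; };

/* `copy` must behave as a distinct object, so its address can never
   equal the address of the original that the caller passed in. */
int compare(struct foo *ptr, struct foo copy)
{
    return ptr == &copy;   /* must be 0 (false) */
}

int main(void)
{
    struct foo original = { 42 };
    printf("%d\n", compare(&original, original));   /* prints 0 */
    return 0;
}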
The second declaration is actually a direct request by the user to receive a copy of the passed struct.
The const modifier eliminates the possibility of any modifications made to the local copy; however, it does not eliminate all the reasons for copying.
Firstly, the copy has to maintain its address identity, meaning that inside the second function the &myStruct expression should produce a value different from the address of any other MyStruct object. A smart compiler can, of course, detect the situations that depend on the address identity of the object.
Secondly, aliasing presents another problem. Imagine that the program has a global pointer MyStruct *global_struct and inside the second function someone modifies the *global_struct. There's a possibility that the *global_struct is the same struct object that was passed to the function as an argument. If no copy was made, the modifications made to *global_struct will be visible through the local parameter, which is a disaster. Aliasing issues are much more difficult (and in general case impossible) to resolve at compilation time, which is why compilers usually won't be able to optimize out the copying.
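A sketch of that aliasing scenario: because the parameter is passed by value, the write through global_struct must not show up in the local copy, so the copy (or something equivalent to it) has to exist.

#include <stdio.h>

struct MyStruct { int value; };

struct MyStruct *global_struct;

void show(struct MyStruct const copy)
{
    global_struct->value = 999;   /* modifies the caller's object */
    printf("%d\n", copy.value);   /* must still print 42 */
}

int main(void)
{
    struct MyStruct s = { 42 };
    global_struct = &s;
    show(s);                      /* prints 42, not 999 */
    return 0;
}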
So, I would expect any compiler to perform the copying, as requested.

Function pointer cast to different signature

I use a structure of function pointers to implement an interface for different backends. The signatures are very different, but the return values are almost all void, void * or int.
struct my_interface {
void (*func_a)(int i);
void *(*func_b)(const char *bla);
...
int (*func_z)(char foo);
};
But it is not required that a backend supports every interface function. So I have two possibilities: the first option is to check before every call whether the pointer is non-NULL. I don't like that very much, because of the readability and because I fear the performance impact (I haven't measured it, however). The other option is to have a dummy function for the rare cases where an interface function doesn't exist.
Therefore I'd need a dummy function for every signature. I wonder if it is possible to have only one for the different return values, and cast it to the given signature.
#include <stdio.h>
int nothing(void) {return 0;}
typedef int (*cb_t)(int);
int main(void)
{
cb_t func;
int i;
func = (cb_t) nothing;
i = func(1);
printf("%d\n", i);
return 0;
}
I tested this code with gcc and it works. But is it sane? Or can it corrupt the stack or can it cause other problems?
EDIT: Thanks to all the answers, I learned now much about calling conventions, after a bit of further reading. And have now a much better understanding of what happens under the hood.
By the C specification, casting a function pointer results in undefined behavior. In fact, for a while, GCC 4.3 prereleases would return NULL whenever you cast a function pointer, perfectly valid by the spec, but they backed out that change before release because it broke lots of programs.
Assuming GCC continues doing what it does now, it will work fine with the default x86 calling convention (and most calling conventions on most architectures), but I wouldn't depend on it. Testing the function pointer against NULL at every callsite isn't much more expensive than a function call. If you really want, you may write a macro:
#define CALL_MAYBE(func, args...) do { if (func) (func)(args); } while (0)
Or you could have a different dummy function for every signature, but I can understand that you'd like to avoid that.
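For completeness, a sketch of that per-signature approach; the names mirror the struct from the question but are otherwise made up.

#include <stdio.h>

struct my_interface {
    void  (*func_a)(int i);
    void *(*func_b)(const char *bla);
};

/* One no-op per signature, so every call goes through a pointer of the
   correct type and no function-pointer cast is needed. */
static void  dummy_func_a(int i)           { (void)i; }
static void *dummy_func_b(const char *bla) { (void)bla; return NULL; }

int main(void)
{
    /* A backend that implements neither function still gets callable slots. */
    struct my_interface backend = { dummy_func_a, dummy_func_b };
    backend.func_a(1);
    printf("%p\n", backend.func_b("x"));
    return 0;
}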
Edit
Charles Bailey called me out on this, so I went and looked up the details (instead of relying on my holey memory). The C specification says
766 A pointer to a function of one type may be converted to a pointer to a function of another type and back again;
767 the result shall compare equal to the original pointer.
768 If a converted pointer is used to call a function whose type is not compatible with the pointed-to type, the behavior is undefined.
and GCC 4.2 prereleases (this was settled way before 4.3) were following these rules: the cast of a function pointer did not result in NULL, as I wrote, but attempting to call a function through an incompatible type, i.e.
func = (cb_t)nothing;
func(1);
from your example, would result in an abort. They changed back to the 4.1 behavior (allow but warn), partly because this change broke OpenSSL, but OpenSSL has been fixed in the meantime, and this is undefined behavior which the compiler is free to change at any time.
OpenSSL was only casting functions pointers to other function types taking and returning the same number of values of the same exact sizes, and this (assuming you're not dealing with floating-point) happens to be safe across all the platforms and calling conventions I know of. However, anything else is potentially unsafe.
I suspect you will get an undefined behaviour.
You can assign (with the proper cast) a pointer to function to another pointer to function with a different signature, but when you call it weird things may happen.
Your nothing() function takes no arguments; to the compiler this may mean that it can optimize the usage of the stack, as there will be no arguments there. But here you call it with an argument; that is an unexpected situation and it may crash.
I can't find the proper point in the standard, but I remember it says that you can cast function pointers, but when you call the resulting function you have to do it with the right prototype, otherwise the behaviour is undefined.
As a side note, you should not compare a function pointer with a data pointer (like NULL), as these pointers may belong to separate address spaces. There's an appendix in the C99 standard that allows this specific case, but I don't think it's widely implemented. That said, on architectures where there is only one address space, casting a function pointer to a data pointer or comparing it with NULL will usually work.
You do run the risk of causing stack corruption. Having said that, if you declare the functions with extern "C" linkage (and/or __cdecl depending on your compiler), you may be able to get away with this. It would be similar then to the way a function such as printf() can take a variable number of arguments at the caller's discretion.
Whether this works or not in your current situation may also depend on the exact compiler options you are using. If you're using MSVC, then debug vs. release compile options may make a big difference.
It should be fine. Since the caller is responsible for cleaning up the stack after a call, it shouldn't leave anything extra on the stack. The callee (nothing() in this case) is OK since it won't try to use any parameters on the stack.
EDIT: this does assume cdecl calling conventions, which is usually the default for C.
As long as you can guarantee that you're making a call using a method that has the caller balance the stack rather than the callee (__cdecl). If you don't have a calling convention specified the global convention could be set to something else. (__stdcall or __fastcall) Both of which could lead to stack corruption.
This won't work unless you use implementation-specific/platform-specific stuff to force the correct calling convention. For some calling conventions the called function is responsible for cleaning up the stack, so they must know what's been pushed on.
I'd go for the check for NULL then call - I can't imagine it would have any impact on performance.
Computers can check for NULL about as fast as anything they do.
Casting a function pointer to NULL is explicitly not supported by the C standard. You're at the mercy of the compiler writer. It works OK on a lot of compilers.
It is one of the great annoyances of C that there is no equivalent of NULL or void* for function pointers.
If you really want your code to be bulletproof, you can declare your own nulls, but you need one for each function type. For example,
void void_int_NULL(int n) { (void)n; abort(); }
and then you can test
if (my_thing->func_a != void_int_NULL) my_thing->func_a(99);
Ugly, innit?
