Function pointer cast to different signature - c

I use a structure of function pointers to implement an interface for different backends. The signatures are very different, but the return values are almost all void, void * or int.
struct my_interface {
    void (*func_a)(int i);
    void *(*func_b)(const char *bla);
    ...
    int (*func_z)(char foo);
};
But a backend is not required to support every interface function. So I have two possibilities: the first option is to check before every call whether the pointer is non-NULL. I don't like that very much, because of the readability and because I fear the performance impact (I haven't measured it, however). The other option is to have a dummy function for the rare cases where an interface function doesn't exist.
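For comparison, the check-before-call variant would look roughly like this (a minimal sketch; the do_a wrapper and the backend parameter are made up for illustration, only the struct comes from above):
/* hypothetical call site: only call the hook if the backend provides it */
void do_a(struct my_interface *backend)
{
    if (backend->func_a != NULL)
        backend->func_a(42);
}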
For that I'd need a dummy function for every signature, so I wonder whether it is possible to have only one for the different return values, and cast it to the given signature.
#include <stdio.h>

int nothing(void) { return 0; }

typedef int (*cb_t)(int);

int main(void)
{
    cb_t func;
    int i;

    func = (cb_t) nothing;
    i = func(1);
    printf("%d\n", i);

    return 0;
}
I tested this code with gcc and it works. But is it sane? Or can it corrupt the stack or can it cause other problems?
EDIT: Thanks for all the answers; after a bit of further reading I have now learned a lot about calling conventions and have a much better understanding of what happens under the hood.

By the C specification, casting a function pointer results in undefined behavior. In fact, for a while, GCC 4.3 prereleases would return NULL whenever you cast a function pointer, perfectly valid by the spec, but they backed out that change before release because it broke lots of programs.
Assuming GCC continues doing what it does now, it will work fine with the default x86 calling convention (and most calling conventions on most architectures), but I wouldn't depend on it. Testing the function pointer against NULL at every call site isn't much more expensive than a function call. If you really want, you may write a macro:
#define CALL_MAYBE(func, args...) do { if (func) (func)(args); } while (0)
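At a call site this might be used like so (a sketch; iface is a hypothetical struct my_interface pointer from the question):
CALL_MAYBE(iface->func_a, 42);   /* expands to: if (iface->func_a) (iface->func_a)(42); */
CALL_MAYBE(iface->func_z, 'x');  /* any return value is simply discarded */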
Or you could have a different dummy function for every signature, but I can understand that you'd like to avoid that.
Edit
Charles Bailey called me out on this, so I went and looked up the details (instead of relying on my holey memory). The C specification says
766 A pointer to a function of one type may be converted to a pointer to a function of another type and back again;
767 the result shall compare equal to the original pointer.
768 If a converted pointer is used to call a function whose type is not compatible with the pointed-to type, the behavior is undefined.
and GCC 4.2 prereleases (this was settled way before 4.3) were following these rules: the cast of a function pointer did not result in NULL, as I wrote; rather, attempting to call a function through an incompatible type, i.e.
func = (cb_t)nothing;
func(1);
from your example, would result in an abort. They changed back to the 4.1 behavior (allow but warn), partly because this change broke OpenSSL, but OpenSSL has been fixed in the meantime, and this is undefined behavior which the compiler is free to change at any time.
OpenSSL was only casting function pointers to other function types taking and returning the same number of values of the same exact sizes, and this (assuming you're not dealing with floating-point) happens to be safe across all the platforms and calling conventions I know of. However, anything else is potentially unsafe.

I suspect you will get undefined behaviour.
You can assign (with the proper cast) a pointer to function to another pointer to function with a different signature, but when you call it weird things may happen.
Your nothing() function takes no arguments; to the compiler this may mean that it can optimize the usage of the stack, as there will be no arguments there. But here you call it with an argument, which is an unexpected situation and may crash.
I can't find the proper point in the standard, but I remember it says that you can cast function pointers, yet when you call the resulting function you have to do so with the right prototype, otherwise the behaviour is undefined.
As a side note, you should not compare a function pointer with a data pointer (like NULL), as these pointers may belong to separate address spaces. There's an appendix in the C99 standard that allows this specific case, but I don't think it's widely implemented. That said, on architectures where there is only one address space, casting a function pointer to a data pointer, or comparing it with NULL, will usually work.

You do run the risk of causing stack corruption. Having said that, if you declare the functions with extern "C" linkage (and/or __cdecl depending on your compiler), you may be able to get away with this. It would be similar then to the way a function such as printf() can take a variable number of arguments at the caller's discretion.
Whether this works or not in your current situation may also depend on the exact compiler options you are using. If you're using MSVC, then debug vs. release compile options may make a big difference.

It should be fine. Since the caller is responsible for cleaning up the stack after a call, it shouldn't leave anything extra on the stack. The callee (nothing() in this case) is OK since it won't try to use any parameters on the stack.
EDIT: this does assume cdecl calling conventions, which is usually the default for C.

It works as long as you can guarantee that you're making the call with a convention that has the caller balance the stack rather than the callee (__cdecl). If you don't specify a calling convention, the global default could be set to something else (__stdcall or __fastcall), either of which could lead to stack corruption.

This won't work unless you use implementation-specific/platform-specific stuff to force the correct calling convention. For some calling conventions the called function is responsible for cleaning up the stack, so they must know what's been pushed on.
I'd go for the check for NULL then call - I can't imagine it would have any impact on performance.
Computers can check for NULL about as fast as anything they do.

Calling a function through a pointer cast to a different signature is explicitly not supported by the C standard. You're at the mercy of the compiler writer. It works OK on a lot of compilers.
It is one of the great annoyances of C that there is no equivalent of NULL or void* for function pointers.
If you really want your code to be bulletproof, you can declare your own nulls, but you need one for each function type. For example,
void void_int_NULL(int n) { (void)n; abort(); }
and then you can test
if (my_thing->func_a != void_int_NULL) my_thing->func_a(99);
Ugly, innit?
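A sketch of how those typed "nulls" might be installed up front, so the comparison above is safe even for hooks a backend never sets (init_defaults is an invented name; the struct and sentinel are the ones from above):
/* hypothetical default-initializer: every unset hook points at its typed sentinel */
void init_defaults(struct my_interface *iface)
{
    iface->func_a = void_int_NULL;
    /* ... one assignment per interface function, each with its own sentinel ... */
}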

Related

What could happen when you call function returning int with void (*)() pointer?

I would like to know what could happen in a situation like this:
int foo()
{
    return 1;
}

void bar()
{
    void (*fPtr)();
    fPtr = (void (*)())foo;
    fPtr();
}
The address of a function returning int is assigned to a pointer of type void(*)(), and the pointed-to function is then called.
What does the standard say about it?
Regardless of the answer to the 1st question: are we safe to call the function like this? In practice, shouldn't the outcome just be that the callee (foo) puts something in EAX / RAX and the caller (bar) ignores the RAX contents and goes on with the program? I'm interested in the Windows calling conventions for x86 and x64.
Thanks a lot for your time
1)
From the C11 standard - 6.5.2.2 - 9
If the function is defined with a type that is not compatible with the type (of the expression) pointed to by the expression that denotes the called function, the behavior is undefined
It is clearly stated that if a function is called using a pointer of type that does not match the type it is defined with, it leads to Undefined Behavior.
But the cast is okay.
2)
Regarding your second question - In case of a well defined Calling convention XXX and implementation YYYY -
You might have disassembled a sample program (even this one) and figured out that it "works". But there are slight complications. You see, compilers these days are very smart; some are capable of performing precise interprocedural analysis. Such a compiler might figure out that you have behavior that is not defined, and it might make an assumption that breaks the behavior you expect.
A simple example -
Since the compiler sees that this function is being called with type void(*)(), it will assume that it is not supposed to return anything, and it might remove the instructions required to return the correct value.
In this case, other functions calling this function (in the right way) will get a bad value, and thus it would have visible bad effects.
PS: As pointed out by @PeterCordes, any modern, sane and useful compiler won't have such an optimization, and it is probably always safe to use such calls. But the intent of the answer and the example (probably too simplistic) is to remind you that one must tread very carefully when dealing with UB.
What happens in practice depends a lot on how the compiler implements this. You're assuming C is just a thin ("obvious") layer over asm, but it isn't.
In this case, a compiler can see that you're calling a function through a pointer with the wrong type (which has undefined behavior1), so it could theoretically compile bar() to:
bar:
ret
A compiler can assume undefined behavior never happens during the execution of a program. Calling bar() always results in undefined behavior. Therefore the compiler can assume bar is never called and optimize the rest of the program based on that.
1 C99, 6.3.2.3/8:
If a converted pointer is used to call a function whose type is not compatible with the pointed-to type, the behavior is undefined.
About sub-question 2:
Nearly all x86 calling conventions I know (cdecl, stdcall, syscall, fastcall, pascal, 64-bit Windows and 64-bit Linux) will allow void functions to modify the ax/eax/rax register and the difference between an int function and a void function is only that the returned value is passed in the eax register.
The same is true for the "default" calling convention on most other CPUs I have already worked with (MIPS, Sparc, ARM, V850/RH850, PowerPC, TriCore). The register name is not eax but different, of course.
So when using these calling convention you can safely call the int function using a void pointer.
There are, however, calling conventions where this is not the case: I've read about a calling convention that implicitly uses an additional argument for non-void functions...
At the asm level only, this is safe in all normal x86 calling conventions for integer types: eax/rax is call-clobbered, and the caller doesn't have to do anything differently to call a void function vs. an int function and ignoring the return value.
For non-integer return types, this is a problem even in asm. Struct returns are done via a hidden pointer arg that displaces the other args, and the caller is going to store through it so it better not hold garbage. (Assuming the case is more complex than the one shown here, so the function doesn't just inline when optimization is enabled.) See the Godbolt link below for an example of calling through a casted function pointer that results in a store through a garbage "pointer" in rdi.
For legacy 32-bit code, FP return values are in st(0) on the x87 stack, and it's the caller's responsibility to not leave the x87 stack unbalanced. float / double / __m128 return values are safe to ignore in 64-bit ABIs, or in 32-bit code using a calling convention that returns FP values in xmm0 (SSE/SSE2).
In C, this is UB (see other answers for quotes from the standard). When possible / convenient, prefer a workaround (see below).
It's possible that future aggressive optimizations based on a no-UB assumption could break code like this. For example, a compiler might assume any path that leads to UB is never taken, so an if() condition that leads to this code running must always be false.
Note that merely compiling bar() can't break foo() or other functions that don't call bar(). There's only UB if bar() ever runs, so emitting a broken externally-visible definition for foo() (like @Ajay suggests) is not a possible consequence. (Except maybe if you use whole-program optimization and the compiler proves that bar() is always called at least once.) The compiler can break functions that call bar(), though, at least the parts of them that lead to the UB.
However, it is allowed (by accident or on purpose) by many current compilers for x86. Some users expect this to work, and this kind of thing is present in some real codebases, so compiler devs may support this usage even if they implement aggressive optimizations that would otherwise assume this function (and thus all paths that lead to it in any callers) never run. Or maybe not!
An implementation is free to define the behaviour in cases where the ISO C standard leaves the behaviour undefined. However, I don't think gcc/clang or any other compiler explicitly guarantees that this is safe. Compiler devs might or might not consider it a compiler bug if this code stopped working.
I definitely can't recommend doing this, because it may well not continue to be safe. Hopefully if compiler devs decide to break it with aggressive no-UB-assuming optimizations, there will be options to control which kinds of UB are assumed not to happen. And/or there will be warnings. As discussed in comments, whether to take a risk of possible future breakage for short-term performance / convenience benefits depends on external factors (like will lives be at risk, and how carefully you plan to maintain in the future, e.g. checking compiler warnings with future compiler versions.)
Anyway, if it works, it's because of the generosity of your compiler, not because of any kind of standards guarantee. This compiler generosity may be intentional and semi-maintained, though.
See also discussion on another answer: the compilers people actually use aim to be useful, not just standards compliant. The C standard allows enough freedom to make a compliant but not very useful implementation. (Many would argue that compilers that assume no signed overflow even on machines where it has well-defined semantics have already gone past this point, though. See also What Every C Programmer Should Know About Undefined Behavior (an LLVM blog post).)
If the compiler can't prove that it would be UB (e.g. if it can't statically determine which function a function-pointer is pointing to), there's pretty much no way it can break (if the functions are ABI-compatible). Clang's runtime UB-sanitizer would still find it, but a compiler doesn't have much choice in code-gen for calling through an unknown function pointer. It just has to call the way the ABI / calling convention says it should. It can't tell the difference between casting a function pointer to the "wrong" type and casting it back to the correct type (unless you dereference the same function pointer with two different types, which means one or the other must be UB. But the compiler would have a hard time proving it, because the first call might not return. noreturn functions don't have to be marked noreturn.)
But remember that link-time optimization / inlining / constant-propagation could let the compiler see which function is pointed to even in a function that gets a function pointer as an arg or from a global variable.
Workarounds (for a function before you take its address):
If the function won't be part of Link-Time-Optimization, you could lie to the compiler and give it a prototype that matches how you want to call it (as long as you're sure the asm-level calling convention is compatible).
You could write a wrapper function. It's potentially less efficient (an extra jmp if it just tail-calls the original), but if it inlines then you're cloning the function to make a version that doesn't do any of the work of creating a return value. This might still be a loss if that was cheap compared to the extra I-cache / uop cache pressure of a 2nd definition, if the version that does return a value is used too.
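Such a wrapper might look like this (a sketch; foo_as_void is an invented name, and foo stands for whatever int-returning function you really want to call):
int foo(void);            /* the real function, returning a value */

/* wrapper with the signature the function pointer will actually have;
   it just calls foo and drops the return value */
void foo_as_void(void)
{
    (void)foo();
}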
You could also define an alternate name for a function, using linker stuff so both symbols have the same address. That way you can have two prototypes for the same block of compiler-generated machine code.
Using the GNU toolchain, you can use an attribute on a prototype to make it a weak alias (at the asm / linker level). This doesn't work for all targets; it works for ELF object files, but IDK about Windows.
// in GNU C:
int foo(void) { return 4; }

// include this line in a header if you want; weakref is per translation unit
// a definition (or prototype) for foo doesn't have to be visible.
static void foo_void(void) __attribute((weakref("foo"))); // in C++, use the mangled name

int bar_safe(void) {
    void (*goo)(void) = (void (*)())foo_void;
    goo();
    return 1;
}
example on Godbolt for gcc7.2 and clang5.0.
gcc7.2 inlines foo through the weak alias call to foo_void! clang doesn't, though. I think that means that this is safe, and so is function-pointer casting, in gcc. Alternatively it means that this is potentially dangerous, too. >.<
clang's undefined-behaviour sanitizer does runtime function typeinfo checking (in C++ mode only) for calls through function pointers. int () is different from void (), so it will detect and report this UB on x86. (See the asm on Godbolt). It probably doesn't mean it's actually unsafe at the moment, though, because it doesn't yet detect / warn about it at compile time.
Use the above workarounds in the code that takes the address of the function, not in the code that receives a function pointer.
You want to let the compiler see a real function with the signature that it will eventually be called with, regardless of the function pointer type you pass it through. Make an alias / wrapper with a signature that matches what the function pointer will eventually be cast to. If that means you have to cast the function pointer to pass it in the first place, so be it.
(I think it's safe to create a pointer to the wrong type as long as it's not dereferenced. It's UB to even create an unaligned pointer, even if you don't dereference, but that's different.)
If you have code that needs to deref the same function pointer as int foo(args) in one place and void foo(args) in another place, you're screwed as far as avoiding UB.
C11 §6.3.2.3 paragraph 8:
A pointer to a function of one type may be converted to a pointer to a function of another type and back again; the result shall compare equal to the original pointer. If a converted pointer is used to call a function whose type is not compatible with the referenced type, the behavior is undefined.

C callbacks - optional argument?

Hey I have implemented some callbacks in my C program.
typedef void (*server_end_callback_t)(void *callbackArg);
then I have a variable inside a structure to store this callback:
server->server_end_callback = on_server_end;
What I have noticed is that I can pass in an on_server_end callback implementation that skips the void *callbackArg, and the code works correctly (no errors).
Is it correct to skip some arguments like void * implementing callback functions which prototypes takes such arguments?
void on_server_end(void) {
    // some code goes here
}
I believe it is undefined behavior from the C point of view, but it works because of the calling convention you are using.
For example, the AMD64 ABI states that the first six arguments are passed to the called function in CPU registers, not on the stack. So neither caller nor callee needs any clean-up for the first six arguments, and it works fine.
For more info please refer to Wikipedia.
The code works correctly because of the argument-passing convention. The caller knows that the callee expects some arguments - exactly one. So it prepares the argument(s) (either in registers or on the stack - depending on the ABI of your platform). Then the callee uses those parameters or not. After the callee returns, the caller cleans up the stack if necessary. That's the mystery.
However, you should not abuse this specific behaviour by passing an incompatible function. It is good practice to always compile your code with the options -W -Wall -Werror (clang/gcc and compatible). Enabling these options would give you a compilation error.
C allows a certain amount of playing fast and loose with function arguments. So
void (*fptr) ();
means "a pointer to a function which takes zero or more arguments". However this is for backwards compatibility, it's not wise to use it in new C code. The other way round
void callback(void *ptr)
{
    /* don't use the void * */
}

/* in another scope */
void (*fptr)() = callback;  /* pointer with unspecified arguments */
(*fptr)(); /* call with no arguments */
also works, as long as you don't use the void *, and I believe it is guaranteed to work though I'm not completely sure about that (on a modern machine the calling convention is to pass the first arguments in registers, so you just get a garbage register, and it will work). Again, it is a very bad idea to rely on it.
You can pass a void *, which you then cast to a structure of appropriate type containing as many arguments as you wish. That is a good idea and a sensible use of C's flexibility.
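A sketch of that last suggestion, reusing the callback name from the question (the server_end_args struct is made up for illustration):
#include <stdio.h>

/* bundle whatever "arguments" the callback needs into one struct
   and pass it through the single void * parameter */
struct server_end_args {
    int status;
    const char *reason;
};

void on_server_end(void *callbackArg)
{
    struct server_end_args *args = callbackArg;  /* void * converts implicitly in C */
    if (args != NULL)
        printf("server ended: %d (%s)\n", args->status, args->reason);
}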
Is it correct to skip some arguments like void * implementing callback functions which prototypes takes such arguments?
No, it is not. A function with a given declaration is not compatible with a function of a different declaration. This rule applies to pointers to functions too.
So if you have a function such as pthread_create(..., my_callback, ...); and it expects you to pass a function pointer of type void* (*) (void*), then you cannot pass a function pointer of a different format. This invokes undefined behavior and compilers may generate incorrect code.
That being said, function pointer compatibility is a common non-standard extension on many systems. If the calling convention of the system is specified in a way that the function format doesn't matter, and the specific compiler port supports it, then such code might work just fine.
Such code is however not portable and not standard. It is best to avoid it whenever possible.
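The portable alternative is simply to keep the declared parameter and ignore it explicitly; a minimal sketch:
/* matches the server_end_callback_t type exactly */
void on_server_end(void *callbackArg)
{
    (void)callbackArg;  /* deliberately unused */
    /* some code goes here */
}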

Is it safe to ignore return values when calling symbols from a C library

I've been fiddling around with LLVM and wrote a simple compiler. It uses the libc as its standard library. Naturally I have to declare the functions in my IR somehow.
I noticed that the following seems to work:
declare void @puts(i8*)
In C the function is defined like this:
int puts(const char *s);
so it should really be
declare i32 @puts(i8*)
This is a really simple case but I am sure that somewhere along the road I will make mistakes declaring these functions. For instance I was not aware that puts returned an int before I read the manpage.
How grave are these mistakes? Does it mess with the stack or does LLVM handle it somehow? What are the security implications of such mistakes?
Note: I was not able to produce any errors with the void declaration of puts.
The answer to this depends on the calling convention used by your C compiler's ABI. In the conventions used by most C compilers on x86 and x86-64, the return value is passed in a register. Mis-declaring an int-returning function as void will cause the value of the return register to be ignored (which it would be anyway if you're not using it). This doesn't cause any harm because the caller is responsible for saving the eax register anyway.
For example, the following code:
void callee(int, int, int);

void caller(void)
{
    callee(1, 2, 3);
}
...will be compiled into exactly the same assembly if you declare callee to return int instead of void.
This applies to "small" return types, i.e. those that consist of an integer, a double-precision floating-point, or a 64-bit integer (which x86 returns in two integer registers). Large return types are handled differently - if you change the declaration of callee to something like:
struct { char x[100]; } callee(int, int, int);
...the calling code will change drastically, despite the passed-in types not having changed. The return structure will now be allocated on the caller's stack, and its address will be passed as a hidden first argument to the callee (this is on x86, things are slightly different on x86-64), which is expected to write the return value to that area.
In other words, as long as you understand the calling convention, and you are careful not to mis-declare functions that return large types by value (which AFAIK don't exist in the standard C and POSIX libraries), the erroneous declaration will work.
Small return values are usually placed in a return-value register, so ignoring those won't fatally crash. For larger values, some ABIs require the caller to allocate stack space and pass its address as an invisible first parameter to the function; in this case your program would probably crash quickly, since you wouldn't be allocating or passing it. If you're using an ABI that doesn't store previous frame pointers - i.e. the callee must know how big its own stack frame is and the ABI allows callees to adjust the stack pointer - this would be fatal as well.
Basically it might work until it doesn't.
The answers so far are good, but one big implication to consider is ignoring the return values of C functions that, as part of their functionality, allocate memory or open/create files and then return some kind of pointer.
Ignoring these will, of course, orphan the memory that will only free up when the program exits (if it makes it that far), leave files open, etc. etc.
Basically, if the function you are calling returns anything BUT register values or stack instance values the implications may be significant.
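For instance, ignoring returns like the following leaks the resources outright (a minimal sketch, not specific to LLVM):
#include <stdio.h>
#include <stdlib.h>

void leaky(void)
{
    malloc(1024);           /* returned pointer discarded: the memory can never be freed */
    fopen("log.txt", "w");  /* returned FILE * discarded: the stream can never be closed */
}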

C function call with too few arguments

I am working on some legacy C code. The original code was written in the mid-90s, targeting Solaris and Sun's C compiler of that era. The current version compiles under GCC 4 (albeit with many warnings), and it seems to work, but I'm trying to tidy it up -- I want to squeeze out as many latent bugs as possible as I determine what may be necessary to adapt it to 64-bit platforms, and to compilers other than the one it was built for.
One of my main activities in this regard has been to ensure that all functions have full prototypes (which many did not have), and in that context I discovered some code that calls a function (previously un-prototyped) with fewer arguments than the function definition declares. The function implementation does use the value of the missing argument.
Example:
impl.c:
int foo(int one, int two) {
    if (two) {
        return one;
    } else {
        return one + 1;
    }
}
client1.c:
extern foo();

int bar() {
    /* only one argument(!): */
    return foo(42);
}
client2.c:
extern int foo();
int (*foop)() = foo;

int baz() {
    /* calls the same function as does bar(), but with two arguments: */
    return (*foop)(17, 23);
}
Questions: is the result of a function call with missing arguments defined? If so, what value will the function receive for the unspecified argument? Otherwise, would the Sun C compiler of ca. 1996 (for Solaris, not VMS) have exhibited a predictable implementation-specific behavior that I can emulate by adding a particular argument value to the affected calls?
EDIT: I found a Stack Overflow thread, "C function with no parameters behavior", which gives a very succinct, specific and accurate answer. PMG's comment at the end of the answer talks about UB. Below were my original thoughts, which I think are along the same lines and explain why the behaviour is UB.
Questions: is the result of a function call with missing arguments defined?
I would say no... The reason is that I think the function will operate as if it had the second parameter, but as explained below, that second parameter could just be junk.
If so, what value will the function receive for the unspecified argument?
I think the values received are undefined. This is why you could have UB.
There are two general ways of parameter passing that I'm aware of... (Wikipedia has a good page on calling conventions)
Pass by register. I.e., the ABI (Application Binary Interface) for the platform will say that registers x & y, for example, are for passing in parameters, and any more above that get passed via the stack...
Everything gets passed via stack...
Thus when you give one module a declaration of the function with an "...unspecified (but not variable) number of parameters..." (the extern declaration), the caller will only place the parameters you actually give it (in this case 1) in the registers or stack locations where the real function will look for its parameter values. The slot for the second parameter, which is missed out, therefore essentially contains random junk.
EDIT: Based on the other stack thread I found, I would amend the above to say that the extern declares not a function with no parameters, but a function with an "unspecified (but not variable) number of parameters".
When the program jumps to the function, that function assumes the parameter-passing mechanism has been correctly obeyed, so it looks in the registers or on the stack and uses whatever values it finds... assuming them to be correct.
Otherwise, would the Sun C compiler of ca. 1996 (for Solaris, not VMS) have exhibited a predictable implementation-specific behavior
You'd have to check your compiler documentation. I doubt it... the extern definition would be trusted completely so I doubt the registers or stack, depending on parameter passing mechanism, would get correctly initialised...
If the number or the types of arguments (after default argument promotions) do not match the ones used in the actual function definition, the behavior is undefined.
What will happen in practice depends on the implementation. The values of missing parameters will not be meaningfully defined (assuming the attempt to access missing arguments will not segfault), i.e. they will hold unpredictable and possibly unstable values.
Whether the program will survive such incorrect calls will also depend on the calling convention. A "classic" C calling convention, in which the caller is responsible for placing the parameters into the stack and removing them from there, will be less crash-prone in presence of such errors. The same can be said about calls that use CPU registers to pass arguments. Meanwhile, a calling convention in which the function itself is responsible for cleaning the stack will crash almost immediately.
It is very unlikely that the bar function ever gave consistent results in the past. The only thing I can imagine is that it is always called on fresh stack space that was cleared upon startup of the process, in which case the second parameter would be 0. Or the difference between returning one and one+1 didn't make a big difference in the bigger scope of the application.
If it really is like you depict in your example, then you are looking at a big fat bug. In the distant past there was a coding style where vararg functions were implemented by specifying more parameters than passed, but just as with modern varargs you should not access any parameters not actually passed.
I assume that this code was compiled and run on the Sun SPARC architecture. According to this ancient SPARC web page: "registers %o0-%o5 are used for the first six parameters passed to a procedure."
In your example with a function expecting two parameters, with the second parameter not specified at the call site, it is likely that register %o1 always happened to have a sensible value when the call was made.
If you have access to the original executable and can disassemble the code around the incorrect call site, you might be able to deduce what value %o1 had when the call was made. Or you might try running the original executable on a SPARC emulator, like QEMU. In any case this won't be a trivial task!

What is the purpose of this return value?

I ran into some code I couldn't find an answer to on Google or SO. I am looking at a thread function which returns void* as you could expect. However, before the thread function ends it suddenly pulls this stunt,
return (void*) 0;
What is the purpose of that? I can't make any sense of it.
edit:
After understanding this is the same as NULL-- it is my thought they used this to skip including stdlib.
(void*)0 is the null pointer, a.k.a. NULL (which actually is a macro defined in several header files, e.g. stddef.h or stdio.h, that basically amounts to the same thing as (void*)0).
Update:
How to explain null pointers and their usefulness? Basically, it's a special value that says, "This pointer doesn't point anywhere," or, "This pointer is not set to a valid object reference."
Historical note: Tony Hoare, who is said to have invented null references in 1965, is known to regret that invention and calls it his "Billion Dollar Mistake".
Whenever you work with pointers, you must make sure never to dereference a null pointer (because by definition it doesn't reference anything). If you do it anyway, you'll get abnormal program termination, a general protection fault, or unexpected program behaviour at the very least.
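In practice that just means guarding every dereference; a trivial sketch (the function name is made up):
#include <stdio.h>

void print_name(const char *name)
{
    if (name != NULL)   /* never dereference a null pointer */
        puts(name);
}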
Well, I have not encountered any C++ compiler saying NULL or 0 cannot be converted to void* (or to/from int*, for example). But there might be some smart compilers or static-analysis tools that would report 0 to void-pointer conversion as a warning.
That statement is commonly found in callback implementations (like a thread routine), which must adhere to the callback prototype being demanded (pthread_create, CreateThread, etc.). Therefore, when you implement such a function, you must return the type it demands. For a pthread_create routine, you must return a void * - and that's why return (void*)0; is there.
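A typical thread routine of that kind might look like this (a sketch; only the signature is dictated by pthread_create, the names are made up):
#include <pthread.h>

/* must match the void *(*)(void *) type that pthread_create expects */
void *worker(void *arg)
{
    (void)arg;          /* unused here */
    /* ... do the thread's work ... */
    return (void *)0;   /* equivalent to returning NULL */
}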
