Malloced pointer changes value on return - C

I have code that looks like this:
Foo* create(args) {
    Foo *t = malloc(sizeof(Foo));
    // Fill up fields in *t from args.
    return t;
}
The call is
Foo *created = create(args);
Note that the function and the call to the function are two separate modules.
The value of the pointer assigned to t by malloc is slightly different from what ends up in created. Seemingly the most significant bits of the address are replaced with fffff, while the low-order hex digits (around 6-7 of them) are the same.
I'm at a loss as to what's going on.
I'm using GCC 4.6

The most likely explanation, given what you provided, is that at the point of the call the function create is undeclared. A permissive C compiler assumed that the unknown function create returned an int and generated code that effectively truncated the pointer value (more precisely, it sign-extended the assumed int return value into the most significant bits of the receiving pointer), on a platform where pointers are wider than int, e.g. a 64-bit platform.
Most likely the compiler issued a warning about an undeclared function being called, but the warning was ignored by the user.
Make sure a declaration (or, better, a prototype) of create is visible at the point of the call. If you are compiling in C99 mode (or later), ask your compiler to strictly enforce the C99 requirements. And don't ignore compiler diagnostic messages.
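For instance, a minimal sketch of the fix (the header name foo.h and the int parameter are placeholders, since the real signature of create isn't shown):
/* foo.h -- hypothetical header shared by both modules */
#ifndef FOO_H
#define FOO_H

typedef struct Foo { int field; /* ... */ } Foo;

Foo *create(int args);   /* prototype visible to every caller */

#endif

/* caller.c */
#include "foo.h"

Foo *use(int args) {
    Foo *created = create(args);   /* the full pointer value now survives the call */
    return created;
}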

Related

Casting function pointers with different return types

This question is about using function pointers, which are not precisely compatible, but which I hope I can use nevertheless as long as my code relies only on the compatible parts. Let's start with some code to get the idea:
typedef void (*funcp)(int* val);

static void myFuncA(int* val) {
    *val *= 2;
    return;
}

static int myFuncB(int* val) {
    *val *= 2;
    return *val;
}

int main(void) {
    funcp f = NULL;
    int v = 2;
    f = myFuncA;
    f(&v);
    // now v is 4
    // explicit cast so the compiler will not complain
    f = (funcp)myFuncB;
    f(&v);
    // now v is 8
    return 0;
}
While the arguments of myFuncA and myFuncB are identical and fully compatible, the return values are not and are thus just ignored by the calling code. I tried the above code and it works correctly using GCC.
What I learned so far from here and here is that the functions are incompatible by definition of the standard and may cause undefined behavior. My intuition, however, tells me that my code example will still work correctly, since it does not rely in any way on the incompatible parts (the return value). However, in the answers to this question a possible corruption of the stack has been mentioned.
So my question is: Is my example above valid C code so it will always work as intended (without any side effects), or does it depend on the compiler?
EDIT:
I want to do this in order to use a "more powerful" function with a "less powerful" interface. In my example funcp is the interface, but I would like to provide additional functionality like myFuncB for optional use.
Agreed, it is undefined behaviour; don't do that!
Yes, the code functions, i.e. it doesn't fall over, but any value you think you are getting back through the void-returning pointer type is undefined.
In a very old version of C the return type was unspecified, and int and void functions could be 'safely' intermixed, the integer value being returned in the designated accumulator register. I remember writing code using this 'feature'!
For almost anything else you might return, the results are likely to be fatal.
Going forward a few years: floating-point return values are often returned in a floating-point coprocessor register (we are still in the 80s), so you can't mix int and float return types; the coprocessor state gets confused if the caller does not strip off the value, or strips off a value that was never placed there, and you get an FP exception. Worse, if you build with FP emulation, the FP value may be returned on the stack as described next. Also, on 32-bit builds it is possible to pass 64-bit objects (on 16-bit builds you can have 32-bit objects), which are returned either in multiple registers or on the stack. If they are on the stack and allocated with the wrong size, some local stomping will occur.
Now, C supports struct return types and return-value copy optimisations. All bets are off if you don't match the types correctly.
Also, some function models have the caller allocate stack space for the parameters of the call, but the function itself releases the stack. Disagreement between caller and implementation on the number or types of parameters and return values would be fatal.
By default, C function names are exported and linked undecorated - just the function name defines the symbol - so different modules of your program can hold conflicting views of a function's signature; nothing catches the mismatch at link time, and it can generate very interesting runtime errors.
In C++, function names are highly decorated, primarily to allow overloading, but this also helps avoid signature mismatches. It keeps the arguments in step, although (as noted by #Jens) the return type is not encoded into the decorated name, primarily because the return type isn't used for overload resolution (it wasn't, though I think it can now occasionally have an influence).
Yes, this is undefined behaviour, and you should never rely on undefined behaviour if you want to write portable code.
Functions with different return types can have different calling conventions. Your example will probably work for small return types, but when returning large structs (e.g. larger than 32 bits) some compilers generate code in which the struct is returned in a temporary memory area that has to be cleaned up by the caller.
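If the goal is to expose the "more powerful" myFuncB through the funcp interface without a cast, one portable alternative is a small wrapper whose type matches funcp exactly (a sketch based on the question's code; the wrapper name is made up):
/* Signature matches funcp exactly; the extra return value of myFuncB
   is simply discarded inside the wrapper. */
static void myFuncB_as_funcp(int *val) {
    (void)myFuncB(val);
}

/* usage: no cast, no undefined behaviour */
f = myFuncB_as_funcp;
f(&v);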

Segmentation fault while accessing the return address from a C function in 64 bit machine

I have C code (Linux, x86_64) something like this:
typedef struct
{
    char k[32];
    int v;
} ABC;

ABC states[6] = {0};

ABC* get_abc()
{
    return &states[5];
}
while in main():
int main()
{
    ABC *p = get_abc();
    .
    .
    .
    printf("%d\n", p->v);
}
I am getting a segmentation fault at the printf statement while accessing p->v. I tried to debug it in gdb and it says "cannot access the memory". One important thing here: when I compile this code, gcc gives me a warning on ABC *p = get_abc(); saying that I am trying to convert a pointer from an integer. My question is: I am returning the address of a structure from get_abc(), so why does the compiler give me such a warning? Why does the compiler treat it as an integer? I think the segmentation fault is caused by what that warning describes, since an int cannot hold a memory address on x86_64.
Any help would be appreciated.
Declare the get_abc prototype before the main function. If a function prototype is not available before the function call, the compiler treats that function by default as taking int arguments and returning int. Here get_abc actually returns an 8-byte address, but that value gets truncated to 4 bytes before being stored in the ABC *p variable, which leads to the crash.
ABC* get_abc();

int main()
{
    ABC *p = get_abc();
}
Note: This crash will not occur on a 32-bit machine, where the size of an int and the size of an address are both 4 bytes, because no truncation happens. But the warning will still be there.
You haven't shown us all your code, but I can guess with some confidence that the get_abc() and main() functions are defined in separate source files, and that there's no declaration of get_abc() visible from the call in main().
You should create a header file that contains a declaration of get_abc():
ABC *get_abc();
and #include that header both in the file that defines get_abc() and in the one that defines main(). (You'll also need header guards.) You'll need to move the definition of the type ABC to that header.
Or, as a quick-and-dirty workaround, you can add an explicit declaration before your definition of main() -- but that's a rather brittle solution, since it depends on you to get the declaration exactly right.
In the absence of a visible declaration, an undeclared function is assumed to return int. The compiler sees your call to get_abc(), generates code to call it as if it returned an int, and implicitly converts that int value to a pointer. Hilarity ensues.
Note: There actually is no implicit conversion from int to pointer types, apart from the special case of a null pointer constant, but many compilers implement such an implicit conversion for historical reasons. Also, the "implicit int" rule was dropped in the 1999 version of the standard, but again, many compilers still implement it for historical reasons. Your compiler should have options to enable better warnings. If you're using gcc, try gcc -pedantic -std=c99 -Wall -Wextra.
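Putting that together, a minimal sketch of such a header (the file name abc.h is just an example):
/* abc.h */
#ifndef ABC_H
#define ABC_H

typedef struct
{
    char k[32];
    int v;
} ABC;

ABC *get_abc(void);   /* visible prototype: no implicit int, no truncated pointer */

#endif
Then #include "abc.h" in both the file that defines get_abc() and the file that defines main().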

return struct from function [duplicate]

The following simple code segfaults under gcc 4.4.4
#include <stdio.h>

typedef struct Foo Foo;

struct Foo {
    char f[25];
};

Foo foo() {
    Foo f = {"Hello, World!"};
    return f;
}

int main() {
    printf("%s\n", foo().f);
}
Changing the final line to
Foo f = foo(); printf("%s\n", f.f);
Works fine. Both versions work when compiled with -std=c99. Am I simply invoking undefined behavior, or has something in the standard changed that permits the code to work under C99? Why does it crash under C89?
I believe the behavior is undefined both in C89/C90 and in C99.
foo().f is an expression of array type, specifically char[25]. C99 6.3.2.1p3 says:
Except when it is the operand of the sizeof operator or the unary
& operator, or is a string literal used to initialize an array, an
expression that has type "array of type" is converted to an
expression with type "pointer to type" that points to the initial
element of the array object and is not an lvalue. If the array object
has register storage class, the behavior is undefined.
The problem in this particular case (an array that's an element of a structure returned by a function) is that there is no "array object". Function results are returned by value, so the result of calling foo() is a value of type struct Foo, and foo().f is a value (not an lvalue) of type char[25].
This is, as far as I know, the only case in C (up to C99) where you can have a non-lvalue expression of array type. I'd say that the behavior of attempting to access it is undefined by omission, likely because the authors of the standard (understandably IMHO) didn't think of this case. You're likely to see different behaviors at different optimization settings.
The new 2011 C standard patches this corner case by inventing a new storage class. N1570 (the link is to a late pre-C11 draft) says in 6.2.4p8:
A non-lvalue expression with structure or union type, where the
structure or union contains a member with array type (including,
recursively, members of all contained structures and unions) refers to
an object with automatic storage duration and temporary lifetime.
Its lifetime begins when the expression is evaluated and its initial
value is the value of the expression. Its lifetime ends when the
evaluation of the containing full expression or full declarator ends.
Any attempt to modify an object with temporary lifetime results in
undefined behavior.
So the program's behavior is well defined in C11. Until you're able to get a C11-conforming compiler, though, your best bet is probably to store the result of the function in a local object (assuming your goal is working code rather than breaking compilers):
[...]
int main(void) {
    struct Foo temp = foo();
    printf("%s\n", temp.f);
}
printf is a bit funny, because it's one of those functions that takes varargs. So let's break it down by writing a helper function bar. We'll return to printf later.
(I'm using "gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3")
void bar(const char *t) {
    printf("bar: %s\n", t);
}
and calling that instead:
bar(foo().f); // error: invalid use of non-lvalue array
OK, that gives an error. In C and C++, you are not allowed to pass an array by value. You can work around this limitation by putting the array inside a struct, for example void bar2(Foo f) {...}
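For completeness, that workaround might look like this (assuming the Foo type and foo() from the question):
/* The array travels inside the struct, so the whole struct, array
   included, is copied by value, which is perfectly legal. */
void bar2(Foo f) {
    printf("bar2: %s\n", f.f);
}

/* usage */
bar2(foo());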
But we're not using that workaround - we're not allowed to pass in the array by value. Now, you might think it should decay to a char*, allowing you to pass the array by reference. But decay only works if the array has an address (i.e. is an lvalue). Temporaries, such as the return values of functions, live in a magic land where they don't have an address, so you can't take the address (&) of a temporary. In short, we're not allowed to take the address of a temporary, and hence it can't decay to a pointer. We are unable to pass it by value (because it's an array), nor by reference (because it's a temporary).
I found that the following code worked:
bar(&(foo().f[0]));
but to be honest I think that's suspect. Hasn't this broken the rules I just listed?
And just to be complete, this works perfectly as it should:
Foo f = foo();
bar(f.f);
The variable f is not a temporary and hence we can (implicitly, during decay) take its address.
printf, 32-bit versus 64-bit, and weirdness
I promised to mention printf again. According to the above, the compiler should refuse to pass foo().f to any function (including printf). But printf is funny because it's one of those vararg functions. gcc allowed itself to pass the array by value to printf.
When I first compiled and ran the code, it was in 64-bit mode. I didn't see confirmation of my theory until I compiled in 32-bit (-m32 to gcc). Sure enough I got a segfault, as in the original question. (I had been getting some gibberish output, but no segfault, when in 64 bits).
I implemented my own my_printf (with the vararg nonsense) which printed the actual value of the char * before trying to print the letters pointed at by the char*. I called it like so:
my_printf("%s\n", f.f);
my_printf("%s\n", foo().f);
and this is the output I got (code on ideone):
arg = 0xffc14eb3 // my_printf("%s\n", f.f); // worked fine
string = Hello, World!
arg = 0x6c6c6548 // my_printf("%s\n", foo().f); // it's about to crash!
Segmentation fault
The first pointer value 0xffc14eb3 is correct (it points to the characters "Hello, world!"), but look at the second 0x6c6c6548. That's the ASCII codes for Hell (reverse order - little endianness or something like that). It has copied the array by value into printf and the first four bytes have been interpreted as a 32-bit pointer or integer. This pointer doesn't point anywhere sensible and hence the program crashes when it attempts to access that location.
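The my_printf used above isn't shown in the question; a minimal sketch of what such a helper might look like (handling only the "%s\n" format used here):
#include <stdarg.h>
#include <stdio.h>

void my_printf(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    if (fmt[0] == '%' && fmt[1] == 's') {
        const char *s = va_arg(ap, const char *);
        printf("arg = %p\n", (const void *)s);   /* print the raw pointer value first */
        printf("string = %s\n", s);              /* then dereference it as a string */
    }
    va_end(ap);
}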
I think this is in violation of the standard, simply by virtue of the fact that we're not supposed to be allowed to copy arrays by value.
On MacOS X 10.7.2, both GCC/LLVM 4.2.1 ('i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)') and GCC 4.6.1 (which I built) compile the code without warnings (under -Wall -Wextra), in both 32-bit and 64-bit modes. The programs all run without crashing. This is what I'd expect; the code looks fine to me.
Maybe the problem on Ubuntu is a bug in the specific version of GCC that has since been fixed?

Is declaring a header file essential?

Is declaring a header file essential? This code:
main()
{
    int i = 100;
    printf("%d\n", i);
}
seems to work; the output I get is 100, even without including the stdio.h header file. How is this possible?
You don't have to include the header file. Its purpose is to let the compiler know all the information about stdio, but it's by no means necessary if your compiler is smart (or lazy).
You should include it because it's a good habit to get into - if you don't, then the compiler has no real way to know if you're breaking the rules, such as with:
int main (void) {
    puts (7); // should be a string.
    return 0;
}
which compiles without issue but rightly dumps core when running. Changing it to:
#include <stdio.h>

int main (void) {
    puts (7);
    return 0;
}
will result in the compiler warning you with something like:
qq.c:3: warning: passing argument 1 of ‘puts’ makes pointer
from integer without a cast
A decent compiler may warn you about this anyway; gcc, for example, knows what printf is supposed to look like even without the header:
qq.c:7: warning: incompatible implicit declaration of
built-in function ‘printf’
How is this possible? In short: three pieces of luck.
This is possible because some compilers will make assumptions about undeclared functions. Specifically, parameters are assumed to be int, and the return type also int. Since an int is often the same size as a char* (depending on the architecture), you can get away with passing ints and strings, as the correct size parameter will get pushed onto the stack.
In your example, since printf was not declared, it was assumed to take two int parameters, and you passed a char* and an int which is "compatible" in terms of the invocation. So the compiler shrugged and generated some code that should have been about right. (It really should have warned you about an undeclared function.)
So the first piece of luck was that the compiler's assumption was compatible with the real function.
Then comes the link stage: because printf is part of the C Standard Library, the compiler/linker will link it in automatically. Since the printf symbol was indeed in the C stdlib, the linker resolved the symbol and all was well. The linking was the second piece of luck, as a function anywhere other than the standard library would need its library linked in as well.
Finally, at runtime we see your third piece of luck. The compiler made a blind assumption, and the symbol happened to be linked in by default. But at runtime you could easily have passed data in such a way as to crash your app. Fortunately the parameters matched up, and the right thing ended up occurring. This will certainly not always be the case, and I daresay the above would probably have failed on a 64-bit system.
So - to answer the original question, it really is essential to include header files, because if it works, it is only through blind luck!
As paxidiablo said, it's not necessary, but this is only true for functions and variables. If your header file provides types or macros (#define) that you use, then you must include the header file to use them, because they are needed before linking happens, i.e. during pre-processing or compilation.
This is possible because when the C compiler sees an undeclared function call (printf() in your case), it assumes the function has the signature
int printf(...)
and calls it with only the default argument promotions applied. Since "int" and "void *" often have the same size, this works most of the time. But it is not wise to rely on such behavior.
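If for some reason you don't want to include the header, the safe alternative is to write the real prototype yourself (though including stdio.h remains the better habit); a minimal sketch:
int printf(const char *format, ...);   /* the correct prototype, normally supplied by stdio.h */

int main(void)
{
    int i = 100;
    printf("%d\n", i);
    return 0;
}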
C supports three forms of function arguments (see the sketch after this list):
Known fixed arguments: this is when you declare the function with arguments: foo(int x, double y).
Unknown fixed arguments: this is when you declare it with empty parentheses: foo() (not to be confused with foo(void), which is the first form with no arguments), or do not declare it at all.
Variable arguments: this is when you declare it with an ellipsis: foo(int x, ...).
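As a quick sketch, the three forms look like this as declarations:
void known_fixed(int x, double y);    /* form 1: prototype with known fixed arguments */
void unknown_fixed();                 /* form 2: empty parentheses, arguments unspecified */
void no_arguments(void);              /* form 1 with no arguments (contrast with form 2) */
void variadic(const char *fmt, ...);  /* form 3: fixed arguments followed by an ellipsis */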
When you see a standard function working despite this, it's because its definition (which is in form 1 or 3) happens to be compatible with form 2 (it uses the same calling convention). Many old standard library functions are like this (by design), because they come from early versions of C, where there were no function declarations and everything was effectively in form 2. Other functions may be unintentionally compatible with form 2 if their arguments already match the argument promotion rules for this form. But some are not.
Form 2, however, requires the programmer to pass arguments of the same types everywhere, because the compiler cannot check the arguments against a prototype and has to determine the calling convention from the arguments actually passed.
For example, on an MC68000 machine the first two integer arguments of a fixed-argument function (for both forms 1 and 2) are passed in registers D0 and D1, the first two pointer arguments in A0 and A1, and everything else on the stack. So, for example, the function fwrite(const void *ptr, size_t size, size_t count, FILE *stream); gets its arguments as: ptr in A0, size in D0, count in D1 and stream in A1 (and returns its result in D0). When you include stdio.h it works this way whatever you pass to it.
When you do not include stdio.h, something else happens. When you call fwrite with fwrite(data, sizeof(*data), 5, myfile), the compiler looks at the arguments and sees that the function is called as fwrite(pointer, int, int, pointer). So what does it do? It passes the first pointer in A0, the first int in D0, the second int in D1 and the second pointer in A1, which is exactly what we need.
But if you call it as fwrite(data, sizeof(*data), 5.0, myfile), with count being of double type, the compiler will try to pass count on the stack, since it is not an integer. But the function expects it in D1. Shit happens: D1 contains some garbage rather than count, so further behaviour is unpredictable. If you use the prototype defined in stdio.h, however, all will be OK: the compiler automatically converts this argument to the expected integer type and passes it as needed. This is not an abstract example, as the double argument may simply be the result of a computation involving floating-point numbers, and you may miss this while assuming the result is an int.
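A small sketch of that difference (the function demo and its variables are made up; it assumes a valid FILE * is passed in):
#include <stdio.h>

void demo(FILE *myfile) {
    int data[5] = {1, 2, 3, 4, 5};
    double count = 2.5 * 2.0;   /* a floating-point computation whose result happens to be 5.0 */
    /* With the fwrite prototype from stdio.h in scope, count is implicitly
       converted to size_t at the call, so the callee receives the integer 5
       where it expects it. Without the prototype there is no such conversion,
       and a double is handed over where an integer is expected. */
    fwrite(data, sizeof(data[0]), count, myfile);
}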
Another example is a variable-argument function (form 3) like printf(char *fmt, ...). For it, the calling convention requires the last named argument (fmt here) to be passed on the stack regardless of its type. So, when you call printf("%d", 10), the compiler puts the pointer to "%d" and the number 10 on the stack and calls the function as needed.
But when you do not include stdio.h, the compiler does not know that printf is a vararg function and assumes that printf("%d", 10) is a call to a function with fixed arguments of type pointer and int. So the MC68000 will place the pointer in A0 and the int in D0 instead of on the stack, and the result is again unpredictable.
You may get lucky: the arguments may happen to already be on the stack and get read from there, so you get the correct result... this time... but another time it will fail. Another kind of luck is a compiler that allows for an undeclared function possibly being vararg (and somehow makes the call compatible with both forms). Or all arguments in all forms are simply passed on the stack on your machine, so the fixed, unknown and vararg forms are called identically.
So: do not do this, even if you feel lucky and it seems to work. The unknown-fixed-arguments form exists only for compatibility with old code, and its use is strongly discouraged.
Also note: C++ does not allow this at all, as it requires functions to be declared with known arguments.

Resources