In C, what exactly are the performance benefits that come with observing strict aliasing?
There is a page that describes aliasing very thoroughly here.
There are also some SO topics here and here.
To summarize: when two pointers of different types may refer to the same location, the compiler cannot assume the pointed-to value is unchanged across a store through the other pointer (i.e. it must re-read the value every time and therefore cannot make certain optimizations).
This is the situation when strict aliasing is not being enforced. Strict aliasing options:
gcc: -fstrict-aliasing [the default at -O2 and above] and -fno-strict-aliasing
msvc: strict aliasing is off by default (if somebody knows how to turn it on, please say so).
Example
Copy-paste this code into main.c:
unsigned short f(unsigned u)
{
    unsigned short bad = *(unsigned short*)&u;  /* type-punned dereference */
    return bad;
}

int main(void)
{
    f(5);
    return 0;
}
Then compile the code with these options:
gcc main.c -Wall -O2
And you will get:
main.c:3: warning: dereferencing type-punned pointer will break strict-aliasing rules
Disable strict aliasing with:
gcc main.c -fno-strict-aliasing -Wall -O2
And the warning goes away. (Or just take out -Wall but...don't compile without it)
Try as I might I could not get MSVC to give me a warning.
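To see what the optimizer actually gains, here is a minimal sketch (my illustration, not taken from the pages linked above): because a store through an int* cannot legally modify a float object under strict aliasing, the compiler may load *scale once and keep it in a register for the whole loop, whereas with -fno-strict-aliasing it must assume dst[i] might overwrite *scale and reload it on every iteration.

#include <stddef.h>

/* Illustrative only: under -fstrict-aliasing, *scale is loop-invariant
 * because the int stores cannot alias a float object; without it, the
 * compiler has to reload *scale after every store. */
void scale_to_int(int *dst, const float *scale, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (int)(i * *scale);
}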
The level of performance improvement that will result from applying type-based aliasing will depend upon:
The extent to which code already caches things in automatic-duration objects, or uses the restrict qualifier to indicate that compilers may do so without regard for whether the cached values might be affected by certain pointer-based operations.
Whether the aliasing assumptions made by a compiler are consistent with what a programmer needs to do (if they're not, reliable processing would require disabling type-based aliasing, negating any benefits it could otherwise have offered).
Consider the following two code snippets:
#include <stdint.h>

struct descriptor { uint32_t size; uint16_t *dat; };

void test1(struct descriptor *ptr)
{
    for (uint32_t i=0; i < ptr->size; i++)
        ptr->dat[i] = 1234;
}

void test2(struct descriptor *ptr)
{
    uint32_t size = ptr->size;   /* cache the fields in locals */
    uint16_t *dat = ptr->dat;
    for (uint32_t i=0; i < size; i++)
        dat[i] = 1234;
}
In the absence of type-based aliasing rules, a compiler given test1() would have to allow for the possibility that ptr->dat might point to the storage holding ptr->size or ptr->dat itself. This would in turn require that it either check whether ptr->dat was in range to access those things, or else reload the contents of ptr->size and ptr->dat on every iteration of the loop. In this scenario, type-based aliasing rules might allow for a 10x speedup.
On the other hand, a compiler given test2() could generate code equivalent to the optimized version of test1() without having to care about type-based aliasing rules. For code written that way, which performs the same operation, type-based aliasing rules would not offer any speedup.
Now consider the following functions:
uint32_t *ptr;

/* IS_BIG_ENDIAN is assumed to be defined as 0 or 1 for the target */

void set_bottom_16_bits_and_advance_v1(uint16_t value)
{
    ((uint16_t*)ptr)[IS_BIG_ENDIAN] = value;
    ptr++;
}

void set_bottom_16_bits_and_advance_v2(uint16_t value)
{
    ((unsigned char*)ptr)[3*IS_BIG_ENDIAN] = value & 255;
    ((unsigned char*)ptr)[(3*IS_BIG_ENDIAN) ^ 1] = value >> 8;
    ptr++;
}

void test1(unsigned n)
{
    for (unsigned i=0; i<n; i++)
        set_bottom_16_bits_and_advance_v1(i);
}

void test2(unsigned n, int value)
{
    for (unsigned i=0; i<n; i++)
        set_bottom_16_bits_and_advance_v2(value);
}
Even if a compiler given set_bottom_16_bits_and_advance_v1 and test1 were to--despite type-based aliasing being enabled--accommodate the possibility that the function might modify an object of type uint32_t (since its execution makes use of a value of type uint32_t*), it would still not need to allow for the possibility that ptr might hold its own address. If a compiler could not handle the possibility of the first function accessing a uint32_t without disabling type-based aliasing entirely, however, it would need to reload ptr on every iteration of the loop. Almost any compiler(*), with or without type-based aliasing analysis, which is given set_bottom_16_bits_and_advance_v2 and test2, however, would be required to reload ptr every time through the loop, reducing to zero any performance benefits type-based aliasing could have offered.
(*) The CompCert C dialect expressly disallows the use of character pointers, or pointers to any other integer type, to modify the values of stored pointer objects, since making allowance for such accesses would not only degrade performance, but also make it essentially impossible to identify all the corner cases that would need to be evaluated to guarantee that the behavior of a compiler's generated machine code matches the specified behavior of the source.
For example, can this
unsigned f(float x) {
    unsigned u = *(unsigned *)&x;
    return u;
}
cause unpredictable results on a platform where:
unsigned and float are both 32-bit
a pointer has a fixed size for all types
unsigned and float can be stored to and loaded from the same part of memory.
I know about strict aliasing rules, but most examples showing problematic cases of violating strict aliasing are like the following.
static int g(int *i, float *f) {
    *i = 1;
    *f = 0;
    return *i;
}

int h() {
    int n;
    return g(&n, (float *)&n);
}
In my understanding, the compiler is free to assume that i and f are implicitly restrict. The return value of h could be 1 if the compiler thinks *f = 0; is redundant (because i and f can't alias), or it could be 0 if it takes into account that i and f hold the same address. This is undefined behaviour, so technically, anything else can happen.
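To make that concrete, here is one legal way the optimizer may treat g under strict aliasing (my paraphrase, not code from the question); it is exactly this rewrite that lets h observably return 1:

/* A permissible "as-if" compilation of g(): since *i and *f are assumed
 * not to alias, the final read of *i is folded to the value just stored. */
static int g_as_if_optimized(int *i, float *f) {
    *i = 1;
    *f = 0;
    return 1;   /* the reload of *i has been replaced by the constant */
}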
However, the first example is a bit different.
unsigned f(float x) {
    unsigned u = *(unsigned *)&x;
    return u;
}
Sorry for my unclear wording, but everything is done "in-place". I can't think of any other way the compiler might interpret the line unsigned u = *(unsigned *)&x;, other than "copy the bits of x to u".
In practice, all compilers for various architectures I tested in https://godbolt.org/ with full optimization produce the same result for the first example, and varying results (either 0 or 1) for the second example.
I know it's technically possible that unsigned and float have different sizes and alignment requirements, or should be stored in different memory segments. In that case even the first code won't make sense. But on most modern platforms where the following holds, is the first example still undefined behaviour (can it produce unpredictable results)?
unsigned and float are both 32-bit
a pointer has a fixed size for all types
unsigned and float can be stored to and loaded from the same part of memory.
In real code, I do write
unsigned f(float x) {
    unsigned u;
    memcpy(&u, &x, sizeof(x));
    return u;
}
The compiled result is the same as using pointer casting, after optimization. This question is about interpretation of the standard about strict aliasing rules for code such as the first example.
Is it always undefined behaviour to copy the bits of a variable through an incompatible pointer?
Yes.
The rule is https://port70.net/~nsz/c/c11/n1570.html#6.5p7 :
An object shall have its stored value accessed only by an lvalue expression that has one of the following types:
a type compatible with the effective type of the object,
a qualified version of a type compatible with the effective type of the object,
a type that is the signed or unsigned type corresponding to the effective type of the object,
a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object,
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
a character type.
The effective type of the object x is float - it is defined with that type.
unsigned is not compatible with float,
unsigned is not a qualified version of float,
unsigned is not a signed or unsigned type corresponding to float,
unsigned is not a signed or unsigned type corresponding to a qualified version of float,
unsigned is not an aggregate or union type
and unsigned is not a character type.
The "shall" is violated, it is undefined behavior (see https://port70.net/~nsz/c/c11/n1570.html#4p2 ). There is no other interpretation.
We also have https://port70.net/~nsz/c/c11/n1570.html#J.2 :
The behavior is undefined in the following circumstances:
An object has its stored value accessed other than by an lvalue of an allowable type (6.5).
As Kamil explains, it's UB. Even int and long (or long and long long) aren't alias-compatible even when they're the same size. (But interestingly, int and unsigned int are allowed to alias each other.)
It has nothing to do with being the same size, or using the same register-set as suggested in a comment; it's mainly a way to let compilers assume that different pointers don't point to overlapping memory when optimizing. They still have to support C99 union type-punning, not just memcpy. So for example a dst[i] = src[i] loop doesn't need to check for possible overlap when unrolling or vectorizing, if dst and src have different types.¹
If you're accessing the same integer data, the standard requires that you use the exact same type, modulo only things like signed vs. unsigned and const. Or that you use (unsigned) char*, which is like GNU C __attribute__((may_alias)).
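For reference, here is a short sketch (mine, not part of the original answer) of the two escape hatches just mentioned: C99/C11 union type punning and GNU C's __attribute__((may_alias)):

#include <stdint.h>

/* C99/C11 union type punning: reading a member other than the one last
 * written reinterprets the stored bytes (the sizes match here). */
static uint32_t float_bits_union(float x) {
    union { float f; uint32_t u; } pun = { .f = x };
    return pun.u;
}

/* GNU C extension: a may_alias typedef is treated like a character type
 * for aliasing purposes, so dereferencing it may read any object. */
typedef uint32_t __attribute__((may_alias)) u32_alias;

static uint32_t float_bits_may_alias(const float *px) {
    return *(const u32_alias *)px;
}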
The other part of your question seems to be why it appears to work in practice, despite the UB.
Your godbolt link forgot to link the actual compilers you tried.
https://godbolt.org/z/rvj3d4e4o shows GCC4.1, from before GCC went out of its way to support "obvious" local compile-time-visible cases like this, to sometimes not break people's buggy code using non-portable idioms like this.
It loads garbage from stack memory, unless you use -fno-strict-aliasing to make it movd to that location first. (Store/reload instead of movd %xmm0, %eax is a missed-optimization bug that's been fixed in later GCC versions for most cases.)
f: # GCC4.1 -O3
movl -4(%rsp), %eax
ret
f: # GCC4.1 -O3 -fno-strict-aliasing
movss %xmm0, -4(%rsp)
movl -4(%rsp), %eax
ret
Even that old GCC version warns warning: dereferencing type-punned pointer will break strict-aliasing rules which should make it obvious that GCC notices this and does not consider it well-defined. Later GCC that do choose to support this code still warn.
It's debatable whether it's better to sometimes work in simple cases, but break other times, vs. always failing. But given that GCC -Wall does still warn about it, that's probably a reasonable compromise: convenient for people dealing with legacy code or porting from MSVC, while still flagging the problem. Another option would be to always break it unless people use -fno-strict-aliasing, which they should use if dealing with codebases that depend on this behaviour.
Being UB doesn't mean required-to-fail
Just the opposite; it would take tons of extra work to actually trap on every signed overflow in the C abstract machine, for example, especially when optimizing stuff like 2 + c - 3 into c - 1. That's what gcc -fsanitize=undefined tries to do, adding x86 jo instructions after additions (except it still does constant-propagation so it's just adding -1, not detecting temporary overflow on INT_MAX. https://godbolt.org/z/WM9jGT3ac). And it seems strict-aliasing is not one of the kinds of UB it tries to detect at run time.
See also the clang blog article: What Every C Programmer Should Know About Undefined Behavior
An implementation is free to define behaviour the ISO C standard leaves undefined
For example, MSVC always defines this aliasing behaviour, like GCC/clang/ICC do with -fno-strict-aliasing. Of course, that doesn't change the fact that pure ISO C leaves it undefined.
It just means that on those specific C implementations, the code is guaranteed to work the way you want, rather than happening to do so by chance or by de-facto compiler behaviour if it's simple enough for modern GCC to recognize and do the more "friendly" thing.
Just like gcc -fwrapv for signed-integer overflows.
Footnote 1: example of strict-aliasing helping code-gen
#define QUALIFIER // restrict
void convert(float *QUALIFIER pf, const int *pi) {
    for (int i=0; i<10240; i++) {
        pf[i] = pi[i];
    }
}
Godbolt shows that with the -O3 defaults for GCC11.2 for x86-64, we get just a SIMD loop with movdqu / cvtdq2ps / movups and loop overhead. With -O3 -fno-strict-aliasing, we get two versions of the loop, and an overlap check to see if we can run the scalar or the SIMD version.
Are there actual cases where strict aliasing helps better code generation, in which the same cannot be achieved with restrict?
You might well have a pointer that might point into either of two int arrays, but definitely not at any float variable, so you can't use restrict on it. Strict-aliasing will let the compiler still avoid spill/reload of float objects around stores through the pointer, even if the float objects are global vars or otherwise aren't provably local to the function. (Escape analysis.)
Or a struct node * that definitely isn't the same type as the payload in a tree.
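A sketch of that situation (my example, not the answer's): p may point into either int array, both of which the function also touches directly, so marking it restrict would be a false promise; strict aliasing still lets the compiler load the global float once, because a store through an int* cannot modify a float object.

int table_a[100], table_b[100];
float scale_factor;              /* a float the pointer can never legally modify */

void bump(int *p, int n)
{
    /* p may alias table_a or table_b, so restrict is off the table, but
     * scale_factor can stay in a register across the stores through p. */
    for (int i = 0; i < n; i++)
        p[i] = (int)(table_a[i] * scale_factor) + table_b[i];
}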
Also, most code doesn't use restrict all over the place. It could get quite cumbersome. Not just in loops, but in every function that deals with pointers to structs. And if you get it wrong and promise something that's not true, your code's broken.
The Standard was never intended to fully, accurately, and unambiguously partition programs that have defined behavior and those that don't(*), but instead relies upon compiler writers to exercise a certain amount of common sense.
(*) If it was intended for that purpose, it fails miserably, as evidenced by the amount of confusion stemming from it.
Consider the following two code snippets:
#include <stdint.h>

/* Assume suitable declarations of u are available everywhere */
union test { uint32_t ww[4]; float ff[4]; } u;

/* Snippet #1 */
uint32_t proc1(int i, int j)
{
    u.ww[i] = 1;
    u.ff[j] = 2.0f;
    return u.ww[i];
}

/* Snippet #2, part 1, in one compilation unit */
uint32_t proc2a(uint32_t *p1, float *p2)
{
    *p1 = 1;
    *p2 = 2.0f;
    return *p1;
}

/* Snippet #2, part 2, in another compilation unit */
uint32_t proc2(int i, int j)
{
    return proc2a(u.ww+i, u.ff+j);
}
It is clear that the authors of the Standard intended that the first version of the code be processed meaningfully on platforms where that would make sense, but it's also clear that at least some of the authors of C99 and later versions did not intend to require that the second version be processed likewise (some of the authors of C89 may have intended that the "strict aliasing rule" only apply to situations where a directly named object would be accessed via pointer of another type, as shown in the example given in the published Rationale; nothing in the Rationale suggests a desire to apply it more broadly).
On the other hand, the Standard defines the [] operator in such a fashion that proc1 is semantically equivalent to:
uint32_t proc3(int i, int j)
{
    *(u.ww+i) = 1;
    *(u.ff+j) = 2.0f;
    return *(u.ww+i);
}
and there's nothing in the Standard that would imply that proc3() shouldn't have the same semantics. What gcc and clang seem to do is special-case the [] operator as having a different meaning from pointer dereferencing, but nothing in the Standard makes such a distinction. The only way to consistently interpret the Standard is to recognize that the form with [] falls in the category of actions which the Standard doesn't require that implementations process meaningfully, but relies upon them to handle anyway.
Constructs such as your example of using a directly-cast pointer to access storage associated with an object of the original pointer's type fall in a similar category of constructs which at least some authors of the Standard likely expected (and would have demanded, if they didn't expect) that compilers would handle reliably, with or without a mandate, since there was no imaginable reason why a quality compiler would do otherwise. Since then, however, clang and gcc have evolved to defy such expectations. Even if clang and gcc would normally generate useful machine code for a function, they seek to perform aggressive inter-procedural optimizations that make it impossible to predict what constructs will be 100% reliable. Unlike some compilers which refrain from applying potential optimizing transforms unless they can prove that they are sound, clang and gcc seek to perform transforms that can't be proven to affect program behavior.
I've been reading some articles about the Strict Aliasing Rules for a couple of days. Here are my understandings:
An object's effective type is the type of its declaration. If the object is allocated memory, it has no effective type until it is written through an lvalue of some type, which then becomes the object's effective type.
An access to the value of an object must be through a compatible type with its effective type.
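For instance, here is a small sketch of point 1 (my illustration): storage obtained from malloc has no effective type until something is stored into it.

#include <stdlib.h>

void effective_type_demo(void)
{
    void *buf = malloc(sizeof(float));   /* allocated storage: no effective type yet */
    if (!buf)
        return;

    *(float *)buf = 1.0f;   /* this store gives the storage an effective type
                               of float for the reads that follow */

    float f = *(float *)buf;   /* fine: accessed with the matching type */
    (void)f;
    /* reading the same bytes through an unsigned lvalue here would fall
       foul of point 2 */

    free(buf);
}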
After I thought I got this, I wanted to do a simple experiment to see if my compiler really warns about this when I deliberately break the rule. Here's my code:
#include <stdio.h>

int main(void) {
    unsigned char c[5] = {0x1, 0x2, 0x3, 0x4, 0x5};
    int i = *(int*)c;
    printf("%x\n", i);
    return 0;
}
To me this seems to be against the rule because c has an effective type of unsigned char and we are trying to access it with an int pointer.
But my gcc compiles just fine! Even with the highest constraining level (-Wstrict-aliasing) it does not give a single warning. But strangely, replacing the int with a float gives the expected response:
#include <stdio.h>

int main(void) {
    unsigned char c[5] = {0x1, 0x2, 0x3, 0x4, 0x5};
    float i = *(float*)c;
    printf("%f\n", i);
    return 0;
}
The compiler gives a warning for this code. (dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing])
Does the first code really break the rules? I know casting any pointer into a char* type is fine, but is it true the other way around? Or is it just something gcc does not care so much for?
According to the published Rationale for the C Standard, the purpose of the constraint referred to as the "strict aliasing rule" was to avoid requiring that a compiler given something like:
int x;
int test(double *p)
{
    x = 1;
    *p = 2.0;
    return x;
}
allow for the possibility that the store to *p might affect the value of x, even though there is nothing in the code that would suggest that such a thing could happen. Because the Standard explicitly specifies that implementations may process constructs where it imposes no requirements "in a documented manner characteristic of the environment", the authors saw no need to fully enumerate all of the constructs they expected implementations to handle consistently. Consequently, in order to avoid rendering the language useless, every compiler must process meaningfully some constructs that violate the "aliasing" constraints as written.
From what I can tell, one of the ways that gcc does that is by applying the rules bidirectionally in some cases beyond those provided for by the constraint. The rules as written wouldn't require that an implementation allow for the possibility that an object of character-array type might be accessed using an lvalue of type int, but they also wouldn't require that implementations allow for the possibility that a structure object containing an array of integers might be accessed by dereferencing an int* such as that formed by the decay of the array in an expression like myStruct.intArray[2]. In cases where the authors of gcc recognize that treating a construct as a violation of the "aliasing" constraints would be silly, they will treat the construct as though it were not a constraint violation, and thus will not warn about it.
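A sketch of the construct being described (the names are mine): read literally, the aliasing constraints never spell out that a structure's storage may be read through the int lvalue produced by the array's decay, yet gcc sensibly treats it as a non-issue and stays quiet.

struct holder { int intArray[4]; };
struct holder myStruct;

int get_third(void)
{
    /* By the Standard's definition of [], this is *(myStruct.intArray + 2):
       part of a structure object accessed through an lvalue of type int.
       Nobody expects a compiler to treat that as an aliasing violation,
       and gcc does not warn about it. */
    return myStruct.intArray[2];
}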
Consider this artificial example:
#include <stddef.h>

static inline void nullify(void **ptr) {
    *ptr = NULL;
}

int main() {
    int i;
    int *p = &i;
    nullify((void **) &p);
    return 0;
}
&p (an int **) is cast to void **, which is then dereferenced. Does this break the strict aliasing rules?
According to the standard:
An object shall have its stored value accessed only by an lvalue
expression that has one of the following types:
a type compatible with the effective type of the object,
So unless void * is considered compatible with int *, this violates the strict aliasing rules.
However, this is not what is suggested by gcc warnings (even if it proves nothing).
While compiling this sample:
#include <stddef.h>

void f(int *p) {
    *((float **) &p) = NULL;
}
gcc warns about strict aliasing:
$ gcc -c -Wstrict-aliasing -fstrict-aliasing a.c
a.c: In function 'f':
a.c:3:7: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*((float **) &p) = NULL;
~^~~~~~~~~~~~~
However, with a void **, it does not warn:
#include <stddef.h>

void f(int *p) {
    *((void **) &p) = NULL;
}
So is it valid regarding the strict aliasing rules?
If it is not, how to write a function to nullify any pointer (for example) which does not break the strict aliasing rules?
There is no general requirement that implementations use the same representations for different pointer types. On a platform that would use a different representation for e.g. an int* and a char*, there would be no way to support a single pointer type void* that could act upon both int* and char* interchangeably. Although an implementation that can handle pointers interchangeably would facilitate low-level programming on platforms which use compatible representations, such ability would not be supportable on all platforms. Consequently, the authors of the Standard had no reason to mandate support for such a feature rather than treating it as a quality of implementation issue.
From what I can tell, quality compilers like icc which are suitable for low-level programming, and which target platforms where all pointers have the same representation, will have no difficulty with constructs like:
void resizeOrFail(void **p, size_t newsize)
{
    void *newAddr = realloc(*p, newsize);
    if (!newAddr) fatal_error("Failure to resize");
    *p = newAddr;
}

anyType *thing;
... code chunk #1 that uses thing
resizeOrFail((void**)&thing, someDesiredSize);
... code chunk #2 that uses thing
Note that in this example, both the act of taking thing's address, and all use of the resulting pointer, visibly occur between the two chunks of code that use thing. Thus, there is no actual aliasing, and any compiler which is not willfully blind will have no trouble recognizing that the act of passing thing's address to resizeOrFail might cause thing to be modified.
On the other hand, if the usage had been something like:
void **myptr;
anyType *thing;
myptr = &thing;
... code chunk #1 that uses thing
*myptr = realloc(*myptr, newSize);
... code chunk #2 that uses thing
then even quality compilers might not realize that thing might be affected between the two chunks of code that use it, since there is no reference to anything of type anyType* between those two chunks. On such compilers, it would be necessary to write the code as something like:
myptr = &thing;
... code chunk #1 that uses thing
*(void *volatile*)myptr = realloc(*myptr, newSize);
... code chunk #2 that uses thing
to let the compiler know that the operation on *myptr is doing something "weird". Quality compilers intended for low-level programming will regard this as a sign that they should avoid caching the value of thing across such an operation, but even the volatile qualifier won't be enough for implementations like gcc and clang in optimization modes that are only intended to be suitable for purposes that don't involve low-level programming.
If a function like resizeOrFail needs to work with compiler modes that aren't really suitable for low-level programming, it could be written as:
void resizeOrFail(void **p, size_t newsize)
{
    void *newAddr;
    memcpy(&newAddr, p, sizeof newAddr);
    newAddr = realloc(newAddr, newsize);
    if (!newAddr) fatal_error("Failure to resize");
    memcpy(p, &newAddr, sizeof newAddr);
}
This would, however, require that compilers allow for the possibility that resizeOrFail might alter the value of an arbitrary object of any type--not merely data pointers--and thus needlessly impair what should be useful optimizations. Worse, if the pointer in question happens to be stored on the heap (and isn't of type void*), a conforming compiler that isn't suitable for low-level programming would still be allowed to assume that the second memcpy can't possibly affect it.
A key part of low-level programming is ensuring that one chooses implementations and modes that are suitable for that purpose, and knowing when they might need a volatile qualifier to help them out. Some compiler vendors might claim that any code which requires that compilers be suitable for its purposes is "broken", but attempting to appease such vendors will result in code that is less efficient than could be produced by using a quality compiler suitable for one's purposes.
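As for the narrower question of how to nullify an arbitrary pointer without aliasing trouble: one simple option (my suggestion, not taken from the answer above) is to let the assignment happen at the pointer's own type, for example through a macro, so that no object is ever accessed through an incompatible lvalue.

#include <stddef.h>

/* The expansion assigns NULL to the pointer using its real type, so there
 * is no type punning at all. */
#define NULLIFY(p) ((void)((p) = NULL))

/* usage:
 *     int *q = &some_int;
 *     NULLIFY(q);        // q now compares equal to NULL
 */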
I thought I knew C pretty well, but I'm confused by the following code:
typedef struct {
    int type;
} cmd_t;

typedef struct {
    int size;
    char data[];
} pkt_t;

int func(pkt_t *b)
{
    int *typep;
    char *ptr;

    /* #1: Generates warning */
    typep = &((cmd_t*)(&(b->data[0])))->type;

    /* #2: Doesn't generate warning */
    ptr = &b->data[0];
    typep = &((cmd_t*)ptr)->type;

    return *typep;
}
When I compile with GCC, I get the "dereferencing type-punned pointer will break strict-aliasing rules" warning.
Why am I getting this warning at all? I'm dereferencing a char array. Casting a char * to anything is legal. Is this one of those cases where an array is not exactly the same as a pointer?
Why aren't both assignments generating the warning? The 2nd assignment is the equivalent of the first, isn't it?
When strict aliasing is turned on, the compiler is allowed to assume that two pointers of different type (char* vs cmd_t* in this instance) will not point to the same memory location. This allows for a greater range of optimizations which you would otherwise not want to be applied if they do indeed point to the same memory location. Various examples/horror-stories can be found in this question.
This is why, under strict aliasing, you have to be careful how you do type punning. I believe that the standard doesn't allow for any type punning whatsoever (don't quote me on that) but most compilers have an exemption for unions (my google-fu is failing in turning up the relevant manual pages):
union float_to_int {
    double d;
    uint64_t i;
};

union float_to_int ftoi;
ftoi.d = 1.0;
... = ftoi.i;
Unfortunately, this doesn't quite work for your situation as you would have to memcpy the content of the array into the union, which is less than ideal. A simpler approach would be simply to turn off strict aliasing via the -fno-strict-aliasing switch. This will ensure that your code is correct and it's unlikely to have a significant performance impact (do measure if performance matters).
As for why the warning doesn't show up when the line is broken up, I don't know. Chances are that the modifications to the source code manage to confuse the compiler's static analysis pass enough that it doesn't see the type punning. Note that the static analysis pass responsible for detecting type punning is unrelated to and doesn't talk to the various optimization passes that assume strict aliasing. You can think of any static analysis done by compilers (unless otherwise specified) as a best-effort type of thing. In other words, the absence of a warning doesn't mean that there are no errors, which means that simply breaking up the line doesn't magically make your type punning safe.
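If turning off strict aliasing isn't acceptable, a memcpy-based rewrite of the question's func() (a sketch reusing the cmd_t/pkt_t typedefs from the question) sidesteps the punning entirely; compilers normally turn the small fixed-size copy into an ordinary load.

#include <string.h>

int func_via_memcpy(pkt_t *b)
{
    cmd_t cmd;
    /* memcpy may inspect any object's bytes, so no aliasing rule is
       violated; the copy has a known small size and is typically
       optimized into a plain load of the type field. */
    memcpy(&cmd, b->data, sizeof cmd);
    return cmd.type;
}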
I used the following piece of code to read data from files as part of a larger program.
double data_read(FILE *stream, int code) {
    char data[8];

    switch (code) {
    case 0x08:
        return (unsigned char)fgetc(stream);
    case 0x09:
        return (signed char)fgetc(stream);
    case 0x0b:
        data[1] = fgetc(stream);
        data[0] = fgetc(stream);
        return *(short*)data;
    case 0x0c:
        for (int i=3; i>=0; i--)
            data[i] = fgetc(stream);
        return *(int*)data;
    case 0x0d:
        for (int i=3; i>=0; i--)
            data[i] = fgetc(stream);
        return *(float*)data;
    case 0x0e:
        for (int i=7; i>=0; i--)
            data[i] = fgetc(stream);
        return *(double*)data;
    }
    die("data read failed");
    return 1;
}
Now I am told to use -O2 and I get following gcc warning:
warning: dereferencing type-punned pointer will break strict-aliasing rules
Googling, I found two orthogonal answers:
Concluding: there's no need to worry; gcc tries to be more law obedient than the actual law.
vs
So basically if you have an int* and a float* they are not allowed to point to the same memory location. If your code does not respect this, then the compiler's optimizer will most likely break your code.
In the end I don't want to ignore the warnings. What would you recommend?
[update] I substituted the toy example with the real function.
The problem occurs because you access a char-array through a double*:
char data[8];
...
return *(double*)data;
But gcc assumes that your program will never access variables through pointers of a different type. This assumption is called strict-aliasing and allows the compiler to make some optimizations:
If the compiler knows that your *(double*) can in no way overlap with data[], it's allowed to do all sorts of things like reordering your code into:
return *(double*)data;
for(int i=7;i>=0;i--)
data[i] = fgetc(stream);
The loop is most likely optimized away and you end up with just:
return *(double*)data;
Which leaves your data[] uninitialized. In this particular case the compiler might be able to see that your pointers overlap, but if you had declared it char* data, it could have given bugs.
But, the strict-aliasing rule says that a char* and void* can point at any type. So you can rewrite it into:
double data;
...
*(((char*)&data) + i) = fgetc(stream);
...
return data;
Strict aliasing warnings are really important to understand or fix. They cause the kinds of bugs that are impossible to reproduce in-house because they occur only on one particular compiler, on one particular operating system, on one particular machine, and only at full moon, once a year, etc.
It looks a lot as if you really want to use fread:
int data;
fread(&data, sizeof(data), 1, stream);
That said, if you do want to go the route of reading chars, then reinterpreting them as an int, the safe way to do it in C (but not in C++) is to use a union:
union
{
    char theChars[4];
    int theInt;
} myunion;

for(int i=0; i<4; i++)
    myunion.theChars[i] = fgetc(stream);
return myunion.theInt;
I'm not sure why the length of data in your original code is 3. I assume you wanted 4 bytes; at least I don't know of any systems where an int is 3 bytes.
Note that both your code and mine are highly non-portable.
Edit: If you want to read ints of various lengths from a file, portably, try something like this:
unsigned result = 0;
for(int i=0; i<4; i++)
    result = (result << 8) | fgetc(stream);
(Note: In a real program, you would additionally want to test the return value of fgetc() against EOF.)
This reads a 4-byte unsigned from the file, most-significant byte first (big-endian), regardless of what the endianness of the system is. It should work on just about any system where an unsigned is at least 4 bytes.
If you want to be endian-neutral, don't use pointers or unions; use bit-shifts instead.
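Applying the same idea to the 16-bit case from the question (a sketch; it assumes, as the question's data[1]/data[0] ordering implies on a little-endian host, that the file stores the most-significant byte first, and that the target uses two's complement):

#include <stdint.h>
#include <stdio.h>

int16_t read_s16_be(FILE *stream)
{
    uint16_t u = 0;
    for (int i = 0; i < 2; i++)
        u = (uint16_t)((u << 8) | (fgetc(stream) & 0xFF));
    /* converting through int16_t recovers the signed value on
       two's-complement targets; EOF checking is omitted, as in the
       snippet above */
    return (int16_t)u;
}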
Using a union is not the correct thing to do here. Reading from an unwritten member of the union is undefined - i.e. the compiler is free to perform optimisations that will break your code (like optimising away the write).
This doc summarizes the situation: http://dbp-consulting.com/tutorials/StrictAliasing.html
There are several different solutions there, but the most portable/safe one is to use memcpy(). (The function calls may be optimized out, so it's not as inefficient as it appears.) For example, replace this:
return *(short*)data;
With this:
short temp;
memcpy(&temp, data, sizeof(temp));
return temp;
Basically you can read gcc's message as: "guy, you are looking for trouble, don't say I didn't warn ya".
Casting a three byte character array to an int is one of the worst things I have seen, ever. Normally your int has at least 4 bytes. So for the fourth (and maybe more if int is wider) you get random data. And then you cast all of this to a double.
Just do none of that. The aliasing problem that gcc warns about is innocent compared to what you are doing.
The authors of the C Standard wanted to let compiler writers generate efficient code in circumstances where it would be theoretically possible but unlikely that a global variable might have its value accessed using a seemingly-unrelated pointer. The idea wasn't to forbid type punning by casting and dereferencing a pointer in a single expression, but rather to say that given something like:
int x;
int foo(double *d)
{
    x++;
    *d = 1234;
    return x;
}
a compiler would be entitled to assume that the write to *d won't affect x. The authors of the Standard wanted to list situations where a function like the above, which received a pointer from an unknown source, would have to assume that it might alias a seemingly-unrelated global, without requiring that types perfectly match. Unfortunately, while the rationale strongly suggests that the authors of the Standard intended to describe a standard for minimum conformance in cases where a compiler would otherwise have no reason to believe that things might alias, the rule fails to require that compilers recognize aliasing in cases where it is obvious. The authors of gcc have decided that they'd rather generate the smallest program they can while conforming to the poorly-written language of the Standard than generate code which is actually useful. Instead of recognizing aliasing in cases where it's obvious (while still being able to assume that things which don't look like they'll alias, won't), they'd rather require that programmers use memcpy, thus requiring a compiler to allow for the possibility that pointers of unknown origin might alias just about anything, which impedes optimization.
Apparently the standard allows sizeof(char*) to be different from sizeof(int*) so gcc complains when you try a direct cast. void* is a little special in that everything can be converted back and forth to and from void*.
In practice I don't know of many architectures/compilers where a pointer is not represented the same way for all types, but gcc is right to emit a warning even if it is annoying.
I think the safe way would be
int i, *p = &i;
char *q = (char*)&p[0];
or
char *q = (char*)(void*)p;
You can also try this and see what you get:
char *q = reinterpret_cast<char*>(p);