Some code flattens multidimensional arrays like this:
int array[10][10];
int* flattened_array = (int*)array;
for (int i = 0; i < 10*10; ++i)
    flattened_array[i] = 42;
This is, as far as I know, undefined behaviour.
I am trying to detect cases like this with the gcc sanitizers; however, neither -fsanitize=address nor -fsanitize=undefined catches it.
Is there a sanitizer option that I'm missing, or perhaps a different way to detect this at run time? Or maybe I am mistaken and the code is legal?
Edit: the sanitizers detect this access as an error:
array[0][11] = 42;
but do not detect this:
int* first_element = array[0];
first_element[11] = 42;
Furthermore, clang detects the first access statically and emits a warning:
warning: array index 11 is past the end of the array (which contains 10 elements) [-Warray-bounds]
Edit: the above does not change if int in the declaration is replaced with char.
Edit: There are two potential sources of UB.
Accessing an object (of type int[10]) through an lvalue of an incompatible type (int).
Out-of-bounds access with a pointer of type int* and an index >=10 where the size of the underlying array is 10 (rather than 100).
Sanitizers don't seem to detect the first kind of violation. There's a debate whether this is a violation at all. After all, there's also an object of type int at the same address.
As for the second potential UB, the UB sanitizer does detect such access, but only if it is done directly via the 2D array itself and not via another variable that points to its first element, as shown above. I don't think the two accesses should differ in legality. They should be either both legal (and then ubsan has a false positive) or both illegal (and then ubsan has a false negative).
Edit: Annex J.2 lists array[0][11] as UB, even though that annex is only informative.
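For reference, here are the snippets above combined into one self-contained program; the compile command in the comment is just one way to enable both sanitizers:
#include <stdio.h>

int main(void)
{
    /* e.g.: gcc -g -fsanitize=address,undefined flatten.c */
    int array[10][10];

    array[0][11] = 42;            /* reported by -fsanitize=undefined in my tests; clang also warns statically */

    int *first_element = array[0];
    first_element[11] = 42;       /* the same out-of-bounds write, but not reported */

    printf("%d\n", array[1][1]);  /* keep the stores observable */
    return 0;
}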
From a language lawyer point of view, this is generally seen as invalid code because the integer arrays are only of size 10 and the code does access past the declared array size. Yet it used to be a common idiom, and I know no compiler that would not accept it. Still, with all real-world compilers I know, the resulting program will have the expected behaviour.
After a second (in reality much more) reading of the C11 standard draft (n1570) the intent of the standard is still not clear. 6.2.5 Types § 20 says:
An array type describes a contiguously allocated nonempty set of objects with a
particular member object type, called the element type.
It makes clear that an array contains contiguously allocated objects. But IMHO it is unclear whether a contiguously allocated set of objects is an array.
If you answer no, then the shown code does invoke UB by accessing an array past its last element.
But if you answer yes, then a set of 10 contiguous sets of 10 contiguous integers gives 100 contiguous integers and can be seen as an array of 100 integers. Then the shown code would be legal.
That latter interpretation seems to be common in the real world because it is consistent with dynamic array allocation: you allocate enough memory for a number of objects, and you can access that as if it had been declared as an array - and the allocation function ensures no alignment problem.
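To illustrate that analogy with dynamic allocation, here is a minimal sketch (not part of the question's code):
#include <stdlib.h>

int main(void)
{
    /* malloc returns suitably aligned storage with no declared type;
       once ints are stored into it, the whole block can be indexed
       as a single array of 100 int. */
    int *p = malloc(100 * sizeof *p);
    if (p == NULL)
        return 1;

    for (int i = 0; i < 100; ++i)
        p[i] = 42;            /* well-defined for indices 0..99 */

    free(p);
    return 0;
}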
My conclusion so far is:
is it nice and clean code: certainly not and I would avoid it in production code
does it invoke UB: I really do not know, and my personal opinion is probably no
Let us look at the code added in the edit:
array[0][11] = 42;
The compiler knows that array is declared as int[10][10]. So it knows that both indexes must be less than 10, and it can raise a warning.
int* first_element = array[0];
first_element[11] = 42;
first_element is declared as a mere pointer. Statically, the compiler has to assume that it can point inside an array of unknown size, so outside of a specific context it is much harder to raise a warning. Of course, for a human programmer it is evident that both ways should be treated the same, but as a compiler is not required to emit any diagnostic for out-of-bounds array access, efforts to detect it are kept to a minimum and only trivial cases are detected.
In addition, when a compiler internally implements pointer arithmetic on common platforms, it just computes a memory address from the original address plus a byte offset. So it could emit the same code as:
char *addr = (char *) first_element; // (1)
addr += 11 * sizeof(int); // (2)
*((int *) addr) = 42; // (3)
(1) is legal because a pointer to any object (here an int) can be converted to a pointer to char, which is required to point to the first byte of the representation of the object
(2) the trick here is that (char *) first_element is the same as (char *) array because the first byte of the 10*10 array is the first byte of the first int of the first row, and a single byte can only have one single address. As the size of array is 10 * 10 * sizeof(int), 11 * sizeof(int) is a valid offset in it.
(3) for the very same reason, (char *) &array[1][1] is addr, because elements in an array are contiguous so their byte representations are also contiguous. And as a round-trip conversion between the two types is legal and required to give back the original pointer, (int *) addr is (int*) ((char*) &array[1][1]). That means that dereferencing (int *) addr is legal and shall have the same effect as array[1][1] = 42.
This does not mean that first_element[11] does not involve UB. array[0] has a declared size which is 10. It just explains why all known compilers accept it (in addition to not wanting to break legacy code).
The sanitizers are not especially good at catching out-of-bounds access unless the array in question is a complete object.
For example, they do not catch out-of-bounds access in this case:
struct {
    int inner[10];
    char tail[sizeof(int)];
} outer;
int* p = outer.inner;
p[10] = 42;
which is clearly illegal. But they do catch access to p[11].
Array flattening is not really different in spirit from this kind of access. Code generated by the compiler, and the way it is instrumented by sanitizers, should be pretty similar. So there's little hope that array flattening can be detected by these tools.
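For reference, a complete program one might feed to the sanitizers to check this; it is a sketch, and the comments describe the behaviour reported above, which may vary with compiler version:
#include <stdio.h>

static struct {
    int inner[10];
    char tail[sizeof(int)];
} outer;

int main(void)
{
    /* e.g.: gcc -g -fsanitize=address,undefined inner.c */
    int *p = outer.inner;

    p[10] = 42;        /* out of bounds of inner, but still inside outer: not reported */
    /* p[11] = 42; */  /* past the whole object: reported */

    printf("%d\n", outer.inner[0]);
    return 0;
}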
Multidimensional arrays are required to be contiguously allocated (C uses row-major order). And there can't be any padding between elements of an array - though not stated explicitly in the standard, this can be inferred from the array definition, which says "contiguously allocated nonempty set of objects", and from the definition of the sizeof operator.
So the "flattening" should be legal.
Re. accessing array[0][11]: although Annex J.2 gives it directly as an example, what exactly is violated in the normative text isn't obvious. Nevertheless, it's still possible to make it legal with an intermediate cast to char*:
*((int*)((char*)array + 11 * sizeof(int))) = 42;
(writing such code is obviously not advised ;)
The problem here is that the Standard describes as equivalent two operations, one of which clearly should be defined and one of which the Standard expressly says is not defined.
The cleanest way to resolve this, which seems to coincide with what clang and gcc already do, is to say that applying the [] operator to an array lvalue or non-lvalue does not cause it to decay, but instead looks up an element directly, yielding an lvalue if the array operand was an lvalue, and a non-lvalue otherwise.
Recognizing the use of [] with an array as being a distinct operator would clean up a number of corner cases in the semantics, including accessing an array within a structure returned by a function, register-qualified arrays, arrays of bitfields, etc. It would also make clear what the inner-array-subscript limitations are supposed to mean. Given foo[x][y], a compiler would be entitled to assume that y would be within the bounds of the inner array, but given *(foo[x]+y) it would not be entitled to make such an assumption.
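To make the proposed distinction concrete, here is a small sketch (the function store and its parameters are purely illustrative, and the distinction itself is this answer's proposal, not current normative text):
void store(int foo[10][10], int x, int y)
{
    foo[x][y] = 1;        /* proposed reading: the compiler may assume y < 10 */
    *(foo[x] + y) = 1;    /* explicit decay: no such inner-bound assumption   */
}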
Related
Take a look at the following code, taken from an older version of ffmpeg:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
struct foo
{
    int16_t (*ac_val_base)[16];
    int16_t (*ac_val[3])[16];
};

int main(int argc, char *argv[])
{
    struct foo bar;
    int16_t *ac_val, *ac_val1;

    bar.ac_val_base = malloc(4639 * 16 * sizeof(int16_t));
    bar.ac_val[0] = bar.ac_val_base + 66;
    ac_val = bar.ac_val[0][0] + 3780 * 16;
    ac_val1 = ac_val;

    printf("Result: %d\n", (int) (((char *) ac_val1) - ((char *) bar.ac_val[0][0])));
    return 0;
}
When compiling this with established compilers like gcc or Visual C, the result is 120960. This makes sense to me because I'm adding 3780 * 16 to an int16_t array pointer so I'd expect the resulting pointer to be 120960 bytes above the source pointer.
When compiling the code using vbcc, however, the result is -8000 because the compiler performs some optimizations. The author of the vbcc compiler is convinced that the optimization is covered by 6.5.6/8 of the C99 standard which says that the behaviour is undefined in that case, quote:
If both the pointer operand and the result point to elements of the
same array object, or one past the last element of the array object,
the evaluation shall not produce an overflow; otherwise, the behavior
is undefined.
So is the code above really relying on undefined behaviour? I'm a bit skeptical because the code works on all compilers except vbcc.
The short answer is that the type of the expression bar.ac_val[0][0] is "array of 16 int16_t". Although this array object is located within a larger malloc block, and the expression evaluates to a pointer within the block, the pointer has provenance from an array.
A pointer obtained from an array expression, where the array dimension is N, can be displaced by at most N (one element past the end of the array), while staying within defined behavior. (If displaced all the way to N, the pointer must not be dereferenced.)
A simpler example is something like:
struct obj {
    int arr[32];
    int other_member;
};
Suppose you have a malloc-ed pointer to this but use ptr->arr[32] to access other_member: this is not well-defined, even though everything is within the malloc-ed object.
One possible optimization the compiler can perform is to use some addressing mode which only works for that size of array. Say that ptr->arr[i] translates to some instruction which has a five-bit field to encode a scaled displacement value from 0 to 31. The compiler is free to ignore that the displacement [32] cannot fit into that instruction, and just truncate it to the lowest five bits, which are zero, effectively changing the meaning to ptr->arr[0].
Alternatively, the rules can enable useful diagnostic tools. The compiler may be able to warn you at compile time that there is an array overrun, and because it's undefined behavior, it can fail the translation, while remaining conforming. There can be tooling whereby the code is compiled in such a way that you get detailed array bound checking at run time (not just checking for overrun of the malloc-ed block). Accessing past the end of the array can be an accident, resulting in a hard-to-find bug, particularly if the access doesn't go past the allocation.
In
ac_val = bar.ac_val[0][0] + 3780 * 16;
bar.ac_val[0][0] is int16_t[16], so adding anything outside the range [0, 16] to it results in undefined behaviour (and 16 only yields a one-past-the-end pointer that must not be dereferenced).
The reason for the undefined behaviour is the segmented memory model (as opposed to the modern flat/linear memory model), with which C is still compatible, where pointer values are composed of a segment descriptor and a byte offset within the segment. In such a model, distinct arrays may reside in different segments. Segment descriptor units are not byte offsets, so subtracting segment descriptor values doesn't produce a distance in bytes. The difference between pointers to different arrays residing in different segments ends up subtracting segment descriptors, resulting in undefined behaviour.
Your particular array is allocated using malloc. It cannot possibly span multiple memory segments. As long as your pointers, including expression temporaries, don't point outside this heap allocated array, these pointers are valid and well-defined.
It is the array element type int16_t[16], and indexing outside its bounds, that causes the undefined behaviour. This array element type is essentially a red herring for a C compiler.
If you switch your array element type to plain int16_t and convert your 2d array indexes into 1d, e.g. [row][column] to [row * n_columns + column], this problem ceases to exist.
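A sketch of that rewrite applied to the snippet from the question (same field names, but with plain int16_t pointers and flattened 1-D indexing):
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct foo
{
    int16_t *ac_val_base;    /* was: int16_t (*ac_val_base)[16] */
    int16_t *ac_val[3];      /* was: int16_t (*ac_val[3])[16]   */
};

int main(void)
{
    struct foo bar;
    int16_t *ac_val;

    bar.ac_val_base = malloc(4639 * 16 * sizeof(int16_t));
    bar.ac_val[0] = bar.ac_val_base + 66 * 16;   /* 66 rows of 16 elements */
    ac_val = bar.ac_val[0] + 3780 * 16;          /* still inside the one big array */

    printf("Result: %d\n", (int) ((char *) ac_val - (char *) bar.ac_val[0]));
    free(bar.ac_val_base);
    return 0;
}
Here every pointer involved stays inside the single malloc-ed array of int16_t, so the 120960 result no longer relies on anything undefined.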
You can also side-step the undefined behaviour arising from pointer arithmetic with integer arithmetic:
uintptr_t ac_val = (uintptr_t)bar.ac_val[0][0] + 3780 * 16 * sizeof(int16_t);
printf("Result: %zu\n", (size_t) ((ac_val - ((uintptr_t) bar.ac_val[0][0])));
This relies on the facts that:
Converting a pointer to uintptr_t and back is well-defined.
Unsigned integer addition and subtraction is well-defined.
#include <stdio.h>

int main(void)
{
    int a, b;
    int *p = &a;
#ifdef __clang__
    int *q = &b + 1;
#elif __GNUC__
    int *q = &b - 1;
#endif
    printf("%p %p %d\n", (void *)p, (void *)q, p == q);
}
C11 §6.5.9 ¶6 says that
Two pointers compare equal if and only if both are null pointers, both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function, both are pointers to one past the last element of the same array object, or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space.
I have tested it four different ways:
Clang 9.0.1 with the -O1 option;
Clang 9.0.1 without any options;
GCC 9.2.0 with the -O1 option;
GCC 9.2.0 without any options.
The results are the following:
$ ./prog_clang
0x7ffebf0a65d4 0x7ffebf0a65d4 1
$ ./prog_clang_01
0x7ffd9931b9bc 0x7ffd9931b9bc 1
$ ./prog_gcc
0x7ffea055a980 0x7ffea055a980 1
$ ./prog_gcc_01
0x7fffd5fa5490 0x7fffd5fa5490 0
What is the correct behavior in this case?
What is the correct behavior in this case?
There is none. Comparing pointers to or one past the end of two completely unrelated objects is undefined behavior.
Per footnote 109 of the C11 standard (bolding is mine):
Two objects may be adjacent in memory because they are adjacent elements of a larger array or adjacent members of a structure with no padding between them, or because the implementation chose to place them so, even though they are unrelated. If prior invalid pointer operations (such as accesses outside array bounds) produced undefined behavior, subsequent comparisons also produce undefined behavior.
Two pointers compare equal if and only if both are null pointers,
they are not null
both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function
they do not point to the same object, nor a subobject, nor a function
both are pointers to one past the last element of the same array object,
they are not pointers to array elements.
or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space.
they are not pointers to array elements.
So, according to the standard, your pointers do not meet the requirements for comparing as equal, and should have never compared as equal.
Now, in your tests, in the first three cases, the pointers did in fact compare as equal. One can say that the compilers do not strictly adhere to the standard, because the standard says "if and only if", but as you have seen, clang (with or without -O1) and gcc without -O1 behave as if the standard said "if" without the "and only if" part. The compilers simply do not try to take extra measures to ensure that the "and only if" part is respected, so they allow the pointers to compare as equal, as a matter of pure coincidence, despite the fact that according to the standard, they shouldn't.
Since it was pure coincidence, in the last case the coincidence does not hold true anymore, due to a number of unknown reasons having to do with the compiler's implementation of optimizations. The compiler may have decided to reverse the order of the variables on the stack, or to put them farther away from each other, or who knows what.
Just to clarify a thing in the other answers. Assume that you have two objects, a and b, and a pointer p, and that you make the pointer point to one of the objects with an offset, for instance p = &a + 1. Now p could, just by coincidence, have the same value as &b.
But - and this is important - this does NOT mean that p is pointing at b.
Let me offer a comparison here. Maybe it's a bit strange, but bear with me. Imagine that you see a road sign pointing towards a city. Now imagine that you decide to stand between the sign and the city. Does this mean that the sign is pointing at you? In this situation the answer could be quite philosophical, but it shows the point (haha). Even if one could argue for both yes and no, it's quite clear that the sign is not intended to point at you. In the C standard, they have chosen to interpret "pointing at" as "intentionally pointing at".
I followed the discussion on One-byte-off pointer still valid in C?.
The gist of that discussion, as far as I could gather, was that if you have:
char *p = malloc(4);
Then it is OK to get pointers up to p+4 by using pointer arithmetic. If you get a pointer by using p+5, then the behavior is undefined.
I can see why dereferencing p+5 could cause undefined behavior. But undefined behavior using just pointer arithmetic?
Why would the arithmetic operators + and - not be valid operations? I don't see any harm in adding or subtracting a number from a pointer. After all, a pointer is represented by a number that captures the address of an object.
Of course, I was not in the standardization committee :) I am not privy to the discussions they had before codifying the standard. I am just curious. Any insight will be useful.
The simplest answer is that it is conceivable that a machine traps integer overflow. If that were the case, then any pointer arithmetic which wasn't confined to a single storage region might cause overflow, which would cause a trap, disrupting execution of the program. C shouldn't be obliged to check for possible overflow before attempting pointer arithmetic, so the standard allows a C implementation on such a machine to just allow the trap to happen, even if chaos ensues.
Another case is an architecture where memory is segmented, so that a pointer consists of a segment address (with implicit trailing 0s) and an offset. Any given object must fit in a single segment, which means that valid pointer arithmetic can work only on the offset. Again, overflowing the offset in the course of pointer arithmetic might produce random results, and the C implementation is under no obligation to check for that.
Finally, there may well be optimizations which the compiler can produce on the assumption that all pointer arithmetic is valid. As a simple motivating case:
if (iter - 1 < object.end()) {...}
Here the test can be omitted because it must be true for any pointer iter whose value is a valid position in (or just after) object. The UB for invalid pointer arithmetic means that the compiler is not under any obligation to attempt to prove that iter is valid (although it might need to ensure that it is based on a pointer into object), so it can just drop the comparison and proceed to generate unconditional code. Some compilers may do this sort of thing, so watch out :)
Here, by the way, is the important difference between unspecified behaviour and undefined behaviour. Comparing two pointers (of the same type) with == is defined regardless of whether they are pointers into the same object. In particular, if a and b are two different objects of the same type, end_a is a pointer to one-past-the-end of a and begin_b is a pointer to b, then
end_a == begin_b
is unspecified; it will be 1 if and only if b happens to be just after a in memory, and otherwise 0. Since you can't normally rely on knowing that (unless a and b are array elements of the same array), the comparison is normally meaningless; but it is not undefined behaviour and the compiler needs to arrange for either 0 or 1 to be produced (and moreover, for the same comparison to consistently have the same value, since you can rely on objects not moving around in memory.)
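A small sketch of that unspecified-but-defined comparison (array sizes are arbitrary; the comments follow this answer's reading):
#include <stdio.h>

int main(void)
{
    int a[4], b[4];
    int *end_a = a + 4;      /* one past the end of a: fine to compute */
    int *begin_b = b;

    /* The result is unspecified (0 or 1, depending on whether b happens
       to sit right after a in memory), but evaluating it is not UB. */
    printf("%d\n", end_a == begin_b);
    return 0;
}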
One case I can think of where the result of a + or - might give unexpected results is in the case of overflow or underflow.
The question you refer to points out that for p = malloc(4) you can do p+4 for comparison. One thing this needs to guarantee is that p+4 will not overflow. It doesn't guarantee that p+5 won't overflow.
That is to say that the + or - themselves won't cause any problems, but there is a chance, however small, that they will return a value that is unsuitable for comparison.
Performing basic +/- arithmetic on a pointer will not cause a problem. The order of pointer values is sequential: &p[0] < &p[1] < ... < &p[n] for an array of n objects. But pointer arithmetic outside this range is not defined: &p[-1] may compare less than or greater than &p[0].
int *p = malloc(80 * sizeof *p);
int *q = p + 1000;
printf("p:%p q:%p\n", p, q);
Dereferencing pointers outside their range, or even inside the memory range but misaligned, is a problem.
printf("*p:%d\n", *p); // OK
printf("*p:%d\n", p[79]); // OK
printf("*p:%d\n", p[80]); // Bad, but &p[80] will be greater than &p[79]
printf("*p:%d\n", p[-1]); // Bad, order of p, p[-1] is not defined
printf("*p:%d\n", p[81]); // Bad, order of p[80], p[81] is not defined
char *r = (char*) p;
printf("*p:%d\n", *((int*) (r + 1)) ); // Bad
printf("*p:%d\n", *q); // Bad
Q: Why is p[81] undefined behavior?
A: Example: memory runs 0 to N-1. char *p has the value N-81. p[0] to p[79] is well defined. p[80] is also well defined. p[81] would need to have the value N to be consistent, but that overflows, so p[81] may have the value 0, N, or who knows.
A couple of things here, the reason p+4 would be valid in such a case is because iteration to one past the last position is valid.
Computing p+5 would not be a problem theoretically, but in my view the problem arises when you try to dereference (p+5), or perhaps try to overwrite that address.
I was browsing through a webpage which had some C FAQs, and I found this statement:
Similarly, if a has 10 elements and ip points to a[3], you can't compute or access ip + 10 or ip - 5. (There is one special case: you can, in this case, compute, but not access, a pointer to the nonexistent element just beyond the end of the array, which in this case is &a[10].)
I was confused by the statement
you can't compute ip + 10
I can understand accessing the element out of bounds is undefined, but computing!!!.
I wrote the following snippet which computes (let me know if this is what the website meant by computing) a pointer out-of-bounds.
#include <stdio.h>

int main()
{
    int a[10], i;
    int *p;

    for (i = 0; i < 10; i++)
        a[i] = i;

    p = &a[3];
    printf("p = %p and p+10 = %p\n", p, p+10);
    return 0;
}
$ ./a.out
p = 0xbfa53bbc and p+10 = 0xbfa53be4
We can see that p + 10 points 10 elements (40 bytes) past p. So what exactly does the statement made on the webpage mean? Did I misinterpret something?
Even in K&R (A.7.7) this statement is made:
The result of the + operator is the sum of the operands. A pointer to an object in an array and a value of any integral type may be added. ... The sum is a pointer of the same type as the original pointer, and points to another object in the same array, appropriately offset from the original object. Thus if P is a pointer to an object in an array, the expression P+1 is a pointer to the next object in the array. If the sum pointer points outside the bounds of the array, except at the first location beyond the high end, the result is undefined.
What does being "undefined" mean? Does it mean the sum will be undefined, or does it only mean that the behavior is undefined when we dereference it? Is the operation undefined even when we do not dereference it and just calculate the pointer to the out-of-bounds element?
Undefined behavior means exactly that: absolutely anything could happen. It could succeed silently, it could fail silently, it could crash your program, it could blue screen your OS, or it could erase your hard drive. Some of these are not very likely, but all of them are permissible behaviors as far as the C language standard is concerned.
In this particular case, yes, the C standard is saying that even computing a pointer outside of valid array bounds (beyond the one-past-the-end position), without dereferencing it, is undefined behavior. The reason it says this is that there are some arcane systems where doing such a calculation could result in a fault of some sort. For example, you might have an array at the very end of addressable memory, and constructing a pointer beyond that would cause an overflow in a special address register which generates a trap or fault. The C standard wants to permit this behavior in order to be as portable as possible.
In reality, though, you'll find that constructing such an invalid address without dereferencing it has well-defined behavior on the vast majority of systems you'll come across in common usage. Creating an invalid memory address will have no ill effects unless you attempt to dereference it. But of course, it's better to avoid creating those invalid addresses so that your code will work perfectly even on those arcane systems.
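One defensive idiom that never computes an address outside the array, other than the permitted one-past-the-end value, is to use that value only as a loop bound; a minimal sketch:
#include <stdio.h>

int main(void)
{
    int a[10];
    int *end = a + 10;               /* one past the end: may be computed, never dereferenced */

    for (int *p = a; p != end; ++p)  /* p never moves beyond a + 10 */
        *p = 42;

    printf("%d\n", a[9]);
    return 0;
}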
The web page wording is confusing, but technically correct. The C99 language specification (section 6.5.6) discusses additive expressions, including pointer arithmetic. Subitem 8 specifically states that computing a pointer one past the end of an array shall not cause an overflow, but beyond that the behavior is undefined.
In a more practical sense, C compilers will generally let you get away with it, but what you do with the resulting value is up to you. If you try to dereference the resulting pointer to a value, as K&R states, the behavior is undefined.
Undefined, in programming terms, means "Don't do that." Basically, it means the specification that defines how the language works does not define an appropriate behavior in that situation. As a result, theoretically anything can happen. Generally all that happens is you have a silent or noisy (segfault) bug in your program, but many programmers like to joke about other possible results from causing undefined behavior, like deleting all of your files.
The behaviour would be undefined in the following case
int a[3];
(a + 10) ; // this is UB too as you are computing &a[10]
*(a+10) = 10; // Ewwww!!!!
Following an hot comment thread in another question, I came to debate of what is and what is not defined in C99 standard about C arrays.
Basically, when I define a 2D array like int a[5][5], does standard C99 guarantee or not that it will be a contiguous block of ints? Can I cast it with (int *)a and be sure I will have a valid 1D array of 25 ints?
As I understand the standard, the above property is implicit in the sizeof definition and in pointer arithmetic, but others seem to disagree and say that casting the above structure to (int*) gives undefined behavior (even if they agree that all existing implementations actually allocate contiguous values).
More specifically, think of an implementation that would instrument arrays to check array boundaries for every dimension and raise some kind of error when they are accessed as a 1D array, or that would not give correct access to elements beyond the first row. Could such an implementation be standard compliant? And in that case, which parts of the C99 standard are relevant?
We should begin with inspecting what int a[5][5] really is. The types involved are:
int
array[5] of ints
array[5] of arrays of 5 ints
There is no array[25] of ints involved.
It is correct that the sizeof semantics imply that the array as a whole is contiguous. The array[5] of ints must have size 5*sizeof(int), and, recursively applied, a[5][5] must have size 5*5*sizeof(int). There is no room for additional padding.
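These size relations can be checked at compile time; a minimal sketch using C11 static_assert (the discussion here is about C99, where one would use a negative-array-size trick instead, but the relations are the same):
#include <assert.h>

static_assert(sizeof(int[5])    == 5 * sizeof(int),    "a row is 5 ints, no padding");
static_assert(sizeof(int[5][5]) == 5 * sizeof(int[5]), "rows are adjacent");
static_assert(sizeof(int[5][5]) == 25 * sizeof(int),   "the whole array is 25 ints");

int main(void) { return 0; }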
Additionally, the array as a whole must work when given to memset, memmove or memcpy with its sizeof. It must also be possible to iterate over the whole array with a (char *). So a valid iteration is:
int a[5][5], i, *pi;
char *pc;

pc = (char *)(&a[0][0]);
for (i = 0; i < 25; i++)
{
    pi = (int *)pc;
    DoSomething(pi);
    pc += sizeof(int);
}
Doing the same with an (int *) would be undefined behaviour, because, as said, there is no array[25] of int involved. Using a union as in Christoph's answer should be valid, too. But there is another point complicating this further, the equality operator:
6.5.9.6
Two pointers compare equal if and only if both are null pointers, both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function, both are pointers to one past the last element of the same array object, or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space. 91)
91) Two objects may be adjacent in memory because they are adjacent elements of a larger array or adjacent members of a structure with no padding between them, or because the implementation chose to place them so, even though they are unrelated. If prior invalid pointer operations (such as accesses outside array bounds) produced undefined behavior, subsequent comparisons also produce undefined behavior.
This means for this:
int a[5][5], *i1, *i2;
i1 = &a[0][0] + 5;
i2 = &a[1][0];
i1 compares as equal to i2. But when iterating over the array with an (int *), it is still undefined behaviour, because it is originally derived from the first subarray. It doesn't magically convert to a pointer into the second subarray.
Even when doing this
char *c = (char *)(&a[0][0]) + 5*sizeof(int);
int *i3 = (int *)c;
won't help. It compares equal to i1 and i2, but it isn't derived from any of the subarrays; it is a pointer to a single int or an array[1] of int at best.
I don't consider this a bug in the standard. It is the other way around: Allowing this would introduce a special case that violates either the type system for arrays or the rules for pointer arithmetic or both. It may be considered a missing definition, but not a bug.
So even if the memory layout for a[5][5] is identical to the layout of a[25], and the very same loop using a (char *) can be used to iterate over both, an implementation is allowed to blow up if one is used as the other. I don't know why it should, nor do I know of any implementation that would, and maybe there is a single fact in the Standard not mentioned so far that makes it well-defined behaviour. Until then, I would consider it to be undefined and stay on the safe side.
I've added some more comments to our original discussion.
sizeof semantics imply that int a[5][5] is contiguous, but visiting all 25 integers via incrementing a pointer like int *p = *a is undefined behaviour: pointer arithmetic is only defined as long as all pointers involved lie within (or one element past the last element of) the same array, as e.g. &a[2][1] and &a[3][1] do not (see C99 section 6.5.6).
In principle, you can work around this by casting &a - which has type int (*)[5][5] - to int (*)[25]. This is legal according to 6.3.2.3 §7, as it doesn't violate any alignment requirements. The problem is that accessing the integers through this new pointer is illegal as it violates the aliasing rules in 6.5 §7. You can work around this by using a union for type punning (see footnote 82 in TC3):
int *p = ((union { int multi[5][5]; int flat[25]; } *)&a)->flat;
This is, as far as I can tell, standards compliant C99.
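For completeness, a usage sketch of that union-based workaround (the variable names are mine; the cast is the one shown above):
#include <stdio.h>

int main(void)
{
    int a[5][5] = {0};

    /* Type-punning through a union, as described above */
    int *p = ((union { int multi[5][5]; int flat[25]; } *)&a)->flat;

    for (int i = 0; i < 25; i++)
        p[i] = i;

    printf("%d\n", a[4][4]);   /* prints 24 */
    return 0;
}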
If the array is static, like your int a[5][5] array, it's guaranteed to be contiguous.