I was browsing a web page with some C FAQs and found this statement:
Similarly, if a has 10 elements and ip points to a[3], you can't compute or access ip + 10 or ip - 5. (There is one special case: you can, in this case, compute, but not access, a pointer to the nonexistent element just beyond the end of the array, which in this case is &a[10].)
I was confused by the statement
you can't compute ip + 10
I can understand that accessing an element out of bounds is undefined, but computing?!
I wrote the following snippet, which computes (let me know if this is what the website meant by computing) an out-of-bounds pointer.
#include <stdio.h>

int main()
{
    int a[10], i;
    int *p;

    for (i = 0; i < 10; i++)
        a[i] = i;
    p = &a[3];
    printf("p = %p and p+10 = %p\n", p, p + 10);
    return 0;
}
$ ./a.out
p = 0xbfa53bbc and p+10 = 0xbfa53be4
We can see that p + 10 points to 10 elements (40 bytes) past p. So what exactly does the statement on the web page mean? Did I misinterpret something?
Even in K&R (A.7.7) this statement is made:
The result of the + operator is the sum of the operands. A pointer to an object in an array and a value of any integral type may be added. ... The sum is a pointer of the same type as the original pointer, and points to another object in the same array, appropriately offset from the original object. Thus if P is a pointer to an object in an array, the expression P+1 is a pointer to the next object in the array. If the sum pointer points outside the bounds of the array, except at the first location beyond the high end, the result is undefined.
What does being "undefined" mean? Does it mean the sum itself is undefined, or only that dereferencing it is undefined? Is the operation undefined even when we do not dereference the result and merely compute a pointer to an out-of-bounds element?
Undefined behavior means exactly that: absolutely anything could happen. It could succeed silently, it could fail silently, it could crash your program, it could blue screen your OS, or it could erase your hard drive. Some of these are not very likely, but all of them are permissible behaviors as far as the C language standard is concerned.
In this particular case, yes, the C standard is saying that even computing the address of a pointer outside of valid array bounds, without dereferencing it, is undefined behavior. The reason it says this is that there are some arcane systems where doing such a calculation could result in a fault of some sort. For example, you might have an array at the very end of addressable memory, and constructing a pointer beyond that would cause an overflow in a special address register which generates a trap or fault. The C standard wants to permit this behavior in order to be as portable as possible.
In reality, though, you'll find that constructing such an invalid address without dereferencing it has well-defined behavior on the vast majority of systems you'll come across in common usage. Creating an invalid memory address will have no ill effects unless you attempt to dereference it. But of course, it's better to avoid creating those invalid addresses so that your code will work perfectly even on those arcane systems.
The web page's wording is confusing, but technically correct. The C99 language specification (section 6.5.6) discusses additive expressions, including pointer arithmetic. Paragraph 8 specifically states that computing a pointer one past the end of an array shall not produce an overflow, but beyond that the behavior is undefined.
In a more practical sense, C compilers will generally let you get away with it, but what you do with the resulting value is up to you. If you try to dereference the resulting pointer, as K&R states, the behavior is undefined.
Undefined, in programming terms, means "Don't do that." Basically, it means the specification that defines how the language works does not define an appropriate behavior in that situation. As a result, theoretically anything can happen. Generally all that happens is you have a silent or noisy (segfault) bug in your program, but many programmers like to joke about other possible results from causing undefined behavior, like deleting all of your files.
The behaviour would be undefined in the following case:
int a[3];
(a + 10);       // UB: computing &a[10] is already out of bounds for int[3]
*(a + 10) = 10; // UB: and dereferencing it is even worse
Related
Some code flattens multidimensional arrays like this:
int array[10][10];
int* flattened_array = (int*)array;
for (int i = 0; i < 10*10; ++i)
    flattened_array[i] = 42;
This is, as far as I know, undefined behaviour.
I am trying to detect cases like this with gcc sanitizers, however, neither -fsanitize=address nor -fsanitize=undefined work.
Is there a sanitizer option that I'm missing, or perhaps a different way to detect this at run time? Or maybe I am mistaken and the code is legal?
Edit: the sanitizers detect this access as an error:
array[0][11] = 42;
but do not detect this:
int* first_element = array[0];
first_element[11] = 42;
Furthermore, clang detects the first access statically, and gives out a warning
warning: array index 11 is past the end of the array (which contains 10 elements) [-Warray-bounds]
Edit: the above does not change if int in the declaration is replaced with char.
Edit: There are two potential sources of UB.
Accessing an object (of type int[10]) through an lvalue of an incompatible type (int).
Out-of-bounds access with a pointer of type int* and an index >=10 where the size of the underlying array is 10 (rather than 100).
Sanitizers don't seem to detect the first kind of violation. There's a debate whether this is a violation at all. After all, there's also an object of type int at the same address.
As for the second potential UB, the UB sanitizer does detect such access, but only if it is done directly via the 2D array itself and not via another variable that points to its first element, as shown above. I don't think the two accesses should differ in legality. They should be either both legal (and then ubsan has a false positive) or both illegal (and then ubsan has a false negative).
Edit: Annex J.2 says array[0][11] should be UB, even though that annex is only informative.
From a language-lawyer point of view, this is generally seen as invalid code, because the inner arrays are only of size 10 and the code does access past the declared array size. Yet it used to be a common idiom, and I know of no compiler that would reject it. With all real-world compilers I know, the resulting program will have the expected behaviour.
After a second (in reality much more careful) reading of the C11 standard draft (n1570), the intent of the standard is still not clear to me. 6.2.5 Types §20 says:
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type.
It makes clear that an array contains contiguously allocated objects. But IMHO it is unclear whether a contiguously allocated set of objects is an array.
If you answer no, then the shown code does invoke UB by accessing an array past its last element.
But if you answer yes, then a set of 10 contiguous sets of 10 contiguous integers gives 100 contiguous integers, which can be seen as an array of 100 integers. Then the shown code would be legal.
That latter interpretation seems to be common in the real world, because it is consistent with dynamic array allocation: you allocate enough memory for a number of objects, and you can access that memory as if it had been declared as an array - and the allocation function ensures there is no alignment problem.
My conclusion so far is:
is it nice and clean code? certainly not, and I would avoid it in production code
does it invoke UB? I really do not know, and my personal opinion is: probably not
Let us look at the code added in the edit:
array[0][11] = 42;
The compiler knows that array is declared as int[10][10]. So it knows that both indexes must be less than 10, and it can raise a warning.
int* first_element = array[0];
first_element[11] = 42;
first_element is declared as a mere pointer. Statically, the compiler has to assume that it can point inside an array of unknown size, so outside of a specific context it is much harder to raise a warning. Of course, to a human programmer it is evident that both ways should be treated the same, but as a compiler is not required to emit any diagnostic for out-of-bounds array access, efforts to detect such cases are kept to a minimum, and only trivial cases are detected.
In addition, when a compiler internally codes pointer arithmetic on common platforms, it just computes a memory address that is the original address plus a byte offset. So it could emit the same code as:
char *addr = (char *) first_element; // (1)
addr += 11 * sizeof(int); // (2)
*((int *) addr) = 42; // (3)
(1) is legal because a pointer to any object (here an int) can be converted to a pointer to char, which is required to point to the first byte of the representation of the object.
(2) the trick here is that (char *) first_element is the same as (char *) array, because the first byte of the 10*10 array is the first byte of the first int of the first row, and a single byte can only have one single address. As the size of array is 10 * 10 * sizeof(int), 11 * sizeof(int) is a valid offset within it.
(3) for the very same reason, (char *) &array[1][1] is addr, because elements of an array are contiguous, so their byte representations are also contiguous. And as a round-trip conversion between the two types is legal and required to give back the original pointer, (int *) addr is (int *) ((char *) &array[1][1]). That means that dereferencing (int *) addr is legal and shall have the same effect as array[1][1] = 42.
This does not mean that first_element[11] does not involve UB - array[0] has a declared size of 10. It just explains why all known compilers accept it (in addition to not wanting to break legacy code).
The sanitizers are not especially good at catching out-of-bounds access unless the array in question is a complete object.
For example, they do not catch out-of-bounds access in this case:
struct {
    int inner[10];
    char tail[sizeof(int)];
} outer;

int* p = outer.inner;
p[10] = 42;
which is clearly illegal. But they do catch access to p[11].
Array flattening is not really different in spirit from this kind of access. Code generated by the compiler, and the way it is instrumented by sanitizers, should be pretty similar. So there's little hope that array flattening can be detected by these tools.
Multidimensional arrays are required to be contiguously allocated (C uses row-major order). And there can't be any padding between elements of an array - though not stated explicitly in the standard, this can be inferred from the definition of an array ("contiguously allocated nonempty set of objects") and the definition of the sizeof operator.
So the "flattening" should be legal.
Re. accessing array[0][11]: although Annex J.2 directly gives it as an example, what exactly the violation in the normative text is isn't obvious. Nevertheless, it's still possible to make it legal with an intermediate cast to char*:
*((int*)((char*)array + 11 * sizeof(int))) = 42;
(writing such code is obviously not advised ;)
The problem here is that the Standard describes as equivalent two operations, one of which clearly should be defined and one of which the Standard expressly says is not defined.
The cleanest way to resolve this, which seems to coincide with what clang and gcc already do, is to say that applying the [] operator to an array lvalue or non-lvalue does not cause it to decay, but instead looks up an element directly, yielding an lvalue if the array operand was an lvalue, and a non-lvalue otherwise.
Recognizing the use of [] with an array as a distinct operator would clean up a number of corner cases in the semantics, including accessing an array within a structure returned by a function, register-qualified arrays, arrays of bit-fields, etc. It would also make clear what the inner-array-subscript limitations are supposed to mean. Given foo[x][y], a compiler would be entitled to assume that y is within the bounds of the inner array, but given *(foo[x]+y) it would not be entitled to make such an assumption.
It is said in C that when pointers refer to the same array, or to one element past the end of that array, arithmetic and comparisons are well defined. What about one element before the first element of the array? Is it okay as long as I do not dereference it?
Given
int a[10], *p;
p = a;
(1) Is it legal to write --p?
(2) Is it legal to write p-1 in an expression?
(3) If (2) is okay, can I assert that p-1 < a?
There is some practical concern for this. Consider a reverse() function that reverses a C-string that ends with '\0'.
#include <stdio.h>

void reverse(char *p)
{
    char *b, t;

    b = p;
    while (*p != '\0')
        p++;
    if (p == b)   /* Do I really need */
        return;   /* these two lines? */
    for (p--; b < p; b++, p--)
        t = *b, *b = *p, *p = t;
}

int main(void)
{
    char a[] = "Hello";
    reverse(a);
    printf("%s\n", a);
    return 0;
}
Do I really need to do the check in the code?
Please share your ideas from language-lawyer/practical perspectives, and how you would cope with such situations.
(1) Is it legal to write --p?
It's "legal" as in the C syntax allows it, but it invokes undefined behavior. For the purpose of finding the relevant section in the standard, --p is equivalent to p = p - 1 (except p is only evaluated once). Then:
C17 6.5.6/8
If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.
The evaluation itself invokes undefined behavior, meaning it doesn't matter whether you dereference the pointer or not - you have already invoked undefined behavior.
Furthermore:
C17 6.5.6/9:
When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object;
If your code violates a "shall" in the ISO standard, it invokes undefined behavior.
(2) Is it legal to write p-1 in an expression?
Same as (1), undefined behavior.
As for examples of how this could cause problems in practice: imagine that the array is placed at the very beginning of a valid memory page. When you decrement past the start of that page, there could be a hardware exception, or the result could be a trap representation. This isn't an unlikely scenario for microcontrollers, particularly ones using segmented memory maps.
That kind of pointer arithmetic is bad coding practice, as it can lead to hard-to-debug problems.
I have only had to use this kind of thing once in more than 20 years. I was writing a callback function, but I did not have access to the proper data. The calling function provided a pointer into a proper array, and I needed the byte just before that pointer.
Considering that I had access to the entire source code, that I verified the behavior several times to prove I got what I needed, and that I had it reviewed by other colleagues, I decided it was OK to let it go to production.
The proper solution would have been to change the caller to provide the proper pointer, but that was not feasible considering time and money (that part of the software was licensed from a third party).
So, a[-1] is possible, but it should be used ONLY with great care in very particular situations. Otherwise, there is no good reason to do that kind of self-hurting voodoo, ever.
Note: on proper analysis, in my example, it is obvious that I did not access an element before the beginning of a proper array, but the element before a pointer that was guaranteed to point inside the same array.
Referring to the code provided:
it is NOT OK to use p[-1] with reverse(a);
it is OK(-ish) to use it with reverse(a+1);, because you remain inside the array.
Apologies for the loopy question wording, but it boils down to this. I'm maintaining C code that does something like this:
char *foo = "hello";
int i;
char *bar;
i = strlen(foo);
bar = &foo[i];
Is this safe? Could there be a case where foo[i] leads to a segmentation fault before the address is taken, and that isn't caught by the compiler?
Yes, you are allowed to set a pointer one beyond the end of an array (although you are not actually doing that in your code - you are pointing to the NUL terminator).
Note though that the behaviour on dereferencing a pointer set to one past the end of an array would be undefined.
(Note that &foo[i] is required by the C standard to be evaluated without dereferencing the pointer: i.e. &foo[i + 1] is valid, but foo[i + 1] on its own isn't.)
I'm maintaining C code that does something like this:
char *foo = "hello";
int i;
char *bar;
i = strlen(foo);
bar = &foo[i];
Is this safe?
The code presented conforms with the current and all past C language standards, and has the same well-defined behavior according to each. In particular, be aware that C string literals represent null-terminated character arrays, just like all other C strings, so in the example code foo[i] refers to the terminator, and &foo[i] points into the array (at its last element), not outside it.
Moreover, C also allows obtaining a pointer to just past the end of an array, so even 1 + &foo[i] is valid and conforming. It is not permitted to dereference such a pointer, but you can perform pointer arithmetic and comparisons with it (subject to the normal constraints).
Could there be a case where the foo[i] leads to a
segmentation fault, before it gets a pointer, and isn't covered by the
compiler?
C does not have anything whatsoever to say about segmentation faults. Whenever you receive one, either your implementation has exhibited non-conformance, or (almost always) your program is exercising undefined behavior. The code presented conforms, subject to a correct declaration of strlen() being in scope, and if it appears as part of a complete program that conforms and does not otherwise exercise UB, then there is no reason short of general skepticism to fear a segfault.
I have read
Directly assigning values to C Pointers
However, I am trying to understand this different scenario...
int *ptr = 10000;
printf("value: %d\n", ptr);
printf("value: %d\n", *ptr);
I got a segmentation fault on the second printf.
Now, I am under the impression that 10000 is a memory location because pointers point to the address in the memory. I am also aware that 10000 could be anywhere in the memory (which might already be occupied by some other process)
Therefore, I am thinking the first print is just saying "ok, just give me the value of the address as some integer value", so, ok, I got 10000.
Then I am saying "ok, now dereference it for me", but I have not put anything at that address (it is uninitialized), so I got a segmentation fault.
Maybe my logic is already totally off the track at this point.
UPDATE:
Thanks for all the quick responses.. So here is my understanding.
First,
int *ptr = 10000;
is UB because I cannot assign a pointer to a constant value.
Second, the following is also UB because I am using %d instead of %p.
printf("value: %d\n", ptr)
Third, I have given an address (although it is UB), but I have not initialized it to some value, so the following statement got a segfault.
printf("value: %d\n", *ptr)
Is my understanding correct now ?
thanks.
int *ptr = 10000;
This is not merely undefined behavior. This is a constraint violation.
The expression 10000 is of type int. ptr is of type int*. There is no implicit conversion from int to int* (except for the special case of a null pointer constant, which doesn't apply here).
Any conforming C compiler, on processing this declaration, must issue a diagnostic message. It's permitted for that message to be a non-fatal warning, but once it's issued that message, the program's behavior is undefined.
A compiler could treat it as a fatal error and refuse to compile your program. (In my opinion, compilers should do this.)
If you really wanted to assign ptr to point to address 10000, you could have written:
int *ptr = (int*)10000;
There's no implicit conversion from int to int*, but you can do an explicit conversion with a cast operator.
That's a valid thing to do if you happen to know that 10000 is a valid address for the machine your code will run on. But in general the result of converting an integer to a pointer "is implementation-defined, might not be correctly aligned, might not point to an entity of the referenced type, and might be a trap representation" (N1570 section 6.3.2.3). If 10000 isn't a valid address (and it very probably isn't), then your program still has undefined behavior, even if you try to access the value of the pointer, but especially if you try to dereference it.
This also assumes that converting the integer value 10000 to a pointer type is meaningful. Commonly such a conversion copies the bits of the numeric value, but the C standard doesn't say so. It might do some strange implementation-defined transformation on the number to produce an address.
Addresses (pointer values) are not numbers.
printf("value: %d\n", ptr);
This definitely has undefined behavior. The %d format requires an int argument. On many systems, int and int* aren't even the same size. You might end up printing, say, the high-order half of the pointer value, or even some complete garbage if integers and pointers aren't passed as function arguments in the same way. To print a pointer, use %p and convert the pointer to void*:
printf("value: %p\n", (void *)ptr);
Finally:
printf("value: %d\n", *ptr);
The format string is correct, but just evaluating *ptr has undefined behavior (unless (int*)10000 happens to be a valid address).
Note that "undefined behavior" doesn't mean your program is going to crash. It means that the standard says nothing about what will happen when you run it. (Crashing is probably the best possible outcome; it makes it obvious that there's a bug.)
No, the definition int *ptr = 10000 does not give undefined behaviour.
It converts the literal value 10000 into a pointer, and initialises ptr with that value.
However, in your example
int *ptr = 10000;
printf("value: %d\n", ptr);
printf("value: %d\n", *ptr);
both of the printf() statements give undefined behaviour.
The first gives undefined behaviour because the %d format tells printf() that the corresponding argument is of type int, which ptr is not. In practice (with most compilers/libraries) it will often happily print the value 10000, but that is happenstance. Essentially (and a little over-simplistically), for that to happen, a round-trip conversion (e.g. converting 10000 from int to pointer, and then converting that pointer value back to int) needs to give the same value. Surviving that round trip is NOT guaranteed, although it does happen with some implementations, so the first printf() might APPEAR well behaved despite involving undefined behaviour.
Part of the problem with undefined behaviour is that one possible result is code behaving as the programmer expects. That doesn't make the behaviour defined. It simply means that a particular set of circumstances (behaviour of compiler, operating system, hardware, etc) happen to conspire to give behaviour that seems sensible to the programmer.
The second printf() statement gives undefined behaviour because it dereferences ptr. The standard gives no basis to expect that a pointer with value 10000 corresponds to anything in particular. It might be a location in RAM. It might be a location in video memory. It might be a value that does not correspond to any location in memory that exists on your computer. It might be a logical or physical memory location that your operating system deems your process is not allowed to access (which is actually what causes an access violation under several operating systems, which then send a signal to the process running your program directing it to terminate).
A lot of C compilers (if appropriately configured) will give a warning on the initialisation of ptr because of this - an initialisation like this is easier for the compiler to detect, and usually indicates problems in subsequent code.
This may cause undefined behavior, since the pointer converted from 10000 may be invalid.
Your OS may not allow your program to access the address 10000, so it raises a segmentation fault.
Assigning a specific numerical address to a pointer, as in
int *x = (int *)10000; /* or whatever address */
may make sense on microcontrollers or in low-level code (example: writing an OS).
I followed the discussion on One-byte-off pointer still valid in C?.
The gist of that discussion, as far as I could gather, was that if you have:
char *p = malloc(4);
Then it is OK to compute pointers up to p+4 using pointer arithmetic. If you compute p+5, the behavior is undefined.
I can see why dereferencing p+5 could cause undefined behavior. But undefined behavior using just pointer arithmetic?
Why would the arithmetic operators + and - not be valid operations? I don't see any harm in adding or subtracting a number from a pointer. After all, a pointer is represented by a number that captures the address of an object.
Of course, I was not in the standardization committee :) I am not privy to the discussions they had before codifying the standard. I am just curious. Any insight will be useful.
The simplest answer is that it is conceivable that a machine traps integer overflow. If that were the case, then any pointer arithmetic which wasn't confined to a single storage region might cause overflow, which would cause a trap, disrupting execution of the program. C shouldn't be obliged to check for possible overflow before attempting pointer arithmetic, so the standard allows a C implementation on such a machine to just allow the trap to happen, even if chaos ensues.
Another case is an architecture where memory is segmented, so that a pointer consists of a segment address (with implicit trailing 0s) and an offset. Any given object must fit in a single segment, which means that valid pointer arithmetic can work only on the offset. Again, overflowing the offset in the course of pointer arithmetic might produce random results, and the C implementation is under no obligation to check for that.
Finally, there may well be optimizations which the compiler can produce on the assumption that all pointer arithmetic is valid. As a simple motivating case:
if (iter - 1 < object.end()) {...}
Here the test can be omitted because it must be true for any pointer iter whose value is a valid position in (or just after) object. The UB for invalid pointer arithmetic means that the compiler is not under any obligation to attempt to prove that iter is valid (although it might need to ensure that it is based on a pointer into object), so it can just drop the comparison and proceed to generate unconditional code. Some compilers may do this sort of thing, so watch out :)
Here, by the way, is the important difference between unspecified behaviour and undefined behaviour. Comparing two pointers (of the same type) with == is defined regardless of whether they are pointers into the same object. In particular, if a and b are two different objects of the same type, end_a is a pointer to one-past-the-end of a and begin_b is a pointer to b, then
end_a == begin_b
is unspecified; it will be 1 if and only if b happens to be placed just after a in memory, and 0 otherwise. Since you can't normally rely on knowing that (unless a and b are elements of the same array), the comparison is normally meaningless; but it is not undefined behaviour, and the compiler needs to arrange for either 0 or 1 to be produced (and, moreover, for the same comparison to consistently have the same value, since you can rely on objects not moving around in memory).
One case I can think of where the result of a + or - might give unexpected results is in the case of overflow or underflow.
The question you refer to points out that for p = malloc(4) you can compute p+4 for comparison. One thing this guarantees is that p+4 will not overflow. It doesn't guarantee that p+5 won't.
That is to say, + and - themselves won't cause any problems, but there is a chance, however small, that they will return a value that is unsuitable for comparison.
Performing basic +/- arithmetic on a pointer will not cause a problem as long as it stays within the array (or one past it). The order of pointer values is sequential: &p[0] < &p[1] < ... < &p[n] for an array of n objects. But pointer arithmetic outside this range is not defined: &p[-1] may compare less or greater than &p[0].
int *p = malloc(80 * sizeof *p);
int *q = p + 1000;
printf("p:%p q:%p\n", p, q);
Dereferencing pointers outside their range, or even inside the memory range but unaligned, is a problem.
printf("*p:%d\n", *p); // OK
printf("*p:%d\n", p[79]); // OK
printf("*p:%d\n", p[80]); // Bad, but &p[80] will be greater than &p[79]
printf("*p:%d\n", p[-1]); // Bad, order of p, p[-1] is not defined
printf("*p:%d\n", p[81]); // Bad, order of p[80], p[81] is not defined
char *r = (char*) p;
printf("*p:%d\n", *((int*) (r + 1)) ); // Bad
printf("*p:%d\n", *q); // Bad
Q: Why is p[81] undefined behavior?
A: Example: memory runs from 0 to N-1. Suppose char *p has the value N-81. Then p[0] through p[79] are well defined, and &p[80] is also well defined. &p[81] would need the value N to be consistent, but that overflows, so p + 81 may end up with the value 0, N, or who knows what.
A couple of things here: the reason p+4 is valid in such a case is that computing a pointer to one past the last position is allowed.
p+5 would rarely be a problem in practice, but formally even computing it is undefined; and in my view the real trouble begins when you try to dereference (p+5) or overwrite that address.