I am a beginner with C, and I am wondering how malloc works.
Here is a sample program I wrote while trying to understand its behaviour.
CODE:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int i;
    int *array = malloc(sizeof *array);
    for (i = 0; i < 5; i++) {
        array[i] = i + 1;
    }
    printf("\nArray is: \n");
    for (i = 0; i < 5; i++) {
        printf("%d ", array[i]);
    }
    free(array);
    return 0;
}
OUTPUT:
Array is:
1 2 3 4 5
In the program above, I have only allocated space for 1 element, but the array now holds 5 elements. Since the program runs smoothly without any error, what is the purpose of realloc()?
Could anybody explain why this works?
Thanks in advance.
The fact that the program runs smoothly does not mean it is correct!
Try increasing the 5 in the for loop (500000, for instance, should suffice). At some point, it will stop working and give you a SEGFAULT.
This is called Undefined Behaviour.
valgrind would also warn you about the issue with something like the following.
==16812== Invalid write of size 4
==16812== at 0x40065E: main (test.cpp:27)
If you don't know what valgrind is, check this out: How do I use valgrind to find memory leaks?. (BTW, it's a fantastic tool.)
This should give you some more clarification: Accessing unallocated memory C++
This is typical undefined behavior (UB).
You are not allowed to code like that. As a beginner, treat it as a mistake, a fault, a sin, something very dirty.
Could anybody explain why?
If you need to understand what is really happening (and the details are complex) you need to dive into your implementation details (and you don't want to). For example, on Linux, you could study the source code of your C standard library, of the kernel, of the compiler, etc. And you need to understand the machine code generated by the compiler (so with GCC compile with gcc -S -O1 -fverbose-asm to get an .s assembler file).
See also this (which has more references).
Read as soon as possible Lattner's blog on What Every C Programmer Should Know About Undefined Behavior. Everyone should have read it!
The worst thing about UB is that sadly, sometimes, it appears to "work" like you want it to (but in fact it does not).
So learn as quickly as possible to avoid UB systematically.
BTW, enabling all warnings in the compiler might help (but perhaps not in your particular case). Get into the habit of compiling with gcc -Wall -Wextra -g if using GCC.
Notice that your program doesn't have any arrays. The array variable is a pointer (not an array), so it is very badly named. You need to read more about pointers and C dynamic memory allocation.
int *array = malloc(sizeof *array); //WRONG
is very wrong. The name array is very poorly chosen (it is a pointer, not an array; you should spend days reading about the difference, and about what "arrays decay into pointers" means). You allocate sizeof(*array) bytes, which is exactly the same as sizeof(int) (generally 4 bytes, at least on my machine). So you allocate space for only one int element. Any access beyond that (i.e. with any positive index, e.g. array[1] or array[i] with some positive i) is undefined behavior. And you don't even test for failure of malloc (which can happen).
If you want to allocate memory space for (let's say) 8 int-s, you should use:
int* ptr = malloc(sizeof(int) * 8);
and of course you should check against failure, at least:
if (!ptr) { perror("malloc"); exit(EXIT_FAILURE); }
and you need to initialize that array (the memory you've got contains unpredictable junk), e.g.
for (int i=0; i<8; i++) ptr[i] = 0;
or you could clear all bits (with the same result on all machines I know of) using
memset(ptr, 0, sizeof(int)*8); /* needs #include <string.h> */
Notice that even after such a successful malloc (or a failed one), sizeof(ptr) stays the same (on my Linux/x86-64 box, it is 8 bytes), since it is the size of a pointer, even if you malloc-ed a memory zone for a million int-s.
In practice, when you use C dynamic memory allocation, you need to know, by some convention, the allocated size behind that pointer. In the code above, I used 8 in several places, which is poor style. It would have been better to at least
#define MY_ARRAY_LENGTH 8
and use MY_ARRAY_LENGTH everywhere instead of 8, starting with
int* ptr = malloc(MY_ARRAY_LENGTH*sizeof(int));
In practice, allocated memory often has a runtime-defined size, and you would keep that size somewhere (in a variable, a parameter, etc.).
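As a sketch of that convention (make_int_array is a hypothetical helper name, not a standard function), the length lives in a variable and travels with the pointer:

#include <stdio.h>
#include <stdlib.h>

/* Allocate storage for n ints, zero-initialized, exiting on failure. */
int *make_int_array(size_t n)
{
    int *p = calloc(n, sizeof *p);   /* calloc also zeroes the memory */
    if (!p) { perror("calloc"); exit(EXIT_FAILURE); }
    return p;
}

int main(void)
{
    size_t len = 8;                   /* the size is kept in a variable... */
    int *arr = make_int_array(len);
    for (size_t i = 0; i < len; i++)  /* ...and bounds every loop */
        arr[i] = (int)(i + 1);
    for (size_t i = 0; i < len; i++)
        printf("%d ", arr[i]);
    printf("\n");
    free(arr);
    return 0;
}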
Study the source code of some existing free software project (e.g. on github), you'll learn very useful things.
Read also (perhaps in a week or two) about flexible array members. Sometimes they are very useful.
So as the program runs smoothly without any error
That's just because you were lucky. Keep running this program and you might segfault soon. You were relying on undefined behaviour (UB), which is always A Bad Thing™.
What is the purpose of realloc()?
From the man pages:
void *realloc(void *ptr, size_t size);
The realloc() function changes the size of the memory block pointed to by ptr to size bytes. The contents will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. If the new size is larger than the old size, the added memory will not be initialized. If ptr is NULL, then the call is equivalent to malloc(size), for all values of size; if size is equal to zero, and ptr is not NULL, then the call is equivalent to free(ptr). Unless ptr is NULL, it must have been returned by an earlier call to malloc(), calloc() or realloc(). If the area pointed to was moved, a free(ptr) is done.
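To tie this back to the program in the question: a minimal sketch (the doubling policy here is one common choice, not the only one) of growing the one-element allocation with realloc() as values are appended:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t capacity = 1, count = 5;
    int *array = malloc(capacity * sizeof *array);
    if (!array) { perror("malloc"); return EXIT_FAILURE; }

    for (size_t i = 0; i < count; i++) {
        if (i == capacity) {                /* out of room: grow the block */
            size_t newcap = capacity * 2;   /* doubling is one common policy */
            int *tmp = realloc(array, newcap * sizeof *tmp);
            if (!tmp) { free(array); perror("realloc"); return EXIT_FAILURE; }
            array = tmp;
            capacity = newcap;
        }
        array[i] = (int)(i + 1);
    }
    for (size_t i = 0; i < count; i++)
        printf("%d ", array[i]);
    printf("\n");
    free(array);
    return 0;
}

Assigning realloc's result to a temporary first matters: on failure realloc returns NULL but leaves the original block allocated, so overwriting array directly would leak it.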
Related
I am learning C and am a bit confused about why I don't get any warnings/errors from GCC with the following snippet. I am allocating space for 1 char to a pointer to int; does GCC change something behind the scenes (like silently enlarging the allocation to fit an int)?
#include <stdlib.h>
#include <stdio.h>
typedef int *int_ptr;
int main()
{
    int_ptr ip;
    ip = calloc(1, sizeof(char));
    *ip = 1000;
    printf("%d", *ip);
    free(ip);
    return 0;
}
Update
Having read the answers below, would it still be unsafe and risky if I did it the other way around, e.g. allocating the space of an int to a pointer to char? The source of my confusion is the following answer on Rosetta Code: in the function StringArray StringArray_new(size_t size), the coder seems to be doing exactly this: this->elements = calloc(size, sizeof(int)); where this->elements is a char** elements.
The result of calloc is of type void*, which implicitly gets converted to the int* type. The C programming language and GCC simply trust the programmer to write sensible conversions and thus do not produce any warnings. Your code is technically valid C, even though it produces an invalid memory write at runtime. So no, GCC does not implicitly allocate space for an integer.
If you would like to see warnings of this kind before running (or compilation), you may want to use, e.g., Clang Static Analyzer.
If you would like to see errors of this kind at runtime, run your program with Valgrind.
Update
Allocating space for 1 int (i.e. 4 bytes, generally) and then interpreting it as a char (1 char is 1 byte) will not result in any memory errors, as the space required for an int is larger than the space required for a char. In fact, you could use the result as an array of 4 chars.
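For instance (a minimal sketch, assuming a 4-byte int), the allocation can legally be viewed byte by byte through an unsigned char pointer, as long as every access stays inside the allocated size:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *ip = calloc(1, sizeof(int));            /* sizeof(int) bytes, zeroed */
    if (!ip) return EXIT_FAILURE;

    unsigned char *bytes = (unsigned char *)ip;  /* char access may alias any object */
    for (size_t i = 0; i < sizeof(int); i++)
        bytes[i] = 0xAB;                         /* stays inside the allocation */

    printf("%08X\n", (unsigned)*ip);             /* ABABABAB with 4-byte ints */
    free(ip);
    return 0;
}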
The sizeof operator returns the size of that type as a number of bytes. The calloc function then allocates that number of bytes, it is not aware of what type will be stored in the allocated segment.
While this does not produce any errors, it can indeed be considered a "risky and unsafe" programming practice. Exceptions exist for advanced applications where you'd want to reuse the same memory segment for storing values of a different type.
The code on Rosetta Code you linked to contains a bug on exactly that line: it should allocate memory for char* elements instead of ints. These sizes are generally not equal; on my machine, the size of an int is 4 bytes, while the size of a char* is 8 bytes.
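A minimal sketch of the fix (the struct fields here are assumed from the question, not copied from the linked answer): sizing the allocation off the target expression can never pick the wrong type.

#include <stdlib.h>

typedef struct {
    char **elements;   /* as described in the question */
    size_t size;
} StringArray;

void StringArray_init(StringArray *self, size_t size)
{
    /* sizeof *self->elements is sizeof(char *), not sizeof(int) */
    self->elements = calloc(size, sizeof *self->elements);
    self->size = size;
}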
C has very little type safety, and malloc has none. It allocates exactly as many bytes as you tell it to allocate. It is not the compiler's duty to warn about it; it is the programmer's duty to get the parameters right.
The reason why it "seems to work" is undefined behavior. *ip = 1000; might as well crash. What is undefined behavior and how does it work?
Also, you should never hide pointers behind a typedef. This is very bad practice and only serves to confuse the programmer and everyone reading the code.
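A small illustration of why (hypothetical declarations): the typedef hides the pointer-ness at every declaration site.

typedef int *int_ptr;

int_ptr a, b;   /* a and b are both int*, but nothing on this line says so */
int *c, d;      /* without the typedef the * is visible, and it is clear
                   that only c is a pointer while d is a plain int */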
The compiler only cares that you pass the right number and types of arguments to calloc - it doesn’t check to see if those arguments make sense, since that’s a runtime issue.
Yes, you could probably add some special-case logic to the compiler for when both arguments are constant expressions and sizeof operations, as in this case, but how would it handle a case where both arguments are runtime variables, like calloc( num, size );?
This is one of those cases where C assumes you’re smart enough to know what you’re doing.
The compiler only checks syntax, not semantics.
Your code's syntax is OK, but its semantics are not.
Having recently switched to C, I've been told a thousand ways to Sunday that referencing a value that hasn't been initialized is bad practice and leads to unexpected behavior. Specifically (because my previous language initializes integers to 0), I was told that integers might not be equal to zero when uninitialized. So I decided to put that to the test.
I wrote the following piece of code to test this claim:
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <assert.h>
int main(){
    size_t counter = 0;
    size_t testnum = 2000; //The number of ints to allocate and test.
    for(int i = 0; i < testnum; i++){
        int* temp = malloc(sizeof(int));
        assert(temp != NULL); //Just in case there's no space.
        if(*temp == 0) counter++;
    }
    printf(" %zu", counter); //%zu, since counter is a size_t
    return 0;
}
I compiled it like so (in case it matters):
gcc -std=c99 -pedantic name-of-file.c
Based on what my instructors had said, I expected temp to point to a random integer, and that the counter would not be incremented very often. However, my results blow this assumption out of the water:
testnum: || code returns:
2 2
20 20
200 200
2000 2000
20000 20000
200000 200000
2000000 2000000
... ...
The results go on for a couple more powers of 10 (*2), but you get the point.
I then tested a similar version of the above code, but I initialized an integer array, set every even index to plus 1 of its (uninitialized) previous value, freed the array, and then ran the code above, testing the same number of integers as the size of the array (i.e. testnum). These results are much more interesting:
testnum: || code returns:
2 2
20 20
200 175
2000 1750
20000 17500
200000 200000
2000000 2000000
... ...
Based on this, it's reasonable to conclude that C reuses freed memory (obviously), and that some of those new integer pointers end up pointing at addresses which contain the previously incremented integers. My question is why all of my integer pointers in the first test consistently point to 0. Shouldn't they point to whatever free space on the heap my computer has offered the program, which could (and should, at some point) contain non-zero values?
In other words, why does it seem like all of the new heap space that my C program has access to has been wiped to all 0s?
As you already know, you are invoking undefined behavior, so all bets are off. To explain the particular results you are observing ("why is uninitialized memory that I haven't written to all zeros?"), you first have to understand how malloc works.
First of all, malloc does not just directly ask the system for a page whenever you call it. It has an internal "cache" from which it can hand you memory. Let's say you call malloc(16) twice. The first time you call malloc(16), it will scan the cache, see that it's empty, and request a fresh page (4KB on most systems) from the OS. It then splits this page into two chunks, gives you the smaller chunk, and saves the other chunk in its cache. The second time you call malloc(16), it will see that it has a large enough chunk in its cache, and allocate memory by splitting that chunk again.
freeing memory simply returns it to the cache. There, it may (or may not) be merged with other chunks to form a bigger chunk, and is then used for other allocations. Depending on the details of your allocator, it may also choose to return free pages to the OS if possible.
Now the second piece of the puzzle -- any fresh pages you obtain from the OS are filled with 0s. Why? Imagine it simply handed you an unused page that was previously used by some other process that has now terminated. Now you have a security problem, because by scanning that "uninitialized memory", your process could potentially find sensitive data such as passwords and private keys that were used by the previous process. Note that there is no guarantee by the C language that this happens (it may be guaranteed by the OS, but the C specification doesn't care). It's possible that the OS filled the page with random data, or didn't clear it at all (especially common on embedded devices).
Now you should be able to explain the behavior you're observing. The first time, you are obtaining fresh pages from the OS, so they are empty (again, this is an implementation detail of your OS, not the C language). However, if you malloc, free, then malloc again, there is a chance that you are getting back the same memory that was in the cache. This cached memory is not wiped, since the only process that could have written to it was your own. Hence, you just get whatever data was previously there.
Note: this explains the behavior for your particular malloc implementation. It doesn't generalize to all malloc implementations.
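A quick way to watch this caching in action (purely an implementation detail; the two addresses are not guaranteed to match):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *a = malloc(16);
    printf("first:  %p\n", (void *)a);
    free(a);               /* the chunk goes back to the allocator's cache */
    int *b = malloc(16);   /* often, but not always, the same chunk returns */
    printf("second: %p\n", (void *)b);
    free(b);
    return 0;
}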
First off, you need to understand that C is a language described by a standard and implemented by several compilers (gcc, clang, icc, ...). In several cases, the standard says that certain expressions or operations result in undefined behavior.
What is important to understand is that this means you have no guarantees on what the behavior will be. In fact any compiler/implementation is basically free to do whatever it wants!
In your example, this means you cannot make any assumptions about what the uninitialized memory will contain. Assuming it will be random, or that it will contain elements of a previously freed object, is just as wrong as assuming that it is zero, because any of that could happen at any time.
Many compilers (or OSes) will consistently do the same thing (such as the 0s you observe), but that is also not guaranteed.
(To maybe see different behaviors, try using a different compiler or different flags.)
Undefined behavior does not mean "random behavior" nor does it mean "the program will crash." Undefined behavior means "the compiler is allowed to assume that this never happens," and "if this does happen, the program could do anything." Anything includes doing something boring and predictable.
Also, the implementation is allowed to define any instance of undefined behavior. For instance, ISO C never mentions the header unistd.h, so #include <unistd.h> has undefined behavior, but on an implementation conforming to POSIX, it has well-defined and documented behavior.
The program you wrote is probably observing uninitialized malloced memory to be zero because, nowadays, the system primitives for allocating memory (sbrk and mmap on Unix, VirtualAlloc on Windows) always zero out the memory before returning it. That's documented behavior for the primitives, but it is not documented behavior for malloc, so you can only rely on it if you call the primitives directly. (Note that only the malloc implementation is allowed to call sbrk.)
A better demonstration is something like this:
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    {
        int *x = malloc(sizeof(int));
        *x = 0xDEADBEEF;
        free(x);
    }
    {
        int *y = malloc(sizeof(int));
        printf("%08X\n", *y);
    }
    return 0;
}
which has pretty good odds of printing "DEADBEEF" (but is allowed to print 00000000, or 5E5E5E5E, or make demons fly out of your nose).
Another better demonstration would be any program that makes a control-flow decision based on the value of an uninitialized variable, e.g.
int foo(int x)
{
    int y;
    if (y == 5)
        return x;
    return 0;
}
Current versions of gcc and clang will generate code that always returns 0, but the current version of ICC will generate code that returns either 0 or the value of x, depending on whether register EDX happens to equal 5 when the function is called. Both possibilities are correct, and so is generating code that always returns x, and so is generating code that makes demons fly out of your nose.
Useless deliberations, wrong assumptions, wrong test. In your test, every iteration mallocs sizeof(int) of fresh memory. To see the UB you wanted to see, you should put something in that allocated memory and then free it. Otherwise you never reuse it, you just leak it. Most OSes clear all the memory handed to a program before it executes, for security reasons (so when your program starts, everything has been zeroed or initialized to static values).
Change your program to:
int main(){
    size_t counter = 0;
    size_t testnum = 2000; //The number of ints to allocate and test.
    for(int i = 0; i < testnum; i++){
        int* temp = malloc(sizeof(int));
        assert(temp != NULL); //Just in case there's no space.
        if(*temp == 0) counter++;
        *temp = rand();  //dirty the memory...
        free(temp);      //...and hand it back so it can be reused
    }
    printf(" %zu", counter);
    return 0;
}
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *arr = (int*)malloc(10);
    int i;
    for(i = 0; i < 100; i++)
    {
        arr[i] = i;
        printf("%d", arr[i]);
    }
    return 0;
}
I am running the above program. The call to malloc allocates 10 bytes of memory, and since (I assumed) each int variable takes up 2 bytes, I can store 5 int variables of 2 bytes each, making up the 10 bytes I dynamically allocated.
But the for loop lets me enter values even up to the 99th index, and stores all of those values as well. So if I am storing 100 int values, that means 200 bytes of memory, whereas I allocated only 10 bytes.
So where is the flaw in this code, and how does malloc behave? If malloc behaves non-deterministically in this manner, how do we achieve proper dynamic memory handling?
The flaw is in your expectations. You lied to the compiler: "I only need 10 bytes", when you actually wrote 100*sizeof(int) bytes. Writing beyond an allocated area is undefined behavior, and anything may happen, ranging from nothing, to what you expect, to crashes.
If you do silly things, expect silly behaviour.
That said, malloc is usually implemented to ask the OS for chunks of memory in sizes the OS prefers (such as a page) and then to manage that memory itself. This speeds up future mallocs, especially if you are making lots of small allocations, and it reduces the number of system calls, which are quite expensive.
First of all, on most operating systems the size of int is 4 bytes. You can check that with:
printf("the size of int is %zu\n", sizeof(int));
When you call the malloc function, you allocate space on the heap. The heap is a region of memory set aside for dynamic allocation. There's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time. Because your program is small and nothing else collides with those heap addresses, you can run this loop with even more than 100 values and it still appears to work.
When you know what you are doing with malloc, you build programs with proper dynamic memory handling. When your code performs an improper malloc access, the behaviour of the program is "unknown". But you can use the gdb debugger to find where the segmentation fault occurs and inspect how things stand in the heap.
malloc behaves exactly as it states: it allocates n bytes of memory, nothing more. Your code might run on your PC, but operating on non-allocated memory is undefined behavior.
A small note...
An int might not be 2 bytes; its size varies across architectures/SDKs. When you want to allocate memory for n integer elements, you should use malloc( n * sizeof( int ) ).
In short, you manage dynamic memory with the other tools that the language provides (sizeof, realloc, free, etc.).
C doesn't do any bounds-checking on array accesses; if you define an array of 10 elements, and attempt to write to a[99], the compiler won't do anything to stop you. The behavior is undefined, meaning the compiler isn't required to do anything in particular about that situation. It may "work" in the sense that it won't crash, but you've just clobbered something that may cause problems later on.
When doing a malloc, don't think in terms of bytes, think in terms of elements. If you want to allocate space for N integers, write
int *arr = malloc( N * sizeof *arr );
and let the compiler figure out the number of bytes.
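Putting that together with a bounds-respecting loop, a corrected sketch of the program above might look like this (the element count 100 is kept from the question):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 100;                      /* number of elements, decided once */
    int *arr = malloc(n * sizeof *arr);  /* element count, not raw bytes */
    if (arr == NULL) {
        perror("malloc");
        return EXIT_FAILURE;
    }
    for (size_t i = 0; i < n; i++) {     /* loop bound matches the allocation */
        arr[i] = (int)i;
        printf("%d ", arr[i]);
    }
    printf("\n");
    free(arr);
    return 0;
}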
Why doesn't this segfault?
#include <stdio.h>
int main()
{
    int i;
    int arr[] = {1, 2, 3, 4};
    for(i = 0; i < 8; i++)
    {
        arr[i] = i;
        printf(" %d", arr[i]);
    }
    printf("\n");
    return 0;
}
But it does when I replace 8 with 9 in the for loop.
Note: I am trying it on 32 bit crunchbang linux
Technically speaking, this program results in undefined behavior, meaning that there are absolutely no guarantees whatsoever about what this program is allowed to do. It could in principle format your hard drive, email nasty messages to all of your friends, set your computer on fire, or become sentient and enslave humanity.
In this case, the undefined behavior when n = 8 happens to not do anything bad, while the undefined behavior when n = 9 causes a segfault. Both are totally permissible behaviors for the program, but aren't at all guaranteed to be portable.
Hope this helps!
@templatetypedef is right; he beat me to the answer. Undefined behavior doesn't necessarily mean segfault.
But I would like to offer some speculation as to why it segfaults for 9 but not 8. Normally, variables like int i and int arr[] reside on the stack, which grows down. So, i might reside at address 0x4000, and if it's a 4-byte int, then arr[0] is at 0x4004, arr[1] is at 0x4008, and so on. In this case, the compiler will most likely have allocated 16 bytes for arr, ending at 0x4014 (the first byte after arr, i.e. the address of arr[4]). But there are typically other things on the stack besides the variables you manually declare. For instance, the arguments to the printf call are probably on the stack, and there could be other bookkeeping information. So if arr[8] (the last element written when the loop bound is 9) coincides with a stack location that the compiler is using for something else, you will inadvertently clobber that information, which can lead to a segfault.
Alternatively, if arr[7] sits at the very edge of the stack memory the OS has mapped for your process, then the OS will have configured your processor to literally refuse to execute any load or store at the address that coincides with arr[8]. I think this scenario is unlikely, since stacks are allocated in chunks of at least several kilobytes, and for such a short program you should be nowhere near the end of the allocated stack.
templatetypedef's answer is correct; this is a case of undefined behavior (which is not as uncommon as you might think), but I would like to add something:
9*sizeof(int) is not a power of 2. When compilers allocate memory, they usually allocate in powers of two bytes to prevent fragmentation of the heap.
You might wonder why it didn't instead allocate 4*sizeof(int), as that would be exactly what you asked for. I'm not sure, but it might be that the compiler allocates a couple of extra bytes, or that it has a minimum amount of memory it will allocate for arrays. It depends on the compiler and the options you compiled your code with.
Try running your code without optimizations (-O0 on the command line); it might then allocate the exact amount of memory you need and segfault for i >= 4.
I might be wrong, though; I'm not sure whether C compilers place such arrays on the heap or the stack.
I have a question about memory management in C (and GCC 4.3.3 under Debian GNU/Linux).
According to The C Programming Language by K&R (chap. 7.8.5), freeing a pointer and then dereferencing it is an error. But I have some doubts, since I've noticed that sometimes, as in the source I've pasted below, the compiler (?) seems to work according to a well-defined principle.
I have a trivial program like this, which shows how to return a dynamically allocated array:
#include <stdio.h>
#include <stdlib.h>

int * ret_array(int n)
{
    int * arr = (int *) malloc(10 * sizeof(int));
    int i;
    for (i = 0; i < n; i++)
    {
        arr[i] = i*2;
    }
    printf("Address pointer in ret_array: %p\n", (void *) arr);
    return arr;
}

int * ret_oth_array(int n)
{
    int * arr = (int *) malloc(10 * sizeof(int));
    int i;
    for (i = 0; i < n; i++)
    {
        arr[i] = i+n;
    }
    printf("Address pointer in ret_oth_array: %p\n", (void *) arr);
    return arr;
}

int main(void)
{
    int *p = NULL;
    int *x = NULL;
    p = ret_array(5);
    x = ret_oth_array(6);
    printf("Address contained in p: %p\nValue of *p: %d\n", (void *) p, *p);
    free(x);
    free(p);
    printf("Memory freed.\n");
    printf("*(p+4) = %d\n", *(p+4));
    printf("*x = %d\n", *x);
    return 0;
}
If I compile it with some arguments (-ansi -Wall -pedantic-errors), it doesn't raise any errors or warnings. Not only that; it also runs fine:
Address pointer in ret_array: 0x8269008
Address pointer in ret_oth_array: 0x8269038
Address contained in p: 0x8269008
Value of *p: 0
Memory freed.
*(p+4) = 8
*x = 0
*(p+4) is 8 and *x is 0.
Why does this happen?
If *(p+4) is 8, shouldn't *x be 6, since the first element of the x-array is 6?
Another strange thing happens if I try to change the order of the calls to free.
E.g.:
int main(int argc, char * argv[])
{
    /* ... code ... */
    free(p);
    free(x);
    printf("Memory freed.\n");
    printf("*(p+4) = %d\n", *(p+4));
    printf("*x = %d\n", *x);
    return 0;
}
In fact in this case the output (on my machine) will be:
*(p+4) = 8
*x = 142106624
Why is the x pointer really "freed", while the p pointer is freed (I hope) "differently"?
OK, I know that after freeing memory I should set the pointers to NULL, but I was just curious :P
It is undefined behaviour, and it is an error to dereference a freed pointer, as strange things may (and will) happen.
free() doesn't change the value of the pointer, so it keeps pointing into the heap in the process address space - that's why you don't get a segfault. However, this is not specified, and in theory on some platforms you could get a segfault when dereferencing a pointer immediately after freeing it.
To prevent this, it is a good habit to assign NULL to the pointer after freeing it, so that it will fail in a predictable way - a segfault.
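In code, the habit is just one extra line (a minimal sketch; the later dereference would still be a bug, but now it fails the same way every time on typical platforms):

#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof *p);
    if (p) *p = 42;
    free(p);
    p = NULL;   /* any later *p now dereferences NULL: a predictable crash
                   on typical platforms, instead of silently reading junk */
    return 0;
}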
Please note that on some OSes (HP-UX, maybe some others as well), dereferencing a NULL pointer is allowed, precisely to prevent segfaults (and thus to hide problems). I find that rather stupid, as it makes things much more difficult to diagnose, although I don't know the full story behind it.
free() (and malloc()) are not from gcc. They come from the C library, which on Debian is usually glibc. So what you are seeing is glibc's behavior, not gcc's (and it would change with a different C library, or a different version of the C library).
In particular, after you call free() you are releasing the memory block malloc() gave you. It's not yours anymore. Since it is not supposed to be used anymore, the memory manager within glibc is free to do whatever it wants with the block, including using parts of it for its own memory structures (which is probably why you are seeing its contents change; they have been overwritten with bookkeeping information, probably pointers to other blocks or counters of some sort).
There are other things that can happen; in particular, if your allocation was large enough, glibc can ask the kernel for a separate memory block for it (with mmap() or similar calls) and release it back to the kernel during free(). In that case, your program would crash. In theory this can also happen in some circumstances even with small allocations (glibc can grow/shrink the heap).
This is probably not the answer you are looking for, but I'll give it a try anyway:
Since you're playing with undefined behaviour that you should never depend on in any way, shape or form, what good does it do to know how exactly one given implementation handles it?
Since gcc is free to change that handling at any given time, between versions, architectures or depending on the position and brightness of the moon, there's no use whatsoever in knowing how it handles it right now. At least not to the developer that uses gcc.
*(p+4) is 8 and *x is 0. Why does this happen? If *(p+4) is 8, shouldn't *x be 6, since the first element of the x-array is 6?
One possible explanation for this would be that printf("...%i..."...) might internally use malloc to allocate a temporary buffer for its string formatting. That would overwrite the contents of both arrays after the first output.
Generally, I would consider it an error if a program relies on the value of a pointer after it has been freed. I would even say that it's a very bad code smell if it keeps the value of a pointer after it has been freed (instead of letting it go out of scope or overwriting it with NULL). Even if it works under very special circumstances (single-threaded code with a specific heap manager).
Once you free a dynamically allocated block, it is not yours anymore. The memory manager is free to do whatever it sees fit with that piece of memory you were pointing to. The compiler doesn't do anything with freed blocks of memory, as far as I know, because free is a library function rather than part of the language core; the compiler just inserts a call to the underlying allocator.
I just want to say: it is left undefined by the language, so you would have to study your OS and C library to know what happens to that piece of memory after freeing it. The behavior may even look random, because sometimes other code asks for memory in between, and sometimes it doesn't!
By the way, it is different on my machine: the value changes for both pointers.
Although the behavior you're seeing seems to be consistent, it is not guaranteed to be so. Unforeseen circumstances may cause this behavior to change (let alone the fact that it is completely implementation-dependent).
Specifically, in your example you free() the arrays and then read the old contents when you access them. If you make additional malloc() calls after the free(), chances are that the old contents will be lost.
Even if the memory is freed, it is not necessarily reused for some other purpose. Old pointers into your process's memory are still valid pointers (though now to unallocated memory), so you do not get segmentation faults either.