Segmentation fault while accessing memory area [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Invalid read/write sometimes creates segmentation fault and sometimes does not
I was doing some experimentation with malloc and wrote this very small program on a Linux machine:
#include <stdlib.h>

int main(){
    int *p = NULL;
    p = (int *)malloc(10);
    *(p + 33*1000) = 5;
    free(p);
    return 0;
}
This program does not give a segmentation fault, but if I change the assignment to this
*(p + 34*1000) = 5;
Then it gives a segmentation fault. On my system the page size is 4 KB.
I am not able to explain why it gives a segmentation fault only at around 130 KB past p (33*1000 four-byte ints is about 129 KB, and 34*1000 is about 133 KB).
If anyone can explain this from the perspective of memory management in Linux, that would be great.

You are accessing memory beyond what you allocated for p with both *(p+33*1000) and *(p+34*1000), which is undefined behaviour. You can't reason about it: it may "work", crash, or do anything at all.

You are modifying memory that you have not allocated yourself - the address you are writing to is way beyond the limits of your array. Whenever you write beyond an array's bounds you run the risk of a segfault - it depends on the memory location. It may or may not segfault depending on the address, but it is never a good thing to do, and the results will be unpredictable.

This program exhibits undefined behavior (per the C standard) and, strictly speaking, there's nothing else to explain about it.
The language standard does not in any way describe how memory management is or should be implemented at the low level on any particular platform. Some memory areas can be accessible despite you not explicitly allocating them.

With malloc(10) you allocated 10 bytes, which on a system with 4-byte ints is room for only two whole ints, so only *(p+0) and *(p+1) may be dereferenced. (Had you allocated 10 * sizeof(int) bytes, the valid accesses would be *(p+0), *(p+1), ..., *(p+9).) Anything more and you're beyond the extent of what you've allocated.
Dereferencing elsewhere means you may be using an invalid pointer, and hence the segmentation fault.

Maybe that's simply the edge of the memory accessible to your process.
In that case every offset <= 33*1000 happens to land on mapped memory, and every offset >= 34*1000 does not.
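For completeness, a minimal sketch of the allocation done safely, assuming the intent was room for ten ints (note that the original malloc(10) requests only 10 bytes):

#include <stdlib.h>

int main(void){
    int *p = malloc(10 * sizeof *p);   /* room for 10 ints, not 10 bytes */
    if (p == NULL)
        return 1;                      /* allocation can fail */
    p[9] = 5;                          /* highest valid index */
    free(p);
    return 0;
}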

Related

what happens when we only malloc size of 1 int and not n ints in an array? [duplicate]

I have this code in C which takes in a bunch of chars
#include <stdio.h>
#define NEWLINE '\n'

int main()
{
    char c;
    char str[6];
    int i = 0;
    while ((c = getchar()) != NEWLINE)
    {
        str[i] = c;
        ++i;
        printf("%d\n", i);
    }
    return 0;
}
Input is: testtesttest
Output:
1
2
3
4
5
6
7
8
117
118
119
120
My questions are:
Why don't I get an out of bounds (segmentation fault) exception although I clearly exceed the capacity of the array?
Why do the numbers in the output suddenly jump to very big numbers?
I tried this in C++ and got the same behavior. Could anyone please explain what is the reason for this?
C doesn't check array boundaries. A segmentation fault will only occur if you try to dereference a pointer to memory that your program doesn't have permission to access. Simply going past the end of an array is unlikely to cause that behaviour. Undefined behaviour is just that - undefined. It may appear to work just fine, but you shouldn't be relying on its safety.
Your program causes undefined behaviour by accessing memory past the end of the array. In this case, it looks like one of your str[i] = c writes overwrites the value in i.
C++ has the same rules as C does in this case.
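A minimal sketch of the same loop with an explicit bounds check; note also that getchar() returns an int, so c should be an int to distinguish EOF from valid characters:

#include <stdio.h>
#define NEWLINE '\n'

int main(void)
{
    int c;                               /* int, so EOF is representable */
    char str[6];
    int i = 0;
    while (i < (int)sizeof str - 1 &&    /* stop before overflowing str */
           (c = getchar()) != NEWLINE && c != EOF)
    {
        str[i] = (char)c;
        ++i;
        printf("%d\n", i);
    }
    str[i] = '\0';                       /* keep str a valid C string */
    return 0;
}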
When you access an array index, C and C++ don't do bound checking. Segmentation faults only happen when you try to read or write to a page that was not allocated (or try to do something on a page which isn't permitted, e.g. trying to write to a read-only page), but since pages are usually pretty big (multiples of a few kilobytes; on Mac OS, multiples of 4 KB), it often leaves you with lots of room to overflow.
If your array is on the stack (like yours), it can be even worse as the stack is usually pretty large (up to several megabytes). This is also the cause of security concerns: writing past the bounds of an array on the stack may overwrite the return address of the function and lead to arbitrary code execution (the famous "buffer overflow" security breaches).
The values you get when you read are just what happens to exist at this particular place. They are completely undefined.
If you use C++ (and are lucky enough to work with C++11), the standard defines the std::array<T, N> type, which is an array that knows its bounds. The at method will throw if you try to read past the end of it.
C does not check array bounds.
In fact, a segmentation fault isn't specifically a runtime error generated by exceeding the array bounds. Rather, it is a result of memory protection that is provided by the operating system. It occurs when your process tries to access memory that does not belong to it, or if it tries to access a memory address that doesn't exist.
Writing outside array bounds (actually even just performing the pointer arithmetic/array subscripting, even if you don't use the result to read or write anything) results in undefined behavior. Undefined behavior is not a reported or reportable error; it means your program could do anything at all. It's very dangerous, and you are fully responsible for avoiding it. C is not Java/Python/etc.
Memory allocation is more complicated than it seems. The variable str, in this case, is on the stack, next to other variables, so it's not followed by unallocated memory. Memory is also usually word-aligned (one "word" is four to eight bytes). You were possibly messing with the value of another variable, or with some "padding" (empty space added to maintain word alignment), or something else entirely.
Like R.. said, it's undefined behavior. Out-of-bounds conditions could cause a segfault... or they could cause silent memory corruption. If you're modifying memory which has already been allocated, this will not be caught by the operating system. That's why out-of-bounds errors are so insidious in C.
Because C and C++ don't check bounds.
Arrays are internally pointers to a location in memory. When you call arr[index] what it does is:
type value = *(arr + index);
The results are big numbers (though not necessarily) because they're garbage values, just like an uninitialized variable.
To catch this at run time, compile with AddressSanitizer:
gcc -fsanitize=address -ggdb -o test test.c
There is more information here.


Under what circumstances does a segmentation fault occur? [duplicate]

What is a segmentation fault? Is it different in C and C++? How are segmentation faults and dangling pointers related?
Segmentation fault is a specific kind of error caused by accessing memory that “does not belong to you.” It’s a helper mechanism that keeps you from corrupting the memory and introducing hard-to-debug memory bugs. Whenever you get a segfault you know you are doing something wrong with memory – accessing a variable that has already been freed, writing to a read-only portion of the memory, etc. Segmentation fault is essentially the same in most languages that let you mess with memory management; there is no fundamental difference between segfaults in C and C++.
There are many ways to get a segfault, at least in the lower-level languages such as C(++). A common way to get a segfault is to dereference a null pointer:
int *p = NULL;
*p = 1;
Another segfault happens when you try to write to a portion of memory that was marked as read-only:
char *str = "Foo"; // Compiler marks the constant string as read-only
*str = 'b'; // Which means this is illegal and results in a segfault
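A sketch of the usual fix: copy the literal into a writable array instead of pointing at it:

char str[] = "Foo"; // the literal's contents are copied into a modifiable array
str[0] = 'b';       // fine: this writes to the array copy, not the literal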
A dangling pointer points to a thing that does not exist anymore, like here:
char *p = NULL;
{
    char c;
    p = &c;
}
// Now p is dangling
The pointer p dangles because it points to the character variable c that ceased to exist after the block ended. And when you try to dereference a dangling pointer (like *p='A'), you may get a segfault.
It is worth noting that a segmentation fault isn't caused by directly accessing another process's memory (which is something I hear sometimes), as that is simply not possible. With virtual memory, every process has its own virtual address space, and there is no way to reach another one using any value of a pointer. Exceptions to this are shared libraries, which are the same physical pages mapped at (possibly) different virtual addresses, and kernel memory, which is mapped the same way in every process (to avoid TLB flushing on syscalls, I think). And things like shmat ;) - these are what I count as 'indirect' access. One can check, however, that they are usually located a long way from the process's code, and we are usually able to access them (that is why they are there; nevertheless, accessing them in an improper way will produce a segmentation fault).
Still, a segmentation fault can occur when accessing our own (process) memory in an improper way (for instance, trying to write to non-writable space). But the most common reason for it is access to a part of the virtual address space that is not mapped to physical memory at all.
And all of this applies to virtual memory systems.
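A minimal Linux-specific sketch of that "improper access to our own memory" case: ask the OS for a page without write permission, then write to it (assumes a POSIX system with MAP_ANONYMOUS):

#include <sys/mman.h>

int main(void)
{
    /* map one page of our own address space, readable but not writable */
    char *p = mmap(NULL, 4096, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    p[0] = 'x';   /* write to a non-writable mapping: delivers SIGSEGV */
    return 0;
}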
A segmentation fault is caused by a request for a page that the process does not have listed in its descriptor table, or an invalid request for a page that it does have listed (e.g. a write request on a read-only page).
A dangling pointer is a pointer that may or may not point to a valid page, but does point to an "unexpected" segment of memory.
To be honest, as other posters have mentioned, Wikipedia has a very good article on this so have a look there. This type of error is very common and often called other things such as Access Violation or General Protection Fault.
They are no different in C, C++ or any other language that allows pointers. These kinds of errors are usually caused by pointers that are
Used before being properly initialised
Used after the memory they point to has been freed, reallocated, or deleted.
Used in an indexed array where the index is outside the array bounds. This generally only happens when you're doing pointer math on traditional arrays or C strings, not on STL/Boost-based collections (in C++).
According to Wikipedia:
A segmentation fault occurs when a program attempts to access a memory location that it is not allowed to access, or attempts to access a memory location in a way that is not allowed (for example, attempting to write to a read-only location, or to overwrite part of the operating system).
A segmentation fault can also be caused by hardware failures, in this case faulty RAM. This is a less common cause, but if you can't find an error in your code, a memtest might help you.
The solution in that case is to replace the RAM.
Edit:
Here is a reference: Segmentation fault by hardware
Wikipedia's Segmentation_fault page has a very nice description of it, pointing out the causes and reasons. Have a look at the wiki for a detailed description.
In computing, a segmentation fault (often shortened to segfault) or access violation is a fault raised by hardware with memory protection, notifying an operating system (OS) about a memory access violation.
The following are some typical causes of a segmentation fault:
Dereferencing NULL pointers – this is special-cased by memory management hardware
Attempting to access a nonexistent memory address (outside process's address space)
Attempting to access memory the program does not have rights to (such as kernel structures in process context)
Attempting to write read-only memory (such as code segment)
These in turn are often caused by programming errors that result in invalid memory access:
Dereferencing or assigning to an uninitialized pointer (wild pointer, which points to a random memory address)
Dereferencing or assigning to a freed pointer (dangling pointer, which points to memory that has been freed/deallocated/deleted)
A buffer overflow.
A stack overflow (see the recursion sketch after this list).
Attempting to execute a program that does not compile correctly. (Some compilers will output an executable file despite the presence of compile-time errors.)
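For the stack-overflow item above, a minimal sketch (assuming optimization is off, since a compiler may turn the recursion into a loop):

/* unbounded recursion eventually exhausts the stack; the guard page
   below the stack turns the next access into a SIGSEGV */
void recurse(void)
{
    char pad[1024];   /* burn stack space faster */
    pad[0] = 0;
    recurse();
}

int main(void)
{
    recurse();
    return 0;
}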
A segmentation fault occurs when a process (a running instance of a program) tries to access a read-only memory address, a memory range being used by another process, or a non-existent (invalid) memory address.
The dangling reference (pointer) problem means trying to access an object or variable whose contents have already been deleted from memory, e.g.:
int *arr = new int[20];
delete[] arr;    // array delete must match new[]
cout << arr[1];  // dangling problem occurs here
In simple words: a segmentation fault is the operating system sending a signal to the program saying that it has detected an illegal memory access and is prematurely terminating the program to prevent memory from being corrupted.
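A sketch of observing that signal directly. This is for illustration only: printf and exit are not async-signal-safe, and after a real fault the program state is unreliable, so a production handler would use write() and _exit():

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void on_segv(int sig)
{
    printf("caught signal %d: illegal memory access\n", sig);
    exit(1);   /* never return into the faulting instruction */
}

int main(void)
{
    signal(SIGSEGV, on_segv);
    int *p = NULL;
    *p = 1;    /* deliberate illegal access: the OS delivers SIGSEGV */
    return 0;
}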
There are several good explanations of "Segmentation fault" in the answers, but since with segmentation fault often there's a dump of the memory content, I wanted to share where the relationship between the "core dumped" part in Segmentation fault (core dumped) and memory comes from:
From about 1955 to 1975 - before semiconductor memory - the dominant technology in computer memory used tiny magnetic doughnuts strung on copper wires. The doughnuts were known as "ferrite cores" and main memory thus known as "core memory" or "core".
Taken from here.
"Segmentation fault" means that you tried to access memory that you do not have access to.
The first problem is with the arguments to main. The main function should be int main(int argc, char *argv[]), and you should check that argc is at least 2 before accessing argv[1].
Also, since you're passing in a float to printf (which, by the way, gets converted to a double when passing to printf), you should use the %f format specifier. The %s format specifier is for strings ('\0'-terminated character arrays).
The simple meaning of a segmentation fault is that you are trying to access memory that doesn't belong to you. It occurs when we attempt to read or write a read-only memory location, or to use memory that has been freed. In other words, it is a kind of memory corruption.
Below I list common mistakes by programmers that lead to a segmentation fault.
Using scanf() the wrong way (forgetting the &):
int num;
scanf("%d", num); // must use &num instead of num
Using a pointer the wrong way:
int *num;
printf("%d", *num); // num is uninitialized; point it at a valid memory address before dereferencing it
Modifying a string literal (the pointer tries to write to read-only memory):
char *str;
str = "GfG";     // stored in a read-only part of the data segment
*(str+1) = 'n';  // problem: trying to modify read-only memory
Trying to reach through an address that has already been freed:
int* num = malloc(8);  // allocate memory to num
*num = 100;
free(num);             // deallocate the space allocated to num
*num = 110;            // num was already freed, so this is undefined and may segfault
Stack overflow: running out of memory on the stack.
Accessing an array out of bounds.
Using the wrong format specifiers with printf() and scanf().
Consider the following snippets of Code,
SNIPPET 1
int *number = NULL;
*number = 1;
SNIPPET 2
int *number = malloc(sizeof(int));
*number = 1;
I'll assume you know the meaning of the functions malloc() and sizeof() if you are asking this question.
Now that that is settled,
SNIPPET 1 will almost certainly cause a segmentation fault,
while SNIPPET 2 will not.
Here's why.
The first line of snippet 1 creates a pointer variable (number) to store the address of some other object, but in this case it is initialized to NULL.
The second line of snippet 2, on the other hand, creates the same variable and gives it a valid memory address, because malloc() is a function in C/C++ that returns the address of a freshly allocated block of memory.
The point is you cannot put water inside a bowl that has not been bought OR a bowl that has been bought but has not been authorized for use by you.
When you try to do that, the computer is alerted and it throws a SegFault error.
You should only face these errors in languages that are close to the low level, like C/C++. Other high-level languages provide abstractions that ensure you do not make this error.
It is also paramount to understand that segmentation faults are not language-specific.
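Staying with the bowl analogy, a minimal sketch of the safe pattern: buy the bowl (malloc), check that you actually got it, and only then pour:

#include <stdlib.h>

int main(void)
{
    int *number = malloc(sizeof *number);  /* buy the bowl */
    if (number == NULL)                    /* check you actually got it */
        return 1;
    *number = 1;                           /* now pouring the water is safe */
    free(number);
    return 0;
}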
There are enough definitions of segmentation fault already, so I would like to quote a few examples I came across while programming; they might seem like silly mistakes, but they can waste a lot of time.
You can get a segmentation fault in the case below, where the argument type doesn't match the format specifier in printf:
#include <stdio.h>

int main(){
    int a = 5;
    printf("%s", a);   // %s expects a char *, but a is an int
    return 0;
}
output : Segmentation Fault (SIGSEGV)
When you forget to allocate memory for a pointer but try to use it:
#include <stdio.h>

typedef struct{
    int a;
} myStruct;

int main(){
    myStruct *s;
    /* few lines of code */
    s->a = 5;   // s was never pointed at a myStruct
    return 0;
}
output : Segmentation Fault (SIGSEGV)
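A sketch of the fix for that second case: point s at real storage before using it (the printf is added here just to show the value):

#include <stdio.h>
#include <stdlib.h>

typedef struct{
    int a;
} myStruct;

int main(void)
{
    myStruct *s = malloc(sizeof *s);   /* point s at real storage first */
    if (s == NULL)
        return 1;
    s->a = 5;
    printf("%d\n", s->a);
    free(s);
    return 0;
}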
In computing, a segmentation fault or access violation is a fault, or failure condition, raised by hardware with memory protection, notifying an operating system the software has attempted to access a restricted area of memory. - WIKIPEDIA
You might be accessing the computer's memory with the wrong data type. Your case might be like the code below:
#include <stdio.h>

int main(int argc, char *argv[]) {
    char A = 'asd';   // 'asd' is a multi-character constant, not a string
    puts(A);          // puts() expects a const char *, but A is a char
    return 0;
}
'asd' is a chain of characters, not a single character, so it does not fit in a char data type. Worse, puts() expects a const char *, so the value stored in A is interpreted as an address, and the program reads from the wrong place in memory.
Storing this string or character chain as a single char is trying to fit a square peg in a round hole.
Terminated due to signal: SEGMENTATION FAULT (11)
A segmentation fault is like trying to breathe underwater: your lungs were not made for that. Reserving memory for an integer and then trying to operate on it as another data type won't work at all.
A segmentation fault or access violation occurs when a program attempts to access a memory location that does not exist, or attempts to access a memory location in a way that is not allowed.
/* "Array out of bounds" error
valid indices for array foo
are 0, 1, ... 999 */
int foo[1000];
for (int i = 0; i <= 1000 ; i++)
foo[i] = i;
Here foo[1000] does not exist, so a segfault occurs.
Causes of segmentation faults:
They arise primarily from errors in the use of pointers for virtual memory addressing, particularly illegal accesses.
De-referencing NULL pointers – this is special-cased by memory management hardware.
Attempting to access a nonexistent memory address (outside process’s address space).
Attempting to access memory the program does not have rights to (such as kernel structures in process context).
Attempting to write read-only memory (such as code segment).


function to free memory of 1D Array [duplicate]

This question already has answers here:
How do malloc() and free() work?
(13 answers)
Closed 8 years ago.
I am new at programming and I just don't get this. I am supposed to write a function which takes a 1D array as an argument and frees this array.
I've got this:
#include <stdio.h>
#include <stdlib.h>

void destroy(double A[])
{
    free(A);
}
and my main:
int main(void)
{
    double *swrmeg = malloc(10 * sizeof(double));
    swrmeg[0] = 3.2;
    destroy(swrmeg);
    printf("%lf\n", swrmeg[0]);
    return 0;
}
This is supposed to give a segmentation fault, but it does not; it prints the first double of the array. This means the array has not been freed. Any ideas why this happens?
And is there a proper way to do the freeing in a function?
You're freeing it correctly.
Doing something wrong, like accessing a piece of memory after it's been freed, doesn't necessarily mean you'll get a segmentation fault, any more than driving on the wrong side of the road means you'll necessarily have an accident.
Segfaults are not guaranteed when doing undefined operations; they just sometimes occur.
What is actually occurring in your case is that the memory was assigned to your program by the malloc, and then your program decided it didn't need it in the free statement; however, the operating system has decided not to move its memory fences in a manner that would cause a segfault.
Why it doesn't do so includes a lot of reasons:
It could be far more expensive to move the fence than just to let your program get away with having a few extra bytes for a little while.
It could be that you'll ask for some memory in a few minutes, and if you do (and it's small enough) then the same memory will be returned, without the need to move memory fences.
It could be that until you hit some hardware dependent limit (like a full page of memory) the OS can't reset the memory fence.
It could be ...
That's why it is undefined: it depends on so many things that implementations do not all need to align on. Only the defined parts need to align.
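If you want the mistake to fail fast instead of silently appearing to work, a common pattern (a sketch, not the assignment's required interface) is to free through a pointer-to-pointer and null the caller's copy:

#include <stdlib.h>

void destroy(double **A)
{
    free(*A);
    *A = NULL;   /* later dereferences hit NULL and fault deterministically */
}

Called as destroy(&swrmeg), the printf afterwards would dereference a null pointer, which on most mainstream platforms crashes immediately.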
It appears you are being asked to investigate undefined behavior (UB) ("This is supposed to give a segmentation fault"). What you are doing is not guaranteed to produce a segfault, but you can increase your chances by writing to places you do not own:
#include <stdio.h>
#include <stdlib.h>

void destroy(double *A)
{
    int i;
    /* choose a looping index with a higher likelihood of encroaching on illegal memory */
    for (i = 0; i < 3000; i++)
    {
        A[i] = i * 2.0;   /* make assignments to places you have not allocated memory for */
    }
    free(A);
}

int main(void)
{
    double *swrmeg = malloc(10 * sizeof(double));
    swrmeg[0] = 3.2;
    destroy(swrmeg);
    printf("%lf\n", swrmeg[0]);
    return 0;
}
Regarding using freed memory: this post is an excellent description of why freed memory will sometimes still work. (It deals directly with stack rather than heap memory, but the concepts discussed are still illustrative of using freed memory in general.)
