I am trying to get code that was working on Linux to also work on my Windows 7 machine.
When I reran the same code, it crashed with a stack overflow. I then removed everything I could to find the line causing the crash, which left me with this:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
/* 256k == 2^18 */
#define ARRAY_SIZE 262144
#define ARRAY_SIZE_IN_BYTES (sizeof(float) * (ARRAY_SIZE))
int main(void)
{
float a[ARRAY_SIZE] = { };
float result = 0;
printf("sum was: %f (should be around 1 300 000 with even random distribution)\n", result);
return 0;
}
If I change ARRAY_SIZE to 256, the code runs fine. However, with the current value, the float a[ARRAY_SIZE] line crashes at runtime with a stack overflow. It doesn't matter whether I use float a[ARRAY_SIZE]; or float a[ARRAY_SIZE] = { };, they both crash the same way.
Any ideas what could be wrong?
Using Visual Studio 2010 for compiling.
OK, the stack sizes seem to be explained here: 1 MB is the default on Windows.
Apparently it can be increased in VS 2010 by going to Properties -> Linker -> System -> Stack Reserve Size and giving it more. I tested this, and the code works after pumping the stack up to 8 MB.
In the long run I should probably go the malloc way.
Your array is too large to fit on the stack; try using the heap:
float *a;
a = malloc(sizeof(float) * ARRAY_SIZE);
Segmentation fault when allocating large arrays on the stack
Well, let me guess. I've heard the default stack size on Windows is 1 MB. Your ARRAY_SIZE_IN_BYTES is exactly 1 MB, by the way (assuming float is 4 bytes). So that's probably the reason.
See this link: C/C++ maximum stack size of program
I don't quite know how to ask this question, as it was a little confusing to me. I was having a problem with this code:
#include <stdio.h>
#include <stdlib.h>
#define ull unsigned long long
#define SIZE 1000000001
#define min(a,b) ((a<b?a:b))
#define max(a,b) ((a>b?a:b))
int solve(void) {
// unsigned *count = malloc(sizeof(unsigned) * SIZE);
int k;
scanf("%d", &k);
unsigned count[SIZE];
for (ull i = 0; i < SIZE; i++){
count[i] = 0;
}
return 0;
}
int main(void){
unsigned t;
if (scanf("%u", &t) != 1) return 1;
while (t-- > 0){
if (solve() != 0) return 1;
}
}
This code is giving me a segfault.
What I have observed:
1. It runs fine until it reaches the solve function.
2. On calling the solve function, it gives a segfault.
3. It has nothing to do with scanf("%d", &k), as removing that line gives the same error.
4. But if I decrease the SIZE value, it runs fine.
5. Another thing I can do is create the array on the heap instead of the stack, and that works fine.
6. If I only declare the array count in the solve function, instead of taking k as input and initializing all the values of count to 0, I do not get a segfault.
So I have some questions regarding this.
Is this due to a memory limitation on the array, or a memory limitation on the stack frame of the function solve (or possibly another reason I can't find)?
If this is due to some kind of memory limitation, isn't it too low for a program?
How does the compiler check for such errors? Adding any kind of print statement won't run before the array declaration, as I get the segfault as soon as the program reaches solve. So the compiler somehow knows there is a problem with the code without even getting there.
And specifically for the 6th point: as far as I know, declaring an array reserves memory for it, so by initializing it I am doing nothing that increases the array's size. So why do I not get any kind of error when declaring the array, while I do get a segfault when initializing all the values in it?
Maybe I am seeing this in a totally wrong way, but this is how I understand it. If you know the reason for this, please answer about that too.
It depends on your operating system. On Windows, the typical maximum stack size is 1 MB, whereas it is 8 MB on a typical modern Linux, although those values are adjustable in various ways.
For me it works properly; check on another platform or another system.
This question already has answers here:
Getting a stack overflow exception when declaring a large array
The following code caused a segmentation fault (core dumped) with a 1000000000-iteration loop,
but by reducing the iteration count to 100000 it runs OK.
So is anything going wrong in the CPU, the hardware, or anywhere else? Is it caused by a watchdog timer?
Can anybody help explain this? What happens when the CPU runs huge loops (finite loops with a huge repeat count)? How does the CPU tell whether the computation is infinite? Many thanks.
#include <stdio.h>
int main () {
int a[1000000000];
int i = 0;
for (i = 0;i < 1000000000; i++){
if(i % 4 == 0){
a[i] = i;
}else {
a[i] = 321;
}
}
printf("run over");
return 0;
}
The stack is overflowing here: 1000000000 * sizeof(int) bytes of storage are needed for this array, far more than the stack provides. In short, the problem comes from the size of the array, not the number of iterations.
You can either make the array static or dynamically allocate the memory.
Are you perhaps running out of memory? Your 1-billion-int array weighs about 4 GB if using 32-bit ints.
This happens because the stack has a memory limit. Your array will occupy a total of 1000000000 * sizeof(int) bytes, which is about 3.73 GiB with 4-byte ints.
You need to allocate the array dynamically on the heap, like this:
int *array = malloc(1000000000 * sizeof(int));
Or, better, break your array into several parts, process them, and store the results on disk after processing.
Also, you can see the maximum stack size on linux by using ulimit:
ulimit -s # stack size
ulimit -a # full details
I can find plenty of examples of developers complaining that a big array allocated on the stack creates a stack overflow error:
#include <string.h>
int main(int argc, const char * argv[])
{
int v[100000000];
memset(&v, 0, sizeof(v));
}
When compiling with Apple LLVM 7.0, this does not cause a stack overflow. This puzzles me, as the array has a size of ~400 MB, significantly more than the usual size of the stack.
Why does the above code not cause stack overflow?
Since you are not using v, the compiler is probably not allocating it. Try something like:
int v[100000000];
for (int i = 0 ; i < sizeof(v) / sizeof(*v) ; ++i)
v[i] = 0;
Your array is more than 100 MB (*), but assuming it is 100 MB, that means either your stack size is larger than 100 MB, or your compiler ignored the array because you do not use it. That's a compiler optimization.
(*) Indeed, 1 Mi = 1024 * 1024, not 1000 * 1000. And one int is more than 1 bit, and more than 1 byte too. And finally, Mb means megabit and MB means megabyte.
I am writing this simple program in C, using Xcode.
#include <stdio.h>
int main()
{
double A[200][200][2][1][2][2];
int B[200][200][2][1][2];
int C[200][200][2][1][2];
double D[200][200][2][1][2];
double E[200][200][2][1][2];
double F[200][200][2][1][2];
double G[200][200];
double H[200][200][2];
double I[200][200];
double L[50];
printf("%d",B);
return 0;
}
I get the following message attached to printf("%d",B);
Thread 1: EXC_BAD_ACCESS (code=2, address= ….)
So basically it is telling me that I messed up the memory. How can that be possible?
BUT, if I comment
// int C[200][200][2][1][2];
it works perfectly.
Any clue? It should not be a problem with Xcode, since Eclipse does not printf anything in either case.
The default stack size on Mac OS X is 8 MiB (8,192 KiB) — try ulimit -s or ulimit -a in a terminal.
You have an array of doubles running at about 2.44 MiB (200 x 200 x 2 x 1 x 2 x 2 x sizeof(double) = 2,560,000 bytes). You have 3 other arrays of double that are half that size; you have 2 arrays of int that are 1/4 the size. These add up to 7.68 MB (about 7.3 MiB). Even G, H and I are using a moderate amount of stack space: in aggregate, they're as big as D, so they add roughly another 1.2 MiB.
The sum of these sizes is too big for the stack. You can make them file scope arrays, or you can dynamically allocate them with malloc() et al.
Why on earth would you have a dimension of [1]? You can only ever write 0 as a valid subscript, but then why bother?
I'm not quite sure why you observe EXC_BAD_ACCESS. But your code is quite broken. For a start, you pass B to printf and ask it to format it as an integer. It is not an integer. You should use %p if you want to treat it as a pointer.
The other problem is that your local variables will be allocated on the stack. And they are so big that they will overflow the stack. The biggest is A which is sizeof(double)*200*200*2*1*2*2 which is 2,560,000 bytes. You cannot expect to allocate such large arrays on the stack. You'll need to switch to using dynamically allocated arrays.
I used the following code to find it out, but I always get 1 as the answer. Is there something wrong? Thanks.
#include <stdio.h>
#include <stdlib.h>
int main(){
int mult = 0;
int chk =8;
do{
mult+=1;
int *p = (int*)malloc(1024*1024*1024*mult);
if(p==0){
chk =0;
}else{
free(p);
}
}while(chk !=0);
mult = mult -1;
printf("The number of gigs allocated is : %d\n",mult);
return 0;
}
Just to help: I have a 64-bit system with both Windows and Linux installed. So is the above logic correct, even though I am getting just 1 GB as the answer on a 64-bit system?
If it is a 32-bit OS, then it is not surprising that the largest contiguous block would be 1GB (or somewhere between that and 2GB). On a 64-bit OS, larger blocks would be possible.
If you change your code to allocate smaller individual pieces, you will likely be able to allocate more than 1GB total.
These questions might help you a bit: How much memory was actually allocated from heap for an object? and How do I find out how much free memory is left in GNU C++ on Linux
#include <stdio.h>
#include <stdlib.h>
int main(void){
    int gigs = 0;
    while(malloc(1 << 30)){   /* 1 << 30 == 1 GiB */
        ++gigs;
    }
    printf("The number of gigs allocated is : %d\n", gigs);
    return EXIT_SUCCESS;
}