I started to learn C recently. I use Code::Blocks with MinGW and Cygwin GCC.
I made a very simple prime sieve for Project Euler problem 10, which prints the primes below a given limit to stdout. It works fine for limits up to roughly 500000, but above that the MinGW-compiled .exe crashes and the Cygwin-GCC-compiled one throws a "STATUS_STACK_OVERFLOW" exception.
I'm puzzled as to why, since the code is totally non-recursive, consisting of simple for loops.
#include <stdio.h>
#include <math.h>

#define LIMIT 550000

int main()
{
    int sieve[LIMIT+1] = {0};
    int i, n;
    for (i = 2; i <= (int)floor(sqrt(LIMIT)); i++){
        if (!sieve[i]){
            printf("%d\n", i);
            for (n = 2; n <= LIMIT/i; n++){
                sieve[n*i] = 1;
            }
        }
    }
    for (i; i <= LIMIT; i++){
        if (!sieve[i]){
            printf("%d\n", i);
        }
    }
    return 0;
}
It seems you cannot allocate 550000 ints on the stack; allocate them dynamically instead.
int * sieve;
sieve = malloc(sizeof(int) * (LIMIT+1));
Your basic options, when the chunk of memory you need is bigger than the stack, are to keep it out of the stack:
allocating memory for the array on the heap with malloc (as Binyamin explained)
storing the array in the data/BSS segment by declaring it as static int sieve[SIZE_MACRO]
All the memory in that program is allocated on the stack. When you increase the size of the array, you increase the amount of space required on the stack; eventually the function cannot be entered because there isn't enough stack space left to accommodate it.
Either experiment with malloc-ing the array (so it's allocated on the heap), or learn how to tell the compiler to allocate a larger stack.
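For illustration, here is a minimal sketch of the original sieve with the array moved to the heap (names and limit taken from the question; calloc is used so the array starts out zeroed, like the original = {0} initializer):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define LIMIT 550000

int main(void)
{
    /* heap allocation instead of a ~2 MB automatic array; calloc zero-fills */
    int *sieve = calloc(LIMIT + 1, sizeof *sieve);
    if (sieve == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    int i, n;
    for (i = 2; i <= (int)floor(sqrt(LIMIT)); i++) {
        if (!sieve[i]) {
            printf("%d\n", i);
            for (n = 2; n <= LIMIT / i; n++)
                sieve[n * i] = 1;
        }
    }
    for (; i <= LIMIT; i++) {
        if (!sieve[i])
            printf("%d\n", i);
    }
    free(sieve);
    return 0;
}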
Related
I'm trying to create a graph with 264346 positions. Does anyone know why, once it reaches about 26,000 positions, calloc stops returning memory addresses (e.g. 89413216) and starts returning zeros (0), after which all the processes on my computer crash?
calloc is supposed to produce zero-filled memory, but it should not be returning zero at this point in my code.
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>
#include <string.h>
#include <limits.h>

int maxV;

struct grafo {
    int NumTotalVertices;
    int NumVertices;
    int NumArestas;
    int **p;
};
typedef struct grafo MGrafo;

MGrafo* Aloca_grafo(int NVertices);

int main(){
    MGrafo *MatrizGrafo;
    MatrizGrafo = Aloca_grafo(264346);
    return 0;
}

MGrafo* Aloca_grafo(int NVertices) {
    int i, k;
    MGrafo *Grafo;
    Grafo = (MGrafo*) malloc(sizeof(MGrafo));
    Grafo->p = (int **) malloc(NVertices*sizeof(int*));
    for(i=0; i<NVertices+1; i++){
        Grafo->p[i] = (int*) calloc(NVertices,sizeof(int)); // error at this point
        //printf("%d - (%d)\n", i, Grafo->p[i]); // see the printed output
    }
    printf("%d - (%d)\n", i, Grafo->p[i]);
    Grafo->NumTotalVertices = NVertices;
    Grafo->NumArestas = 0;
    Grafo->NumVertices = 0;
    return Grafo;
}
You surely don't mean what you have in your code:
Grafo = (MGrafo*)malloc(sizeof(MGrafo));
Grafo->p = (int**)malloc(NVertices * sizeof(int*));      <<<<=== 264000 int pointers
for (i = 0; i < NVertices + 1; i++) {                    <<<<=== for each of those 264000 int pointers
    Grafo->p[i] = (int*)calloc(NVertices, sizeof(int));  <<<<=== allocate 264000 ints
I ran this on my machine:
its fans turned on, meaning it was working very, very hard;
by the time the loop counter had reached only 32000, it had already allocated 33 GB of memory.
I think you only need to allocate one set of integers. Since I can't tell what you are trying to do, it's hard to know which allocation to remove, but this code creates a 2D array of 264000 by 264000 ints, which is huge (~70 billion ints, roughly 280 GB of memory); surely you don't mean that.
OK, taking a comment from below into account, maybe you do mean it.
If this is what you really want, then you are going to need a very chunky computer and a lot of time.
Plus, you are definitely going to have to test the return value of those calloc and malloc calls to make sure that every allocation succeeds.
A lot of the time you will see answers on SO saying "check the return from malloc", but in fact a modern OS on modern hardware will rarely fail a memory allocation. Here, though, you are pushing the edge; test every one.
The "zeros" you are seeing are how calloc tells you it failed: it returns NULL.
https://linux.die.net/man/3/calloc
Return Value
The malloc() and calloc() functions return a pointer to the allocated memory that is suitably aligned for any kind of variable. On error, these functions return NULL. NULL may also be returned by a successful call to malloc() with a size of zero, or by a successful call to calloc() with nmemb or size equal to zero.
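As a rough sketch of the kind of check meant here, the allocation loop in Aloca_grafo might be written like this (a fragment, assuming <stdio.h> and <stdlib.h> are included as in the question; the loop bound is also brought in line with the NVertices pointers actually allocated):

for (i = 0; i < NVertices; i++) {
    Grafo->p[i] = calloc(NVertices, sizeof(int));
    if (Grafo->p[i] == NULL) {
        /* calloc returned NULL: report how far we got and stop */
        fprintf(stderr, "calloc failed after %d rows\n", i);
        exit(EXIT_FAILURE);
    }
}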
This code runs for values of n on the order of 100k, but when n gets to a million it stops and crashes.
#include <stdio.h>

int main()
{
    int i;
    long int n, sum;
    n = 1000000;
    int f[];
    f[0] = 1;
    f[1] = 2;
    sum = 0;
    for (i = 2; f[i-1] < n; i++)
    {
        f[i] = f[i-1] + f[i-2];
        printf("%ld \n", f[i]);
        if(f[i] % 2 == 0)
        {
            sum = sum + f[i];
        }
    }
    printf("%d \n", sum);
    getchar();
}
Yes; you cannot declare a very big local array, because it sits on the call stack.
I'm sure your local variable int f[]; is a typo (it won't compile). You probably meant (after having set n) something like int f[n];, i.e. a VLA (variable-length array).
The call stack has a limited size (typically a couple of megabytes on current desktops running Linux).
You should allocate your big array in the heap (so use a pointer):
unsigned n = 1000000;
int *f = malloc(n*sizeof(int));
if (!f) { perror("malloc"); exit(EXIT_FAILURE); };
then you had better clear it (because malloc-allocated memory contains garbage values):
memset(f, 0, n*sizeof(int));
then you can use it as you did.
At the end of your program (near the end of main in your case) be sure to call free(f);. In general, you should free a heap-allocated memory zone once you are sure you will never use it again. But beware of pointer aliasing!
Read about C dynamic memory allocation. Be wary of memory leaks and buffer overflows. Use valgrind if your system has it. Read also the Wikipedia page on garbage collection. When you are more fluent in C programming, you might be interested in sometimes using the Boehm conservative garbage collector for C.
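Putting those pieces together, a minimal sketch of the program with the array moved to the heap might look like this (it keeps the question's loop logic and only changes how f is allocated and how the values are printed):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    long n = 1000000, sum = 0;
    int i;
    int *f = malloc(n * sizeof *f);        /* heap instead of the call stack */
    if (!f) { perror("malloc"); exit(EXIT_FAILURE); }
    memset(f, 0, n * sizeof *f);           /* clear the garbage values */

    f[0] = 1;
    f[1] = 2;
    for (i = 2; f[i - 1] < n; i++) {
        f[i] = f[i - 1] + f[i - 2];
        printf("%d\n", f[i]);
        if (f[i] % 2 == 0)
            sum = sum + f[i];
    }
    printf("%ld\n", sum);
    free(f);                               /* release the heap block when done */
    return 0;
}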
How do I work with large integers? Do I need the GMP library or something?
I want an array whose elements go from 0 up to 2^32.
How do I get this to work?
#include <stdio.h>

int main(){
    unsigned int i, j = 0, sz = 4294967295;
    unsigned int A[sz];
    A[j] = 0;
    for(i = 1; i <= sz; i++){
        A[i] = A[j] + 1;
        j++;
        printf("%u\n", A[i]);
    }
    return 0;
}
error: process exited with return value 3221225725
Is it that the array is too big, or something?
According to Google, your A array is approximately 17 gigabytes. That's a lot. You're probably overflowing the stack.
If you really need this much memory, you may be able to malloc() it instead, but on older 32-bit architectures, you're basically out of luck (address space has a hard upper limit of 4 GB, minus kernel space).
You are allocating an array of 16-17 GB, which overflows the stack.
As haccks said, you can try allocating it on the heap:
unsigned int *A = malloc(sizeof(int)*sz);
if(A == NULL) {
    printf("Unable to allocate memory for array.\n");
    exit(1);
}
Don't forget to free afterwards:
...
free(A);
return 0;
}
You also have a bug in your code: arrays are indexed from 0 to size - 1.
When i becomes sz, this writes to invalid memory:
for(i = 1; i <= sz; i++) { // Will cause an invalid memory write when i == sz
    A[i] = A[j] + 1;
    j++;
    printf("%u\n", A[i]);
}
Change to:
for(i = 1; i < sz; i++) {
    A[i] = A[j] + 1;
    j++;
    printf("%u\n", A[i]);
}
Memory for local arrays is allocated on the stack, whose size is generally small, so a large array will result in a stack overflow. You need to allocate memory on the heap for such a large array. Either place
unsigned int A[429496729];
outside main, or use dynamic memory allocation:
unsigned int *A = malloc(sizeof(int)*sz);
if(A == NULL)
    exit(0);
Use free(A) to free the allocated memory once you are done with A.
Better to use the constants defined in limits.h, such as UINT_MAX or ULONG_MAX, and check what type is used for indexing the array (perhaps your unsigned int is being converted to int).
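A sketch of what that might look like, using size_t for the size and index so nothing wraps around (note this still needs roughly 16 GB, so it is only plausible on a 64-bit machine with plenty of memory):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t sz = (size_t)UINT_MAX;              /* instead of hard-coding 4294967295 */
    unsigned int *A = malloc(sz * sizeof *A);  /* heap allocation, ~16 GB */
    if (A == NULL) {
        fprintf(stderr, "Unable to allocate memory for array.\n");
        return 1;
    }
    A[0] = 0;
    for (size_t i = 1; i < sz; i++)            /* size_t index stays in range */
        A[i] = A[i - 1] + 1;
    free(A);
    return 0;
}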
I ran into a problem with a program and I'm not sure how to solve it. I'm processing a file; to do so, I get its size with ftell and store it in M_size. After that I declare an array of N unsigned char pointers. The array is then used in two functions, a() and b().
...
unsigned long N = (M_size / x);
int LstElemSize = M_size % x;
if(LstElemSize != 0){
    N += 1;
}
unsigned char *ptr_M[N];
a(ptr_M);
b(ptr_M);
...
Function a() actually initializes each element of ptr_M in a for loop:
a(){
    int i;
    for(i = 0; i < N-1; i++){
        ptr_M[i] = malloc(sizeof(unsigned char) * x);
    }
}
Function b() then iterates over each element and does its calculations; at the end, each element is freed.
My problem is that when I try to process a file of e.g. 1 GB, the array size will be around 4,000,000 and a segmentation fault occurs (on the line where I declare my array). If I calculated it correctly, that is 8 bytes per char pointer times 4,000,000 = 32 MB. The server running the program has enough memory to hold the file, but I guess, as mentioned in Response 1, the stack space is not enough.
What can I do to solve my problem? Increase my stack space? Thanks!
The problem is that you're defining ptr_M on the stack, which has a small size limit. The heap does not have such a small limit and can use much more of your system's memory. You need to use malloc() to allocate ptr_M, just like you allocate the subarrays. (Make sure to free it at some point too, along with all those subarrays!)
unsigned char **ptr_M = malloc(sizeof(unsigned char*) * N);
Also, your a() has an off-by-one error. It ignores the last item of the array. Use this:
for(i = 0; i < N; i ++){
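For illustration, the whole of a() might then look something like this sketch (it assumes N and x are visible to a(), e.g. as globals, and that <stdio.h> and <stdlib.h> are included; the check on malloc's return is an extra precaution, not part of the original code):

void a(unsigned char **ptr_M){
    int i;
    for(i = 0; i < N; i++){                /* N iterations, so the last chunk is allocated too */
        ptr_M[i] = malloc(sizeof(unsigned char) * x);
        if(ptr_M[i] == NULL){
            perror("malloc");              /* report and stop if an allocation fails */
            exit(EXIT_FAILURE);
        }
    }
}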
unsigned char *ptr_M[N] is a variable-length array of N unsigned char pointers, declared on the stack in your case. You should dynamically allocate the space for this array as well.
unsigned char **ptr_M = malloc(sizeof(unsigned char*) * N);
a(ptr_M);
b(ptr_M);
...
//After you free each element in ptr_M
free(ptr_M);
malloc allocates space from the heap, not from the stack. You may try increasing your heap size by looking at the compiler options; check the upper limit of heap size supported there.
I need to fill a 2-D array with 0s, but the compiled program crashes with this error. What's wrong?
int main()
{
    int vert[1001][1001];
    int hor[1001][1001];
    int dudiag[1416][1416];
    int uddiag[1416][1416];
    int n, k, m;
    int row, col;
    int i, j;
    int answer = 0;
    for(i = 0; i <= 1000; i++){
        for(j = 0; j <= 1000; j++){
            vert[i][j] = 0;
            hor[i][j] = 0;
        }
    }
    ...
}
When the loop is commented out, it works properly.
The problem is that you are trying to allocate too much memory in the automatic store (AKA "on the stack"). When you comment out the loop, the compiler optimizes out the allocation along with the now-unused variables, so you do not get a crash.
You need to change the allocation to either static or dynamic memory (AKA "the heap") to fix this problem. Since the issue is inside main, making the arrays static would be an appropriate choice.
int main()
{
    static int vert[1001][1001];
    static int hor[1001][1001];
    static int dudiag[1416][1416];
    static int uddiag[1416][1416];
    ...
}
In functions other than main you could make these arrays dynamic, allocate them using malloc/calloc, and then free them once your program has finished using them.
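As a sketch of that dynamic variant (the function name solve is just a placeholder; a pointer-to-array keeps the vert[i][j] indexing syntax, and calloc zero-fills, so the explicit clearing loop becomes unnecessary):

#include <stdlib.h>

void solve(void)
{
    int (*vert)[1001] = calloc(1001, sizeof *vert);   /* ~4 MB, allocated on the heap */
    int (*hor)[1001]  = calloc(1001, sizeof *hor);
    if (vert == NULL || hor == NULL) {
        free(vert);                                   /* free(NULL) is a safe no-op */
        free(hor);
        return;
    }
    /* ... use vert[i][j] and hor[i][j] exactly as before ... */
    free(vert);
    free(hor);
}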
What's wrong?
You are trying to reserve several multi-megabyte arrays on the stack (roughly 4 MB each for vert and hor, and about 8 MB each for dudiag and uddiag). On many Linux distributions, the default stack size is 8 MB (you can change this with ulimit -s unlimited).
Even with an unlimited stack, the Linux kernel will not extend the stack by more than some constant, so ulimit -s unlimited may not help avoid the crash.
As dasblinkenlight's answer says, if you need arrays that large, allocate them dynamically.
Finally, an explicit for loop is an inefficient way to zero out an array. Using memset is likely to be much more efficient, and requires much less code.
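Whether the arrays end up static or automatic, the loop can be replaced with one memset per array; a sketch assuming the array declarations from the question are in scope (with true arrays, sizeof gives the full size in bytes; with heap pointers you would pass the byte count explicitly):

#include <string.h>
memset(vert, 0, sizeof vert);   /* whole 1001x1001 int array cleared in one call */
memset(hor, 0, sizeof hor);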