I keep getting a segmentation fault with the following code. Changing the 4000 to 1000 makes the code run fine. I would think that I have enough memory here... How can I fix this?
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>
#define MAXLEN 4000
void initialize_mx(float mx[][MAXLEN])
{
    int i, j;
    float c = 0;
    for (i = 0; i < MAXLEN; i++) {
        for (j = 0; j < MAXLEN; j++)
            mx[i][j] = c;
    }
}

int main(int ac, char *av[])
{
    int i, j;
    float confmx[MAXLEN][MAXLEN];
    initialize_mx(confmx);
    return 0;
}
The problem is you're overflowing the stack.
When main() runs, it allocates stack space for its local variables (confmx in your case). This space, which is limited by your OS (check ulimit -s if you're on Linux), can be overflowed if local variables are too big.
Basically you can:
Declare confmx as a global variable, as cnicutar suggests.
Allocate the memory for your array dynamically and pass a pointer to initialize_mx() (see the sketch below).
EDIT: Just realized that you must still allocate the memory somewhere even if you pass a pointer, so those are really your two options :)
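A minimal sketch of the dynamic route might look like this (the pointer-to-array type keeps the original initialize_mx() signature; the error handling is my addition, not part of the original post):

#include <stdio.h>
#include <stdlib.h>

#define MAXLEN 4000

void initialize_mx(float mx[][MAXLEN])
{
    int i, j;
    for (i = 0; i < MAXLEN; i++)
        for (j = 0; j < MAXLEN; j++)
            mx[i][j] = 0.0f;
}

int main(void)
{
    /* One heap allocation for the whole 4000x4000 matrix (~61 MiB). */
    float (*confmx)[MAXLEN] = malloc(sizeof(float[MAXLEN][MAXLEN]));
    if (confmx == NULL) {
        perror("malloc");
        return 1;
    }
    initialize_mx(confmx);
    free(confmx);
    return 0;
}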
You are using 4000*4000*4 bytes on your stack; if I didn't make any calculation errors, that's about 61 MiB, which is a lot. It works with 1000 because in that case you are only using nearly 4 MB of stack.
4000*4000*sizeof(float)==64000000. I suspect your operating system may have a limit on the stack size between 4 and 64 MB.
As others have noted, an array this size isn't small for automatic (auto class) variables, which are allocated on the stack.
Depending on your needs, you could
static float confmx[MAXLEN][MAXLEN];
which would allocate the storage in the BSS. You might also want to consider a different storage scheme: one often only needs a sparse matrix, and there are more efficient ways to store and access matrices in which many of the cells are zero (see the sketch below).
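As a rough illustration of the sparse idea (the struct names and layout here are assumptions of mine, not from the original answer), a coordinate-list representation stores only the nonzero cells:

#include <stddef.h>

/* Coordinate-list (COO) sketch: one record per nonzero cell. */
struct entry {
    int   row;
    int   col;
    float value;
};

struct sparse_mx {
    struct entry *entries;  /* dynamically grown array of nonzero cells */
    size_t        count;    /* nonzero cells currently stored */
    size_t        capacity; /* allocated slots */
};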
I'm implementing a sequential sorting program such as quicksort. I would like to test the performance of my program on a huge array of 1 or 10 billion integers.
But the problem is that I get a segmentation fault due to the size of the array.
A sample code of declaration of this array:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000000

int main(int argc, char **argv)
{
    int list[N], i;
    srand(time(NULL));
    for (i = 0; i < N; i++)
        list[i] = rand() % 1000;
    return 0;
}
Someone suggested using the mmap function, but I don't know how to use it. Can anybody help me?
I'm working on Ubuntu 10.04 64-bit, gcc version 4.4.3.
Thanks for your replies.
Michael is right, you can't fit that much on the stack. However, you can make it global (or static) if you don't want to malloc it.
#include <stdlib.h>
#include <time.h>

#define N 1000000000

static int list[N];

int main(int argc, char **argv)
{
    size_t i;
    srand(time(NULL));
    for (i = 0; i < N; i++)
        list[i] = rand() % 1000;
    return 0;
}
You must use malloc for this sort of allocation. That much on the stack will fail nearly every time.
int *list;
list = malloc(N * sizeof(int));
This puts the allocation on the heap where there is a lot more memory available.
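A fuller sketch of that approach, with an error check and cleanup added (those additions are mine, not part of the original answer):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000000ULL

int main(void)
{
    size_t i;
    int *list = malloc(N * sizeof *list);   /* ~4 GB on the heap */
    if (list == NULL) {
        perror("malloc");
        return 1;
    }
    srand(time(NULL));
    for (i = 0; i < N; i++)
        list[i] = rand() % 1000;
    free(list);
    return 0;
}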
You probably shouldn't create so large an array, and if you do, you certainly shouldn't create it on the stack; the stack just isn't that big.
If you have a 32-bit address space and a 4-byte int, then you can't create an array with a billion ints; there just won't be enough contiguous space in memory for that large an object (there probably won't be enough contiguous space for an object a fraction of that size). If you have a 64-bit address space, you might get away with allocating that much space.
If you really want to try, you'll need either to create it statically (i.e., declare the array at file scope or with the static qualifier in the function) or dynamically (using malloc).
On Linux systems, malloc of very large chunks just does an mmap under the hood anyway, so it is probably not worth looking into mmap directly.
Be careful that you have neither overflow (of signed integers) nor silent wraparound (of unsigned integers) in your array bounds and indices. Use size_t for them; since you are on a 64-bit machine, this should then work.
But as a habit you should definitely check your bounds against SIZE_MAX, something like assert(N*sizeof(data[0]) <= SIZE_MAX), to be sure.
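One way to phrase that check so the multiplication can't itself wrap around is to divide instead (this particular formulation is my suggestion, not the original answer's):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define N 1000000000ULL

int main(void)
{
    int *data;

    /* Check the element count against SIZE_MAX / element size before
       allocating, so the byte count is known to fit in a size_t. */
    assert(N <= SIZE_MAX / sizeof(int));
    data = malloc(N * sizeof(int));
    free(data);
    return 0;
}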
The stack allocation makes it break. N = 1 billion ints => about 4 GB of memory (with both a 32-bit and a 64-bit compiler). But if you want to measure the performance of quicksort, or a similar algorithm of yours, this is not the way to go about it. Try instead to run multiple quicksorts in sequence on prepared samples of a large size:
- Create a large random sample, not more than half your available memory. Make sure it doesn't fill your RAM! If it does, all measuring efforts are in vain. 500 M elements is more than enough on a 4 GB system.
- Decide on a test size (e.g. N = 100 000 elements).
- Start the timer.
- Run the algorithm for (*start at i*N, *end at (i+1)*N); rinse and repeat for the next i until the large random sample is depleted.
- Stop the timer.
Now you have a very precise answer to how much time your algorithm has consumed. Run it a few times to get a feel for how precise it is (use a new srand(seed) each time), and vary N for further inspection.
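A sketch of that measurement loop (qsort() stands in for your own algorithm here, and the function and parameter names are mine, not the original answer's):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Time repeated sorts of n-element slices of one big prepared sample. */
void benchmark(int *sample, size_t sample_size, size_t n)
{
    size_t i;
    clock_t start = clock();
    for (i = 0; (i + 1) * n <= sample_size; i++)
        qsort(sample + i * n, n, sizeof(int), cmp_int);
    printf("%f s\n", ((double)clock() - start) / CLOCKS_PER_SEC);
}

int main(void)
{
    size_t sample_size = 500u * 1000 * 1000;   /* adjust to your RAM */
    size_t i;
    int *sample = malloc(sample_size * sizeof *sample);
    if (sample == NULL) {
        perror("malloc");
        return 1;
    }
    srand(time(NULL));
    for (i = 0; i < sample_size; i++)
        sample[i] = rand();
    benchmark(sample, sample_size, 100000);
    free(sample);
    return 0;
}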
Another option is to dynamically allocate a linked list of smaller arrays. You'll have to wrap them with accessor functions, but it's far more likely that you can grab sixteen 256 MB chunks of memory than a single 4 GB chunk.
typedef struct node_s node, *node_ptr;

struct node_s
{
    int data[N/NUM_NODES];
    node_ptr next;
};
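A rough accessor over that chunked structure might look like the following (NUM_NODES, the get() helper, and the assumption that the list is already built are mine, not the original answer's):

#include <stddef.h>

#define NUM_NODES 16   /* e.g. sixteen chunks of N/NUM_NODES ints each */

/* Return element `index` of the logical array spread across the chunks. */
int get(node_ptr head, size_t index)
{
    size_t chunk = N / NUM_NODES;
    size_t skip = index / chunk;      /* which chunk holds the element */
    while (skip-- > 0)
        head = head->next;
    return head->data[index % chunk];
}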
So, I was solving a problem on SPOJ. I have to handle a large number of values, on the order of 2^18. I am on a system with 4 GB (2^32 bytes) of RAM. While allocating memory for my program, I observed something strange. I am using the long long type for storing the input in two arrays, sum1[1<<17] and sum2[1<<17], so the total memory allocated by the two arrays is 2^17*8 + 2^17*8 = 2^21 bytes. The part of the code I want to refer to is:
#include <stdio.h>

int main()
{
    // some code
    int a = 17, b = 17;
    long long sum1[1<<a], sum2[1<<b];
    // some more code
}
The problem is that whenever a+b >= 34, the program stops working; otherwise it works fine. I guess it is due to the unavailability of enough space. But if I make the two arrays global like this:
#include <stdio.h>

long long sum1[1<<18], sum2[1<<18];

int main()
{
    // some code
}
It works great and the a+b limit doesn't seem to matter any more; as you can see, it works fine even for 1<<18. So what's happening under the hood?
Regards.
The local variables for your function go into the stack. Your stack size is not large enough to hold your arrays.
Additional info:
You can see your stack size with the following:
ulimit -s
Local variables are usually allocated on the stack, and most systems have a relatively small limit on the size of a stack frame. Large arrays should be either global or allocated dynamically with malloc().
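For this particular case, a dynamically allocated version might look roughly like this (the error handling and the choice of 1<<18 are illustrative, not from the original answers):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int a = 18, b = 18;
    /* Heap allocations instead of large automatic arrays. */
    long long *sum1 = malloc(sizeof(long long) * ((size_t)1 << a));
    long long *sum2 = malloc(sizeof(long long) * ((size_t)1 << b));
    if (sum1 == NULL || sum2 == NULL) {
        perror("malloc");
        free(sum1);
        free(sum2);
        return 1;
    }
    /* ... use sum1 and sum2 exactly as before ... */
    free(sum1);
    free(sum2);
    return 0;
}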
When I run the following code in C, the program crashes with the error "xxx has stopped working".
However, when I make the array sizes 1000 instead of 100000 it runs fine. What is the problem and how can I fix it? If there is some memory problem, how can I take input of 100000 numbers into these arrays without exceeding it?
Code I tried:
int main()
{
    int a[100000], l[100000], r[100000], ans[100000], x[100000], y[100000];
    /*
        some code
    */
    return 0;
}
Declare a, l, r, ans, x and y as global variables so that they are allocated in static storage (the data/BSS segment) instead of on the stack.
int a[100000], l[100000], r[100000], ans[100000], x[100000], y[100000];

int main()
{
    /* some code */
}
The stack is typically a limited resource. Use dynamic allocation (such as malloc) instead.
Most systems limit the stack to something between one and four megabytes. Since your arrays total well over 2 MB, you are most likely going over your system's stack limit.
In C there are a few ways to solve that problem:
Make the arrays global
Make the arrays static
Dynamically allocate the memory for them on the heap (e.g. with malloc and friends; see the sketch after this list)
Simply make the arrays smaller
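A sketch of the malloc option for these particular arrays (the error handling is an addition of mine):

#include <stdio.h>
#include <stdlib.h>

#define COUNT 100000

int main(void)
{
    /* Each array lives on the heap instead of the stack. */
    int *a   = malloc(COUNT * sizeof *a);
    int *l   = malloc(COUNT * sizeof *l);
    int *r   = malloc(COUNT * sizeof *r);
    int *ans = malloc(COUNT * sizeof *ans);
    int *x   = malloc(COUNT * sizeof *x);
    int *y   = malloc(COUNT * sizeof *y);
    if (!a || !l || !r || !ans || !x || !y) {
        perror("malloc");
        return 1;
    }
    /* ... some code, indexing a[i], l[i], etc. exactly as before ... */
    free(a); free(l); free(r); free(ans); free(x); free(y);
    return 0;
}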
Welcome to Stack Overflow ;)
Use dynamic allocation (malloc/free) so that you can make use of all your RAM.
Most systems have a limited stack size, and since your arrays are local (automatic) variables they will be allocated on the stack, so you are very likely overflowing it. If you need to allocate large arrays, malloc is the better choice.
I am trying to create an array of size 2^25 in C and then perform some elementary operations on it (a memsweep function). The C code is:
#include <stdio.h>
#include <time.h>

#define S (8191*4096)

int main(void)
{
    clock_t start = clock();
    unsigned i;
    volatile char large[S];
    for (i = 0; i < 10*S; i++)
        large[(4096*i+i)%S] = 1 + large[i%S];
    printf("%f\n", ((double)clock()-start)/CLOCKS_PER_SEC);
    return 0;
}
I am able to compile it but on execution it gives segmentation fault.
That might be bigger than your stack. You can
Make large global
Use malloc
The array is too big to fit on your stack. Use the heap with char *large = malloc(S) instead.
You don't have that much stack space available for an array that big ... on Linux, for instance, the default stack size is typically 8192 KB (8 MB). You've definitely exceeded that.
The best option would be to allocate the memory on the heap using malloc(). So you would write char* large = malloc(S);. You can still access the array using the [] notation.
Optionally, if you're on Linux, you could raise the limit from the command line with ulimit -s X, where X is a size in kilobytes large enough for your array to fit on the stack ... but I'd generally discourage that solution.
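A heap-based variant of the same loop might look roughly like this (calloc and the error check are my additions; calloc also zero-fills the buffer so every read is of an initialized byte):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define S (8191*4096)

int main(void)
{
    clock_t start = clock();
    unsigned i;
    /* ~32 MB on the heap instead of the stack, zero-initialized. */
    volatile char *large = calloc(S, 1);
    if (large == NULL) {
        perror("calloc");
        return 1;
    }
    for (i = 0; i < 10*S; i++)
        large[(4096*i+i)%S] = 1 + large[i%S];
    printf("%f\n", ((double)clock()-start)/CLOCKS_PER_SEC);
    free((void *)large);
    return 0;
}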
large is being allocated on the stack and you are overflowing it.
Try using char *large = malloc(S) instead.