This code runs for values of n on the order of 100k, but when n gets to a million it stops and crashes.
#include <stdio.h>
int main()
{
int i;
long int n, sum;
n = 1000000;
int f[];
f[0] = 1;
f[1] = 2;
sum = 0;
for (i = 2; f[i-1] < n; i++)
{
f[i] = f[i-1] + f[i-2];
printf("%ld \n", f[i]);
if(f[i] % 2 == 0)
{
sum = sum + f[i];
}
}
printf("%d \n", sum);
getchar();
}
Yes, you cannot declare a very big local array, because it sits on the call stack.
I'm sure your local variable int f[]; is a typo (that won't compile). You probably meant (after having set n) something like int f[n];, so you are using a VLA.
The call stack has a limited size (typically a couple of megabytes on current desktops running Linux).
You should allocate your big array in the heap (so use a pointer):
unsigned n = 1000000;
int *f = malloc(n*sizeof(int));
if (!f) { perror("malloc"); exit(EXIT_FAILURE); }
then you'd better clear it (because heap malloc-allocated memory zones contain garbage values):
memset(f, 0, n*sizeof(int));
then you can use it as you did.
At the end of your program (near the end of main in your case) be sure to call free(f); in general, you should free a heap-allocated memory zone as soon as you are sure you will never use it again. But beware of pointer aliasing!
Read about C dynamic memory allocation. Be wary of memory leaks and buffer overflows. Use valgrind if your system has it. Read also the wiki page on garbage collection. When you are more fluent with C programming, you might sometimes be interested in using the Boehm conservative garbage collector for C.
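Putting the advice above together, a minimal sketch of what the corrected program could look like (it keeps the question's logic; making f an array of long so the %ld format matches is an adjustment of my own):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long n = 1000000, sum = 0;
    int i;
    /* allocate the array on the heap instead of the stack;
       sized like the original, far more than the ~30 terms actually needed */
    long *f = malloc(n * sizeof(long));
    if (!f) { perror("malloc"); exit(EXIT_FAILURE); }
    f[0] = 1;
    f[1] = 2;
    for (i = 2; f[i-1] < n; i++) {
        f[i] = f[i-1] + f[i-2];
        printf("%ld \n", f[i]);
        if (f[i] % 2 == 0)
            sum += f[i];
    }
    printf("%ld \n", sum);
    free(f);   /* release the heap block once we are done with it */
    return 0;
}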
I'm trying to create a graph with 264346 positions. Do you know why, when calloc reaches around 26,000 positions, it stops producing memory addresses (e.g. 89413216) and starts producing zeros (0), and then all the processes on my computer crash?
The calloc function is supposed to produce zeros, but not at this point in my code.
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>
#include <string.h>
#include <limits.h>
int maxV;
struct grafo {
int NumTotalVertices;
int NumVertices;
int NumArestas;
int **p;
};
typedef struct grafo MGrafo;
MGrafo* Aloca_grafo(int NVertices);
int main(){
MGrafo *MatrizGrafo;
MatrizGrafo = Aloca_grafo(264346);
return 0;
}
MGrafo* Aloca_grafo(int NVertices) {
int i, k;
MGrafo *Grafo ;
Grafo = (MGrafo*) malloc(sizeof(MGrafo));
Grafo->p = (int **) malloc(NVertices*sizeof(int*));
for(i=0; i<NVertices+1; i++){
Grafo->p[i] = (int*) calloc(NVertices,sizeof(int));// error at this point
//printf("%d - (%d)\n", i, Grafo->p[i]); // see impression
}
printf("%d - (%d)\n", i, Grafo->p[i]);
Grafo->NumTotalVertices = NVertices;
Grafo->NumArestas = 0;
Grafo->NumVertices = 0;
return Grafo;
}
You surely don't mean what you have in your code:
Grafo = (MGrafo*)malloc(sizeof(MGrafo));
Grafo->p = (int**)malloc(NVertices * sizeof(int*)); <<<<=== 264000 int pointers
for (i = 0; i < NVertices + 1; i++) { <<<<< for each of those 264000 int pointers
Grafo->p[i] = (int*)calloc(NVertices, sizeof(int)); <<<<<=== allocate 264000 ints
I ran this on my machine:
Its fans turned on, meaning it was trying very, very hard.
After the inner loop had reached only 32,000 iterations, it had already allocated 33 GB of memory.
I think you only need to allocate one set of integers. Since I can't tell what you are trying to do, it's hard to know which allocation to remove, but this creates a 2D array of roughly 264,000 × 264,000 ints, which is huge (~70 billion ints, roughly 280 GB of memory); surely you don't mean that.
OK, taking a comment from below into account, maybe you do mean it.
If this is what you really want then you are going to need a very chunky computer and a lot of time.
Plus you are definitely going to have to test the return from those calloc and malloc calls to make sure that every alloc works.
A lot of the time you will see answers on SO saying 'check the return from malloc', when in fact a modern OS on modern hardware will rarely fail a memory allocation. But here you are pushing the limits, so test every one.
'Generating zeros' is how calloc tells you it failed.
https://linux.die.net/man/3/calloc
Return Value
The malloc() and calloc() functions return a pointer to the allocated memory that is suitably aligned for any kind of variable. On error, these functions return NULL. NULL may also be returned by a successful call to malloc() with a size of zero, or by a successful call to calloc() with nmemb or size equal to zero.
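For concreteness, here is a sketch of Aloca_grafo with every allocation checked (it keeps the question's names and assumes the same includes and struct; note the loop stops at NVertices, since only that many row pointers were allocated):

MGrafo *Aloca_grafo(int NVertices) {
    int i;
    MGrafo *Grafo = malloc(sizeof(MGrafo));
    if (!Grafo) { perror("malloc"); exit(EXIT_FAILURE); }

    Grafo->p = malloc(NVertices * sizeof(int *));
    if (!Grafo->p) { perror("malloc"); exit(EXIT_FAILURE); }

    for (i = 0; i < NVertices; i++) {       /* not NVertices+1: that writes past the pointer array */
        Grafo->p[i] = calloc(NVertices, sizeof(int));
        if (!Grafo->p[i]) {                 /* a NULL return is how calloc reports failure */
            fprintf(stderr, "calloc failed at row %d of %d\n", i, NVertices);
            exit(EXIT_FAILURE);
        }
    }
    Grafo->NumTotalVertices = NVertices;
    Grafo->NumArestas = 0;
    Grafo->NumVertices = 0;
    return Grafo;
}

Even with the checks, a full 264346 × 264346 matrix of int is roughly 280 GB, so on most machines one of these calls will fail fairly quickly; the checks just turn that into a clear error message instead of a crash.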
I wanted to create a function that deletes from an array of segments the ones that are longer than a given length, by freeing the memory I don't need anymore. The problem is that the function I've created also frees all the memory allocated after the given point. How can I limit it, so that it frees just one pointer without compromising the others?
Here is the code I've written so far:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
typedef struct
{
double x1;
double y1;
double x2;
double y2;
} Segment;
double length(Segment* s)
{
return sqrt(pow(s->x1 - s->x2, 2) + pow(s->y1 - s->y2, 2));
}
// HERE IS THE PROBLEM!!
void delete_longer(Segment* as[], int n, double max_len)
{
for(int i = 0; i < n; i++)
{
if(length(as[i]) > max_len)
{
as[i] = NULL; // Those two lines should be swapped, but the problem remains
free(as[i]);
}
}
}
int main()
{
const int SIZE = 5;
Segment** arr = (Segment**)calloc(SIZE, sizeof(Segment*));
for(int i = 0; i < SIZE; i++)
{
arr[i] = (Segment*)malloc(sizeof(Segment));
}
srand(time(0));
for(int i = 0; i < SIZE; i++)
{
arr[i]->x1 = rand() % 100;
arr[i]->x2 = rand() % 100;
arr[i]->y1 = rand() % 100;
arr[i]->y2 = rand() % 100;
printf("Lungezza: %d\n", (int)length(arr[i]));
}
delete_longer(arr, SIZE, 80);
for(int i = 0; i < SIZE && arr[i]; i++)
{
printf("Lunghezza 2: %d\n", (int)length(arr[i]));
}
return 0;
}
First of all, the call to free should come before the instruction that sets the pointer to NULL, but that's not the main cause of the problem.
What caused the behaviour I described was the fact that the second for loop in main stops after finding the first NULL pointer. Instead I should have written:
for(int i = 0; i < SIZE ; i++)
{
if(arr[i])
printf("Lunghezza 2: %d\n", (int)length(arr[i]));
}
You have two main problems:
In the delete function you write:
as[i] = NULL;
free(as[i]);
This is the wrong order. You must first free the memory and then set the element to null. But note that this is not the cause of your perceived problem, it only causes a memory leak (i.e. the memory of as[i] becomes inaccessible). You should write:
free(as[i]);
as[i] = NULL;
Your second problem is in your for loop, which now stops at the first null element. So it's not that all the memory after that point is freed; you just don't print it. The loop should be, for example:
for(int i = 0; i < SIZE; i++)
{
printf("Lunghezza 2: %d\n", arr[i]?(int)length(arr[i]):0);
}
Note: I agree with the discussion that free(NULL) may be implementation dependent in older implementations of the library function. In my personal opinion, never pass free a null pointer. I consider it bad practice.
There's no way to change the size of an array at runtime. The compiler assigns the memory statically, and even automatic arrays are fixed in size (except in the latest C standard, where you can specify a different size at declaration time; but even then, the size stands until the array goes out of scope). The reason is that, once allocated, the memory of an array gets surrounded by other declarations that, being fixed, make it difficult to use the memory otherwise.
The other alternative is to allocate the array dynamically. You allocate a fixed number of cells and store, along with the array, not only its size but also its capacity (the maximum number of cells it is allowed to grow to). Note that erasing an element of an array requires moving all the elements after it one place toward the front, which is in general an expensive thing to do. If your array is filled with references to other objects, a common technique is to use NULL pointers for the unused cells, or to shift all the elements one place toward the beginning.
Whichever technique you use, arrays are a very efficient way to access multiple objects of the same type, but they are difficult to shrink or grow.
Finally, a common technique for handling arrays as if they had variable length is to allocate a fixed number of cells initially and, when you need more memory, allocate double the space of the original (there are other approaches, like growing the array following a Fibonacci sequence), keeping track of both the array's size and its actual capacity. Only when the array is full do you call a function that allocates a new, larger array, adjusts the capacity, copies the elements across, and deallocates the old one. This works until the array fills up again.
You don't post any code, so it's hard to be more specific; if you have an issue with some particular code, don't hesitate to post it in your question and I'll try to provide a working solution.
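As a rough illustration of the growth-by-doubling idea described above, here is a minimal sketch (all names are hypothetical, not from any particular library):

#include <stdlib.h>

/* A hypothetical growable array of ints: size is the number of elements
 * in use, capacity is how many fit before a reallocation is needed.
 * Initialize with: IntVec v = {0}; */
typedef struct {
    int *data;
    size_t size;
    size_t capacity;
} IntVec;

/* Append one element, doubling the capacity when the array is full. */
int intvec_push(IntVec *v, int value) {
    if (v->size == v->capacity) {
        size_t newcap = v->capacity ? v->capacity * 2 : 8;
        int *tmp = realloc(v->data, newcap * sizeof *v->data);
        if (!tmp) return -1;          /* on failure the old block is still valid */
        v->data = tmp;
        v->capacity = newcap;
    }
    v->data[v->size++] = value;
    return 0;
}

On failure realloc returns NULL and leaves the old block untouched, which is why the result goes into a temporary before overwriting v->data.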
I am relatively new to C and have coded (or more precise: copied from here and adapted) the functions below. The first one takes a numpy array and converts it to a C int array:
int **pymatrix_to_CarrayptrsInt(PyArrayObject *arrayin) {
int **result, *array, *tmpResult;
int i, n, m, j;
n = arrayin->dimensions[0];
m = arrayin->dimensions[1];
result = ptrvectorInt(n, m);
array = (int *) arrayin->data; /* pointer to arrayin data as int */
for (i = 0; i < n; i++) {
result[i] = &array[i * m];
}
return result;
}
The second one is used within the first one to allocate the necessary memory of the row vectors:
int **ptrvectorInt(long dim1, long dim2) {
int **result, i;
result = malloc(dim1 * sizeof(int*));
for (i = 0; i < dim1; i++) {
if (!(result[i] = malloc(dim2 * sizeof(int)))){
printf("In **ptrvectorInt. Allocation of memory for int array failed.");
exit(0);
}
}
return result;
}
Up to this point everything works quite fine. Now I want to free the memory occupied by the C array. I have found multiple threads about how to do it, e.g. Allocate and free 2D array in C using void, C: Correctly freeing memory of a multi-dimensional array, or how to free c 2d array. Inspired by the respective answers I wrote my freeing function:
void free_CarrayptrsInt(int **ptr, int i) {
for (i -= 1; i >= 0; i--) {
free(ptr[i]);
}
free(ptr);
}
Nonetheless, I found out that already the first call to free fails, no matter whether I let the for loop count down or up.
I looked for explanations of failing free calls: Can a call to free() in C ever fail? and free up on malloc fails. This suggests that there may already have been a problem at allocation time. However, my program works completely as expected, except for the freeing of memory. Printing the array in question shows that everything should be fine. What could be the issue? And, even more important: how can I properly free the array?
I work on a Win8 64 bit machine with Visual Studio 10 64bit compiler. I use C together with python 3.4 64bit.
Thanks for all help!
pymatrix_to_CarrayptrsInt() calls ptrvectorInt() and this allocation is made
if (!(result[i] = malloc(dim2 * sizeof(int)))){
then pymatrix_to_CarrayptrsInt() writes over that allocation with this assignment
result[i] = &array[i * m];
causing a memory leak: the rows allocated inside ptrvectorInt become unreachable. Worse, each result[i] now points into the numpy data rather than at a block obtained from malloc, so passing it to free() later is invalid and fails.
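One possible fix, assuming the goal is an independent copy of the numpy data, is to keep the rows allocated by ptrvectorInt and copy into them instead of overwriting the pointers; a sketch (not tested against the Python C API):

#include <string.h>   /* for memcpy */

int **pymatrix_to_CarrayptrsInt(PyArrayObject *arrayin) {
    int **result, *array;
    int i, n, m;
    n = arrayin->dimensions[0];
    m = arrayin->dimensions[1];
    result = ptrvectorInt(n, m);          /* the rows allocated here stay in use */
    array = (int *) arrayin->data;        /* pointer to arrayin data as int */
    for (i = 0; i < n; i++) {
        /* copy the row instead of replacing the malloc'ed row pointer */
        memcpy(result[i], &array[i * m], m * sizeof(int));
    }
    return result;
}

With that change, free_CarrayptrsInt frees only memory that was actually obtained from malloc. Alternatively, if a copy is not needed, ptrvectorInt could allocate just the pointer vector, and the freeing function would then free only ptr itself.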
I wrote a C program that uses a matrix of double:
double y[LENGTH][4];
With LENGTH=200000 I have no problem.
I have to increase the number of rows to LENGTH=1000000, but when I enter this value and run the program it gives me a segmentation fault.
I tried to allocate more memory using malloc:
double **y = (double **)malloc(LENGTH * sizeof(double*));
for(int i = 0; i < LENGTH; i++){
y[i] = (double *)malloc(4 * sizeof(double));
}
I ran the code above, and after a few seconds of calculation it still gives me a "segmentation fault".
Could anyone help me?
If you want a dynamic allocated 2D array of the specified row-width, just do this:
double (*y)[4] = malloc(LENGTH * sizeof(*y));
There is no need to malloc each row in the matrix. A single malloc and free will suffice. Only if you need dynamic row width (each row can vary in width independent of others) or the column count is arbitrary should a nested malloc loop be considered. Neither appears to be your case here.
Notes:
- Don't cast malloc in C programs.
- Be sure to free(y); when finished with this little tryst.
The reason your fixed-size array is segfaulting with a million elements is (presumably) that it's being allocated on the stack. To give your program a larger stack, pass the appropriate switches to your compiler.
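For illustration, a minimal, self-contained sketch of the single-allocation approach (LENGTH picked to match the question):

#include <stdio.h>
#include <stdlib.h>

#define LENGTH 1000000

int main(void)
{
    /* one allocation for the whole LENGTH x 4 matrix */
    double (*y)[4] = malloc(LENGTH * sizeof *y);
    if (!y) {
        perror("malloc");
        return EXIT_FAILURE;
    }
    y[0][0] = 1.0;              /* indexed exactly like the original y[i][j] */
    y[LENGTH - 1][3] = 2.0;
    printf("%f %f\n", y[0][0], y[LENGTH - 1][3]);
    free(y);                    /* a single free releases the whole matrix */
    return 0;
}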
ProTip: You will experience less memory fragmentation and better performance if you flip your loop around, allocating
(double *)malloc(LENGTH * sizeof(double));
four times. This will require changing the order of your indices.
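A sketch of what that flipped layout might look like (assuming LENGTH and the usual includes are in scope; col is a name I made up):

/* One heap array per column, so element (i, j) of the original
 * y[LENGTH][4] becomes col[j][i]. */
double *col[4];
for (int j = 0; j < 4; j++) {
    col[j] = malloc(LENGTH * sizeof(double));
    if (!col[j]) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
}
/* ... use col[j][i] ..., then free each column: */
for (int j = 0; j < 4; j++)
    free(col[j]);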
I ran the code with this definition, and after a few seconds of calculation it still gives me a "segmentation fault".
If you're getting a segmentation fault after allocating the memory, you're writing outside of your memory bounds.
I ran this code:
#include <stdio.h>
#include <stdlib.h>
// We return the pointer
int **get(int N, int M) /* Allocate the array */
{
/* Check if allocation succeeded. (check for NULL pointer) */
int i, **table;
table = malloc(N*sizeof(int *));
for(i = 0 ; i < N ; i++)
table[i] = malloc( M*sizeof(int) );
return table;
}
void free2Darray(int** p, int N) {
int i;
for(i = 0 ; i < N ; i++)
free(p[i]);
free(p);
}
int main(void)
{
const int LENGTH = 1000000;
int **p;
p = get(LENGTH, 4);
printf("ok\n");
free2Darray(p ,LENGTH);
printf("exiting ok\n");
return 0;
}
and it executed normally.
I got the code from my pseudo-site.
You should not cast what malloc returns. Why?
Also notice that you need dynamic allocation only for the number of rows, since you know the number of columns. So you can modify the code yourself (so that you have some fun too. :) )
I hope you didn't forget to **free** your memory.
I started to learn C recently. I use Code::Blocks with MinGW and Cygwin GCC.
I made a very simple prime sieve for Project Euler problem 10, which prints the primes below a given limit to stdout. It works fine up to a limit of roughly 500000, but above that my MinGW-compiled .exe crashes and the GCC-compiled one throws a "STATUS_STACK_OVERFLOW" exception.
I'm puzzled as to why, since the code is totally non-recursive and consists of simple for loops.
#include <stdio.h>
#include <math.h>
#define LIMIT 550000
int main()
{
int sieve[LIMIT+1] = {0};
int i, n;
for (i = 2; i <= (int)floor(sqrt(LIMIT)); i++){
if (!sieve[i]){
printf("%d\n", i);
for (n = 2; n <= LIMIT/i; n++){
sieve[n*i] = 1;
}
}
}
for (i; i <= LIMIT; i++){
if (!sieve[i]){
printf("%d\n", i);
}
}
return 0;
}
It seems you cannot allocate 550000 ints on the stack; allocate them dynamically instead:
int *sieve = malloc(sizeof(int) * (LIMIT+1));
memset(sieve, 0, sizeof(int) * (LIMIT+1)); /* zero it, as the original = {0} initializer did */
When your memory chunk is bigger than the stack, your basic options for storing it are:
- allocate memory for the array on the heap with malloc (as @Binyamin explained)
- store the array in the data/BSS segment by declaring it as static int sieve[SIZE_MACRO]
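For the second option, the change to the question's code would be a single line; the array then lives in the BSS segment rather than on the stack:

static int sieve[LIMIT + 1] = {0};   /* zero-initialized, not on the stack */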
All the memory in that program is allocated on the stack. When you increase the size of the array, you increase the amount of space required on the stack. Eventually the function cannot be called because there isn't enough space on the stack to accommodate it.
Either experiment with mallocing the array (so it's allocated on the heap), or learn how to tell the compiler to allocate a larger stack.
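Putting that together, a heap-allocated version of the question's sieve might look like the sketch below (using calloc so the array starts out zeroed, as the original = {0} initializer did; this is one possible rewrite, not the only one):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define LIMIT 550000

int main(void)
{
    int i, n;
    /* heap allocation: calloc zero-fills the array */
    int *sieve = calloc(LIMIT + 1, sizeof(int));
    if (!sieve) {
        perror("calloc");
        return EXIT_FAILURE;
    }
    for (i = 2; i <= (int)floor(sqrt(LIMIT)); i++) {
        if (!sieve[i]) {
            for (n = 2; n <= LIMIT / i; n++)
                sieve[n * i] = 1;   /* mark multiples of i as composite */
        }
    }
    for (i = 2; i <= LIMIT; i++) {
        if (!sieve[i])
            printf("%d\n", i);      /* every unmarked index >= 2 is prime */
    }
    free(sieve);
    return 0;
}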