Max size to allocate to a matrix in C

I'm doing homework on matrix multiplication. The problem: I want to find the largest matrix that I can handle (allocate). So I wrote the following code:
int n = 1;
while (1) {
    n++;
    A = malloc(sizeof(double) * n * n);
    B = malloc(sizeof(double) * n * n);
    C = malloc(sizeof(double) * n * n);
    if (C == NULL) { printf("Error No more size to allocate! n=%d\n", n); return 1; }
    // and the rest of the code, including freeing the allocated space
}
the result:
Error No more size to allocate! n=21785
Now I want to use another method: using A to hold the result instead of C, so that I only need 2(n^2) + n doubles instead of 3(n^2). The new code should look like this:
int n = 1;
while (1) {
    n++;
    A = malloc(sizeof(double) * n * n);
    B = malloc(sizeof(double) * n * n);
    C = malloc(sizeof(double) * n);
    if (C == NULL) { printf("Error No more size to allocate! n=%d\n", n); return 1; }
    // and the rest of the code, including freeing the allocated space
}
The problem is that when I run this code it won't stop incrementing n; but if I change the condition from (C==NULL) to (A==NULL || B==NULL || C==NULL),
the result is:
Error No more size to allocate! n=21263
Any idea??
Edit
Should I cast the result of malloc?
PS: My professor tells us to always cast the result of malloc!!

Your program fails to allocate A or B long before it fails to allocate C, which is much smaller. With n being approximately 21263, n*n is 21263 times larger than n. The loop will continue for about 10000 repetitions. If you free C after successful allocation, the loop will even continue for a few hundred million repetitions until n reaches about 21263*21263. You just have to wait long enough for your program to exit.

There are a few things to note:
1) The main consideration is that memory, once allocated by malloc() and not freed, reduces the amount of heap available for later allocations, even as n goes up.
2) Any call to malloc() can fail, including the first one, which means the failure could come from any of the calls to malloc().
3) In C, the return value of malloc() (and family) should not be cast.
4) The return value of malloc() should always be checked, not just one call in three.
Regarding the posted code:
1) All of the above considerations should be implemented.
2) In the second example of posted code, just because a larger memory request (n*n) fails does not mean a smaller request (n) will also fail; that is why the modified condition catches the failure of A.
3) The limits on heap usage are imposed by the operating system at run time (on POSIX systems they can be queried with getrlimit()), so there is no need for the kind of probing code you posted.

You could try doing a single allocation, then assigning pointers for B and C as offsets from A. Rather than starting at a small value, start at a large value that should fail on the first iterations, so that when the allocation does succeed the heap will not be fragmented.
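A sketch of that single-allocation idea (alloc_three is a hypothetical helper name): one malloc backs all three n-by-n matrices, and B and C are just offsets into the same block:

```c
#include <stdlib.h>

/* Allocate one block holding three n-by-n double matrices. B and C
 * point into the same block, so a single free(A) releases everything. */
double *alloc_three(size_t n, double **B, double **C)
{
    double *A = malloc(3 * n * n * sizeof(double));
    if (A == NULL)
        return NULL;
    *B = A + n * n;     /* second matrix starts after the first  */
    *C = A + 2 * n * n; /* third matrix starts after the second  */
    return A;
}
```

With this layout a failed probe leaves nothing behind, and a successful one leaves the heap unfragmented.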


When using MPI, how can I fix the error: "_int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed"?

I'm using a scientific simulation code that my supervisor wrote about 10 years ago to run some calculations. An intermittent issue keeps arising when running it in parallel on our cluster (which has hyperthreading enabled) using mpirun. The error it produces is very terse, and simply tells me that an assertion inside malloc has failed.
program-name: malloc.c:4036: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed.
[MKlabgroup:3448077] *** Process received signal ***
[MKlabgroup:3448077] Signal: Aborted (6)
[MKlabgroup:3448077] Signal code: (-6)
I've used the advice here to start the program and halt it so that I can attach a debugger session to one of the instances on a single core. The error occurs during the partitioning of the input mesh (using metis) when a custom matrix function is called for the first time, and requests space for ~4000 rows and 4 columns, with each element being an 8 byte integer. This particular function (below) uses an array of n pointers, each addressing an array of m integers:
int **matrix_int(int n, int m)
{
    int i;
    int **mat;

    // First: allocate the rows [n]
    mat = (int **) malloc(n * sizeof(int *));
    if (!mat)
    {
        program_error("ERROR[0]: Memory allocation failure for the rows #[matrix_int()]");
    }

    // Second: allocate the columns [m]
    for (i = 0; i <= n-1; i++)
    {
        mat[i] = (int *) malloc(m * sizeof(int));
        if (!mat[i])
        {
            program_error("ERROR[0]: Memory allocation failure for the columns #[matrix_int()]");
        }
    }
    return mat;
}
My supervisor thinks that the issue has to do with automatic resource allocation on the CPU. As such, I've recently tried using the -rf option in mpirun in conjunction with a rankfile specifying which cores to use, but this has produced similarly intermittent results; sometimes a single process crashes, sometimes several, and sometimes it runs fine. It always runs reliably in serial, but the calculations are extremely slow on a single core.
Does anyone know of a change to the server configuration or the code itself that I can make (aside from globally disabling hyperthreading) that would allow this to run for certain every time?
(Any general tips on debugging in parallel would also be greatly appreciated! I'm still pretty new to C/C++ and MPI, and have another bug to chase after this one which is probably related.)
After using the compiler flags suggested by n. 1.8e9-where's-my-share m. to diagnose memory access violations I've discovered that the memory corruption is indeed caused by a function that is called just before the one in my original question.
The offending function reads data from a text file using sscanf, allocating a 3-element array for each line of the file (for the 3 numbers to be read in per line). The next part is conjecture, but I think the problem arose because the parse writes a terminating null byte at the end of the sequence it reads (note that sscanf itself returns an int, the count of successful conversions, not a NULL). I'm surmising that this stray byte was written just past the 3 elements allocated, corrupting the bookkeeping data that malloc keeps adjacent to each block, so that a later call to malloc tripped over the corrupted metadata and failed its internal assertion. The next function to use that memory would then crash, because it was accessing memory that malloc had never actually set up correctly.
I was able to fix the bug by changing the size of the allocated array in the read function from 3 to 4 elements. This gives the stray null byte somewhere to land without interfering with subsequent memory allocations, though it masks the overflow rather than removing it.
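For reference, a hedged sketch of how such a reader can avoid the overflow entirely (read_triple is a hypothetical name, not the function from the simulation code): allocate exactly the elements needed and check sscanf's int return value, the count of successful conversions:

```c
#include <stdio.h>
#include <stdlib.h>

/* Parse three integers from one line into a freshly allocated array.
 * Returns NULL on allocation or parse failure. %d writes ints only,
 * so no extra trailing element is needed. */
int *read_triple(const char *line)
{
    int *v = malloc(3 * sizeof(int));
    if (v == NULL)
        return NULL;
    if (sscanf(line, "%d %d %d", &v[0], &v[1], &v[2]) != 3) {
        free(v);
        return NULL;
    }
    return v;
}
```

Checking the conversion count also catches malformed lines instead of silently leaving elements uninitialized.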

Random numbers using malloc function

Why aren't the numbers after the first one on each line of the matrix random numbers? Why are they zeros? For example, if I print a line of this matrix I get 4 0 0 0 0, but I expected the numbers after 4 to be random instead.
void readfile(FILE *input, int **matrix){
    int i = 0, num;
    while (fscanf(input, "%d ", &num) == 1) {
        matrix[i] = malloc((num + 1) * sizeof(int));
        matrix[i][0] = num;
        i++;
    }
}
Why aren't the numbers after the first one of each line of the matrix random numbers?
Why should they be?
Yes, malloc returns a newly allocated block of uninitialized memory, but nobody said that it had to be random.
Indeed, at process start you will typically get blank pages, zeroed out by the operating system before being handed to your process (the OS cannot recycle pages from other processes without blanking them, for security reasons). Later on you are more likely to get back pages that your own process has freed, generally still containing old data from your own program.
All this is strictly non-contractual, and is often violated; for example, so-called "debug heaps" generally fill newly allocated memory with a known pattern (e.g. 0xCDCDCDCD in Visual C++ debug builds) to make uses of uninitialized memory easy to spot.
Long story short: don't make any kind of assumption about the content of memory provided by malloc.
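If the trailing entries are actually wanted as zeros, ask for that explicitly. A sketch of the allocation from the question using calloc, which does guarantee zero-filled memory (make_row is a hypothetical helper name):

```c
#include <stdlib.h>

/* Allocate a row of num+1 ints: element 0 holds num, and the rest
 * are guaranteed to be 0 because calloc zero-fills, unlike malloc. */
int *make_row(int num)
{
    int *row = calloc((size_t)num + 1, sizeof(int));
    if (row != NULL)
        row[0] = num;
    return row;
}
```

With calloc the zeros are contractual, instead of an accident of fresh pages.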

Array & segmentation fault

I'm creating the below array:
int p[100];

int main(void)
{
    int i = 0;
    while (1)
    {
        p[i] = 148;
        i++;
    }
    return 0;
}
The program aborts with a segmentation fault after writing about 1000 positions of the array, instead of 100. I know that C doesn't check whether the program writes out of bounds; this is left to the OS. I'm running it on Ubuntu, and the stack size is 8 MB (ulimit -s). Why does it abort only after 1000? How can I check how much memory the OS actually gives my array?
Sorry if it's been asked before, I've been googling this but can't seem to find a specific explanation for this.
Accessing an invalid memory location leads to Undefined Behavior which means that anything can happen. It is not necessary for a segmentation-fault to occur.
...the size of the stack is 8MB (limit -s)...
The variable int p[100]; is not on the stack but in the data area, because it is defined at file scope (global). Since it is not explicitly initialized, it is placed in the BSS segment and filled with zeros. You can check this by printing the array values right at the beginning of main().
As others have said, p[i] = 148; with i out of range produces undefined behaviour. After filling about 1000 positions you most probably ran off the end of the writable region holding the BSS and got a segmentation fault.
You clearly run past the 100 elements defined (int p[100];), since the loop has no bound (while (1)).
I would suggest to you to use a for loop instead:
for (i = 0; i < 100; i++) {
// do your stuff...
}
Regarding your more specific question about the memory: any out-of-range access (in your situation, anything past the 100 elements of the array) can produce an error. The fact that it happened at about 1000 in your case can change depending on what else occupies memory around the array.
It will fail once the CPU says
HEY! that's not Your memory, leave it!
The fact that the memory is not inside the array does not mean that it is not the application's to manipulate, which is why the write does not fail immediately.
The program aborts with a segmentation fault after writing 1000 positions of the array, instead of the 100.
You cannot reason about Undefined Behavior. It's like asking: if 1000 people are under a coconut tree, will 700 of them always fall unconscious when a coconut smacks each of their heads?

two mallocs returning same pointer value

I'm filling a structure with data from a line; the line can take 3 different forms:
1.- "LD " (just one word)
2.- "LD A " (just 2 words)
3.- "LD A,B " (the second word separated by a comma).
The structure, called instruccion, has just the 3 pointers to point at each part (mnemo, op1 and op2), but when allocating memory for the second word, malloc sometimes returns the same value it gave for the first word. Here is the code, with the relevant mallocs marked by arrows:
instruccion sepInst(char *linea){
    instruccion nueva;
    char *et;
    while (linea[strlen(linea)-1]==32 || linea[strlen(linea)-1]==9) // Eliminating spaces and tabs at the end of the line
        linea[strlen(linea)-1]=0;
    et=nextET(linea); // Save the address of the next space or tab
    if (*et==0){ // If there is none, save everything in mnemo
        nueva.mnemo=malloc(strlen(linea)+1);
        strcpy(nueva.mnemo,linea);
        nueva.op1=malloc(2);
        nueva.op1[0]='k'; nueva.op1[1]=0; // And set a "k" for op1
        nueva.op2=NULL;
        return nueva;
    }
    nueva.mnemo=malloc(et-linea+1);                    <-----------------------------------
    strncpy(nueva.mnemo,linea,et-linea);
    nueva.mnemo[et-linea]=0; printf("\nj%xj",nueva.mnemo);
    linea=et;
    while (*linea==9 || *linea==32) // Move pointer to the second word
        linea++;
    if (strchr(linea,',')==NULL){ // Check if there is a comma
        nueva.op1=malloc(strlen(linea)+1); // Do this if there wasn't any comma
        strcpy(nueva.op1,linea);
        nueva.op2=NULL;
    }
    else{ // Do this if there was a comma
        nueva.op1=malloc(strchr(linea,',')-linea+1);   <----------------------------------
        strncpy(nueva.op1,linea,strchr(linea,',')-linea);
        nueva.op1[strchr(linea,',')-linea]=0;
        linea=strchr(linea,',')+1;
        nueva.op2=malloc(strlen(linea)+1);
        strcpy(nueva.op2,linea); printf("\n2j%xj2",nueva.op2);
    }
    return nueva;
}
When I print the pointers, they sometimes turn out to be the same number.
Note: the function char *nextET(char *line) returns the address of the first space or tab in the line; if there is none, it returns the address of the end of the line.
sepInst() is called several times in the program, and only after it has been called several times does it start failing. These mallocs across my program are giving me such a headache.
There are two main possibilities.
Either you are freeing the memory somewhere else in your program (search for calls to free or realloc). In that case the effect you see is completely benign: free returns a block to the allocator, and the allocator may hand the same address out again on a later malloc.
Or, you might be suffering from memory corruption, most likely a buffer overflow. The short-term cure is to use a specialized tool (a memory debugger); pick one that is available on your platform. The tool will require recompilation (relinking) and will eventually tell you exactly where your code steps beyond previously defined buffer limits. There may be multiple offending code locations. Treat each one as a serious defect.
Once you get tired of this kind of research, learn to use the const qualifier and use it with all variable/parameter declarations where you can do it cleanly. This cannot completely prevent buffer overflows, but it will restrict them to variables intended to be writable buffers (which, for example, those involved in your question apparently are not).
On a side note, personally, I think you should work harder to call malloc less. It's good for performance, and it also gives memory corruption fewer places to happen.
nueva.mnemo=malloc(strlen(linea)+1);
strcpy(nueva.mnemo,linea);
nueva.op1=malloc(2);
should be
// strlen has to traverse your string to get the length,
// so if you need it more than once, save its value.
size_t cbLineA = strlen(linea);
// malloc once for the string, plus the 2 bytes you need for op1.
nueva.mnemo = malloc(cbLineA + 3);
// strcpy would scan for \0 again, so use memcpy.
memcpy(nueva.mnemo, linea, cbLineA);
nueva.mnemo[cbLineA] = 0;
// Here we avoid a second malloc by pointing op1 at the space
// we left after the copy of linea.
nueva.op1 = nueva.mnemo + cbLineA + 1;
Whenever you can reduce the number of mallocs by pre-calculation... do it. You are using C! This is not some higher-level language that abuses the heap or does garbage collection!

Is there any hard-wired limit on recursion depth in C

The program under discussion attempts to compute sum-of-first-n-natural-numbers using recursion. I know this can be done using a simple formula n*(n+1)/2 but the idea here is to use recursion.
The program is as follows:
#include <stdio.h>

unsigned long int add(unsigned long int n)
{
    return (n == 0) ? 0 : n + add(n - 1);
}

int main()
{
    printf("result : %lu \n", add(1000000));
    return 0;
}
The program worked well for n = 100,000, but when the value of n was increased to 1,000,000 it resulted in a Segmentation fault (core dumped).
The following was taken from the gdb message.
Program received signal SIGSEGV, Segmentation fault.
0x00000000004004cc in add (n=Cannot access memory at address 0x7fffff7feff8
) at k.c:4
My question(s):
Is there any hard-wired limit on recursion depth in C, or does the recursion depth depend on the available stack memory?
What are the possible reasons why a program would receive a SIGSEGV signal?
Generally the limit will be the size of the stack. Each time you call a function, a certain amount of stack is used (usually an amount dependent on the function). This amount is the stack frame, and it is recovered when the function returns. The stack size is almost always fixed when the program starts, either specified by the operating system (and often adjustable there), or even hardcoded in the program.
Some implementations may have a technique where they can allocate new stack segments at run time. But in general, they don't.
Some functions will consume stack in less predictable ways, such as when they allocate a variable-length array there.
Some functions may be compiled to use tail calls in a way that preserves stack space. Sometimes you can rewrite your function so that all calls (such as the call to itself) happen as the last thing it does, and expect your compiler to optimise it.
It's not that easy to see exactly how much stack space is needed for each call to a function, and it will be subject to the optimisation level of the compiler. A cheap way to measure it in your case is to print &n each time the function is called; n will likely be on the stack (especially since the program needs to take its address; otherwise it could be in a register), and the distance between successive locations of it will indicate the size of the stack frame.
1) Stack consumption can be reduced by rewriting the function in tail-recursive form, so the compiler can apply tail-call optimization:
gcc -O3 prog.c
#include <stdio.h>

unsigned long long int add(unsigned long int n, unsigned long long int sum){
    return (n == 0) ? sum : add(n-1, n+sum); // tail recursion form
}

int main(){
    printf("result : %llu \n", add(1000000, 0)); // OK
    return 0;
}
There is no theoretical limit to recursion depth in C. The only limits are those of your implementation, generally limited stack space.
(Note that the C standard doesn't actually require a stack-based implementation. I don't know of any real-world implementations that aren't stack based, but keep that in mind.)
A SIGSEGV can be caused by any number of things, but exceeding your stack limit is a relatively common one. Dereferencing a bad pointer is another.
The C standard does not define a minimum supported depth for function calls. If it did (which would be quite hard to guarantee anyway), it would be mentioned somewhere in section 5.2.4 Environmental limits.
