calloc v/s malloc and time efficiency - c

I've read with interest the post C difference between malloc and calloc. I'm using malloc in my code and would like to know what difference I'll have using calloc instead.
My present (pseudo)code with malloc:
Scenario 1
int main()
{
allocate large arrays with malloc
INITIALIZE ALL ARRAY ELEMENTS TO ZERO
for loop //say 1000 times
do something and write results to arrays
end for loop
FREE ARRAYS with free command
} //end main
If I use calloc instead of malloc, then I'll have:
Scenario 2
int main()
{
for loop //say 1000 times
ALLOCATION OF ARRAYS WITH CALLOC
do something and write results to arrays
FREE ARRAYS with free command
end for loop
} //end main
I have three questions:
Which of the scenarios is more memory efficient if the arrays are very large?
Which of the scenarios will be more time efficient if the arrays are very large?
In both scenarios, I'm just writing to the arrays, in the sense that for any given iteration of the for loop I write each array sequentially from the first element to the last. The important question: if I'm using malloc as in Scenario 1, is it necessary to initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. I write the elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on. Do I specifically need to initialize my arrays after malloc?
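For reference, here is one way the two scenarios might look in actual C. The element type, array size, and the work inside the loop are placeholders I'm assuming, not anything taken from the original code.

#include <stdlib.h>
#include <string.h>

#define N     (1u << 20)   /* hypothetical array size */
#define ITERS 1000

/* Scenario 1: allocate once, zero once, reuse the array across iterations. */
static void scenario1(void)
{
    double *z = malloc(N * sizeof *z);
    if (!z) return;
    memset(z, 0, N * sizeof *z);              /* the explicit zeroing step */
    for (int it = 0; it < ITERS; ++it)
        for (size_t i = 0; i < N; ++i)
            z[i] = (double)i * it;            /* placeholder for the real work */
    free(z);
}

/* Scenario 2: allocate (and zero) a fresh array inside every iteration. */
static void scenario2(void)
{
    for (int it = 0; it < ITERS; ++it) {
        double *z = calloc(N, sizeof *z);     /* calloc does the zeroing */
        if (!z) return;
        for (size_t i = 0; i < N; ++i)
            z[i] = (double)i * it;
        free(z);
    }
}

int main(void)
{
    scenario1();
    scenario2();
    return 0;
}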

Assuming the total amount of memory being initialized in your two examples is the same, allocating the memory with calloc() might be faster than allocating it with malloc() and then zeroing it out in a separate step, especially if in the malloc() case you zero the elements individually by iterating over them in a loop. A malloc() followed by a memset() will likely be about as fast as calloc().
If you do not care that the array elements are garbage before you actually store the computation results in them, there is no need to initialize your arrays after malloc().

For 1 and 2, both do the same thing: allocate and zero, then use the arrays.
For 3, if you don't need to zero the arrays first, then zeroing is unnecessary and not doing it is faster.
There is a possibility that calloc's zeroing is more efficient than the code you write, but this difference will be small compared to the rest of the work the program does. The real savings of calloc is not having to write that code yourself.

Your point stated in 3. seems to indicate a case of unnecessary initialization. That is pretty bad speed-wise: not only is the time spent doing it wasted, but it also causes a whole lot of cache eviction.
Doing a memset() or bzero() (which is effectively what calloc() does anyway) is a good way to invalidate a huge portion of your cache. Don't do it unless you are sure you won't overwrite everything but may read parts of the buffer that have not been written (and 0 is an acceptable default value). If you write over everything anyway, by all means don't initialize your memory unnecessarily.
Unnecessary memory writes will not only ruin your app's performance but also the performance of all applications sharing the same CPU.

The calloc and memset approaches should be about the same, and maybe slightly faster than zeroing it yourself.
Regardless, it's all relative to what you do inside your main loop, which could be orders of magnitude larger.

malloc is faster than calloc. The reason is that malloc returns memory from the operating system as it is, whereas calloc gets memory from the kernel or operating system, initializes it to zero, and only then returns it to you.
That initialization takes time, which is why malloc is faster than calloc.

I don't know about Linux. But on Windows there is something called the zero-page thread... calloc uses those pages, which are already initialized to zero. There is no difference in speed between malloc and calloc.
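If you want to check this on your own machine, a rough timing sketch along these lines can help. The 256 MB buffer size is an arbitrary assumption, and note that a large calloc can appear almost free here because many systems defer the zeroing until the pages are first written.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define NBYTES ((size_t)256 * 1024 * 1024)   /* arbitrary 256 MB buffer */

int main(void)
{
    clock_t t0 = clock();
    char *a = malloc(NBYTES);
    if (!a) return 1;
    memset(a, 0, NBYTES);          /* explicit zeroing touches every page */
    clock_t t1 = clock();

    char *b = calloc(NBYTES, 1);   /* may hand back lazily mapped zero pages */
    if (!b) return 1;
    clock_t t2 = clock();

    printf("malloc+memset: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("calloc:        %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

    /* calloc can look almost free here because the OS may only zero pages
       when they are first written; touch b as well before drawing conclusions. */
    free(a);
    free(b);
    return 0;
}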

malloc differs from calloc in two ways:
malloc takes one argument (the number of bytes), whereas calloc takes two arguments (the number of elements and the size of each element).
malloc returns the memory uninitialized, whereas calloc zero-fills every byte before returning it.
I think that is why malloc is faster compared to calloc.

Related

Dynamically allocating multiple big arrays in C

I'm writing a program in C on Windows that launches 30 threads, each of which needs an array of int16_t.
The size is calculated before the thread function is called, and in the example I'm working with it's around 250 million elements. That is around 15 GB in total, which should not be a problem because I have 128 GB of RAM available.
I've tried using both malloc and calloc inside the thread function, but over half of the allocations return NULL with errno set to 12 (ENOMEM).
With a small number of threads (up to 3) it works fine though, and the same is true if I use just one thread and allocate an unreasonably big array.
My next attempt was to create an array of pointers in main, allocate the arrays there, and pass them as arguments to the threads; the same thing happened.
So from these results my best guess is that it can't allocate contiguous blocks of memory of that size, so I also tried allocating many smaller arrays, which didn't work either. Is this expected behaviour or am I doing something wrong?
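As a sketch of the allocate-in-main approach described above (the thread count and element count come from the question; everything else is assumed), the allocation and failure check might look like the following. One common cause of this symptom, worth ruling out, is building the program as a 32-bit process, which limits the whole address space to a few GB no matter how much RAM is installed.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 30
#define NELEMS   250000000u          /* ~250 million int16_t per thread */

int main(void)
{
    int16_t *buf[NTHREADS] = {0};

    for (int i = 0; i < NTHREADS; ++i) {
        buf[i] = malloc((size_t)NELEMS * sizeof **buf);
        if (buf[i] == NULL) {
            /* ENOMEM here usually means the process address space, not the
               machine's RAM, is exhausted (e.g. a 32-bit build). */
            fprintf(stderr, "allocation %d failed: errno=%d\n", i, errno);
            break;
        }
        /* the pointer would then be passed to the worker thread */
    }

    for (int i = 0; i < NTHREADS; ++i)
        free(buf[i]);
    return 0;
}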

Result of using millions of malloc()s and free()s in your C code?

I was recently asked this question in an interview.
Suppose there is a large library of C programs and each program constantly malloc()s and free()s blocks of data. What do you think will happen if there are a million calls to malloc() and free() in one run of your program? What would you add to your answer if you were given a very large heap?
One thing that may happen is that your memory will become fragmented, especially if you allocate blocks of different sizes.
Thus, if your memory is not large, some malloc calls may fail even if the total free memory is bigger than what was requested.
This is really a stupid question without more qualifiers. Suppose you do
for (;;)
{
    free(malloc(SOMEVALUE));
}
In that case very little is going to happen.
Let's assume that the mallocs and frees occur in a random order. If you have a malloc implementation that uses fixed-size blocks, you are going to get a different result than if you use one with variable-size blocks (which is where memory fragmentation comes in).
The result you get is entirely dependent upon the malloc implementation and the sequence of the calls to malloc and free.
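As a toy illustration of the sequence-dependence described above (sizes and counts are arbitrary assumptions), the pattern below leaves the heap riddled with small free holes fenced in by live blocks. Whether the later large request still succeeds depends entirely on the allocator and the virtual-memory system, so treat this as a picture of the pattern, not a guaranteed failure.

#include <stdio.h>
#include <stdlib.h>

#define PAIRS 10000

int main(void)
{
    void *small[PAIRS];
    void *big[PAIRS];

    /* Interleave small and large allocations... */
    for (int i = 0; i < PAIRS; ++i) {
        small[i] = malloc(32);
        big[i] = malloc(4096);
        if (!small[i] || !big[i]) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
    }

    /* ...then free only the small ones.  Each 32-byte hole is surrounded by a
       live 4096-byte block, so a simple allocator cannot coalesce them. */
    for (int i = 0; i < PAIRS; ++i)
        free(small[i]);

    /* Whether this succeeds depends on the allocator's strategy. */
    void *later = malloc(1 << 20);
    printf("1 MB request after fragmenting: %s\n", later ? "succeeded" : "failed");

    free(later);
    for (int i = 0; i < PAIRS; ++i)
        free(big[i]);
    return 0;
}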

Initializing to zero after malloc or calling calloc

I am a little confused about using calloc instead of malloc. I remember reading somewhere that calloc is slower than malloc because calloc performs zero-initialization after allocating the memory.
In the project I am working on, I see that after malloc they assign zero to all the values, as shown below:
str* strptr = (str*)malloc(sizeof(str));
memset(strptr,0,sizeof(str));
Here str is a structure.
This is similar to
str* strptr = (str*)calloc(1, sizeof(str));
I want to know whether using malloc over calloc has any advantages and which method is preferred.
I want to know whether using malloc over calloc has any advantages
The differences between them are just
calloc also takes object count as opposed to malloc which only takes byte count
calloc zeros memory; malloc leaves the memory uninitialized
So no exceptional advantages except for the zeroing part.
which method is preferred.
Why not use malloc the way it's used in the code base you're looking at? To avoid duplicating work and code: when an API already does something, why reinvent the wheel? You will sometimes see code bases with a utility function that does just that: allocate and zero memory. That shows the snippet is needed in many places, so they wrap it in a macro/function to call from different places. But why do that when calloc already does it?
The best code is no code at all. Less code is better, and thus you should prefer calloc over malloc here. Maybe the optimizer would do the same thing underneath, but why take the chance? Apparently the optimizer may not be that smart, which is the reason for this question: Why malloc+memset is slower than calloc?
Also, the calloc route requires fewer keystrokes.
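For illustration, a utility wrapper of the kind mentioned above might look like the following. The name xzalloc is hypothetical, not something from the code base in question, and with calloc available there is usually no reason to write it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical allocate-and-zero helper, the kind of wrapper some code
   bases carry around. */
static void *xzalloc(size_t size)
{
    void *p = malloc(size);
    if (p != NULL)
        memset(p, 0, size);
    return p;
}

int main(void)
{
    double *arr = xzalloc(10 * sizeof *arr);   /* ten zeroed doubles */
    if (arr)
        printf("arr[3] == %f\n", arr[3]);      /* prints 0.000000 */
    free(arr);

    /* The same effect, with less code, using the standard library directly:
       str *strptr = calloc(1, sizeof *strptr); */
    return 0;
}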

Segfault while statically allocating an int array

If I do this
int wsIdx[length];
I get a segfault,
but if I do this
int *wsIdx;
wsIdx = (int *)malloc(sizeof(int) * length );
there's no problem.
This problem appears only when length is large, 2560000 in my tests. I have plenty of memory. Could you explain the difference between the two allocation methods, and why the first one does not work? Thank you.
The first one gets allocated on the "stack" (an area usually used for local variables), while the second one gets allocated on the "heap" (an area for dynamically allocated memory).
You don't have enough stack space for the first kind of allocation; your heap, on the other hand, is large.
This SO discussion might be helpful: What and where are the stack and heap?.
When you allocate memory dynamically, you can always check whether the allocation succeeded or failed by examining the return value of malloc/calloc/etc. Unfortunately, no such mechanism exists for memory allocated on the stack.
Assuming length is not a constant, then the first form is a variable-length array (VLA), and you have just encountered one of their biggest problems.
Best practice is to avoid VLAs for "large" arrays and use malloc instead (a minimal sketch follows this list), for two reasons:
There is no mechanism for them to report allocation failure, other than to crash or cause some other undefined behaviour.
VLAs are typically allocated on the stack, which is typically relatively limited in size. So the chance of it failing to allocate is much higher!
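A minimal sketch of the malloc route, with the failure check that a VLA cannot give you (the length is the one from the question; the error handling is just one reasonable choice):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t length = 2560000;                  /* the size from the question */

    int *wsIdx = malloc(length * sizeof *wsIdx);
    if (wsIdx == NULL) {
        /* With a VLA this situation has no recovery path; here we can report it. */
        fprintf(stderr, "could not allocate %zu ints\n", length);
        return 1;
    }

    /* ... use wsIdx ... */

    free(wsIdx);
    return 0;
}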
implied auto considered harmful
To concur with the answers already given, if the faulting code had been written with an explicit storage class, this common problem might be more obvious.
#include <stdio.h>

void
not_enough_stack(void)
{
    /* explicit `auto`: this roughly 10 MB array lives on the stack */
    auto int on_stack[2560 * 1000];
    printf("sizeof(stack) %zu\n", sizeof(on_stack));
}

When should I use calloc over malloc

This is from Beej's guide to C
"The drawback to using calloc() is that it takes time to clear memory, and in most cases, you don't need it clear since you'll just be writing over it anyway. But if you ever find yourself malloc()ing a block and then setting the memory to zero right after, you can use calloc() to do that in one call."
So what is a potential scenario where I would want to clear memory to zero?
When the function you are passing a buffer to states in its documentation that a buffer must be zero-filled. You may also always zero out the memory for safety; it doesn't actually take that much time unless the buffers are really huge. Memory allocation itself is the potentially expensive part of the operation.
One scenario is where you are allocating an array of integers (say, as accumulators or counter variables) and you want each element in the array to start at 0.
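For instance, a minimal sketch of that counter case (the byte-frequency tally is just an assumed example):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* One counter per possible byte value, all starting at zero. */
    long *count = calloc(256, sizeof *count);
    if (count == NULL)
        return 1;

    const char *text = "some sample input";
    for (const char *p = text; *p != '\0'; ++p)
        count[(unsigned char)*p]++;           /* tally byte frequencies */

    printf("'s' occurred %ld times\n", count['s']);
    free(count);
    return 0;
}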
One case is where you allocate memory for a structure and some member of that structure may later be evaluated in an expression or conditional statement without ever having been initialized; that is undefined behaviour. To protect against this, you can either:
1> malloc that structure and memset it with 0 before using it,
or
2> calloc that structure.
Note: some memory-management implementations also hand back zeroed memory from malloc, but you should not rely on that.
There are lots of times when you might want memory zeroed!
Some examples:
Allocating memory to contain a structure, where you want all the members initialised to zero
Allocating memory for an array of chars which you are later going to write some number of chars into, and then treat as a NULL-terminated string
Allocating memory for an array of pointers which you want initialised to NULL
If all allocated memory is zero-filled, the program's behavior is much more reproducible (it is more likely to behave the same way when you re-run it). This is why I don't use uninitialized malloc'ed memory.
(for similar reasons, when debugging a C or C++ program on Linux, I usually do echo 0 > /proc/sys/kernel/randomize_va_space so that mmap behavior is more reproducible).
And if your program does not allocate huge blocks (e.g. dozens of megabytes), the time spent inside malloc is much bigger than the time needed to zero the memory.
