Why is sizeof() used in the argument to malloc()? [duplicate] - c

This question already has answers here:
Is the sizeof operator needed for malloc?
(5 answers)
Closed 8 months ago.
This is what the book showed me:
int *list = malloc(3 * sizeof(int));
But what's wrong with this?
int *list = malloc(3);
My understanding is that malloc accepts a size in bytes as its parameter, and my goal is only to let the pointer list hold 3 values. So why do I need to include sizeof(int) when using malloc?

Let's look at this line of code:
int *list = malloc(3 * sizeof(int));
It creates a pointer to an int and allocates three times the size of an int worth of memory for it. So we have enough room to store three ints in that block of memory.
I'll assume one int takes up four bytes of memory. So to store three of them, we need 12 bytes of memory (four bytes per int times three ints). malloc allocates space in bytes, so that will give you 12 bytes of memory (sizeof(int) will return four as there are four bytes per int*).
Now let's look at the other version:
int *list = malloc(3);
You allocate three bytes of memory. Sadly, one int is four bytes... so you have enough space for 3/4 of an int (again, assuming one int is four bytes). If you wanted to store three ints, you need to allocate memory equal to three times however big an int is, hence 3 * sizeof(int).
*Technically, there are platforms where an int isn't four bytes. So it is better to write sizeof(int) instead of 4. Don't worry about that for now, though.

The parameter to malloc() is the number of bytes you want to allocate.
int* list = malloc(3) will only allocate 3 bytes.
Each integer on a 32-bit platform is 32 bits (4 bytes); this size can be obtained with sizeof(int).
An array of 3 integers will take 3*sizeof(int) bytes, which is 12 bytes on a 32-bit platform.
So
int* list = malloc(3*sizeof(int));
will allocate the space for 3 integers, which is 12 bytes on a 32-bit platform


What is the difference between the following two statements? ptr = malloc (400); ptr = malloc (100 * sizeof(int))

What is the difference between the following two statements?
ptr = malloc (400);
and
ptr = malloc (100 * sizeof(int))
How does it work? Is there any difference between them?
(From a comment: the type of ptr is int.)
It depends on your architecture and compiler. On most 64-bit platforms an int is still 4 bytes, just as on 32-bit; it is pointers that grow to 8 bytes. But this is not a rule: the C standard does not fix these sizes, so the compiler you target decides.
So the difference is that the memory allocated by malloc (100 * sizeof(int)) might vary depending on the architecture you are targeting and what the compiler decides, while malloc (400) always requests exactly 400 bytes.
The same is true for the size of your pointers.
No, there should not be any difference. malloc allocates a contiguous chunk of memory in bytes at runtime, so as long as an int takes 4 bytes, the two calls request the same amount.
Also, since you did not mention the type of the pointer you are using, the two forms by themselves make no difference. Suppose you wanted an array of ints; then here is an example:
int *ptr;
ptr = malloc(100 * sizeof *ptr);
The first form allocates 400 bytes.
The second form allocates enough space for 100 int objects.
These are not necessarily the same thing, as the size of an int can be 2, 4, or more bytes wide depending on the platform.
If you need to set aside space for N objects of a given type, use the following idiom:
T *ptr = malloc( N * sizeof *ptr ); // for any object type T
sizeof *ptr is equivalent to sizeof (T).

Why is malloc giving me 8 bytes when I request 20? [duplicate]

This question already has answers here:
Newbie questions about malloc and sizeof
(8 answers)
Closed 7 years ago.
I've just been playing around in C for the first time, and I'm at a loss as to why malloc is not giving me the amount of memory that I'd expect it to. The following code:
printf("Allocating %ld bytes of memory\n", 5*sizeof(int));
int *array = (int *) malloc(5*sizeof(int));
printf("%ld bytes of memory allocated\n", sizeof(array));
results in:
Allocating 20 bytes of memory
8 bytes of memory allocated
I've checked that I'm indeed calling malloc to give me 20 bytes, but don't understand why after calling malloc, the pointer only has 8 bytes.
array is not an array but an int *. So its size will always be the size of the pointer.
The sizeof operator does not tell you how much memory was dynamically allocated at a pointer.
If on the other hand you had this:
int array2[5];
Then sizeof(array2) would be 20, assuming an int is 4 bytes.
The sizeof operator tells you the size of its operand. array has type int * (pointer to int), which occupies eight bytes on your platform. The sizeof operator cannot find out how long the array array points to actually is. What it returns says nothing about how much memory has been allocated.
The malloc() function either fails (in which case it returns NULL) or succeeds, in which case it returns a pointer to a memory region at least as large as requested.

Malloc, and size on 32 bit machines

#include<stdio.h>
#include<stdlib.h>
int main()
{
int *p;
p = (int *)malloc(20);
printf("%d\n", sizeof(p));
free(p);
return 0;
}
On my 64-bit machine, 4 is printed as the size of p. I'm assuming this is because integers take up 4 bytes in memory, and p is an integer pointer. What if I was running a 32-bit machine? Also, what would happen if I replaced
int *p with double *p
and
(int *)malloc(20) with (double *)malloc(20)?
Your assumption is wrong.
In printf("%d\n", sizeof(p)); you are not printing the size of an integer. You are printing the size of a pointer to an integer, which is 4 bytes in your case. You would most probably get the same result on a 32-bit machine.
Now about malloc: it allocates the requested number of bytes and returns a pointer to them. So the same amount of memory is allocated even if you cast the pointer from int* to double*.
On such a platform, every object pointer takes four bytes. So when you check with sizeof, even a character pointer gives four bytes. To print a pointer's size, use:
printf("%zu\n", sizeof(p)); // prints the size of the pointer
malloc allocates the given number of bytes, whatever the pointer type. Also, don't cast the result of malloc.
You have some misunderstandings; let me point them out.
p = (int *)malloc(20);
You are allocating 20 bytes of memory. malloc returns a void pointer, and in C the conversion to int * happens implicitly, so the (int *) cast is not needed. Whether p points to a double or an int, the pointer itself takes the same number of bytes (on a 32-bit system, 4 bytes is enough to address the whole 4 GB memory space).
// Should this be 4 or 8?
printf("%d\n", sizeof(p));
This should be 8 on an x64 platform if your executable is built as 64-bit. I assume your build is 32-bit, which is why it returns 4.
The printf above also uses the wrong conversion specifier: sizeof returns a size_t, not an int. So the correct form is:
printf("%zu\n", sizeof(p));
Note that the pointer size follows the build target, not the host machine: a 32-bit build keeps 4-byte pointers even when it runs on a 64-bit system.
If you specifically build for 64-bit, then the pointer variable occupies 8 bytes.
I'm assuming this is because integers take up 4 bytes in memory, and p is an integer pointer.
Your assumption is not correct! To answer your doubt: it is not only integer pointers that take up 4 bytes.
int *ptrint;
char *ptrchar;
float *ptrfloat;
double *ptrdouble;
Here ptrint, ptrchar, ptrfloat, and ptrdouble all take 4 bytes of memory on a 32-bit build, since each variable stores an address rather than the pointed-to object.
And if you replace int *p with double *p and (int *)malloc(20) with (double *)malloc(20), the size would still be 4 bytes. I hope this clears up your doubts.

How does alignment work with pointers to zero-sized arrays?

I was investigating some code I saw that deals with 0-size arrays. Specifically, they are used in the case of dynamically allocated size of a struct such as in https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
Specifically, this is the code in Callgrind:
struct _EventGroup {
int size;
const char* name[0];
};
I was playing around with this, using gcc 4.1.2 on a 64-bit Red Hat, and noticed that sizeof returns a size of 8 bytes for the entire struct _EventGroup when the name field is a zero-size array of pointers. Now if I change it to:
struct _EventGroup {
int size;
const char name[0];
};
I get 4 bytes because gcc identifies the 0-size array as taking up no space, and the int as taking up 4 bytes. That makes sense. If I then change it to:
struct _EventGroup {
int size;
const char* name;
};
sizeof returns 16 bytes because the char* takes up 8 bytes in a 64 bit system, and the int has to get padded to 8 bytes. This makes sense. Now, if I do one last change:
struct _EventGroup {
const char* name[0];
};
I get 0 bytes because gcc is detecting my zero-size array. What I want clarified is what's happening in the first case I presented. How much space is gcc allocating for a pointer to a zero size array and why? I'm asking because this code seems designed to be efficient with memory allocation, however it would make sense that gcc either gives the pointer a size of 0 bytes if it detects it points to essentially nothing, or 8 bytes if it is being treated as a normal pointer. Alignment would dictate that I get either 4 bytes or 16 bytes with my original struct. Why am I getting 8 bytes?
Gcc is adding the right amount of padding such that if you actually allocate extra space for the array, the pointer &(eventGroup->name) will be properly aligned.
It seems you're on a platform that has 4-byte ints and 8-byte pointers, so this means you have:
bytes 0-3 -- the int
bytes 4-7 -- padding
bytes 8-15 -- where the first (char *) would be stored, 8-byte aligned
Since it's actually an array of zero size, you don't actually have that first char *, but you do have the padding. Hence, the struct has size 8.
In your second example, there is no alignment requirement for a one-byte char, so you have:
bytes 0-3 -- the int
byte 4 -- where the first (char) would be stored, "1-byte aligned"
Again, no actual char in the struct, so it has size 4.

Memory Allocation Problem

This question was asked in the written round of a job interview:
#include<alloc.h>
#define MAXROW 3
#define MAXCOL 4
main()
{
int (*p)[MAXCOL];
p = (int (*)[MAXCOL]) malloc(MAXROW*(sizeof(*p)));
}
How many bytes are allocated in the process?
To be honest, I did not answer the question. I did not understand the assignment to p.
Can anybody explain me what would be the answer and how it can be deduced?
It's platform dependent.
int (*p)[MAXCOL]; declares a pointer to an array of ints MAXCOL elements wide (MAXCOL of course is 4 in this case). What p points to is therefore 4*sizeof(int) bytes on the target platform.
The malloc statement allocates a buffer of MAXROW times the size of the type p points to. Therefore, in total, room for MAXROW*MAXCOL integers is allocated. The actual number of bytes will depend on the target platform.
Also, there's probably additional memory used by the C runtime (such as internal bookkeeping in malloc, as well as the various process-initialization bits which happen before main is called), which is also completely platform dependent.
p is a pointer to an array of MAXCOL elements of type int, so sizeof *p (parentheses were redundant) is the size of such an array, i.e. MAXCOL*sizeof(int).
The cast on the return value of malloc is unnecessary, ugly, and considered harmful. In this case it hides a serious bug: due to missing prototype, malloc is assumed implicitly to return int, which is incompatible with its correct return type (void *), thus resulting in undefined behavior.
sizeof(*p) will be MAXCOL*sizeof(int). So in total MAXROW*MAXCOL*sizeof(int) bytes are allocated.
You might want to check out cdecl for help translating C declarations into English. In this instance, int (*p)[4]; becomes declare p as pointer to array 4 of int.
#include<alloc.h>
#define MAXROW 3
#define MAXCOL 4
main() {
int (*p)[MAXCOL];
p = (int (*)[MAXCOL]) malloc(MAXROW*(sizeof(*p)));
}
How many bytes are allocated in the process ?
p is a pointer, so will occupy sizeof(int(*)[MAXCOL]) on the stack, which might look daunting but it's almost always the same as sizeof(void*), sizeof(int*) or any other pointer. Obviously pointer sizes are what give applications their classification as 16-, 32-, 64-etc. bits, and this pointer will be correspondingly sized.
Then p is pointed at some memory obtained from malloc...
malloc( MAXROW * sizeof(*p) )
sizeof(*p) is the size of the int array that p points to, namely sizeof(int) * MAXCOL, so we get
malloc( MAXROW * (sizeof(int) * MAXCOL) )
requested from the heap. For illustrative purposes, if we assume the common 32-bit int size, we're looking at 48 bytes. The actual usage may be rounded up to whatever the heap routines feel like (heap routines often use fixed-size "buckets" to speed their operations).
To confirm this expectation, simply substitute a logging function for malloc():
#include <stdio.h>
#define MAXROW 3
#define MAXCOL 4
void* our_malloc(size_t n)
{
printf("malloc(%zu)\n", n);
return 0;
}
int main()
{
int (*p)[MAXCOL];
p = (int (*)[MAXCOL]) our_malloc(MAXROW*(sizeof(*p)));
}
Output on my Linux box:
malloc(48)
The fact that malloc's returned pointer is cast to p's type doesn't affect the amount of memory allocation done.
As R sharply observes, lack of a malloc prototype would cause the compiler to expect malloc to return int rather than the actually-returned void*. In practice, it's probable that the lowest sizeof(int) bytes from the pointer would survive the conversion, and if sizeof(void*) happened to be equal to sizeof(int), or - more tenuous yet - the heap memory happens to start at an address representable in an int despite the size of pointers being larger (i.e. all the truncated bits were 0s anyway), then later dereferencing of the pointer just might work. Cheap plug: C++ won't compile unless it's seen the prototype.
That said, perhaps your alloc.h contains a malloc prototype... I don't have an alloc.h so I guess it's non-Standard.
Any program will also allocate memory for many other things, such as a stack frame providing some context within which main() may be called. The amount of memory for that varies with the compiler, version, compiler flags, operating system etc..
int (*p)[MAXCOL] == int (*p)[4] == "pointer to array 4 of int" (see Note below)
sizeof(*p) would then be what p points to, i.e. 4 * sizeof(int). Multiply that by MAXROW and your final answer is:
12 * sizeof(int)
Note: This is in contrast to:
int *p[MAXCOL] == int *p[4] == "array 4 of pointer to int"
The parentheses make quite a bit of difference!
It should be MAXROW*MAXCOL*sizeof(int) bytes.
I really dislike questions like this, because I think it's far better as a working engineer to run the experiment than to assume that you know what you are doing - especially if there's reason for suspicion such as a program not working as expected or someone crafting trick questions.
#include <stdlib.h>
#include <stdio.h>
#define MAXROW 3
#define MAXCOL 4
int main()
{
int (*p)[MAXCOL];
int bytes = MAXROW * (sizeof(*p));
p = (int (*)[MAXCOL]) malloc(bytes);
printf("malloc called for %d bytes\n", bytes);
}
On a 32 bit linux system:
gcc test.c
./a.out
malloc called for 48 bytes
(edited to remove pasting accident of multiplying by maxrow twice, yielding mistaken size of 144 bytes)
Running the following in codepad.org:
#include <stdio.h>
#include <stdlib.h>
#define MAXROW 3
#define MAXCOL 4
int main()
{
int (*p)[MAXCOL];
p = (int (*)[MAXCOL]) malloc(MAXROW*(sizeof(*p)));
int x = MAXROW*(sizeof(*p));
printf("%d", x);
return 0;
}
prints out 48.
Why? Because MAXROW is 3, and sizeof(*p) is 16, so we get 3 * 16.
Why is sizeof(*p) 16? Because MAXCOL is 4, so p is a pointer to an array of 4 ints. Each int is 32 bits = 4 bytes. 4 ints in the array * 4 bytes = 16.
Why is sizeof(*p) not 4? Because it is the size of what p points to, not the size of p. To be the size of p it would have to be sizeof(p), which would be 4, as p is a pointer.
Pedantically you could add:
If int were 8 bytes (as on some machines), the answer would be 96.
As the question states "How many bytes are allocated in the process?", you could also add the bytes for the pointer p itself (4 on a 32-bit build).
malloc can allocate more than you ask for (but not less) so the question cannot be answered.
In a similar vein as 2, you could argue that as the process is also loading system dlls such as the C runtime dll in order to run, it is allocating the space for those too. Then you could argue for the space allocated by dlls that are being injected into the process by other (non system) processes, such as those injected by Actual Window Manager and its ilk. But how pedantic do we want to get?
But I think the question is really asking for 48, with possible extra credit for explaining 96.
How about zero bytes because that won't even compile without the non-standard alloc.h. If you can't compile it, you can't run it and if you can't run it, it can't allocate any memory at all.
