So, I am creating a structure that currently needs a lot of memory. I hope to reduce it in the future, but for now, it is what it is. Hence, I need to allocate some of its elements on the heap, because I get a stack overflow if they are put on the stack. And yes, I increased the stack size, but on the target platform I only have so much to work with.
In this case, would it be 'better' to allocate every structure element on the heap, or put some on the stack and the big stuff on the heap? For instance:
typedef struct my_structure_s {
    int bounds[2];
    int num_values;
    int *values; // needs to be very large
} my_structure_t;
Vs:
typedef struct my_structure_s {
    int *bounds;
    int *num_values;
    int *values;
} my_structure_t;
I know 'better' is largely subjective, and could quite possibly incite a riot here. So, what are the pros and cons of both examples? What do you usually do? Why?
Also, forgive the _s, _t stuff...I know some of you may find it in bad taste but that is the convention for the legacy codebase this will be integrated into.
Thanks everyone!
It is better to keep the simple members as direct values, and allocate just the array. Using the extra two pointers just slows down access for no benefit.
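For concreteness, here is a minimal sketch of option 1 (the helper names are mine, not from the question); only the large array lives on the heap, at the cost of one extra malloc/free pair:
#include <stdlib.h>

my_structure_t *my_structure_create(int n)
{
    my_structure_t *s = malloc(sizeof(*s));
    if (s == NULL)
        return NULL;
    s->values = malloc(n * sizeof(*s->values));
    if (s->values == NULL)
    {
        free(s);
        return NULL;
    }
    s->num_values = n;
    return s;
}

void my_structure_destroy(my_structure_t *s)
{
    if (s != NULL)
    {
        free(s->values); /* two frees: first the array, then the struct */
        free(s);
    }
}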
One other option to consider if you have C99 or C11 is to use a flexible array member (FAM).
You'd define your structure using the notation:
typedef struct my_structure_s
{
    int bounds[2];
    int num_values;
    int values[];
} my_structure_t;
You'd allocate enough memory for the structure and an N-element array in values all in a single operation, using:
my_structure_t *np = malloc(sizeof(*np) + N * sizeof(np->values[0]));
This also means you have only one block of memory to free.
You can find references to the 'struct hack' if you search. This notation is effectively the standardized form of the struct hack.
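To make the lifecycle concrete, a short sketch assuming the FAM definition above:
my_structure_t *np = malloc(sizeof(*np) + N * sizeof(np->values[0]));
if (np != NULL)
{
    np->num_values = N;
    np->values[0] = 42;   /* elements 0 through N-1 are usable */
    /* ... */
    free(np);             /* one call releases header and array together */
}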
In comments, the discussion continued:
This is an interesting approach; however, I can't guarantee I will have C99.
If need be, you can use the 'struct hack' version of the code, which would look like:
typedef struct my_structure_s
{
    int bounds[2];
    int num_values;
    int values[1];
} my_structure_t;
The rest of the code remains unchanged. This uses slightly more memory (4-8 bytes more) than the FAM solution, and isn't strictly supported by the standard, but it was used extensively before the C99 standard so it is unlikely that a compiler would invalidate such code.
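Allocation for this version would look something like the sketch below; using offsetof avoids counting the one declared element (and any tail padding) twice:
#include <stddef.h> /* offsetof */

my_structure_t *np = malloc(offsetof(my_structure_t, values)
                            + N * sizeof(np->values[0]));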
Okay, but how about:
typedef struct my_structure_s
{
    int bounds[2];
    int num_values;
    int values[MAX_SIZE];
} my_structure_t;
And then: my_structure_t *the_structure = malloc(sizeof(my_structure_t));
This will also give me a fixed block size on the heap right? (Except here, my block size will be bigger than it needs to be, in some instances, because I won't always get to MAX_SIZE).
If there is not too much wasted space on average, then the fixed-size array in the structure is simpler still. Further, it means that if the MAX_SIZE is not too huge, you can allocate on the stack or on the heap, whereas the FAM approach mandates dynamic (heap) allocation. The issue is whether the wasted space is enough of a problem, and what you do if MAX_SIZE isn't big enough after all. Otherwise, this is much the simplest approach; I simply assumed you'd already ruled it out.
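To illustrate the flexibility point, a minimal sketch: with the fixed-size array both of these work, whereas the FAM version permits only the second.
my_structure_t on_stack;                            /* fine if MAX_SIZE is modest */
my_structure_t *on_heap = malloc(sizeof(*on_heap)); /* one fixed-size heap block */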
Note that every one of the suggested solutions avoids the pointers to bounds and num_values suggested in option 2 in the question.
Do the first one. It is simpler and less error-prone (you have to remember to allocate and release more things in the second one).
BTW - note that the first example will not necessarily put num_values on the stack. It will go wherever you allocate the struct: stack, heap, or static storage.
Related
So I'm looking at a solution to some coding-interview-type questions, and there's an array inside a struct:
#define MAX_SIZE 1000000
typedef struct _heap {
    int data[MAX_SIZE];
    int heap_size;
} heap;

heap* init(heap* h) {
    h = (heap*)malloc(sizeof(heap));
    h->heap_size = 0;
    return h;
}
This heap struct is later created like so:
heap* max_heap = NULL;
max_heap = init(max_heap);
First of all, I wish this were written in C++ style rather than C, but secondly, if I'm just concerned about the array, I'm assuming it is equivalent to analyze just the array portion by changing the code like this:
int* data = NULL;
data = (int*)malloc(1000000 * sizeof(int));
Now in that case, are there any problems with declaring the array with the max size if you are probably just using a little bit of it?
I guess this boils down to the question of when an array is created in the heap, how does the system block out that portion of the memory? In which case does the system prevent you from accessing memory that is part of the array? I wouldn't want a giant array holding up space if I'm not using much of it.
are there any problems with declaring the array with the max size if you are probably just using a little bit of it?
Yes. The larger the allocation size the greater the risk of an out-of-memory error. If not here, elsewhere in code.
Yet some memory allocation systems handle this well, as physical memory is not committed immediately but only later, when it is actually needed.
I guess this boils down to the question of when an array is created in the heap, how does the system block out that portion of the memory?
That is an implementation defined issue not defined by C. It might happen immediately or deferred.
For maximum portability, code would take a more conservative approach and allocate large memory chunks only as needed, rather than rely on physical allocation occurring in a delayed fashion.
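A hedged sketch of that conservative approach, growing a plain array on demand with realloc (the function name is mine):
#include <stdlib.h>

/* Double the capacity when full instead of reserving MAX_SIZE up front. */
int *grow(int *data, size_t *capacity)
{
    size_t new_cap = (*capacity > 0) ? *capacity * 2 : 16;
    int *p = realloc(data, new_cap * sizeof(*p));
    if (p != NULL)
        *capacity = new_cap;
    return p; /* NULL on failure; the old block is still valid */
}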
Alternative
In C, consider a struct with a flexible member array.
typedef struct _heap {
    size_t heap_size;
    int data[];
} heap;
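A sketch of init() adapted to this definition (the capacity parameter is my addition); one malloc covers the header and the array:
heap *init(size_t capacity)
{
    heap *h = malloc(sizeof(*h) + capacity * sizeof(h->data[0]));
    if (h != NULL)
        h->heap_size = 0;
    return h;
}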
I'm writing an R extension that requires me to allocate memory for an array of structs. The structs contain integers, SEXPs, and character pointers, e.g. something like:
struct my_struct {
    int a;
    SEXP b;
    const char * d;
    const char * e[3];
};
I'm hoping to allocate memory for the array using something like:
struct my_struct * arr = (struct my_struct *) R_alloc(10, sizeof(struct my_struct));
This works in the strictest sense of the word, but I'm given pause by the following comment from WRE:
The memory returned is only guaranteed to be aligned as required for double pointers: take precautions if casting to a pointer which needs more.
I am not concerned about speed or space as I don't expect my arrays to be large or accessed frequently. I do however want to avoid crashes. My understanding is that misaligned memory access is really only a performance issue for x86 architectures. Furthermore, since it seems nowadays R is primarily for x86 architectures (based on CRAN tests), and I'm not concerned about performance, I shouldn't have any problems doing this.
Am I setting myself up for trouble?
EDIT: I'm assuming here that "double pointers" means pointers to doubles, and not pointers to pointers as seems to be the informal convention in some places. FWIW the code in R_alloc (src/main/memory.c#~2700) wants to allocate in multiples of sizeof(VECREC), where VECREC is union(SEXP, double) (src/include/Defn.h#~410), plus header offset. Presumably this is where the alignment guarantee for doubles comes from, but I'm not sure why this would be a problem for larger structs. Granted, I'm not particularly experienced at this.
I'm still not an expert on the matter, but according to at least this site:
In general, a struct instance will have the alignment of its widest scalar member. Compilers do this as the easiest way to ensure that all the members are self-aligned for fast access.
So in this case, the struct should have an alignment compatible with what R_alloc allocates, since its largest member will be the SEXP (probably; I guess it might be possible for char * to be bigger on some systems, but that seems unlikely), and R_alloc allocates in multiples of union(SEXP, double), as documented in the question.
One other thing perhaps worth pointing out is that the recommended ordering of the elements in a struct is from largest alignment requirement to lowest, and right now we start with an int, which is the smallest.
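Purely as an illustration, a reordered version following that guideline (on a typical 64-bit platform this removes the interior padding after a, though in this particular struct the total size may come out the same because of tail padding):
struct my_struct {
    SEXP b;              /* pointer-sized members first */
    const char * d;
    const char * e[3];
    int a;               /* smallest alignment requirement last */
};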
I am working on refactoring some old code and have found a few structs containing zero-length arrays (below). Warnings suppressed by pragma, of course, but I've failed to create with "new" structures containing such structures (error C2233). Array 'byData' is used as a pointer, but why not use a pointer instead? Or an array of length 1? And of course, no comments were added to make me enjoy the process...
Any causes to use such thing? Any advice in refactoring those?
struct someData
{
    int nData;
    BYTE byData[0];
};
NB It's C++, Windows XP, VS 2003
Yes, this is a C hack.
To create an array of any length:
struct someData* mallocSomeData(int size)
{
    struct someData* result = (struct someData*)malloc(sizeof(struct someData) + size * sizeof(BYTE));
    if (result)
    {
        result->nData = size;
    }
    return result;
}
Now you have an object of someData with an array of a specified length.
There are, unfortunately, several reasons why you would declare a zero length array at the end of a structure. It essentially gives you the ability to have a variable length structure returned from an API.
Raymond Chen did an excellent blog post on the subject. I suggest you take a look at this post because it likely contains the answer you want.
Note in his post, it deals with arrays of size 1 instead of 0. This is the case because zero length arrays are a more recent entry into the standards. His post should still apply to your problem.
http://blogs.msdn.com/oldnewthing/archive/2004/08/26/220873.aspx
EDIT
Note: even though Raymond's post says 0-length arrays are legal in C99, they are in fact still not legal in C99 (the standardized form is the flexible array member). Instead of a 0-length array, you should use a length-1 array here.
This is an old C hack to allow flexibly sized arrays.
In the C99 standard this is not necessary, as it supports the arr[] syntax.
Your intuition about "why not use an array of size 1" is spot on.
The code is doing the "C struct hack" wrong, because declarations of zero length arrays are a constraint violation. This means that a compiler can reject your hack right off the bat at compile time with a diagnostic message that stops the translation.
If we want to perpetrate a hack, we must sneak it past the compiler.
The right way to do the "C struct hack" (which is compatible with C dialects going back to 1989 ANSI C, and probably much earlier) is to use a perfectly valid array of size 1:
struct someData
{
    int nData;
    unsigned char byData[1];
};
Moreover, instead of using sizeof (struct someData), the size of the part before byData is calculated with:
offsetof(struct someData, byData);
To allocate a struct someData with space for 42 bytes in byData, we would then use:
struct someData *psd = (struct someData *) malloc(offsetof(struct someData, byData) + 42);
Note that this offsetof calculation is in fact the correct calculation even in the case of the array size being zero. You see, sizeof the whole structure can include padding. For instance, if we have something like this:
struct hack {
    unsigned long ul;
    char c;
    char foo[0]; /* assuming our compiler accepts this nonsense */
};
The size of struct hack is quite possibly padded for alignment because of the ul member. If unsigned long is four bytes wide, then quite possibly sizeof (struct hack) is 8, whereas offsetof(struct hack, foo) is almost certainly 5. The offsetof method is the way to get the accurate size of the preceding part of the struct just before the array.
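A small test program makes the difference visible (a sketch; the exact numbers are implementation-defined, and the array is declared with size 1 so that it compiles everywhere):
#include <stdio.h>
#include <stddef.h>

struct hack {
    unsigned long ul;
    char c;
    char foo[1];
};

int main(void)
{
    printf("sizeof:   %zu\n", sizeof(struct hack));        /* e.g. 16 on LP64 */
    printf("offsetof: %zu\n", offsetof(struct hack, foo)); /* e.g. 9 on LP64 */
    return 0;
}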
So that would be the way to refactor the code: make it conform to the classic, highly portable struct hack.
Why not use a pointer? Because a pointer occupies extra space and has to be initialized.
There are other good reasons not to use a pointer, namely that a pointer requires an address space in order to be meaningful. The struct hack is externalizable: that is to say, there are situations in which such a layout conforms to external storage such as areas of files, packets or shared memory, in which you do not want pointers because they are not meaningful.
Several years ago, I used the struct hack in a shared memory message passing interface between kernel and user space. I didn't want pointers there, because they would have been meaningful only to the original address space of the process generating a message. The kernel part of the software had a view to the memory using its own mapping at a different address, and so everything was based on offset calculations.
It's worth pointing out, IMO, the best way to do the size calculation, which is the one used in the Raymond Chen article linked above.
struct foo
{
    size_t count;
    int data[1];
};
size_t foo_size_from_count(size_t count)
{
    return offsetof(foo, data[count]);
}
The offset of the first entry just past the end of the desired allocation is also the size of the desired allocation. IMO it's an extremely elegant way of doing the size calculation. It does not matter what the element type of the variable-size array is. The offsetof (or FIELD_OFFSET or UFIELD_OFFSET in Windows) is always written the same way. No sizeof() expressions to accidentally mess up.
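Usage is then a single allocation sized by the helper; a sketch (n is the desired element count, and the malloc cast matches this thread's C++ setting):
struct foo *p = (struct foo *) malloc(foo_size_from_count(n));
if (p) p->count = n;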
I saw some people use this structure for Trie nodes:
struct trie_node_st {
    int count;
    struct trie_node_st *next[TREE_WIDTH];
};
Isn't this inefficient, since we don't always need TREE_WIDTH entries in each array?
Or am I misunderstanding something?
It's a CPU/memory trade-off. By allocating it up front, you use a certain minimum amount of memory to store those pointers (TREE_WIDTH * sizeof(struct trie_node_st *) bytes), but you use less CPU later because the layout is fixed at compile time (unless you allocate the struct with malloc()). However, this is hardly an overhead; even if you had a ton of pointers, it wouldn't matter. The likely design decision was just that the programmer did not feel like having to dynamically allocate an array of pointers to struct trie_node_st each time he used this structure; a sketch of that alternative follows.
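For what it's worth, the dynamic alternative alluded to above might look like this hypothetical sketch (field and function names are mine, not from the original code):
#include <stdlib.h>

struct trie_node_st {
    int count;
    struct trie_node_st **next; /* allocated separately, sized as needed */
};

struct trie_node_st *make_node(size_t width)
{
    struct trie_node_st *n = malloc(sizeof(*n));
    if (n != NULL)
    {
        n->count = 0;
        n->next = calloc(width, sizeof(*n->next)); /* zeroed child pointers */
        if (n->next == NULL)
        {
            free(n);
            return NULL;
        }
    }
    return n;
}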
I have a piece of code written by a very old school programmer :-). It goes something like this:
typedef struct ts_request
{
    ts_request_buffer_header_def header;
    char package[1];
} ts_request_def;
ts_request_def* request_buffer =
malloc(sizeof(ts_request_def) + (2 * 1024 * 1024));
The programmer is basically relying on deliberate overflow of the declared array into the extra allocated space. I know the code looks dodgy, so my questions are:
Does malloc always allocate contiguous block of memory? because in this code if the blocks are not contiguous, the code will fail big time
Doing free(request_buffer), will it free all the bytes allocated by malloc, i.e. sizeof(ts_request_def) + (2 * 1024 * 1024), or only the bytes of the size of the structure, sizeof(ts_request_def)?
Do you see any evident problems with this approach, I need to discuss this with my boss and would like to point out any loopholes with this approach
To answer your numbered points.
Yes.
All the bytes. malloc/free doesn't know or care about the type of the object, just the size.
It is strictly speaking undefined behaviour, but a common trick supported by many implementations. See below for other alternatives.
The latest C standard, ISO/IEC 9899:1999 (informally C99), allows flexible array members.
An example of this would be:
#include <stdlib.h>

int main(void)
{
    struct { size_t x; char a[]; } *p;
    p = malloc(sizeof *p + 100);
    if (p)
    {
        /* You can now access up to p->a[99] safely */
    }
    free(p); /* free(NULL) is harmless, so no check is needed */
    return 0;
}
This now standardized feature allowed you to avoid using the common, but non-standard, implementation extension that you describe in your question. Strictly speaking, using a non-flexible array member and accessing beyond its bounds is undefined behaviour, but many implementations document and encourage it.
Furthermore, gcc allows zero-length arrays as an extension. Zero-length arrays are illegal in standard C, but gcc introduced this feature before C99 gave us flexible array members.
In a response to a comment, I will explain why the snippet below is technically undefined behaviour. Section numbers I quote refer to C99 (ISO/IEC 9899:1999)
struct {
    char arr[1];
} *x;
x = malloc(sizeof *x + 1024);
x->arr[23] = 42;
Firstly, 6.5.2.1#2 shows a[i] is identical to (*((a)+(i))), so x->arr[23] is equivalent to (*((x->arr)+(23))). Now, 6.5.6#8 (on the addition of a pointer and an integer) says:
"If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined."
For this reason, because x->arr[23] is not within the array, the behaviour is undefined. You might still think that it's okay because the malloc() implies the array has now been extended, but this is not strictly the case. Informative Annex J.2 (which lists examples of undefined behaviour) provides further clarification with an example:
An array subscript is out of range, even if an object is apparently accessible with the given subscript (as in the lvalue expression a[1][7] given the declaration int a[4][5]) (6.5.6).
3 - That's a pretty common C trick to allocate a dynamic array at the end of a struct. The alternative would be to put a pointer into the struct and then allocate the array separately, not forgetting to free it too. That the size is fixed to 2 MB seems a bit unusual, though.
This is a standard C trick, and isn't more dangerous than any other buffer.
If you are trying to show your boss that you are smarter than the "very old school programmer", this code isn't the case to make it with. Old school is not necessarily bad; it seems the "old school" guy knows enough about memory management ;)
1) Yes it does, or malloc will fail if there isn't a large enough contiguous block available. (A failed malloc returns a NULL pointer.)
2) Yes it will. The internal memory allocation will keep track of the amount of memory allocated with that pointer value and free all of it.
3) It's a bit of a language hack, and its use is a bit dubious. It's still subject to buffer overflows as well; it may just take attackers slightly longer to find a payload that will cause one. The cost of the 'protection' is also pretty hefty (do you really need >2 MB per request buffer?). It's also very ugly, although your boss may not appreciate that argument :)
I don't think the existing answers quite get to the essence of this issue. You say the old-school programmer is doing something like this;
typedef struct ts_request
{
    ts_request_buffer_header_def header;
    char package[1];
} ts_request_def;

ts_request_def* request_buffer =
    malloc(sizeof(ts_request_def) + (2 * 1024 * 1024));
I think it's unlikely he's doing exactly that, because if that's what he wanted to do he could do it with simplified equivalent code that doesn't need any tricks;
typedef struct ts_request
{
    ts_request_buffer_header_def header;
    char package[2*1024*1024 + 1];
} ts_request_def;

ts_request_def* request_buffer =
    malloc(sizeof(ts_request_def));
I'll bet that what he's really doing is something like this;
typedef struct ts_request
{
    ts_request_buffer_header_def header;
    char package[1]; // effectively package[x]
} ts_request_def;

ts_request_def* request_buffer =
    malloc(sizeof(ts_request_def) + x);
What he wants to achieve is allocation of a request with a variable package size x. It is of course illegal to declare the array's size with a variable, so he is getting around this with a trick. To me it looks as if he knows what he's doing; the trick is well towards the respectable and practical end of the C trickery scale.
As for #3, without more code it's hard to answer. I don't see anything wrong with it, unless it's happening a lot. I mean, you don't want to allocate 2 MB chunks of memory all the time. You also don't want to do it needlessly, e.g. if you only ever use 2 KB.
The fact that you don't like it for some reason isn't sufficient to object to it, or to justify completely rewriting it. I would look at the usage closely, try to understand what the original programmer was thinking, and look closely for buffer overflows (as workmad3 pointed out) in the code that uses this memory.
There are lots of common mistakes that you may find. For example, does the code check to make sure malloc() succeeded?
The exploit (question 3) is really up to the interface towards this structure of yours. In context this allocation might make sense, and without further information it is impossible to say if it's secure or not.
But if you mean problems with allocating memory bigger than the structure, this is by no means a bad C design (I wouldn't even say it's THAT old school... ;) )
Just a final note here: the point of having a char[1] is that the terminating NUL will always be in the declared struct, meaning there can be 2 * 1024 * 1024 characters in the buffer and you don't have to account for the NUL with a "+1". It might look like a small feat, but I just wanted to point it out.
I've seen and used this pattern frequently.
Its benefit is to simplify memory management and thus avoid the risk of memory leaks: all it takes is freeing the malloc'ed block, whereas with a secondary buffer you need two frees. However, one should define and use a destructor function to encapsulate this operation, so that its behavior can always be changed later, such as switching to a secondary buffer or adding operations to be performed when deleting the structure; a sketch follows.
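A minimal sketch of such a destructor (the name is hypothetical):
void ts_request_destroy(ts_request_def *req)
{
    /* With the struct-hack layout one free suffices; if the design later
       switches to a secondary buffer, only this body needs to change. */
    free(req);
}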
Access to array elements is also slightly more efficient but that is less and less significant with modern computers.
The code will also work correctly if member alignment in the structure changes with different compilers, which is quite frequent.
The only potential problem I see is if the compiler permuted the order of storage of the member variables, because this trick requires that the package field remain last in storage. In fact, the C standard guarantees that members are laid out in the order they are declared, so this cannot happen.
Note also that the size of the allocated buffer will most probably be bigger than required: at least by one byte, plus any additional padding bytes.
Yes. malloc returns only a single pointer - how could it possibly tell a requester that it had allocated multiple discontiguous blocks to satisfy a request?
I would like to add that not only is it common, I might even call it standard practice, because the Windows API is full of such usage.
Check the very common BITMAP header structure for example.
http://msdn.microsoft.com/en-us/library/aa921550.aspx
The last RGB quad is an array of size 1, which depends on exactly this technique.
This common C trick is also explained in this StackOverflow question (Can someone explain this definition of the dirent struct in solaris?).
In response to your third question.
free always releases all the memory that was allocated in a single shot.
int* i = (int*) malloc(1024*2);
free(i + 1024); // undefined behavior: not a pointer returned by malloc
free(i);        // releases the entire 2 KB block
The answer to questions 1 and 2 is yes.
About the ugliness (i.e. question 3): what is the programmer trying to do with that allocated memory?
The thing to realize here is that malloc does not see the calculation being made in
malloc(sizeof(ts_request_def) + (2 * 1024 * 1024));
It's the same as:
int sz = sizeof(ts_request_def) + (2 * 1024 * 1024);
malloc(sz);
You might think that it's allocating two chunks of memory, and in your mind they are "the struct" and "some buffer". But malloc doesn't see that at all.