I am working with a Hashtable struct that maps keys to values, where the values are (void *) so that Hashtables can hold any kind of value.
To be able to free those values, the destructor of a Hashtable takes a pointer to a freeing function as an argument. In my case, I know I will only be freeing basic types, like char* and int*. Is it possible to pass in a pointer to the free() function, since it can deal with basic types?
Something like this:
FreeHashTable(hashtable_name, free);
You can (and should) pass to free every pointer that was returned by malloc, no matter which type (or struct) it points to. Be careful not to pass free() a pointer that you didn't get from malloc (the middle of an array, a local variable, etc.).
BTW, unless some of your data types need some work before freeing, you can do it without passing a pointer to a function at all - just call free.
You don't need to pass the pointer to the function.
Just loop through the values and call free.
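For illustration, here is a minimal sketch of how such a destructor could accept free() directly. The HashTable layout and field names below are assumptions for the example, not the actual struct from the question:

#include <stdlib.h>

typedef struct {
    void **values;    /* hypothetical layout: one (void *) value per entry */
    size_t count;
} HashTable;

/* Destructor that takes a pointer to the freeing function. */
void FreeHashTable(HashTable *ht, void (*free_value)(void *))
{
    for (size_t i = 0; i < ht->count; i++)
        free_value(ht->values[i]);
    free(ht->values);
    free(ht);
}

Since free() has exactly the signature void free(void *), the call from the question works as-is:

FreeHashTable(hashtable_name, free);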
I'm new to C (and structures in C). And I've seen varying code examples across the internet, but what is the benefit of doing this:
void foo(LargeStruct *s);
instead of this
void foo(LargeStruct s);
Does it make memory management easier?
The former passes a pointer to the structure to the function. The latter makes a copy of the structure and passes it to the function. If the structure is large, making a copy of it for the function is expensive (uses lots of resources), so it should be avoided unless it's necessary.
C passes structs by value. This means that, with the second signature, the entire LargeStruct would be copied in order to pass it to foo, which is not economical in terms of memory use.
What's worse, the copy of LargeStruct would be allocated in automatic memory (also known as "on the stack"). Depending on the actual size of your struct, the call may not even be possible on some systems, because it would cause a stack overflow.
The first approach, on the other hand, passes the struct by pointer. A pointer's size does not depend on the size of LargeStruct.
Since C passes arguments by value, there are two major points:
In the function body you get a copy of the argument, so in the case of void foo(LargeStruct s); you receive a copy of the struct. If you modify the members of that copy, the changes are not seen outside, because the copy is temporary and is destroyed when the function returns. So if you want to modify a struct, you have to pass in a pointer to it.
Since arguments are copied, there is some memory overhead if the struct is really large. If you don't want to modify the struct and just want to minimize that overhead, you can pass a pointer to const:
void foo(const LargeStruct *p);
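To make the two points concrete, here is a small sketch with a hypothetical LargeStruct (the field and function names are invented for illustration):

#include <stdio.h>

typedef struct {
    int data[1000];    /* large enough that copying it matters */
} LargeStruct;

/* Receives a copy: the caller's struct is untouched. */
void set_by_value(LargeStruct s)         { s.data[0] = 42; }

/* Receives a pointer: the change is visible to the caller. */
void set_by_pointer(LargeStruct *s)      { s->data[0] = 42; }

/* Read-only access that still avoids the copy. */
void print_first(const LargeStruct *s)   { printf("%d\n", s->data[0]); }

int main(void)
{
    LargeStruct ls = {0};
    set_by_value(ls);       /* ls.data[0] is still 0 */
    set_by_pointer(&ls);    /* ls.data[0] is now 42 */
    print_first(&ls);       /* prints 42 */
    return 0;
}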
I am learning C and got confused by something I read online.
At http://www.cs.bu.edu/teaching/c/stack/array/ I read:
Let's look at the functions that determine emptiness and fullness.
Now, it's not necessary to pass a stack by reference to these functions, since they do not change the stack. So, we could prototype them as:
int StackIsEmpty(stackT stack);
int StackIsFull(stackT stack);
However, then some of the stack functions would take pointers (e.g., we need them for StackInit(), etc.) and some would not. It is more consistent to just pass stacks by reference (with a pointer) all the time.
(I am not showing the code for what a stackT is; it is just a dynamic array.)
From my (maybe limited) understanding, the disadvantage of passing by value is that the data is duplicated in the function's stack memory. Since a stackT might be big, passing it by value rather than by pointer would be time-consuming.
Do I have that right, or am I still unclear on the basics?
Correct, if you pass something "large" by value that item is copied onto the stack.
Passing a pointer to the data avoids the copy.
It is doubtful that the performance difference will be meaningful in most real-world applications, unless "large" is actually "huge" (which in turn may overflow the stack).
You are correct. Passing by value causes the program to copy the entire contents of that parameter. If it's only one or two ints, no problem, but copying multiple kilobytes is very expensive. Passing by reference only copies the pointer.
However, you must watch out for changing the data pointed to by the pointer and then expecting it to be unchanged later. C++ has pass by "const reference", which guarantees that the function will not change the data; C does not have references, though passing a pointer to const gives a similar guarantee.
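As a sketch of that last point, a read-only query in C can take a pointer to const instead. The stackT definition here is a stand-in for illustration, not the code from the linked page:

#include <stddef.h>

typedef struct {
    int    *contents;
    size_t  top;        /* number of elements currently on the stack */
    size_t  maxSize;
} stackT;

/* const documents (and the compiler enforces) that the stack is
   only read, while still avoiding a copy of the whole struct. */
int StackIsEmpty(const stackT *stack)
{
    return stack->top == 0;
}

int StackIsFull(const stackT *stack)
{
    return stack->top == stack->maxSize;
}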
When using some library functions (e.g. strftime(), strcpy(), MultiByteToWideChar()) that deal with character arrays (instead of std::string's) one has 2 options:
use a fixed-size array (e.g. char buffer[256];), which is obviously bad because of the limit it puts on string length
use new to allocate the required size, which is also bad when one wants to write a utility function like this:
char * fun(void)
{
    char * array = new char[exact_required_size];
    some_function(array);
    return array;
}
because the user of such function has to delete the array.
And the 2nd option isn't even always possible, since one can't always know the exact array size/length before calling the problematic function (when one can't predict how long a string the function will return).
The perfect way would be to use std::string since it has variable length and its destructor takes care of deallocating memory but many library functions just don't support std::string (whether they should is another question).
Ok, so what's the problem? Well - how should I use these functions? Use a fixed size array or use new and make the user of my function worry about deallocating memory? Or maybe there actually is a smooth solution I didn't think of?
You can use std::string's data() method to get a pointer to a character array with the same sequence of characters currently contained in the string object. The character pointer returned points to a constant, non-modifiable character array located somewhere in internal memory. You don't need to worry about deallocating the memory referenced by this pointer as the string object's destructor will do so automatically.
But as to your original question: depends on how you want the function to work. If you're modifying a character array that you create within the function, it sounds like you'll need to allocate memory on the heap and return a pointer to it. The user would have to deallocate the memory themselves - there are plenty of standard library functions that work this way.
Alternatively, you could force the user to pass in character pointer as a parameter, which would ensure they've already created the array and know that they will need to deallocate the memory themselves. That method is used even more often and is probably preferable.
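For the caller-supplied-buffer pattern mentioned above, here is a minimal C sketch (the function name and format string are invented for illustration):

#include <stdio.h>
#include <time.h>

/* The caller owns the buffer and passes its size; nothing here
   needs to be freed afterwards. Returns 0 if the result did not fit. */
int format_now(char *buf, size_t bufsize)
{
    time_t t = time(NULL);
    return strftime(buf, bufsize, "%Y-%m-%d %H:%M:%S", localtime(&t)) != 0;
}

int main(void)
{
    char buf[64];                    /* caller-provided storage */
    if (format_now(buf, sizeof buf))
        printf("%s\n", buf);
    return 0;
}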
If I store a pointer to a function, and then at some later point during my program's execution compare it to the address of the same function, are the two addresses guaranteed to be equal?
E.g.
int foo(void){return 0;}
int (*foo_p)(void) = &foo;
assert(foo_p == &foo);
In the above code is the assertion always guaranteed to succeed? Are there any circumstances under which the address of a function can change?
Per 6.5.9 of the C standard:
Two pointers compare equal if and only if both are null pointers, both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function, both are pointers to one past the last element of the same array object, or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space.
(The part relevant here is "or function".)
A function's address will never change. Many programs are built around the concept of callbacks, which would not work if a function's address could change.
If, hypothetically, a function's location changed, for example in a self-modifying program, then all calls to that function would cause a segfault or otherwise undefined behaviour anyway. Edit: Clarification - function symbols work like pointers: if you free the memory a pointer points to, the pointer variable itself is not zeroed and still points to the old location, just as your function calls would still point to the old location of the moved function.
Self-modifying programs are a very big exception, though, and these days the code section of a binary is write-protected, which makes this very, very hard.
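To illustrate the callback point, a small sketch (all names here are invented for illustration):

#include <assert.h>
#include <stdio.h>

static int on_event(void) { return 7; }

/* Callback registered once, early in the program. */
static int (*registered)(void) = NULL;

static void register_callback(int (*cb)(void)) { registered = cb; }

int main(void)
{
    register_callback(on_event);
    /* ... arbitrarily later in the program ... */
    assert(registered == &on_event);  /* guaranteed to hold */
    printf("callback returned %d\n", registered());
    return 0;
}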
How are arrays scoped in C? When do they ever get destroyed? (Also: can they be passed to other threads, e.g. with pthreads?)
They're scoped just like normal variables:
// in a function
int arrayA[300];
{
    float arrayB[100];
    // arrayA and arrayB available here
}
// arrayB out of scope, inaccessible.
// arrayA still available.
If you were to pass the array to a function, it would be valid as long as the array was still in scope at the calling site.
Arrays are scoped like any other primitive, struct or union. C has no destructors, so nothing ever gets "destroyed" as such, but an array can go out of scope, after which its storage must no longer be used.
Also, just like other types, arrays can be allocated on the heap by calling malloc() to allocate enough space to hold the desired number of elements, and treating the returned void * as a pointer to the first element. Such an array will be valid until you call free().
WRT Pthreads, again the rules are just as for any other type. If you define an array as an automatic (function-scope) variable, it will go out of scope as soon as the function returns; you cannot safely pass a pointer to such an array to another thread. But if you allocate an array on the heap, then you can pass a pointer to this array (or to anything inside it) anywhere you please, including to another thread. Of course, you still need to ensure thread-safe access to the contents using appropriate synchronisation mechanisms.
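A minimal sketch of the heap-allocated case described above, passing the array to another thread (names invented for illustration; link with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Thread body: sums the heap array it was handed. */
static void *sum_worker(void *arg)
{
    int *data = arg;
    long sum = 0;
    for (int i = 0; i < 100; i++)
        sum += data[i];
    printf("sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    int *data = malloc(100 * sizeof *data);   /* valid until free() */
    if (data == NULL)
        return 1;
    for (int i = 0; i < 100; i++)
        data[i] = i;

    pthread_t tid;
    pthread_create(&tid, NULL, sum_worker, data);
    pthread_join(tid, NULL);    /* make sure the thread is done before freeing */
    free(data);
    return 0;
}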
There's absolutely no difference in scoping and lifetime rules between arrays and any other named entities in the C language. That is, arrays are not special in any way when it comes to scope and lifetime; they behave in that regard just like an ordinary int variable would.