I was just wondering if it was sensible to create a task-local unordered_map that I later move to global space like this:
void StatesRegister(std::vector<global_states_t> states)
{
    // create temporary notify map
    local_map tmp_map;
    // fill map...
    // move to global task map
    TaskHandle_t handle = (TaskHandle_t)0x50;
    // MUTEX
    // emplace to global map
    task_map.emplace(handle, std::move(tmp_map));
    // /MUTEX
}
The question is whether or not I can use std::move here... As far as I know, an unordered_map is an RAII object: while the object itself is "hosted" on the task's stack, the buckets are kept on the heap. So my hope is that the move constructor of std::unordered_map will understand what I'm trying to do and simply hand the buckets over to the newly created instance, but will it?
Your expectation and code are fine: you can move from your local variable (tmp_map) into a map stored elsewhere in your program. More generally, there's more to an unordered_map than just "buckets": the buckets typically hold pointers into a forward list of nodes, one node per stored element. Regardless, ownership of all the existing elements and of the buckets/nodes backing them is transferred to the map that emplace move-constructs inside task_map; nothing is copied. You should of course ensure that any other thread accessing the global map locks the mutex in the same way.
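To make that concrete, here is a minimal self-contained sketch of the pattern; the key and mapped types, the mutex name, and the fake handle value are stand-ins for the FreeRTOS types in the question:

#include <mutex>
#include <string>
#include <unordered_map>

using local_map = std::unordered_map<std::string, int>;  // stand-in value type

std::unordered_map<void *, local_map> task_map;  // global task map
std::mutex task_map_mutex;                       // guards task_map

void StatesRegister()
{
    local_map tmp_map;             // the object lives on this task's stack...
    tmp_map.emplace("state", 42);  // ...but its buckets/nodes live on the heap

    void *handle = reinterpret_cast<void *>(0x50);  // stand-in task handle

    std::lock_guard<std::mutex> lock(task_map_mutex);
    // Move-constructs the mapped unordered_map inside task_map: ownership of
    // buckets and nodes transfers; no element is copied. tmp_map is left in a
    // valid but unspecified (in practice empty) state.
    task_map.emplace(handle, std::move(tmp_map));
}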
Related
Imagine a list "a", and a copy constructor for lists that performs deep copying. If "b" is a list deep-copied from "a", then both can be destroyed using simple destructors. Such a destructor should perform deep destruction.
typedef struct list { void * first; struct list * next; } list;
struct list * list_copy_constructor(const struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
void list_destructor(struct list * input);
Now imagine that you rename the copy constructor for lists as a deep copy constructor for lists, and add another shallow copy constructor for lists.
/** Performs shallow copy. */
struct list * list_shallow_copy_constructor(const struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
/** Performs deep copy. */
struct list * list_deep_copy_constructor(const struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
/** Be warned performs deep destruction. */
void list_destructor(struct list * input);
The destructor, which performs a deep destruction, can be used paired with the deep copy constructor calls.
Once you use the shallow copy constructor for lists, you need to know which of the two lists owns the elements. The list owning the elements can be destroyed with the destructor, but the list that doesn't own the elements would have to be destroyed first, using a shallow destructor I would need to create, before destroying the owning list.
/** Performs shallow copy. */
struct list * list_shallow_copy_constructor(const struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
/** Performs deep copy. */
struct list * list_deep_copy_constructor(const struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
/** Performs shallow destruction. */
void list_shallow_destructor(struct list * input);
/** Performs deep destruction. */
void list_deep_destructor(struct list * input);
But the problem is, I don't recognize "shallow destructor" as a term from the literature, so I thought I might be doing something wrong. Am I doing something wrong? E.g., should I already be using smart pointers instead of deep and shallow destructors?
The concept of deep or shallow exists only in the mind of the programmer, and in C++ it is quite arbitrary. By default, raw pointer members are not deep-destroyed when an object is destroyed, which you might call shallow, but you can write extra code in your destructor to destroy them deeply. On the other hand, any members that have destructors get those destructors called, which you might call deep, and there is no way to avoid that. Exactly the same mechanism applies to the default copy and assignment, so it is equally impossible to say at a glance whether an object is wholly deeply or shallowly copied or destroyed.
So the distinction is not really a property of object destructors, but of their members.
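A small C++ illustration of that point (the type names are made up): the implicitly generated destructor is "deep" for members that have destructors and "shallow" for raw pointers:

#include <string>

struct Child { int value; };

struct Holder {
    std::string name;  // has a destructor: destroyed "deeply", automatically
    Child *child;      // raw pointer: the implicit ~Holder() does NOT delete it
    // Unless you write ~Holder() { delete child; }, destroying a Holder
    // leaks *child: "shallow" destruction by default.
};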
Of course, the question is now about C again, but it still mentions smart pointers. In C you have to decide which philosophy you want to implement, as the language has no built-in concept of destruction. One option is to follow a C++-like philosophy of having a destruction function for each member type and having container destructors call them recursively, which gives you deep destruction.
Having said that there are a number of strategies you might consider that would potentially produce a leaner model:
If /all/ the members of a particular container are owned, or /all/ are not owned, then a simple flag in the container saying whether to destroy /all/ children is an adequate model (see the sketch after this list).
If /all/ the objects are shared with another container, or this might be the last/only container for a particular set of content, you could keep a circular list of the sharing containers. When the destructor realises it is the last container, it destroys /all/ the content. Alternatively, you could implement this model with a shared_ptr to the one container instance: when the last pointer is released, the container is destroyed.
If individual items in the container may be shared in arbitrary ways, then make it a container of shared_ptr to each item of content. This is the most robust model, but it may have costs in terms of memory usage. Ultimately there needs to be a reference count somewhere (circular lists of referrers also work, but they are much harder to protect with a mutex across threads). C++'s shared_ptr implements this with a separate control block, but in your own C objects it is probably a counter member in the child object.
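As a sketch of the first strategy, a hypothetical container with a single ownership flag (all names made up):

#include <stdlib.h>

typedef struct Container {
    void **items;     /* array of pointers to child objects */
    size_t count;
    int owns_items;   /* nonzero: destroy all children; zero: destroy none */
} Container;

void container_destroy(Container *c)
{
    if (c->owns_items) {
        for (size_t i = 0; i < c->count; ++i)
            free(c->items[i]);   /* deep: children die with the container */
    }
    free(c->items);  /* the pointer array itself always belongs to the container */
    free(c);
}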
If you want lists with shared elements to be correctly destructed, you cannot simply use "shallow destructors". Destructing all of them with shallow destructors leaves the elements resident in memory, i.e. leaked. It also doesn't look good to mark one of the lists as having a deep destructor while the others have shallow ones: if you destroy the list with the deep destructor first, the others are left with dangling pointers that you can accidentally access. So it looks like "shallow destructor" is not a well-recognized term simply because it is not of great use. You just have destructors: functions that destroy the stuff your object conceptually owns.
Now, for the particular task of sharing elements between lists and destroying everything in time, shared pointers seem to be a reasonable solution. A shared pointer is conceptually a pointer to a struct consisting of two elements: the actual object (the list element) and a reference counter. Copying the shared pointer increments the counter; destroying the shared pointer decrements it, and destroys the struct with the object and counter if the counter falls to 0. In this scenario, each list owns its copies of the shared pointers, but the list elements themselves are owned by the shared pointers rather than by a list. Thanks to these destruction semantics there is no trouble with destroying all the shared pointer copies a list owns (memory is not deallocated while references remain), so there is no need to distinguish "shallow" and "deep" destructors: the shared pointers take care of deleting things at the right time automatically.
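In C++ this mechanism comes ready-made as std::shared_ptr; a small self-contained demonstration of two lists sharing an element:

#include <iostream>
#include <list>
#include <memory>

int main()
{
    std::list<std::shared_ptr<int>> a;
    a.push_back(std::make_shared<int>(10));      // element's refcount: 1

    std::list<std::shared_ptr<int>> b(a);        // copy the list "shallowly":
                                                 // both lists share the element
    std::cout << a.front().use_count() << '\n';  // prints 2

    a.clear();  // refcount drops to 1; the int stays alive for b
    b.clear();  // refcount drops to 0; the int is deallocated here
    return 0;
}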
As you already suspected, your design is weird. Think about it: if you are going to "shallow copy" a list, why not just take a pointer to it? "Shallow copy" has no use outside of a classroom, where it is only useful to explain what a "deep copy" is.
You either want multiple users to have independent lists that can be used and destroyed independently of each other, or you want one user to "own" the list while the others just point to it. Your idea of a "shallow copy" has no advantage over a simple pointer, but is much more complex to handle.
What is actually useful is having multiple "lists" but with shared data, where each user has its own "shared copy" of the list that can be used and destroyed independently, but points to the same data, and will only really be deallocated when the last user has destroyed it. This is a very common pattern usually handled by an algorithm called reference counting, and is implemented by many libraries and languages, like Python, glib, and even in C++ as the smart pointer std::shared_ptr.
If you are using C, you may want to add reference-counting support to your struct list, and it is not very difficult: just add a field unsigned reference_count; and set it to 1 when the list is allocated. Decrement it on destruction, but only really deallocate when reference_count reaches 0, in which case there are no more users and you must do a "deep deallocation". You would still have only one destructor function, but two copy constructors:
/** Performs shared copy.
*
* Actually, just increments reference_count and returns the same pointer.
*/
struct list * list_shared_copy_constructor(struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
/** Performs deep copy.
*
* reference_count for the new copy is set to 1.
*/
struct list * list_deep_copy_constructor(const struct list * input)
REQUIRE_RETURNED_VALUE_CAPTURE;
/** Performs destruction: decrements reference_count and deallocates deeply once it reaches 0. */
void list_destructor(struct list * input);
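A minimal sketch of how the shared copy constructor and that single destructor could be implemented, assuming struct list from above gains the reference_count field (the count in the head node tracks ownership of the whole chain; the deep copy constructor and error handling are omitted):

#include <stdlib.h>

typedef struct list {
    void * first;
    struct list * next;
    unsigned reference_count;   /* meaningful in the head node only */
} list;

struct list * list_shared_copy_constructor(struct list * input)
{
    input->reference_count++;   /* same list, one more owner */
    return input;
}

void list_destructor(struct list * input)
{
    if (--input->reference_count > 0)
        return;                 /* other owners remain; free nothing yet */
    /* last owner: deep-destroy the elements, then the nodes themselves */
    while (input != NULL) {
        struct list * next = input->next;
        free(input->first);     /* assumes elements are heap-allocated */
        free(input);
        input = next;
    }
}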
If you are actually using C++, as you hinted in your question, then simply use std::shared_ptr.
But the problem is, I don't recognize "shallow destructor" as a term from the literature, so I thought I might be doing something wrong. Am I doing something wrong?
The point of the destructor (or, in C, of a function like myobject_destroy(myobject)) is to clean up the resources the instance holds (memory, OS handles, ...). Whether you need a "shallow destructor" or a "deep destructor" depends on how you decided to implement your object, as long as the destructor does its job of cleaning up.
If you are using modern C++, stack allocation and smart pointers are your friends, because they manage memory by themselves.
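For instance, a trivial sketch (type names made up) in which ownership is expressed through std::unique_ptr and no hand-written destructor is needed:

#include <memory>
#include <vector>

struct Resource { /* ... */ };

struct Owner {
    std::vector<std::unique_ptr<Resource>> items;  // Owner owns its Resources
};  // no user-declared destructor: the members clean up after themselves

int main()
{
    Owner o;
    o.items.push_back(std::make_unique<Resource>());
    return 0;  // o goes out of scope here and every Resource is freed
}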
I have been working on some features of a custom programming language written in C. Currently I'm working on a system that does reference counting for objects in the language, which in C are represented as structs with, among other things, a reference count.
There is also a feature that can free all currently allocated objects (say, before the program exits, to clean up all memory). Herein lies the problem.
I have been thinking about how best to do this, but I'm running into some problems. Let me sketch out the situation a bit:
Two new integers are allocated; both have a reference count of 1.
One new list is allocated, also with a reference count of 1.
Now both integers go into the list, which raises their reference count to 2.
After these actions both integers go out of scope for some reason, so their reference count drops to 1, as they are still in the list.
Now I'm done with these objects, so I run the function to delete all tracked objects. However, as you might have noticed, both the list and the objects in the list have the same reference count (1). This means there is no way to decide which object to free first.
If I were to free the integers before the list, the list would try to decrement the reference count of the already-freed integers, which would segfault.
If the list were freed before the integers, it would decrement the reference count of the integers to 0, which automatically frees them too, and no further steps are needed to free the integers; they are no longer tracked.
Currently I have a system that works most of the time, but not for the example above: I free the objects based on their reference count, highest count last. This obviously only works as long as the integers have a higher reference count than the list, which, as the example shows, is not always the case. (It only works if the integers haven't dropped out of scope, so that they still have a higher reference count than the list.)
Note: I have already found one way, which I really don't like: adding a flag to every object indicating that it is in a container and so can't be freed directly. I don't like this because it adds memory overhead to every allocated object, and with a circular dependency no object would be freed at all. A cycle detector could fix that, but I'd prefer to solve this with reference counting alone.
Let me give a concrete example of the steps described above:
//this initializes and sets a garbage collector object.
//Basically it's a datastructure which records every allocated object,
//and is able to free them all or in the future
//run some cycle detection on all objects.
//It has to be set before allocating objects
garbagecollector *gc = init_garbagecollector();
set_garbagecollector(gc);
//initialize a tracked object from the C integer value 10
myobject * a = myinteger_from_cint(10);
myobject * b = myinteger_from_cint(10);
myobject * somelist = mylist_init();
mylist_append(somelist,a);
mylist_append(somelist,b);
// Simulate the going out of scope of the integers.
// There are no functions yet so I can't actually do it, but this
// is a situation which can happen and has happened a couple of times
DECREF(a);
DECREF(b);
//now the program is done. all objects have a refcount of 1
//delete the garbagecollector and with that all tracked objects
//there is no way to prevent the integers being freed before the list
delete_garbagecollector(gc);
What of course should happen is that, 100% of the time, the list is freed before the integers.
What would be a smarter way of freeing all existing objects, in a way such that objects stored in containers aren't freed before the containers they're in?
It depends on your intention with:
There also is a feature which can free all currently allocated objects (say before the exit of the program to clean up all memory).
If the goal is to forcibly deallocate every single object regardless of its ref count, then I would have a separate chunk of code that walks the object graph and frees each object without touching its ref count. The ref count itself is going to end up freed too, so there's little point in updating it.
If the goal is to just tell the system "We don't need the objects anymore" then another option is to simply walk the roots and decrement their ref counts. If there are no other references to them, they'll hit zero. They will then decrement the ref counts of everything they refer to before being deallocated. That in turn percolates through the object graph. If the roots are the only thing holding onto references at the point that you call this, it will effectively free everything.
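For the first approach, here is a rough self-contained sketch, assuming the collector keeps its tracked objects on an intrusive linked list and every object type supplies a finalizer that releases only the object's own internal storage, never touching children or refcounts (all names here are hypothetical, not the asker's actual API):

#include <stdlib.h>

typedef struct tracked {
    struct tracked *next_tracked;        /* the collector's tracking list */
    int refcount;                        /* deliberately ignored here     */
    void (*finalize)(struct tracked *);  /* frees internal buffers only;
                                            must NOT touch child objects  */
} tracked;

/* Forcibly deallocate every tracked object, regardless of refcounts.
 * Because finalizers never recurse into children, each object is freed
 * exactly once, in whatever order it appears on the tracking list, so
 * the list-before-integers problem disappears. */
void gc_free_all(tracked *head)
{
    while (head != NULL) {
        tracked *next = head->next_tracked;
        if (head->finalize)
            head->finalize(head);        /* e.g. a list frees its node array */
        free(head);
        head = next;
    }
}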
You should not free anything until the reference count for somelist is zero.
I have written a Simulink S-function (Level 2) in C. The resulting block has one output and one parameter. This parameter is stored in a variable, which is defined at file scope, right after setting up the block:
#define NUM_PARAMS 1
#define NUM_INPORTS 0
#define NUM_OUTPORTS 1
unsigned short int MASK_INDEX;
I assign it within mdlInitializeSizes, and do some operations on its value:
static void mdlInitializeSizes(SimStruct *S) {
    // Check Parameters
    ssSetNumSFcnParams(S, NUM_PARAMS);
    if (ssGetNumSFcnParams(S) != ssGetSFcnParamsCount(S)) {
        return;
    }

    MASK_INDEX = *mxGetPr(ssGetSFcnParam(S, 0));
    (...) operations
}
My problem is that the variable MASK_INDEX is effectively global and shared among all blocks of the same type; therefore it holds the same value for all blocks.
As a workaround, I reload it every time, and re-do the operations, for example:
static void mdlOutputs(SimStruct *S, int_T tid) {
    MASK_INDEX = *mxGetPr(ssGetSFcnParam(S, 0));
    (...) operations
}
How can I get a true "local variable", so that I don't have to repeat all this every time?
Since you've declared MASK_INDEX at file scope, the variable is indeed shared across all instances of the block. This is not specific to S-functions in any way; it's how shared libraries behave on most, if not all, platforms. A single instance of the shared library is loaded by an application, in this case MATLAB; consequently there is only one copy of its global variables.
The easiest option is to call ssGetSFcnParam every time you want to access the parameter. If you dig into those S-function macros, they simply access fields of the SimStruct, so repeated access is unlikely to cause any performance degradation. I've even seen macros used to wrap common use cases such as this one.
If you really want to cache the dialog parameter, the easiest route is probably ssSetUserData. Declare a struct containing a MASK_INDEX member (you don't have to use a struct, but it makes the approach more extensible). Dynamically allocate an instance using mxMalloc within mdlStart and assign it to the block's user data. Make sure you set SS_OPTION_CALL_TERMINATE_ON_EXIT in the ssSetOptions call in mdlInitializeSizes. Then define the mdlTerminate function, in which you retrieve the allocated struct with ssGetUserData and mxFree it. You can then access the struct members within mdlOutputs using ssGetUserData.
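A rough sketch of that approach, meant to be merged into the S-function source above (the BlockData struct and its field are placeholder names; the callbacks, macros, and API calls are the standard S-function ones just mentioned):

typedef struct {
    unsigned short mask_index;  /* per-block cached copy of the dialog parameter */
} BlockData;

/* In mdlInitializeSizes, additionally call:
 *     ssSetOptions(S, SS_OPTION_CALL_TERMINATE_ON_EXIT);
 */

#define MDL_START               /* enables the optional mdlStart callback */
static void mdlStart(SimStruct *S)
{
    BlockData *data = (BlockData *)mxMalloc(sizeof(BlockData));
    data->mask_index = (unsigned short)*mxGetPr(ssGetSFcnParam(S, 0));
    /* ... do the one-time operations on data->mask_index here ... */
    ssSetUserData(S, data);     /* cached per block instance */
}

static void mdlOutputs(SimStruct *S, int_T tid)
{
    BlockData *data = (BlockData *)ssGetUserData(S);
    /* ... use data->mask_index instead of the file-scope variable ... */
}

static void mdlTerminate(SimStruct *S)
{
    BlockData *data = (BlockData *)ssGetUserData(S);
    if (data != NULL) {
        mxFree(data);
    }
}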
There are other, more advanced options as well, such as work vectors; in this case most likely a PWork vector.
Another option, if your parameter is tunable, is using runtime parameters, which let you cache, and optionally transform, a block's dialog parameters.
In your case, I'd just stick with using ssGetSFcnParam every time within mdlOutputs.
How do reference counted structures work? For example let's look at SDL_Surface:
typedef struct SDL_Surface
{
    ...
    int refcount;
} SDL_Surface;
s = SDL_CreateRGBSurface(...); // <-- what happens here?
SDL_FreeSurface(s); // <-- and here?
How do I implement reference counting in my own code?
SDL_CreateRGBSurface will allocate a new instance of SDL_Surface (or a suitably derived structure) and set its reference count to 1.
SDL_FreeSurface will decrement the reference count, and check if it's zero. If it is, that means that no other objects are using the surface, and it will be deallocated.
SDL also guarantees that the refcount is incremented whenever the object gets used somewhere else (e.g. in the renderer). So, if the reference count is nonzero when SDL_FreeSurface is called, then some other object must be using it. That other object will eventually also call SDL_FreeSurface and release the surface for good.
Reference counting allows you to cheaply track objects without the overhead of a cycle-collecting garbage collector. However, one drawback is that it won't handle cycles (e.g. where object A holds a reference to B, which in turn holds a reference back to A); in those cases the cycle keeps the objects involved alive even when all other external references are gone.
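To make the cycle problem concrete, here is a small self-contained sketch (the Node type is made up) in which two objects keep each other alive:

#include <stdlib.h>

typedef struct Node {
    int refcount;
    struct Node *other;          /* optional reference to a peer node */
} Node;

Node *node_new(void) {
    Node *n = (Node *)calloc(1, sizeof(Node));
    n->refcount = 1;
    return n;
}

void node_release(Node *n) {
    if (n && --n->refcount == 0) {
        node_release(n->other);  /* drop our reference to the peer */
        free(n);
    }
}

int main(void) {
    Node *a = node_new();
    Node *b = node_new();
    a->other = b; b->refcount++; /* A references B        */
    b->other = a; a->refcount++; /* B references A: cycle */
    node_release(a);             /* 2 -> 1, not freed     */
    node_release(b);             /* 2 -> 1, not freed     */
    /* Both nodes are now unreachable but still have refcount 1: leaked. */
    return 0;
}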
To implement refcounting, you add a refcount field to any object you want to refcount, and you ensure (in your public API, and internally) that every allocation and deallocation of the object goes through the appropriate refcount-maintaining interface, which you must define. Finally, whenever an object or function wants to hold a reference to one of your refcounted objects, it must first take that reference by incrementing the refcount (directly or through some interface), and when it is done with the object it must decrement the refcount again.
A Scene struct has a pointer to (a linked list of) SceneObjects.
Each SceneObject refers to a Mesh.
Some SceneObjects may however refer to the same Mesh (by sharing the same pointer - or handle, see later - to the Mesh). Meshes are pretty big and doing it this way has obvious advantages for rendering speed.
typedef struct SceneObject {
    Mesh *mesh;
    ...
    struct SceneObject *next;
} SceneObject;

typedef struct Scene {
    SceneObject *objects;
    ...
} Scene;
My question:
How do I free a Scene, while avoiding to free the same Mesh pointer multiple times?
I thought I could solve this by using a handle to the Mesh (Mesh** mesh_handle) instead of a plain pointer, so that I could set the referenced Mesh pointer to 0 and let successive frees simply free a null pointer, but I can't make it work. I just can't get my head around how to avoid the multiple deallocations.
Am I forced to keep reference counts for such a scenario? Or am I forced to put all the Mesh objects into a separate Mesh table and free that separately? Is there a way to tackle this without doing these things? By tagging the objects as instances of one another I can of course adjust the free algorithm to deal with the problem, but I was wondering whether there is a 'purer' solution.
One standard solution is reference counting: every object that can be pointed to by many other objects carries a counter remembering how many of them point at it. This is done with something like:
typedef struct T_Object
{
    int refcount;
    ....
} Object;

Object *newObject(....)
{
    Object *obj = my_malloc(sizeof(Object));
    obj->refcount = 1;
    ....
    return obj;
}

Object *ref(Object *p)
{
    if (p) p->refcount++;
    return p;
}

void deref(Object *p)
{
    if (p && p->refcount-- == 1)
        destroyObject(p);
}
Whoever allocates the object first becomes its first owner (hence the counter is initialized to 1). Every time you need to store the pointer somewhere else, store ref(p) instead, to be sure the counter is incremented. When some place will no longer point to the object, call deref(p). Once the last reference to the object is gone, the counter reaches zero and that deref call actually destroys the object.
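For example, sharing one object between two hypothetical holder structures would look like this (only newObject, ref, and deref from above are assumed; holderA and holderB are made-up containers):

Object *obj = newObject(....);   /* refcount == 1: we are the first owner */

holderA->item = ref(obj);        /* refcount == 2 */
holderB->item = ref(obj);        /* refcount == 3 */

deref(obj);                      /* we drop our own reference: 2 remain */

/* ...later, as each holder is torn down: */
deref(holderA->item);            /* refcount == 1 */
deref(holderB->item);            /* refcount == 0: destroyObject runs */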
It takes some discipline to get it working (you must always think about ref and deref when storing or dropping pointers), but it's possible to write complex software with zero leaks using this approach.
A simpler solution that is sometimes applicable is to also store all your shared objects in a separate list: you freely assign and change the complex data structures that point to these objects, but you never free the objects during normal use. Only when you need to throw everything away do you deallocate them, by walking that separate list.
Note that this approach is only viable if you don't allocate many objects during "normal use", because otherwise delaying destruction until the end may not be acceptable.