Deallocating an array of struct pointers in a C implementation of a priority queue

I have an issue freeing my array of struct pointers for a priority queue that I am implementing. I create two dynamic arrays of node pointers with a fixed size from a client C program. The array heapMap contains node pointers that map each node's integer ID to the node itself, and the array heap is the heap itself, holding the nodes ordered by their current values.
Everything seems to work; however, my pq_free function causes errors or doesn't properly deallocate the arrays. Any help would be appreciated.
Structures
typedef struct node_struct{
    int ID;
    double val;
}NODE;

struct pq_struct {
    char heapType;
    int max;
    int inUse;
    NODE **heap;    //BOTH have a specific capacity
    NODE **heapMap; //array of pointers to each
};
This is the function I use to allocate memory for the structure.
PQ * pq_create(int capacity, int min_heap){
    PQ * newQueue = (PQ*) malloc(sizeof(PQ)); //Allocate memory for a new heap
    newQueue->max = capacity;
    newQueue->inUse = 0;
    int inUse = 1; //1 in use by default, the 0th spot in the array is left alone intentionally

    //If min_heap == 0 it is a max heap, any other value is a min heap.
    if(min_heap != 0){
        newQueue->heapType = 'm';
    }else{
        newQueue->heapType = 'M';
    }

    //Allocate memory for heapMap and heap.
    newQueue->heap = (NODE**) malloc(sizeof(NODE*) * capacity);    //array of nodes, the heap
    newQueue->heapMap = (NODE**) malloc(sizeof(NODE*) * capacity); //array of node pointers, the heapMap
    int i = 0;
    for (i = 0; i < capacity + 1; i++) {
        newQueue->heapMap[i] = NULL;
    }

    //return PQ pointer
    return newQueue;
}
This is my pq_free function that doesn't seem to work properly. Thanks in advance for any help.
void pq_free(PQ * pq){
    //free all nodes
    NODE * temp;
    NODE ** temp2;
    int i;
    for (i = 0; i < pq->inUse; i++) {
        if (pq->heapMap[i] != NULL) {
            temp = pq->heapMap[i];
            free(temp);
        }
    }
    //pq->heapMap = NULL;
    free(pq->heap);
    free(pq->heapMap);
    free(pq);
}

As I was once taken to task on this site for doing this, I feel obligated to pass it on: you shouldn't cast the result of malloc in C. The void * it returns converts implicitly to the target pointer type, and the cast can hide mistakes.
Other than that, how are the individual nodes allocated? What errors specifically are given? I think you are also walking off the end of heapMap: you allocate capacity pointers but initialize capacity + 1 of them.
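To illustrate both points, here is a minimal, untested sketch of what the allocation and teardown could look like if the heap really is meant to be 1-based (index 0 intentionally unused). The extra + 1 slot and freeing over the whole map rather than over inUse are assumptions based on your description, not your original code:
/* sketch only: allocate capacity + 1 slots so indices 1..capacity are valid */
PQ *pq_create(int capacity, int min_heap) {
    PQ *newQueue = malloc(sizeof *newQueue);   /* no cast needed in C */
    if (newQueue == NULL) return NULL;

    newQueue->max = capacity;
    newQueue->inUse = 0;
    newQueue->heapType = (min_heap != 0) ? 'm' : 'M';

    newQueue->heap    = malloc((capacity + 1) * sizeof *newQueue->heap);
    newQueue->heapMap = malloc((capacity + 1) * sizeof *newQueue->heapMap);

    for (int i = 0; i < capacity + 1; i++) {
        newQueue->heap[i] = NULL;
        newQueue->heapMap[i] = NULL;
    }
    return newQueue;
}

void pq_free(PQ *pq) {
    /* assumes every allocated node is reachable through heapMap exactly once,
       so freeing over the whole map (not just inUse) releases all nodes */
    for (int i = 0; i < pq->max + 1; i++) {
        free(pq->heapMap[i]);   /* free(NULL) is a no-op, so no check needed */
    }
    free(pq->heap);
    free(pq->heapMap);
    free(pq);
}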

Related

Allocating memory for Heap with mallocs

I'm having problems declaring a new heap, empty, with max size "capacity".
Heap struct:
typedef struct {
    /* number of elements on vector */
    int size;
    /* vector max size */
    int capacity;
    /* vector of pointers for elements */
    element_t** elements;
} heap;
Element_t struct:
typedef struct element_
{
    char nameItem[100];
    char expirationDate[11];
    int qty;
    int sellRate;
    float priorityVal;
} element_t;
The function that I need to write to create the heap is declared like this, where the argument capacity is the heap's capacity.
heap* new_heap(int capacity){
The function that inserts elements into the heap:
int heap_insert(heap *h, element_t* elem)
{
    element_t * aux;
    int i;
    //gilc
    if (!h) return 0;
    /* if heap is full, don't insert element */
    if (h->size >= h->capacity)
        return 0;
    if (!elem)
        return 0;
    /* insert element at the end of the heap */
    h->size++;
    i = h->size;
    h->elements[i] = elem;
    /* while the element has higher priority than its father, swap them */
    while (i != ROOT && bigger_than(h->elements[i], h->elements[FATHER(i)]))
    {
        aux = h->elements[FATHER(i)];
        h->elements[FATHER(i)] = h->elements[i];
        h->elements[i] = aux;
        i = FATHER(i);
    }
    return 1;
    //Default
    return 0;
}
FATHER and ROOT are defined like this (I don't understand what they mean; they were pre-defined for the project too):
#define FATHER(x) (x/2)
#define ROOT (1)
and bigger_than like this:
int bigger_than(element_t* e1, element_t* e2)
{
    if (e1 == NULL || e2 == NULL)
    {
        return 0;
    }
    return e1->priorityVal > e2->priorityVal;
}
What malloc calls do I need to use? The function new_heap must allocate all memory necessary for the number of elements specified as argument capacity.
heap *new_heap(int capacity) {
    heap *h = malloc(sizeof(heap));
    h->size = 0;
    h->capacity = capacity;
    /* capacity + 1 pointers, because heap_insert stores elements at
       indices 1..capacity (ROOT is 1 and slot 0 is never used) */
    h->elements = malloc((capacity + 1) * sizeof(element_t *));
    return h;
}
The first malloc will make enough space for your heap structure. The second is for "vector" (as you called it) of pointers to elements, since these need to be stored in a separate spot in memory (based on your declaration of heap). Together, this allocates all the memory you need for the heap. I'm assuming you'll also have a new_element function that will handle allocating the memory for an individual element for you whenever you want to add something to the heap.
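Such a new_element function isn't shown in the question, so the following is only a sketch of what it might look like; the name and parameter list are assumptions:
#include <stdlib.h>
#include <string.h>

/* hypothetical helper: allocates and fills one element_t */
element_t *new_element(const char *name, const char *expirationDate,
                       int qty, int sellRate, float priorityVal) {
    element_t *e = malloc(sizeof *e);
    if (e == NULL) return NULL;

    /* copy the strings into the fixed-size buffers, truncating if necessary */
    strncpy(e->nameItem, name, sizeof e->nameItem - 1);
    e->nameItem[sizeof e->nameItem - 1] = '\0';
    strncpy(e->expirationDate, expirationDate, sizeof e->expirationDate - 1);
    e->expirationDate[sizeof e->expirationDate - 1] = '\0';

    e->qty = qty;
    e->sellRate = sellRate;
    e->priorityVal = priorityVal;
    return e;
}
An element created this way can then be passed straight to heap_insert, for example heap_insert(h, new_element("milk", "2025-01-31", 3, 1, 2.5f)).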

Pointer seg faulting although I malloc-ed right

I don't understand why my program seg faults at this line: if ((**table->table).link == NULL). I seem to have malloc-ed memory for it, and I tried looking at it with gdb: *table->table was accessible and not NULL, but **table->table was not accessible.
Definition of hash_t:
struct table_s {
    struct node_s **table;
    size_t bins;
    size_t size;
};
typedef struct table_s *hash_t;
void set(hash_t table, char *key, int value){
    unsigned int hashnum = hash(key)%table->bins;
    printf("%d \n", hashnum);
    unsigned int i;
    for (i = 0; i<hashnum; i++){
        (table->table)++;
    }
    if (*(table->table) == NULL){
        struct node_s n = {key, value, NULL};
        struct node_s *np = &n;
        *(table->table) = malloc(sizeof(struct node_s));
        *(table->table) = np;
    }else{
        while ( *(table->table) != NULL){
            if ((**table->table).link == NULL){
                struct node_s n = {key, value, NULL};
                struct node_s *np = &n;
                (**table->table).link = malloc(sizeof(struct node_s));
                (**table->table).link = np;
                break;
            }else if (strcmp((**table->table).key, key) == 0){
                break;
            }
            *table->table = (**(table->table)).link;
        }
        if (table->size/table->bins > 1){
            rehash(table);
        }
    }
}
I'm calling set from here:
for (int i = 0; i < trials; i++) {
    int sample = rand() % max_num;
    sprintf(key, "%d", sample);
    set(table, key, sample);
}
Your hashtable works like this: You have bins bins and each bin is a linked list of key / value pairs. All items in a bin share the same hash code modulo the number of bins.
You have probably created the table of bins when you created or initialised the hash table, something like this:
table->table = malloc(table->bins * sizeof(*table->table));
for (size_t i = 0; i < table->bins; i++) table->table[i] = NULL;
Now why does the member table have two stars?
The "inner" star means that the table stores pointers to nodes, not the nodes themselves.
The "outer" start is a handle to allocated memory. If your hash table were of a fixed size, for example always with 256 bins, you could define it as:
struct node_s *table[256];
If you passed this array around, it would become (or "decay into") a pointer to its first element, a struct node_s **, just as the array you got from malloc.
You access the contents of the bins via the linked lists, and the head of linked list i is table->table[i].
Your code has other problems:
What did you want to achieve with (table->table)++? This will make the handle to the allocated memory point not to the first element but to the next one. After doing that hashnum times, *table->table will be at the right node, but you will have lost the original handle, which you must retain because you must pass it to free later when you clean up your hash table. Don't lose the handle to allocated memory! Use another local pointer instead.
You create a local node n and then make a link in your linked list with a pointer to that node. But the node n will be gone after you leave the function and the link will be "stale": It will point to invalid memory. You must also create memory for the node with malloc.
A simple implementation of your hash table might be:
void set(hash_t table, char *key, int value)
{
    unsigned int hashnum = hash(key) % table->bins;

    // create (uninitialised) new node
    struct node_s *nnew = malloc(sizeof(*nnew));

    // initialise new node, point it to old head
    nnew->key = strdup(key);
    nnew->value = value;
    nnew->link = table->table[hashnum];

    // make the new node the new head
    table->table[hashnum] = nnew;
}
This makes the new node the head of the linked list. This is not ideal, because if you overwrite items, the new ones will be found (which is good), but the old ones will still be in the table (which isn't good). But that, as they say, is left as an exercise to the reader.
(The strdup function isn't standard, but it is widely available. It also allocates new memory, which you must free later, but it ensures that the string "lives" (is still valid) after you have created the hash table.)
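If strdup isn't available on your platform, a minimal replacement is easy to write; the name my_strdup here is only an example:
#include <stdlib.h>
#include <string.h>

/* allocates a copy of s on the heap; the caller must free it */
char *my_strdup(const char *s) {
    size_t len = strlen(s) + 1;    /* include the terminating '\0' */
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;
}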
Please note how few stars there are in the code. If there is one star too few, it is in hash_t, where you have typedef'd away the pointer nature.
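For completeness, a matching lookup would walk the same chain. This get function is not part of the question; its name, its return convention and the assumed node_s members (key, value, link, as used above) are illustrative only:
/* hypothetical lookup: returns 1 and stores the value on success, 0 if the key is absent */
int get(hash_t table, const char *key, int *value_out) {
    unsigned int hashnum = hash(key) % table->bins;

    for (struct node_s *n = table->table[hashnum]; n != NULL; n = n->link) {
        if (strcmp(n->key, key) == 0) {
            *value_out = n->value;
            return 1;
        }
    }
    return 0;
}
Because set pushes new nodes onto the head of the chain, a duplicate key returns the most recently inserted value first, which matches the behaviour described above.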

How to double the size of a dynamic array while keeping the old contents

For part of my C data structures assignment, I am tasked with taking an array of pointers to nodes of two doubly linked lists (one representing the main service queue, and the other representing a "bucket" of buzzers ready to be reused or used for the first time in the queue) and doubling its size while keeping the original contents intact. The idea is that each node has an ID which corresponds to its index in the pointer array, so, for example, the pointer at index 3 will always point to the node whose ID is 3. The boolean inQ is for something unrelated to this issue.
I've written most of the code, but it seems to be functioning incorrectly: it changes all the original pointers to the last node in the list before the array resizing. So, since the starting size of the array is 10 elements, when I print out the contents after the function, it displays 9 9 9 9 9 9 9 9 9 9.
Here are the structs I'm using:
typedef struct node {
    int id;
    int inQ;
    struct node *next;
    struct node *prev;
} NODE;

typedef struct list
{
    NODE *front;
    NODE *back;
    int size;
} LIST;

//referred to as SQ in the separate header file
struct service_queue
{
    LIST *queue;
    LIST *bucket;
    NODE **arr;
    int arrSize;
    int maxID;
};
Here is the function in question:
SQ sq_double_array(SQ *q)
{
    NODE **arr2 = malloc(q->arrSize * 2 * sizeof(NODE*));
    int i;

    //fill the first half of the new array with the node pointers of the first array
    for (i = 0; i < q->arrSize; i++)
    {
        arr2[i] = malloc(sizeof(NODE));
        if (i > 0)
        {
            arr2[i - 1]->next = arr2[i];
            arr2[i]->prev = arr2[i - 1];
        }
        arr2[i]->id = q->arr[i]->id;
        arr2[i]->inQ = q->arr[i]->inQ;
        arr2[i]->next = q->arr[i]->next;
        arr2[i]->prev = q->arr[i]->prev;
    }

    //fill the second half with node pointers to the new nodes and place them into the bucket
    for (i = q->arrSize; i < q->arrSize * 2; i++)
    {
        //Point the array elements to empty nodes, corresponding to the indices
        arr2[i] = malloc(sizeof(NODE));
        arr2[i]->id = i;
        arr2[i]->inQ = 0;

        //If the bucket is empty (first pass)
        if (q->bucket->front == NULL)
        {
            q->bucket->front = arr2[i];
            arr2[i]->prev = NULL;
            arr2[i]->next = NULL;
            q->bucket->back = arr2[i];
        }
        //If the bucket has at least 1 buzzer in it
        else
        {
            q->bucket->back = malloc(sizeof(NODE));
            q->bucket->back->next = arr2[i];
            q->bucket->back = arr2[i];
            q->bucket->back->next = NULL;
        }
    }
    q->arrSize *= 2;
    q->arr = arr2;
    return *q;
}
Keep in mind this must be done in plain C, which is why I'm not using 'new'.
You could use the realloc function:
void *realloc(void *ptr, size_t size);
Quoted from the man pages:
The realloc() function changes the size of the memory block pointed to by ptr to size bytes. The contents will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. If the new size is larger than the old size, the added memory will not be initialized. If ptr is NULL, then the call is equivalent to malloc(size), for all values of size; if size is equal to zero, and ptr is not NULL, then the call is equivalent to free(ptr). Unless ptr is NULL, it must have been returned by an earlier call to malloc(), calloc() or realloc(). If the area pointed to was moved, a free(ptr) is done.
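Applied to the service_queue from the question, a minimal sketch could look like the following; the int return type and leaving the creation of the new second-half nodes to the caller are assumptions, not part of the original interface:
/* sketch: grow q->arr in place, keeping the existing node pointers */
int sq_grow_array(SQ *q)
{
    int newSize = q->arrSize * 2;

    /* realloc keeps the first q->arrSize pointers exactly as they were */
    NODE **tmp = realloc(q->arr, newSize * sizeof *tmp);
    if (tmp == NULL)
        return 0;               /* the old array is still valid on failure */
    q->arr = tmp;

    /* the new second half starts out empty; nodes for IDs
       q->arrSize .. newSize-1 can then be created and linked into the bucket */
    for (int i = q->arrSize; i < newSize; i++)
        q->arr[i] = NULL;

    q->arrSize = newSize;
    return 1;
}
Because realloc preserves the existing pointers, the first-half nodes do not have to be re-created at all, which is what corrupted them in the original version.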

C - Check if index of struct array is uninitialized

I'm making a HashMap in C but am having trouble detecting whether a Node has been initialized or not.
Excerpts from my code below:
static struct Node
{
    void *key, *value;
    struct Node *next;
};

struct Node **table;
int capacity = 4;
table = malloc(capacity * sizeof(struct Node));

// At this point I should have a pointer to an empty Node array of size 4.
if (table[0] != NULL)
{
    // This passes
}
I don't see what I can do here. I've read tons of other posts of this nature and none of their solutions make any sense to me.
malloc does not initialize the memory allocated. You can use calloc to zero-initialize the memory.
// Not sizeof(struct Node)
// table = calloc(capacity, sizeof(struct Node));
table = calloc(capacity, sizeof(*table));
After that, it will make sense to use:
if (table[0] != NULL)
{
...
}
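Putting that together, a small self-contained sketch of the corrected allocation and check (the surrounding main is only for illustration):
#include <stdio.h>
#include <stdlib.h>

struct Node {
    void *key, *value;
    struct Node *next;
};

int main(void) {
    int capacity = 4;

    /* calloc zero-fills, so every slot starts out as a NULL pointer */
    struct Node **table = calloc(capacity, sizeof *table);
    if (table == NULL) return 1;

    if (table[0] == NULL)
        printf("slot 0 is empty\n");   /* this check is now reliable */

    free(table);
    return 0;
}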
I suggest you consider something like a HashMapCollection type that you create with a set of functions to handle the various memory operations you need.
So you might have code something like the following. I have not tested or even compiled this; however, it is a starting place.
The FreeHashMapCollection() function below processes a HashMapCollection to free up what it contains before freeing the management data structure itself. That may not be what you want to do, so it is something for you to consider.
The idea of the following is to use a single allocation: the HashMapCollection struct sits at the front and the array of HashMapNode structs immediately follows the management data, so a single free() releases everything at once.
typedef struct _TAGHashMapNode {
    void *key, *value;
    struct _TAGHashMapNode *next;
} HashMapNode;

typedef struct {
    int iCapacity;        // max number of items
    int iSize;            // current number of items
    HashMapNode *table;   // pointer to the HashMapNode table
} HashMapCollection;
Then have a function to allocate a HashMapCollection of a particular capacity initialized properly.
HashMapCollection *AllocateHashMapCollection (int iCapacity)
{
    HashMapCollection *p = malloc (sizeof(HashMapCollection) + iCapacity * sizeof(HashMapNode));
    if (p) {
        p->table = (HashMapNode *)(p + 1);
        p->iCapacity = iCapacity;
        p->iSize = 0;
        memset (p->table, 0, sizeof(HashMapNode) * iCapacity);
    }
    return p;
}
HashMapCollection *ReallocHashMapCollection (HashMapCollection *p, int iNewCapacity)
{
    HashMapCollection *pNew = realloc (p, sizeof(HashMapCollection) + sizeof(HashMapNode) * iNewCapacity);
    if (pNew) {
        pNew->table = (HashMapNode *)(pNew + 1);
        if (p == NULL) {
            // if p is not NULL then pNew will have a copy of that.
            // if p is NULL then this is basically a malloc() so initialize pNew data.
            pNew->iCapacity = pNew->iSize = 0;
        }
        if (iNewCapacity > pNew->iCapacity) {
            // added more memory so need to zero out that memory,
            // starting just past the old capacity.
            memset (pNew->table + pNew->iCapacity, 0, sizeof(HashMapNode) * (iNewCapacity - pNew->iCapacity));
        }
        pNew->iCapacity = iNewCapacity; // set our new current capacity
        p = pNew;                       // let's return our newly allocated memory
    }
    return p; // return either old pointer if realloc() failed or new pointer
}
void FreeHashMapCollection (HashMapCollection *p)
{
    // go through the list of HashMapNode items and free up each pair then
    // free up the HashMapCollection itself.
    for (int iIndex = 0; iIndex < p->iCapacity; iIndex++) {
        if (p->table[iIndex].key) free (p->table[iIndex].key);
        if (p->table[iIndex].value) free (p->table[iIndex].value);
        // WARNING ***
        // if these next pointers are actually pointers inside the array of HashMapNode items
        // then you would not do this free as it is unnecessary.
        // this free is only necessary if next points to some memory area
        // other than the HashMapNode table of HashMapCollection.
        if (p->table[iIndex].next) free (p->table[iIndex].next);
        // even though we are going to free this, init to NULL
        p->table[iIndex].key = NULL;
        p->table[iIndex].value = NULL;
        p->table[iIndex].next = NULL;
    }
    free (p); // free up the memory of the HashMapCollection
}
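A short, untested usage sketch of these helpers; the key/value allocations are purely illustrative and strdup is assumed to be available:
#include <stdlib.h>
#include <string.h>

int main(void) {
    HashMapCollection *map = AllocateHashMapCollection(4);
    if (map == NULL) return 1;

    /* the table is zeroed, so an unused slot can be recognised by key == NULL */
    if (map->table[0].key == NULL) {
        map->table[0].key = strdup("answer");      /* strdup assumed available */
        map->table[0].value = malloc(sizeof(int));
        *(int *)map->table[0].value = 42;
        map->iSize++;
    }

    map = ReallocHashMapCollection(map, 8);  /* grow; the new slots are zeroed */

    FreeHashMapCollection(map);              /* frees keys, values and the table */
    return 0;
}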

Freeing memory of used data leads to Segmentation Fault

I wrote a hashtable and it basically consists of these two structures:
typedef struct dictEntry {
    void *key;
    void *value;
    struct dictEntry *next;
} dictEntry;

typedef struct dict {
    dictEntry **table;
    unsigned long size;
    unsigned long items;
} dict;
dict.table is an array of bucket pointers that holds all the stored key/value pairs; each bucket is a linked list.
If half of the hashtable is full, I expand it by doubling the size and rehashing it:
dict *_dictRehash(dict *d) {
    int i;
    dict *_d;
    dictEntry *dit;

    _d = dictCreate(d->size * 2);

    for (i = 0; i < d->size; i++) {
        for (dit = d->table[i]; dit != NULL; dit = dit->next) {
            _dictAddRaw(_d, dit);
        }
    }

    /* FIXME memory leak because the old dict can never be freed */
    free(d); // seg fault

    return _d;
}
The function above uses the pointers from the old hash table and stores it in the newly created one. When freeing the old dict d a Segmentation Fault occurs.
How am I able to free the old hashtable struct without having to allocate the memory for the key/value pairs again?
Edit, for completeness:
dict *dictCreate(unsigned long size) {
    dict *d;

    d = malloc(sizeof(dict));
    d->size = size;
    d->items = 0;
    d->table = calloc(size, sizeof(dictEntry*));

    return d;
}

void dictAdd(dict *d, void *key, void *value) {
    dictEntry *entry;

    entry = malloc(sizeof *entry);
    entry->key = key;
    entry->value = value;
    entry->next = '\0';

    if ((((float)d->items) / d->size) > 0.5) d = _dictRehash(d);

    _dictAddRaw(d, entry);
}

void _dictAddRaw(dict *d, dictEntry *entry) {
    int index = (hash(entry->key) & (d->size - 1));

    if (d->table[index]) {
        dictEntry *next, *prev;

        for (next = d->table[index]; next != NULL; next = next->next) {
            prev = next;
        }

        prev->next = entry;
    } else {
        d->table[index] = entry;
    }

    d->items++;
}
The best way to debug this is to run your code under valgrind.
But to give you some perspective: when you free(d), you are expecting something like a destructor call on your struct dict, one that would internally free the memory allocated for the pointer-to-pointer-to-dictEntry table.
Why do you have to throw away the entire hash table to expand it? You have a next pointer anyway, so why not just append new hash entries to it?
The solution is not to free d, but to expand d by allocating more struct dictEntry objects and linking them in via the appropriate next pointers.
When contracting d you will have to iterate over next to reach the end and then start freeing the memory for the struct dictEntry objects inside of your d.
To clarify Graham's point, you need to pay attention to how memory is being accessed in this library. The user has one pointer to their dictionary. When you rehash, you free the memory referenced by that pointer. Although you allocated a new dictionary for them, the new pointer is never returned to them, so they don't know not to use the old one. When they try to access their dictionary again, it's pointing to freed memory.
One possibility is not to throw away the old dictionary entirely, but only the dictEntry table you allocated within the dictionary. That way your users will never have to update their pointer, but you can rescale the table to accommodate more efficient access. Try something like this:
void _dictRehash(dict *d) {
    printf("rehashing!\n");

    int i;
    dictEntry *dit, *next;

    int old_size = d->size;
    dictEntry **old_table = d->table;

    int size = old_size * 2;
    d->table = calloc(size, sizeof(dictEntry*));
    d->size = size;
    d->items = 0;

    for (i = 0; i < old_size; i++) {
        for (dit = old_table[i]; dit != NULL; dit = next) {
            next = dit->next;
            dit->next = NULL;   /* detach before re-inserting into the new table */
            _dictAddRaw(d, dit);
        }
    }

    free(old_table);
    return;
}
As a side note, I'm not sure what your hash function does, but the line
int index = (hash(entry->key) & (d->size - 1));
is a little unorthodox. Masking with d->size - 1 does keep the index within [0, size), but it only spreads entries over all the buckets when size is a power of two; for a general table size you probably mean % (modulus).
You are freeing a pointer which is passed in to your function. This is only safe if you know that whoever's calling your function isn't still trying to use the old value of d. Check all the code which calls _dictRehash() and make sure nothing's hanging on to the old pointer.
What does dictCreate actually do?
I think you're getting confused between the (fixed size) dict object, and the (presumably variable sized) array of pointers to dictEntries in dict.table.
Maybe you could just realloc() the memory pointed to by dict.table, rather than creating a new 'dict' object and freeing the old one (which incidentally, isn't freeing the table of dictentries anyway!)
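A sketch of that realloc() idea, keeping the same dict object and only growing the bucket array. The detach-and-reinsert pass below is an assumption about how you would redistribute the entries; it is needed because the index computed in _dictAddRaw depends on d->size:
/* sketch: grow d->table in place, then re-link every entry under the new size */
void _dictRehash(dict *d) {
    unsigned long old_size = d->size;
    unsigned long i;

    /* grow the bucket array; the dict object itself never moves */
    dictEntry **tmp = realloc(d->table, old_size * 2 * sizeof *tmp);
    if (tmp == NULL) return;               /* keep the old table on failure */
    d->table = tmp;
    d->size = old_size * 2;

    /* detach every existing chain into one pending list ... */
    dictEntry *pending = NULL;
    for (i = 0; i < old_size; i++) {
        dictEntry *dit = d->table[i], *next;
        for (; dit != NULL; dit = next) {
            next = dit->next;
            dit->next = pending;
            pending = dit;
        }
    }
    /* ... clear the whole table (realloc does not zero the new half) ... */
    for (i = 0; i < d->size; i++)
        d->table[i] = NULL;

    /* ... and re-insert every entry under the new size */
    d->items = 0;
    while (pending != NULL) {
        dictEntry *dit = pending;
        pending = pending->next;
        dit->next = NULL;
        _dictAddRaw(d, dit);
    }
}
Because the dict object is reused, callers of dictAdd never see a stale pointer, which removes the need to free d at all.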
