How to add a field to kernel task_struct? - c

I'm trying to write a kernel module to detect a fork bomb, and to do this, I want to add a field int descendantCount to task_struct. This is my code so far:
struct task_struct *pTask;
struct task_struct *p;

for_each_process(pTask)
{
    //trace back through every ancestor of this process
    for (p = pTask->parent; p != &init_task; p = p->parent)
    {
        //increment this ancestor's descendant count
        p->descendantCount = p->descendantCount + 1; //want to do something like this
    }
}
Basically, I'm trying to loop through every process, and for each process, go through all of its ancestors and increment each ancestor's descendantCount, which is the field that I want to add to task_struct.
I found this, and this, but I'm still really confused about how I would go about doing this, as I'm new to kernel programming... Should I go to include/linux/sched.h and add a field there? Something like this:
struct task_struct {
    ......
    pid_t pid;
    ....
    int descendantCount;
};
Any help would be greatly appreciated, thank you!!

It is unclear what the actual idea is - is this supposed to be executed on fork? Either way, the idea is wrong, and the implementation is buggy no matter how pseudocode-like the pasted sample is meant to be.
First of all descendantCount is a name using camelCase, which makes it inconsistent with the rest of the code. A less bad name would be descendant_count.
Counter modification must use atomic operations so that writes are not lost, or the entire thing needs to be protected by an exclusive lock.
The traversal uses ->parent, which is subject to change under ptrace, where it starts pointing to the tracer. The parent you want can be found in ->real_parent.
Except there is no RCU protection provided, so processes can be freed as you traverse them, making the loop a use-after-free.
With RCU or tasklist_lock the traversal will be safe, but still nonsensical. Processes can be reparented to init when their original parent dies, which breaks this kind of hierarchical accounting.
Processes would have to be grouped in some manner, but parent<->child relation is unsuitable for this purpose. Somewhat working examples of such grouping are cgroups and simple stuff like uids/gids.
All in all, this does not work.
Given that it looks like you are new not only to the kernel itself but also to C, I can only recommend focusing on userspace for the time being.
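For completeness only, here is roughly what the traversal would have to look like to address the RCU, real_parent and atomicity points above. This is a sketch, not an endorsement of the idea, and descendant_count is the hypothetical atomic_t field that does not exist in the real task_struct:

#include <linux/sched.h>
#include <linux/sched/signal.h>   /* for_each_process() on newer kernels */
#include <linux/rcupdate.h>

struct task_struct *task, *p;

rcu_read_lock();                              /* keeps task_structs alive while we walk them */
for_each_process(task) {
    for (p = rcu_dereference(task->real_parent);
         p != &init_task;
         p = rcu_dereference(p->real_parent))
        atomic_inc(&p->descendant_count);     /* hypothetical atomic_t counter */
}
rcu_read_unlock();

Even written this way, the reparent-to-init behaviour described above still makes the resulting numbers meaningless.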

Related

What's the best way to asynchronously return a result (as a struct) that hasn't been fully "set up" (or processed) yet

Alright, I honestly have tried looking up "Asynchronous Functions in C" (the results are exclusively for C#), but I get nothing for C. So I'm going to ask it here, but if there are better, already-asked questions on StackExchange or what-have-you, please direct me to them.
I'm teaching myself about concurrency and asynchronous functions and all that, so I'm attempting to create my own thread pool. So far I'm still in the planning phase, and I'm trying to find a clear path to travel on. I don't want a hand-out of code though, just a nudge in the right direction (or else the exercise is pointless).
What would be the best way to asynchronously return from a function that isn't really "ready"? That is, it will return almost immediately, even if it's still processing the task given by the user. The "task" is going to be a callback plus arguments, to fit what the pthread functions need, although I'll work on attributes later. The function returns a struct called "Result", which contains the void * return value and a byte (unsigned char) called "ready" that holds 0 or 1. While the "Result" is not "ready", the user shouldn't attempt to process the item yet. Then again, the "item" can be NULL if the callback returns NULL, but "ready" lets the user know it has finished.
struct Result {
    /// Determines whether or not it has been processed.
    unsigned char ready;
    /// The return type, NULL until ready.
    void *item;
};
The struct isn't really complete, but it's a basic prototype embodying what I'm attempting to do. This isn't really the issue here, although let me know if it's the wrong approach.
Next I have to actually process the thing without blocking until everything is finished. As I said, the function will create the Result, then asynchronously process it and return immediately (by returning this result). The problem is the asynchronous processing. I was thinking of spawning another thread inside of the thread pool, but I feel that misses the point of a thread pool, as it's no longer remaining simple.
Here's what I was thinking (which I've a feeling is grossly over-complicated). In the function add_task, spawn a new thread (Thread A) with a passed Sub_Process struct, then return the non-processed but initialized result. The spawned thread will in turn spawn another thread (see the problem? This is Thread B) with the original callback and arguments, then Thread A joins Thread B to capture its return value, which is then stored in the result's item member. Since the result points to the very same struct the user holds, it shouldn't be a problem.
My problem is that it spawns 2 threads instead of being able to do it in 1, so I'm wondering if I'm doing this wrong and over-complicating things. Is there a better way to do this? Does the pthread library have a function which will asynchronously do this for me? Anyway, the prototype Sub_Process struct is below.
/// Makes it easier than having to retype everything.
typedef void *(*thread_callback)(void *args);
struct Sub_Process {
    /// Result to be processed.
    Result *result;
    /// Thread callback to be processed
    thread_callback cb;
    /// Arguments to be passed to the callback
    void *args;
};
Am I doing it wrong? I've a feeling I'm missing the whole point of a Thread_Pool. Another question is, is there a way to spawn a thread that is created but waiting, not doing anything? I was thinking of handling this by creating all of the threads up front and having them just wait in a processing function until called, but I've a feeling this is the wrong way to go about this.
To further elaborate, I'll also post some pseudocode of what I'm attempting here
Notes: I was recommended to post this question here for an answer, so it's been copied and pasted; let me know if there is any faulty editing.
Edit: No longer spawns another thread, instead calls callback directly, so the extra overhead of another thread shouldn't be a problem.
I presume your intention is that a thread will request the asynchronous work to be performed, then go on to perform some different work itself until the point where it requires the result of the asynchronous operation in order to proceed.
In this case, you need a way for the requesting thread to stop and wait for the Result to be ready. You can do this by embedding a mutex and condition variable pair inside the Result:
struct Result {
    /// Lock to protect contents of `Result`
    pthread_mutex_t lock;
    /// Condition variable to signal result being ready
    pthread_cond_t cond;
    /// Determines whether or not it has been processed.
    unsigned char ready;
    /// The return type, NULL until ready.
    void *item;
};
When the requesting thread reaches the point that it requires the asynchronous result, it uses the condition variable:
pthread_mutex_lock(&result->lock);
while (!result->ready)
    pthread_cond_wait(&result->cond, &result->lock);
pthread_mutex_unlock(&result->lock);
You can wrap this inside a function that waits for the result to be available, destroys the mutex and condition variable, frees the Result structure and returns the return value.
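For instance, a minimal sketch of such a wrapper (the name result_get is just illustrative, and it assumes the Result was allocated with malloc; it needs <pthread.h> and <stdlib.h>):

void *result_get(struct Result *result)
{
    void *item;

    pthread_mutex_lock(&result->lock);
    while (!result->ready)
        pthread_cond_wait(&result->cond, &result->lock);
    item = result->item;
    pthread_mutex_unlock(&result->lock);

    /* no one else will touch the Result any more, so tear it down */
    pthread_mutex_destroy(&result->lock);
    pthread_cond_destroy(&result->cond);
    free(result);

    return item;
}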
The corresponding code in the thread pool thread when the processing is finished would be:
pthread_mutex_lock(&result->lock);
result->item = item;
result->ready = 1;
pthread_cond_signal(&result->cond);
pthread_mutex_unlock(&result->lock);
Another question is, is there a way to spawn a thread that is created but waiting, not doing anything? I was thinking of handling this by creating all of the threads up front and having them just wait in a processing function until called, but I've a feeling this is the wrong way to go about this.
No, you're on the right track here. The mechanism to have the thread pool threads wait around for some work to be available is the same as the above - condition variables.
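As a sketch of what that might look like (the Thread_Pool and Task_Node types below are assumptions for illustration, not something from your code; needs <pthread.h> and <stdlib.h>), a worker could be:

struct Task_Node {
    struct Sub_Process *task;
    struct Task_Node *next;
};

struct Thread_Pool {
    pthread_mutex_t lock;
    pthread_cond_t work_available;
    struct Task_Node *head;       /* assumed singly linked list of pending tasks */
    int shutdown;
};

void *pool_worker(void *arg)
{
    struct Thread_Pool *pool = arg;

    for (;;) {
        pthread_mutex_lock(&pool->lock);
        while (pool->head == NULL && !pool->shutdown)
            pthread_cond_wait(&pool->work_available, &pool->lock);  /* sleep until work arrives */
        if (pool->shutdown) {
            pthread_mutex_unlock(&pool->lock);
            return NULL;
        }
        struct Task_Node *node = pool->head;     /* dequeue one task */
        pool->head = node->next;
        pthread_mutex_unlock(&pool->lock);

        struct Sub_Process *task = node->task;
        void *item = task->cb(task->args);       /* run the user's callback directly */

        /* publish the value exactly as shown above */
        pthread_mutex_lock(&task->result->lock);
        task->result->item = item;
        task->result->ready = 1;
        pthread_cond_signal(&task->result->cond);
        pthread_mutex_unlock(&task->result->lock);

        free(node);
    }
}

add_task would then append a Task_Node to pool->head under pool->lock and call pthread_cond_signal(&pool->work_available).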

posix threading in C condition variables

I've done my share of reading about condition variables, and I'm simply stuck, unable to comprehend how to use them.
I have a tree which, as of now, when you make an insertion for a node that already exists, returns 0, meaning it already exists and hence the insert fails.
I now want to add pthreads support: rather than reporting that the insert cannot be done because the value already exists and returning 0, I want the caller to wait, and once the requested node is deleted, go ahead and insert.
For example,
Suppose a tree has 3 nodes, with value 1, 5, 10
If I want to insert a new node with value 10, rather than returning 0 and throwing an error that the value already exists, it should wait for the node with the value 10 to be deleted; once it is deleted, it should go ahead and insert.
My insert function's else block returns 0 after it has determined that a node with that value already exists; you can be assured that the duplicate-detection logic works fine. Now I am simply trying to add the condition variable support so that it waits. The condition field is initialized in the first line of the insert, so that's done as well. I am hoping that when it enters this block, the cond_wait is the only line of code which will be executed, and then it will simply wait until the delete signals it. Is my approach here right? If it is, how do I signal it from the delete? Please help me out here; I have spent hours reading and looking at examples trying to figure this out.
Code,
else
{
    //IF DUPLICATE EXISTS
    pthread_mutex_lock(&m);
    node->counter++;
    pthread_cond_wait(&node->condition, &m);
    _insert(string, strlen, ip4_address, node, NULL, NULL);
    pthread_mutex_unlock(&m);
    return 1; //I CHANGED IT FROM 0 TO 1, since if we were signalled and reached this point,
              //it was probably successful
}
Here are assumptions:
struct tree
{
    ... // some other data (whatever)
    pthread_mutex_t mutex;
    pthread_cond_t cond;
};
Helper function:
int tree_contains_value(struct tree *t, int value)
{
    return ...; // returns 0 or 1
}
And here is an insertion:
void tree_insert(struct tree *t, int value)
{
    pthread_mutex_lock(&t->mutex);
    while (tree_contains_value(t, value))
    {
        pthread_cond_wait(&t->cond, &t->mutex);
    }
    ... // do actual insert
    pthread_mutex_unlock(&t->mutex);
}
And removal:
void tree_remove(struct tree *t, int value)
{
    pthread_mutex_lock(&t->mutex);
    ... // remove value
    pthread_cond_broadcast(&t->cond); // notify all waiting threads, if any
    pthread_mutex_unlock(&t->mutex);
}
A condition variable wait must be wrapped in a loop. The loop's guard tests a condition over the shared data protected by a mutex. It makes no sense to use a condition variable as you have it.
If it makes sense to wait for the node with value 10 to be deleted before inserting it, then it is done with logic like this:
lock(mutex)
while (tree.contains(key))
    wait(cond, mutex)
tree.insert(key, value)
unlock(mutex)
The other task does this:
lock(mutex)
tree.delete(key)
unlock(mutex)
broadcast(cond) // could be in the mutex, but better outside!
When C. A. R. Hoare invented monitors and condition variables, the original concept was a little different. Efficiency on multiple processors wasn't a concern yet, and so the following logic was supported:
enter(monitor);
if (tree.contains(key)) // no loop
    wait(cond, monitor)
tree.insert(key, value)
leave(monitor);
There was a guarantee that when the other task signals the condition, the waiting task will be atomically transferred back to the monitor without any other task being able to seize the monitor. So for instance when a task is in the monitor and deletes node 10, and signals the condition variable, the first task waiting on that condition variable is guaranteed to immediately get the monitor. It is not that way with POSIX mutexes and conditions (for good reasons).
Another difference between mutexes and monitors is that a thread does not have to hold the mutex to signal the condition variable. In fact, it is a good idea not to: signaling a condition variable is a potentially expensive operation (a trip to the kernel), and mutexes should guard critical regions that are as short as possible (just a few instructions, ideally) to minimize contention.
Yet another difference is that POSIX conditions have a pthread_cond_broadcast function which wakes up all threads waiting on a condition. This is the correct function to use by default. In situations in which it is obvious (or can be shown) that waking up a single thread is correct, the function pthread_cond_signal can be used to optimize the code.

Queue of variable length array or struct

How would one go about creating a queue that can hold an array, moreover an array with a variable number of rows?
char data[n][2][50];
//Could be any non-zero n, e.g.:
n=1; data = {{"status","ok"}};
// or
n=3; {{"lat","180.00"},{"long","90.123"},{"status","ok"}};
// and so on
n needs to be added to the queue as well. Or is there an even better solution than what I'm asking for? A queue is easy enough to write (or find reusable examples of) for single data items, but I'm not sure what method I would use for the above. Maybe a struct? That would solve for the array and n... but would it solve for a variable-length array?
More broadly, the problem I'm trying to solve is this.
I need to communicate with a web server using POST. I have the code for this already written; however, I don't want to keep the main thread busy every time this task needs doing, especially since I need to make other checks, such as whether the connection is up. If it isn't, I need to back off and wait, or try to bring it back online.
My idea was to have a single separate thread dedicated to this task. I figured creating a queue would be the best way for the main thread to let the child thread know what to do.
The data will be a variable number of string pairs, like:
Main
//Currently does
char data[MAX_MESSAGES_PER_POST][2][50];
...
assembles array
sendFunction(ptrToArray, n);
resumes execution with large and unpredictable delay
//Hopefully will do
...
queue(what needs doing)
carry on executing with (almost) no delay
Child
while(1)
{
    if(allOtherConditionsMet()) //Device online and so forth
    {
        if(!empty(myQueue))
        {
            //Do work and dequeue
        }
    }
    else
    {
        //Try and make conditions ok, e.g. reconnect dongle.
    }
    // sleep/back off for a while
}
You could use an existing library, like GLib. GLib is cross-platform. If you used GLib's asynchronous queues, you'd do something like the following.
The first thread to create the queue executes:
GAsyncQueue *q = g_async_queue_new ();
Other threads can take a reference on the queue (showing intent to use it) with:
g_async_queue_ref (q);
After this, any thread can 'push' items to the queue with:
struct queue_item i;
g_async_queue_push (q, (gpointer) &i);
And any thread can 'pop' items from the queue with:
struct queue_item *d = g_async_queue_pop (q);
/* Blocks until item is available. */
Once a thread finishes using the queue and doesn't care any more about it, it calls:
g_async_queue_unref (q);
Even the thread which created the queue needs to do this.
There are a bunch of other useful functions, which you can all read about on the page documenting them. Synchronization (locking/consistency/atomicity of operations) is taken care of by the library itself.
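For the variable-length part of the question, one possible item layout (the queue_item struct here is only a sketch, not part of GLib) is to heap-allocate each message together with its row count and push a pointer to it; memcpy needs <string.h>:

/* Sketch: one message = n rows of [2][50] strings, plus the count. */
struct queue_item {
    int n;
    char (*rows)[2][50];   /* points to n rows */
};

/* producer (main thread) */
struct queue_item *msg = g_new (struct queue_item, 1);
msg->n = n;
msg->rows = g_malloc (n * sizeof *msg->rows);
memcpy (msg->rows, data, n * sizeof *msg->rows);   /* copy from char data[n][2][50] */
g_async_queue_push (q, msg);

/* consumer (child thread) */
struct queue_item *it = g_async_queue_pop (q);     /* blocks until an item is available */
/* ... POST it->rows here ... */
g_free (it->rows);
g_free (it);

Heap-allocating the item (rather than pushing the address of a stack variable) keeps it valid until the consumer has finished with it and frees it.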

Create a non-blocking timer to erase data

Can someone show me how to create a non-blocking timer to delete the data of a struct?
I've this struct:
struct info{
    char buf;
    int expire;
};
Now, when expire elapses, I need to delete the data in my struct. The fact is that at the same time, my program is doing something else, so how can I do this, preferably avoiding the use of signals?
It won't be worth it. The time it takes to delete the structure is most likely much less than the time it would take to arrange for the structure to be deleted later. The reason is that in order to delete the structure later, some other structure has to be created to hold the information needed to find the original when we get around to deleting it. And then that bookkeeping structure will itself eventually need to be freed. For a task this small, it's not worth the overhead of dispatching.
In a different case, where the deletion is really complicated, it may be worth it. For example, if the structure contains lists or maps with numerous sub-elements that must be traversed to destroy each one, then it might be worth dispatching a thread to do the deletion.
The details vary depending on what platform and threading standard you're using. But the basic idea is that somewhere you have a function that causes a thread to be tasked with running a particular chunk of code.
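With POSIX threads, for example, handing the teardown off could look roughly like this (a sketch only; destroy_info is a made-up name, and p is assumed to point to a heap-allocated struct info):

#include <pthread.h>
#include <stdlib.h>

static void *destroy_info(void *arg)
{
    struct info *p = arg;
    /* ... free whatever the structure owns ... */
    free(p);
    return NULL;
}

/* somewhere in the main code, instead of deleting in place: */
pthread_t tid;
pthread_create(&tid, NULL, destroy_info, p);
pthread_detach(tid);   /* don't wait for the cleanup thread to finish */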
Update: Hmm, wait, a timer? If code is not going to access it, why not delete it now? And if code is going to access it, why are you setting the timer now? Something's fishy with your question. Don't even think of arranging to have anything deleted until everything is 100% finished with it.
If you don't want to use signals, you're going to need threads of some kind. Any more specific answer will depend on what operating system and toolchain you're using.
I think the idea is to have a timer, as in typical client/server logic: when a timer expires, you need to delete the entries whose time has expired.
If so, it can be implemented in a couple of ways.
a) Single-threaded: You create a queue sorted on (expiry time - now), so that the shortest span receives its callback first. (You could implement the timer queue with a map in C++.) Then, whenever your main work is done, you call the timer function to check whether any expired request is in the queue; if so, it deletes that data. The prototypes might look like set_timer(void (*pf)(void)); and add_timer(void *context, long time_to_expire); for adding a timer.
b) Multi-threaded: The add_timer logic is the same, except it accesses the shared map under a lock. A dedicated timer thread sleeps (on a condition variable, with a timeout) for the shortest interval in the map. Meanwhile, if anything is added to the timer queue, the adding thread notifies it. It needs to sleep on a condition variable rather than with a plain sleep, because a newly added timer might have a shorter interval than the existing minimum.
So suppose the first timer was set for 5 seconds from now,
and the second timer for 3 seconds from now.
If the timer thread only sleeps, rather than waiting on a condition variable, it will wake up after 5 seconds, whereas it is expected to wake up after 3 seconds.
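A minimal sketch of option (b) with pthreads, keeping just a single earliest deadline for brevity (a real implementation would keep the whole sorted queue; the names here are illustrative):

#include <pthread.h>
#include <errno.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static struct timespec next_expiry;   /* earliest absolute deadline currently known */
static int have_timer = 0;

void add_timer(struct timespec when)
{
    pthread_mutex_lock(&lock);
    if (!have_timer || when.tv_sec < next_expiry.tv_sec)   /* second granularity, for brevity */
        next_expiry = when;
    have_timer = 1;
    pthread_cond_signal(&cond);        /* wake the timer thread so it re-evaluates */
    pthread_mutex_unlock(&lock);
}

void *timer_thread(void *arg)
{
    (void) arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        if (!have_timer) {
            pthread_cond_wait(&cond, &lock);               /* nothing to time yet */
        } else if (pthread_cond_timedwait(&cond, &lock, &next_expiry) == ETIMEDOUT) {
            /* the earliest timer expired: delete the associated data here */
            have_timer = 0;
        }
        /* if we were signalled instead, next_expiry was updated; loop and wait again */
    }
}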
Hope this clarifies your question.
Cheers,

How do I find an item in a ListView control?

My list view contains 3 columns: Name, Address and Phone number.
I want to retrieve the index of a particular name.
I'm using the ListView_FindItem macro to find the index, but when my code reaches this line it crashes the program.
It just says "Payroll has stopped working. Windows can check online for a solution to the problem."
I'm sure I have passed the right handle to the ListView_FindItem macro, but I'm not sure about the LVFINDINFO structure.
Here's my code
WCHAR szProcess[80] = {0};
LVFINDINFO item = {LVFI_STRING, (LPCTSTR) szProcess};
//code to find parent handles
...
//code to find index
index = ListView_FindItem(hwndListView, -1, &item);
I'm not sure about the LVFI_STRING flag, and I have even tried passing a constant LVFINDINFO structure to the ListView_FindItem macro, but my program still crashes.
Note: The above code is not part of the payroll application. I mean to say the payroll application has the listview, and I'm trying to search for the item from another application.
Can someone point me in the right direction?
Thanks.
Your description is a little unclear, but I interpret it as you sending the LVM_FINDITEM message (via the ListView_FindItem() macro) to a window in a different process.
This simply does not work for this particular Windows message, since it passes a pointer to a struct in the calling process, which is meaningless when interpreted in the context of the other process (the payroll app that owns the list view).
To solve your problem you could allocate memory in the other process although this is quite a complex task. A commonly cited example of the technique is to be found in the Code Project article, Stealing Program's Memory.
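In rough outline (a sketch only, with error handling omitted; hwndListView and szProcess are taken from your snippet, and it needs <windows.h> and <commctrl.h>), the cross-process version looks like this:

DWORD pid;
GetWindowThreadProcessId(hwndListView, &pid);
HANDLE hProc = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_READ | PROCESS_VM_WRITE,
                           FALSE, pid);

/* allocate the LVFINDINFO and the search string inside the target process */
LVFINDINFO *remoteInfo = VirtualAllocEx(hProc, NULL, sizeof(LVFINDINFO),
                                        MEM_COMMIT, PAGE_READWRITE);
WCHAR *remoteStr = VirtualAllocEx(hProc, NULL, sizeof(szProcess),
                                  MEM_COMMIT, PAGE_READWRITE);
WriteProcessMemory(hProc, remoteStr, szProcess, sizeof(szProcess), NULL);

LVFINDINFO info = {0};
info.flags = LVFI_STRING;
info.psz = (LPCTSTR) remoteStr;   /* this pointer must be valid in the *other* process */
WriteProcessMemory(hProc, remoteInfo, &info, sizeof(info), NULL);

int index = (int) SendMessage(hwndListView, LVM_FINDITEM, (WPARAM) -1, (LPARAM) remoteInfo);

VirtualFreeEx(hProc, remoteInfo, 0, MEM_RELEASE);
VirtualFreeEx(hProc, remoteStr, 0, MEM_RELEASE);
CloseHandle(hProc);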
Perhaps a simpler approach would be to use WM_COPYDATA which will marshal string data between processes. If that doesn't have enough flexibility then you'd need to find another IPC mechanism, e.g. named pipes.
