How to pass data (a structure) as a message to a thread in C

How do I pass data to a thread from main application?
Inside the main application I created a thread for processing error messages. While processing data, if the main application encounters an error, it generates an error message and fills it into a structure. This error message (structure) needs to be passed to the thread, which will process it further while the main application continues its work. I am doing this in C on the Windows platform.
There will be only one such thread running in my application. At the moment I have defined a global structure variable (errorData, of type struct myData) and I am passing its address using PostThreadMessage.
struct myData errorData;
From the main application I post a message using:
PostThreadMessage(ErrorLogId, THRD_MESSAGE_EXIT , 0 , (LPARAM)&errorData);
In the thread I have
MsgReturn = GetMessage(&msg, NULL, THRD_MESSAGE_SOMEWORK, THRD_MESSAGE_EXIT);
At the moment it is working fine. But if processing an error message takes more time, the main application might encounter new errors in the meantime and overwrite the data in the global structure errorData.
I could use a locking mechanism, but I cannot stall the main application until the thread has finished processing. How do I pass the data without keeping it in a global variable?

You might like to create a new instance of struct myData each time you are about to call PostThreadMessage().
The thread needs to free() this instance of struct myData when done with it.
Adding synchronization to your current approach would work against the asynchronous concept of spawning workers while the main task continues.
The threads will still need synchronization among themselves if they write to something shared, such as a log file.

A solution is to dynamically allocate a struct myData (using malloc()) each time, populate it and pass it to the thread for processing. The thread is responsible for free()ing it once it has completed processing it.
This approach removes any synchronization between threads on the global object errorData (as it is no longer required).

How about allocating the error message dynamically (with malloc()), filling it and passing a pointer to it to the thread in a message? Then the thread would work with the message and deallocate it (with free()).

Edit:
I didn't realize there was a message queue already, sorry; in that case a dynamically allocated message will do, of course.
Old answer for reference:
If you don't wish to wait until the thread finishes processing the error message, then you should use a synchronized queue for communications between the main thread and the worker thread. This is some pseudo code to explain what I mean:
Worker Thread:
    lock(queue)
    while (queue_is_empty())
        wait(queue)           // wait releases the lock while sleeping
    error = read(queue)
    unlock(queue)
    process_error(error)      // process outside the lock, so the main thread is not blocked
Main Thread:
    if (error)
        lock(queue)
        write(queue, error)
        unlock(queue)
        signal(queue)         // wake the worker
You don't have to implement that from scratch; you could use something like RabbitMQ.

Related

How to send a message to a thread that is doing a lengthy operation?

I have a main thread, and a worker thread. When the main thread is about to terminate, I need to send a message to the worker thread so it can terminate itself cleanly (I have heard that TerminateThread() is a bad idea).
The worker thread will be doing a lengthy operation (more than 30 seconds), so is there a way to send it a message during this operation? Or is the design of my program wrong, and a worker thread should not be doing such lengthy operations?
As a general principle, do not forcibly terminate threads.
Your background worker thread can periodically check whether the program wants to exit; you can use std::atomic<bool> for that. Here is an example of how to use it.
#include <atomic>
#include <thread>

std::atomic<bool> g_bExit(false);   // no need for volatile with std::atomic

void thread1()
{
    while (!g_bExit)
    {
        // do one chunk of the lengthy work here, then re-check the flag
    }
}

int main()
{
    std::thread th1(thread1);
    // ... main thread does its own work ...
    g_bExit = true;   // ask the worker to stop
    th1.join();       // wait for the thread to finish
}
An easy solution, though maybe not the best one, is to use a global flag variable, initially set to 0.
When the main thread finishes, set it to 1. Your other thread should check this variable in a loop;
when the variable is 1, the thread knows that the main thread has finished. (Note that a plain, non-atomic flag is not guaranteed to be visible across threads; prefer std::atomic as shown above.)
Hope it helps!
Use the PostThreadMessage function to post a message to another thread. The target thread must have a message pump (not necessarily a GUI message pump); use GetMessage in the target thread to retrieve the message. It is best to craft a structure and send a pointer to a structure object; the structure would contain the actual message type (Start, Message, Stop, etc., a set of enum values, for example).
Or you may use the newer threading facilities of C++11 or the unbounded_buffer class of the VC++ Concurrency Runtime. You will need to explore and find the appropriate solution for your problem.
I didn't post an example, since this domain is quite large. You can search the web for examples and references.
EDIT: I believe I misunderstood the problem initially. Here is my solution: use an event synchronization object. In the worker thread, wait for the event with WaitForSingleObject (or WaitForMultipleObjects); from the source thread, call SetEvent on the same synchronization object. Yes, there are also newer alternatives such as C++ std::thread.

How to create a new thread that will run based off of the next ucontext_t in my ready queue

So I am simulating multi threading through context switches. Now I actually need to create a new thread that will continue to run based off of the next context in my ready queue.
So I think I have a general idea of how this would work. I have a global ready queue that contains my ucontext_t structs. I want to essentially "pause" the running of the current thread and instead run the next ucontext_t on a new thread until I tell my application to stop waiting. My confusion is coming from how I would create this new thread and get it to run the next context. pthread_create needs a function pointer but I don't even know what function it will be running (isn't that determined by the context?).
Any insight on this problem would be greatly appreciated.
I've done some more reading and it seems that I am going to want to use clone(...) however the details are still eluding me.

Proper way to handle pthread communication / signals in this instance?

I'm writing a small client/server demo that shares files between peers. Once a peer gets a list of IP addresses from the main server, the main thread creates a thread for each respective file. The process looks like this:
Main thread gets list of files from server
Thread created for each file (detached)
In each created thread, connect to the peers specified / associated with a file
Thread downloads the file in chunks
Thread announces the file was complete
My problem comes into play when trying to "query" a thread. In each thread, I keep track of the progress of a transfer. In my main thread, I would like the user to be able to see the progress of all of the transfers taking place. What would be the best way to do so? I was thinking about sending a signal using pthread_kill to each thread respectively, although it seems like there should be a better way. If anyone has an idea, I'd love to hear it.
When you create your thread, you pass a void * that can point to anything you wish. In your case, you could declare an array of progress values and pass the address of one of them to each thread you create; the thread performs a simple update when it needs to, and your main thread can periodically check the values.
If you're already using that parameter for something, you will need to create a structure comprising this new value and whatever you're already using, and pass the address of it so the thread gets everything it needs.

When to use QueueUserAPC()?

I do understand what an APC is, how it works, and how Windows uses it, but I don't understand when I (as a programmer) should use QueueUserAPC instead of, say, a fiber, or thread pool thread.
When should I choose to use QueueUserAPC, and why?
QueueUserAPC is a neat tool that can often be a shortcut for some tasks that are otherwise handled with synchronization objects. It allows you to tell a particular thread to do something whenever it is convenient for that thread (i.e. when it finishes its current work and starts waiting on something).
Let's say you have a main thread and a worker thread. The worker thread opens a socket to a file server and starts downloading a 10GB file by calling recv() in a loop. The main thread wants to have the worker thread do something else in its downtime while it is waiting for net packets; it can queue a function to be run on the worker while it would otherwise be waiting and doing nothing.
You have to be careful with APCs, because as in the scenario I mentioned you would not want to make another blocking WinSock call (which would result in undefined behavior). You really have to be watching in order to find any good uses of this functionality because you can do the same thing in other ways. For example, by having the other thread check an event every time it is about to go to sleep, rather than giving it a function to run while it is waiting. Obviously the APC would be simpler in this scenario.
It is like when you have a call desk employee sitting and waiting for phone calls, and you give that person little tasks to do during their downtime. "Here, solve this Rubik's cube while you're waiting." Although, when a phone call comes in, the person would not put down the Rubik's cube to answer the phone (the APC has to return before the thread can go back to waiting).
QueueUserAPC is also useful if there is a single thread (Thread A) that is in charge of some data structure, and you want to perform some operation on the data structure from another thread (Thread B), but you don't want to have the synchronization overhead / complexity of trying to share that data between two threads. By having Thread B queue the operation to run on Thread A, which solely maintains that structure, you are executing any arbitrary function you want on that data without having to worry about synchronization.
It is just another tool, like a thread pool. However, with a thread pool you cannot send a task to a particular thread; you have no control over where the work is done. Queuing up a task may end up creating a whole new thread, and two queued tasks may get done simultaneously on two different threads. With QueueUserAPC, you can be guaranteed that the tasks will be done in order and on the thread you designate.

Ruby C Extension: run an event loop concurrently

I'm implementing a simple windowing library as a Ruby C extension. Windows have a handle_events! method that enters their native event loop.
The problem is that I want one event loop per window and the method blocks. I'd like the method to return immediately and let the loop run in a separate thread. What would be the best way to achieve this?
I tried using rb_thread_call_without_gvl to call the event loop function, and then use rb_thread_call_with_gvl in order to call the window's callbacks, which are Procs. Full source code can be found here.
It still works, but not as I intended: the method still blocks. Is this even possible with Ruby's threading model?
I had the very same problem to solve. And as rb_thread_call_with_gvl() was marked as experimental in 1.9.2 and was not an exported symbol, I took a different approach:
I called the blocking handle_event! function from a separate thread, and I used a second Ruby thread that blocked on a message queue. While blocking on the message queue, the GVL was released with rb_thread_blocking_region().
When the thread calling handle_event! was unblocked by an event, it pulled together all the information required for the Proc's upcall into a queue element and pushed that element onto the queue. The Ruby thread received the element, returned from rb_thread_blocking_region() (thus reacquiring the GVL), and called the Proc with the information from the received element.
Kind regards
Torsten
As far as I understand, rb_thread_call_with_gvl() still needs to be called on the same thread; i.e., it's about releasing and taking the global lock, not about changing threads. For example, a long-running gzip function can run without the lock so that other Ruby threads can run in parallel.
If you want your Procs called back on another thread, shouldn't you need to create a ruby thread for those Procs? Then on that thread, call out using rb_thread_call_without_gvl() to not hold the GVL (allowing other ruby threads to run), then when you have an event on the secondary window thread, call rb_thread_call_with_gvl() to grab the lock and then you should be right to call the Proc on that same thread.
That's the way I understand it... (not having done the C extension stuff very long.)
