I'm writing a small client/server demo that shares files between peers. Once a peer gets a list of IP addresses from the main server, the main thread creates a thread for each file. The process looks like this:
Main thread gets list of files from server
Thread created for each file (detached)
In each created thread, connect to the peers specified / associated with a file
Thread downloads the file in chunks
Thread announces the file was complete
My problem comes into play when trying to "query" a thread. In each thread, I keep track of the progress of a transfer. In my main thread, I would like the user to be able to see the progress of all of the transfers taking place. What would be the best way to do so? I was thinking about sending a signal using pthread_kill to each thread respectively, although it seems like there should be a better way. If anyone has an idea, I'd love to hear it.
When you create your thread, you pass a void * that can point to anything you wish. In your case, you could declare an array of progress values and pass the address of one of them to each thread you create, let the thread perform a simple update when it needs to, and have your main thread periodically check the values.
If you're already using that parameter for something, you will need to create a structure comprising this new value and whatever you're already using, and pass the address of it so the thread gets everything it needs.
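For instance, here is a minimal sketch of that idea, assuming a fixed number of transfers and joining the threads at the end (your real code detaches them and fills in the actual download loop and peer data):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_FILES 4

/* Hypothetical per-transfer state; the real structure would also hold
 * the file name and the peer addresses for that transfer. */
struct transfer {
    int progress;   /* 0..100, written by the worker thread */
};

static void *download_file(void *arg)
{
    struct transfer *t = arg;

    /* Stand-in for the real chunked download loop. */
    for (int pct = 0; pct <= 100; pct += 10) {
        t->progress = pct;   /* consider _Atomic int or a mutex for strictness */
        usleep(100000);
    }
    return NULL;
}

int main(void)
{
    struct transfer transfers[NUM_FILES] = {0};
    pthread_t tids[NUM_FILES];

    for (int i = 0; i < NUM_FILES; i++)
        pthread_create(&tids[i], NULL, download_file, &transfers[i]);

    /* Main thread periodically reads the progress values. */
    for (int tick = 0; tick < 12; tick++) {
        for (int i = 0; i < NUM_FILES; i++)
            printf("file %d: %3d%%  ", i, transfers[i].progress);
        printf("\n");
        usleep(200000);
    }

    for (int i = 0; i < NUM_FILES; i++)
        pthread_join(tids[i], NULL);
    return 0;
}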
The question is fairly self-explanatory, but here is the context: I have a server socket thread that spawns child threads when it receives new connections. These child threads accept data dumps from the remote connections, then clean up after themselves and close when they are done.
Currently the child threads call pthread_detach(pthread_self()) right before they exit. What I'm considering is making the program wait, on shutdown, for the active data dumps to finish. I actually already have an alternative for this as part of the dynamic array I'm using to keep track of the active threads, but for future reference I would like to know what happens if you join a thread that is destined to detach itself before it closes, and whether it causes any issues.
This is what the documentation says.
"If an implementation detects use of a thread ID after the end of its lifetime, it is recommended that the function should fail and report an [ESRCH] error." (This wording is listed for both pthread_join() and pthread_detach().)
If you join a detached thread you should get an error returned.
The same happens if you detach a joined thread.
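As a minimal sketch (the worker function is made up), you can at least check the return value of pthread_join(); whether the join succeeds or fails with something like ESRCH or EINVAL depends on timing and on the implementation, which is exactly why relying on it is a bad idea:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    (void)arg;
    pthread_detach(pthread_self());   /* thread detaches itself before exiting */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    /* Racy: the join may succeed if it starts before the detach,
     * or the implementation may report an error. */
    int err = pthread_join(tid, NULL);
    if (err != 0)
        fprintf(stderr, "pthread_join: %s\n", strerror(err));
    return 0;
}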
I am writing a small IRC program in C. I'm using threads to handle multiple clients,
and I use a linked list to store the fd of each client, so when a client sends a message it is written to the fds of the others.
I'm not sure this is the best way to do it; could you give me some advice?
Also, with this approach I need to share the struct (that contains the file descriptor of each client) across the threads, so that if there is an update in one thread, the struct is updated for the others. I'm wondering how I could do this: how can I share that struct?
Any help is welcome.
Without knowing more about your design it's very difficult to comment on whether your linked list of FDs is appropriate.
In terms of sharing a struct of data between threads there is nothing special you need to do. Threads share the same memory space, so anything visible in one thread will be visible in another. Your only risk is that multiple threads modify the struct at the same time, something you protect against by using a mutex (mutual exclusion semaphore).
Since you're on Linux I'm assuming you're using POSIX Threads (pthreads) in which case you'll need to look at the pthread_mutex_ functions.
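For example, a minimal sketch of a shared client list protected by a pthread_mutex_t (all names here are made up) could look like this:

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>

/* One shared list of connected clients; every thread sees the same memory. */
struct client {
    int fd;
    struct client *next;
};

static struct client *clients = NULL;
static pthread_mutex_t clients_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by whichever thread accepts a new connection. */
void add_client(int fd)
{
    struct client *c = malloc(sizeof *c);
    c->fd = fd;

    pthread_mutex_lock(&clients_lock);
    c->next = clients;
    clients = c;
    pthread_mutex_unlock(&clients_lock);
}

/* Called by a client thread to relay a message to everyone else. */
void broadcast(int sender_fd, const char *msg, size_t len)
{
    pthread_mutex_lock(&clients_lock);
    for (struct client *c = clients; c != NULL; c = c->next) {
        if (c->fd != sender_fd)
            send(c->fd, msg, len, 0);
    }
    pthread_mutex_unlock(&clients_lock);
}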
In your setup, I would use:
one input queue per channel,
one output queue per client.
Whenever a client thread receives a message, it posts it to the channel's queue. When a channel receives a new post, it reposts it to all of its clients. Each channel and client can be represented as a struct, which is then handled by the threads (with one or more clients or channels per thread).
All queues are simple linked lists protected by a pthread_mutex_t. When a function needs to access a queue, it locks the mutex, adds the message, and unlocks it (see the sketch after the list of functions below).
pthread_mutex_init
pthread_mutex_lock
pthread_mutex_unlock
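A minimal sketch of such a mutex-protected queue (the message layout and function names are only placeholders) might be:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct message {
    char text[512];
    struct message *next;
};

struct queue {
    struct message *head, *tail;
    pthread_mutex_t lock;
};

void queue_init(struct queue *q)
{
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
}

/* Post a message to a channel or client queue. */
void queue_post(struct queue *q, const char *text)
{
    struct message *m = calloc(1, sizeof *m);
    strncpy(m->text, text, sizeof m->text - 1);

    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = m;
    else
        q->head = m;
    q->tail = m;
    pthread_mutex_unlock(&q->lock);
}

/* Pop the oldest message, or NULL if the queue is empty; the caller frees it. */
struct message *queue_pop(struct queue *q)
{
    pthread_mutex_lock(&q->lock);
    struct message *m = q->head;
    if (m) {
        q->head = m->next;
        if (!q->head)
            q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    return m;
}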
How do I pass data to a thread from the main application?
Inside the main application I created a thread for processing error messages. While processing data in the main application, if there is an error, it generates an error message and fills it into a structure. This error message (structure) needs to be passed to the thread, which will then process it further, while the main application continues its work. I am trying to do this in C on the Windows platform.
There will be only one such thread running in my application. At the moment I have defined a global structure variable (errorData, of type struct myData) and I am passing its address using PostThreadMessage.
struct myData errorData;
From the main application post a message using
PostThreadMessage(ErrorLogId, THRD_MESSAGE_EXIT, 0, (LPARAM)&errorData);
In the thread I have
MsgReturn = GetMessage(&msg, NULL, THRD_MESSAGE_SOMEWORK, THRD_MESSAGE_EXIT);
At the moment it is working fine. But if processing an error message takes longer, the main application might encounter new errors in the meantime and update the data in the global structure errorData.
I could use a locking mechanism, but I cannot block the main application until the thread has finished processing. How do I pass the data without having it as a global variable?
You might like to create a new instance of struct myData each time you are about to call PostThreadMessage().
The thread needs to free() this instance of struct myData when done with it.
Adding synchronization to your current approach would be against the asynchronous concept of spawning workers while the main task continues.
Either way, the threads still need to use synchronisation on their own side if they write to something shared, such as a log file.
A solution is to dynamically allocate a struct myData (using malloc()) each time, populate it and pass it to the thread for processing. The thread is responsible for free()ing it once it has completed processing it.
This approach removes any synchronization between threads on the global object errorData (as it is no longer required).
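A minimal sketch of that, reusing the names from the question (THRD_MESSAGE_ERROR and the struct fields are invented here): the posting side allocates, the thread frees.

#include <windows.h>
#include <stdlib.h>
#include <string.h>

#define THRD_MESSAGE_ERROR (WM_APP + 1)   /* hypothetical custom message ID */

struct myData {
    int  code;
    char text[256];
};

/* Main application: allocate a fresh instance for every error. Note that
 * PostThreadMessage fails until the target thread has created its message
 * queue (e.g. by calling PeekMessage once). */
void report_error(DWORD errorLogId, int code, const char *text)
{
    struct myData *data = malloc(sizeof *data);
    data->code = code;
    strncpy(data->text, text, sizeof data->text - 1);
    data->text[sizeof data->text - 1] = '\0';

    PostThreadMessage(errorLogId, THRD_MESSAGE_ERROR, 0, (LPARAM)data);
}

/* Error-logging thread: take ownership of each message, then free it. */
DWORD WINAPI ErrorThread(LPVOID arg)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        if (msg.message == THRD_MESSAGE_ERROR) {
            struct myData *data = (struct myData *)msg.lParam;
            /* ... process the error ... */
            free(data);
        }
    }
    return 0;
}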
How about allocating the error message dynamically (with malloc()), filling it and passing a pointer to it to the thread in a message? Then the thread would work with the message and deallocate it (with free()).
Edit:
Didn't realize there was a message queue already, sorry; in that case a dynamically allocated message will do, of course.
Old answer for reference:
If you don't wish to wait until the thread finishes processing the error message, then you should use a synchronized queue for communications between the main thread and the worker thread. This is some pseudo code to explain what I mean:
Worker Thread:
lock(queue);
while (queue_is_empty(queue))
    wait(queue_not_empty, queue);   // releases the lock while waiting, re-acquires on wake
error = read(queue);
unlock(queue);
process_error(error);
Main Thread:
if (error)
    lock(queue)
    write(queue, error)
    unlock(queue)
    signal(queue_not_empty)   // wake the worker thread
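If you do want to build that on Windows yourself rather than use a library, a sketch with a CRITICAL_SECTION plus CONDITION_VARIABLE (the fixed-size ring buffer is just for brevity, and struct myData is the type from the question) could look like:

#include <windows.h>

#define QUEUE_SIZE 64

struct error_queue {
    struct myData     *items[QUEUE_SIZE];
    int                head, tail, count;
    CRITICAL_SECTION   lock;
    CONDITION_VARIABLE not_empty;
};

void queue_init(struct error_queue *q)
{
    q->head = q->tail = q->count = 0;
    InitializeCriticalSection(&q->lock);
    InitializeConditionVariable(&q->not_empty);
}

/* Main thread: enqueue an error and wake the worker. */
void queue_push(struct error_queue *q, struct myData *e)
{
    EnterCriticalSection(&q->lock);
    q->items[q->tail] = e;                  /* no overflow check, for brevity */
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    LeaveCriticalSection(&q->lock);
    WakeConditionVariable(&q->not_empty);
}

/* Worker thread: block until something arrives, then return it. */
struct myData *queue_pop(struct error_queue *q)
{
    EnterCriticalSection(&q->lock);
    while (q->count == 0)
        SleepConditionVariableCS(&q->not_empty, &q->lock, INFINITE);
    struct myData *e = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    LeaveCriticalSection(&q->lock);
    return e;
}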
You don't have to implement that from scratch, you could use something like RabbitMQ
I do understand what an APC is, how it works, and how Windows uses it, but I don't understand when I (as a programmer) should use QueueUserAPC instead of, say, a fiber, or thread pool thread.
When should I choose to use QueueUserAPC, and why?
QueueUserAPC is a neat tool that can often be a shortcut for some tasks that are otherwise handled with synchronization objects. It allows you to tell a particular thread to do something whenever it is convenient for that thread (i.e. when it finishes its current work and starts waiting on something).
Let's say you have a main thread and a worker thread. The worker thread opens a socket to a file server and starts downloading a 10GB file by calling recv() in a loop. The main thread wants to have the worker thread do something else in its downtime while it is waiting for net packets; it can queue a function to be run on the worker while it would otherwise be waiting and doing nothing.
You have to be careful with APCs, because in the scenario I mentioned you would not want to make another blocking WinSock call (which would result in undefined behavior). You really have to watch for good uses of this functionality, because you can often do the same thing in other ways, for example by having the other thread check an event every time it is about to go to sleep rather than giving it a function to run while it is waiting. Obviously the APC would be simpler in this scenario.
It is like when you have a call desk employee sitting and waiting for phone calls, and you give that person little tasks to do during their downtime. "Here, solve this Rubik's cube while you're waiting." Although, when a phone call comes in, the person would not put down the Rubik's cube to answer the phone (the APC has to return before the thread can go back to waiting).
QueueUserAPC is also useful if there is a single thread (Thread A) that is in charge of some data structure, and you want to perform some operation on the data structure from another thread (Thread B), but you don't want to have the synchronization overhead / complexity of trying to share that data between two threads. By having Thread B queue the operation to run on Thread A, which solely maintains that structure, you are executing any arbitrary function you want on that data without having to worry about synchronization.
It is just another tool, like a thread pool. However, with a thread pool you cannot send a task to a particular thread; you have no control over where the work is done. When you queue up a task, it may end up creating a whole new thread, and you may queue two tasks that get done simultaneously on two different threads. With QueueUserAPC, you are guaranteed that the tasks are done in order, and on the thread you designate.
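Here is a minimal sketch (the worker and APC routines are invented); the key point is that the APC only runs when the target thread performs an alertable wait, such as SleepEx(..., TRUE) or WaitForSingleObjectEx(..., TRUE):

#include <windows.h>
#include <stdio.h>

/* Runs on the worker thread the next time it enters an alertable wait. */
static VOID CALLBACK do_extra_work(ULONG_PTR param)
{
    printf("APC running on the worker thread, param=%lu\n", (unsigned long)param);
}

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    /* Alertable sleeps stand in for waiting on sockets or events;
     * queued APCs are delivered during them. */
    for (int i = 0; i < 3; i++)
        SleepEx(1000, TRUE);
    return 0;
}

int main(void)
{
    HANDLE hWorker = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    /* Ask that particular thread to run do_extra_work() when convenient. */
    QueueUserAPC(do_extra_work, hWorker, 42);

    WaitForSingleObject(hWorker, INFINITE);
    CloseHandle(hWorker);
    return 0;
}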
I have a daemon that accepts socket connections and reads or writes a dynamic set of files, depending on the nature of the connection. Because my daemon is multithreaded, the possibility exists that the same file may be written to by more than one thread. Because my list of files is dynamic and not fixed, I'm not sure how to keep one thread from bumping into the other. For performance reasons, I want threads to be writing to different files at the same time, just not the same file at the same time.
Other questions have suggested using mutexes, but I'm not entirely clear how a mutex would help in this scenario - the list of files being dynamic and only known to the thread.
Would it be appropriate to use file locking in this case? If so, how would one implement file locking in a thread-safe way?
flock will work OK. It doesn't lock file descriptors, it locks the actual file.
A file that has been exclusively flock'ed can't be exclusively locked again by another process or thread. That would defeat the entire purpose of locks.
One note is that these locks are advisory. A process that doesn't use flock can happily overwrite the file, even if another process has exclusive-flock'ed it.
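A minimal sketch (the path and function name are made up); note that for flock() to exclude other threads in the same process, each thread should open() the file itself rather than share a descriptor, because the lock belongs to the open file description:

#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

/* Called from any worker thread; each call opens its own descriptor. */
int write_locked(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    /* Blocks until no other thread or process holds an exclusive lock. */
    if (flock(fd, LOCK_EX) < 0) {
        close(fd);
        return -1;
    }

    ssize_t n = write(fd, data, len);

    flock(fd, LOCK_UN);   /* also released automatically by close() */
    close(fd);
    return n < 0 ? -1 : 0;
}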
I would use an event broker pattern. Each socket thread fires an event (carrying the file(s) involved as arguments); the event is then handled by a central file broker that keeps a shared collection of the files currently being written.
If the file cannot be written to, decide what you want to do; otherwise report success.
Multiple listeners, one central file-lock collection, multiple writers.
I can't say this would be the "optimum" solution, but I'd propose something like this:
Maintain a linked list of a struct that contains two things:
The filename
A condition wait variable associated with the file.
Flow A. When the daemon receives a request, mutex lock the list and check to see whether the filename is in the list or not. If it is not, add a new entry to the linked list with a new condition wait variable for other threads to use. Release the mutex lock. Perform the file operation. Once complete, lock the linked list and remove the struct entry for that file, then signal the other threads via the wait object.
Flow B. If a request comes in for the same file, it'll lock the list and look for the filename contained in the list. If it is in the list, grab the wait variable and wait on it. When the thread is signaled, grab a lock on the list and see if the file is in the list (It's possible another thread picked up the lock on the filename before you). If not, follow Flow A. If so, grab the wait variable in the new struct and wait again until signaled, then follow the above steps again.
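A rough sketch of those two flows with pthreads (all names are made up): the list entry holds a condition variable that waiting threads block on until the file is released.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct busy_file {
    char *name;
    pthread_cond_t done;          /* signalled when the file is released */
    struct busy_file *next;
};

static struct busy_file *busy_list = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static struct busy_file *find_busy(const char *name)
{
    for (struct busy_file *b = busy_list; b; b = b->next)
        if (strcmp(b->name, name) == 0)
            return b;
    return NULL;
}

/* Flows A and B: wait until nobody else is writing this file, then claim it. */
void acquire_file(const char *name)
{
    pthread_mutex_lock(&list_lock);
    struct busy_file *b;
    while ((b = find_busy(name)) != NULL)
        pthread_cond_wait(&b->done, &list_lock);   /* wake up and re-check */

    b = malloc(sizeof *b);
    b->name = strdup(name);
    pthread_cond_init(&b->done, NULL);
    b->next = busy_list;
    busy_list = b;
    pthread_mutex_unlock(&list_lock);
}

/* Remove the entry and wake any threads waiting for this file. */
void release_file(const char *name)
{
    pthread_mutex_lock(&list_lock);
    for (struct busy_file **pp = &busy_list; *pp; pp = &(*pp)->next) {
        if (strcmp((*pp)->name, name) == 0) {
            struct busy_file *b = *pp;
            *pp = b->next;
            pthread_cond_broadcast(&b->done);
            /* A real implementation would free b only after all waiters
             * have woken; it is leaked here to keep the sketch short. */
            break;
        }
    }
    pthread_mutex_unlock(&list_lock);
}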