Asynchronous I/O and time-consuming work - c

I know that asynchronous socket programming is more scalable than synchronous.
But there is one thing I don't really understand about it:
If your event loop should be non-blocking, how can you delegate time-consuming work to another thread without blocking? A work queue normally needs a mutex for protection. I know there are lock-free queues, but is that how it's done? Can someone please sketch the basic idea?

The worker threads pulling from the queue block all the time; they have to whenever the queue is empty. What else would they do?
It is the work items that are not supposed to block, so that only a few queue worker threads are needed.
Async I/O is about using fewer threads.
If your event loop should be non-blocking
That assumption is false: nothing says there must be no blocking anywhere. There is blocking all the time, namely every time the queue is empty and a worker tries to dequeue.
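To make that concrete, here is a minimal sketch of such a work queue in C with pthreads (the names work_queue, enqueue and dequeue are just illustrative, not from any particular library). The event loop holds the mutex only for the handful of instructions needed to link in an item, while the worker threads sleep inside dequeue whenever there is nothing to do:

#include <pthread.h>
#include <stddef.h>

/* One unit of potentially slow work. */
typedef struct work_item {
    void (*fn)(void *arg);
    void *arg;
    struct work_item *next;
} work_item;

typedef struct {
    work_item *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
} work_queue;

/* Called from the event loop: the mutex is held only briefly, so the loop is
   never blocked for long even though a lock is involved. */
void enqueue(work_queue *q, work_item *item)
{
    item->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = item; else q->head = item;
    q->tail = item;
    pthread_cond_signal(&q->nonempty);   /* wake one sleeping worker */
    pthread_mutex_unlock(&q->lock);
}

/* Called by worker threads: blocks (sleeps) while the queue is empty. */
work_item *dequeue(work_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->nonempty, &q->lock);
    work_item *item = q->head;
    q->head = item->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return item;
}

The queue's mutex and condition variable would be initialised once (PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER or the corresponding *_init functions) before any threads touch it.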

Related

Pause a loop till a bool changes

I am trying to make RPC calls to a daemon.
The issue is: if I make multiple async RPC calls at once, they would choke the message buffer, so I want to make them one by one.
What I am doing at the moment:
bool wait;

success() {
    // On success from send_to_single_peer this will run
    wait = false;
}

send_to_single_peer(peer) {
    wait = true;
    // makes an RPC call for sendcustommsg to a peer and ends up in the success function
}

sendmultipleasync() {
    for (/* conditions */) {
        send_to_single_peer(peer[i]);
        do {} while (wait);   // spin until success() clears the flag
    }
}
But this isn't working. What should I do? I've heard multithreading can help in this case, but I don't know much about it.
Your function has async in its name, so why not actually do it asynchronously:
create a worker thread
create a mutex
make the worker thread lock the mutex
for every peer:
    send the request
    did it work? if not, try again
unlock the mutex (all done)
While you are doing that, you can check if the worker thread is done by using pthread_mutex_trylock. If you just want to wait until it is done use pthread_mutex_lock. NEVER USE while (pthread_mutex_trylock(mutex)); TO DO THAT.
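A rough sketch of that recipe, assuming a hypothetical blocking send_to_single_peer() that returns 0 on success (the stub below just stands in for the real RPC call):

#include <pthread.h>
#include <stdio.h>

#define NPEERS 8

static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-in for the real RPC call: returns 0 on success. */
static int send_to_single_peer(int peer)
{
    printf("sending to peer %d\n", peer);
    return 0;
}

/* Worker: holds done_lock for the whole run and releases it when every peer
   has been handled. */
static void *send_all(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&done_lock);
    for (int i = 0; i < NPEERS; i++) {
        while (send_to_single_peer(i) != 0)
            ;                               /* did it work? if not, try again */
    }
    pthread_mutex_unlock(&done_lock);       /* all done */
    return NULL;
}

int main(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, send_all, NULL);

    /* Do other things here; peek with pthread_mutex_trylock(&done_lock) if you
       only want to check, or simply join to wait for completion. */
    pthread_join(worker, NULL);
    return 0;
}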
I've heard multithreading can help in this case, but I don't know much about it.
Basically, with threads you can execute code in parallel. The problem is that they share the heap and can modify variables while other threads are trying to use them. This leads to bugs that are really hard to track down; sometimes they show up only one time in a million, because they depend on timing. To prevent simultaneous access you can use a mutex, which is a kind of lock for threads. A mutex can only be locked by one thread at a time. One thread can, for example, lock a mutex while it changes a variable. If another thread tries to lock it too, that call blocks until the first thread unlocks the mutex. After that, the second thread can read the variables and then unlock the mutex itself.
Read a tutorial on threads; they are a really useful tool.

How to use a semaphore in C to implement a mutex that respects starvation with a first in first out structure

I'm trying to implement a simple mutex lock using a semaphore that does not fall victim to starvation. To do that, I'm pretty sure I need some sort of queue or other first-in-first-out approach, but semaphores in C do not appear to respect any FIFO structure, and I haven't been able to work out how to sleep and wake threads in proper FIFO order. Then again, perhaps I'm barking up the entirely wrong tree in my approach.
This link, https://pubs.opengroup.org/onlinepubs/7908799/xsh/sem_post.html, implies that something involving SCHED_FIFO might resolve my issue, but my relative inexperience with C leaves me unsure whether it would, or how I would implement it.
Does C have a way of enabling a FIFO "fair" semaphore to build a fair lock that avoids starvation, or do you need a separate queuing system of some sort? And in either case, how would you approach the implementation?
Thanks for any input you can provide!
Are you only allowed to use a single semaphore as the means to block a thread? If so, then I don't think there is any pretty solution. Here's an ugly one: (pseudo-code)
queue.put(my_thread_id);
semaphore.dec();
while (queue.head() != my_thread_id) {
    semaphore.inc();
    sleep(VERY_SMALL_TIME_INTERVAL);
    semaphore.dec();
}
(void) queue.pop();
...do whatever...
semaphore.inc();
Suppose that thread A releases the semaphore while threads B, C, and D are awaiting it. Exactly one of B, C, or D will awaken. It will look at the queue, and if its own thread ID is at the head of the queue, it will pop the ID and proceed to do whatever. Otherwise, it will awaken one of the other two, sleep for a bit, and then try again.
In this way, each of the threads will be awakened, one-by-one, until one of them sees its own ID and breaks out of the loop.
The sleep(...) is important. Without it, the fundamental unfairness of the semaphore would make it likely that the subsequent semaphore.dec() call would immediately succeed, and the same thread would keep going round the loop, not seeing its own ID, and starving the others. The sleep(...) blocks the caller, thereby encouraging the OS to waken one of the other waiting threads.
OTOH, are you using the Posix Threads Library (pthreads)? And are you allowed to use any means available to block and awaken waiting threads? In that case, you could use a condition variable instead of the semaphore. You'd still need an explicit queue, and you'd still need a loop, but you could get rid of the sleep(...) because pthread_cond_broadcast(...) simultaneously awakens all of the waiting threads.
Condition variables are a bit trickier than semaphores—easy to make mistakes. I suggest you google for a good tutorial if you want to go that way.
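For illustration, here is one common way to get that FIFO behaviour with a mutex and a condition variable: a "ticket lock", where the next_ticket/now_serving counters play the role of the explicit queue. This is a sketch, not the only possible design:

#include <pthread.h>

/* Tickets are handed out in order, so the lock is acquired strictly
   first-come first-served, which rules out starvation. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  turn;
    unsigned long next_ticket;   /* next ticket to hand out */
    unsigned long now_serving;   /* ticket allowed to proceed */
} fair_lock;

void fair_lock_init(fair_lock *f)
{
    pthread_mutex_init(&f->lock, NULL);
    pthread_cond_init(&f->turn, NULL);
    f->next_ticket = 0;
    f->now_serving = 0;
}

void fair_lock_acquire(fair_lock *f)
{
    pthread_mutex_lock(&f->lock);
    unsigned long my_ticket = f->next_ticket++;
    while (f->now_serving != my_ticket)
        pthread_cond_wait(&f->turn, &f->lock);   /* sleep until it is our turn */
    pthread_mutex_unlock(&f->lock);
}

void fair_lock_release(fair_lock *f)
{
    pthread_mutex_lock(&f->lock);
    f->now_serving++;
    pthread_cond_broadcast(&f->turn);   /* wake everyone; only the next ticket proceeds */
    pthread_mutex_unlock(&f->lock);
}

The broadcast wakes every waiter, but all except the holder of the next ticket go straight back to sleep, so no thread can jump the queue.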

sleeping on an event

I have a multi-threaded program in which one thread (Thread A) sleeps unconditionally for an infinite time. When an event happens in another thread (Thread B), it wakes up Thread A by signalling. I know there are multiple ways to do this.
When my program runs in a Windows environment, I use WaitForSingleObject in Thread A and SetEvent in Thread B. It works without any issues.
I could also use a file-descriptor-based model with poll or select. There is more than one way to do it.
However, I am trying to find out which is the most efficient way. I want to wake up Thread A as soon as possible whenever Thread B signals. What do you think is the best option?
I am also open to a driver-based option.
Thanks
As said, triggering a SetEvent in thread B and a WaitForSingleObject in thread A is fast.
However some conditions have to be taken into account:
Single core/processor: As Martin says, the waiting thread will preempt the signalling thread. With such a scheme you should take care that the signalling thread (B) goes idle right after the SetEvent, for example by calling Sleep(0).
Multi core/processor: One might think there is an advantage in putting the two threads onto different cores/processors, but this is not really such a good idea. If both threads run on the same core/processor, the time span between calling SetEvent and the return of WaitForSingleObject is much shorter.
Keeping both threads on one core (SetThreadAffinityMask) also lets you shape their behaviour through their priority settings (SetThreadPriority). You may run the waiting thread at a higher priority, or you have to ensure that the signalling thread really does nothing after it has set the event.
You also have to deal with another synchronization matter: when is the next event going to happen? Will thread A have completed its task by then? Most effectively, a second event can be used to solve this: when thread A is done, it sets an event to indicate that thread B is allowed to set its event again. Thread B then first sets the event and waits for the feedback event; that meets the requirement that it goes idle immediately.
If you want to allow thread B to set the event even when thread A is not finished and not yet in a wait state, you should consider using semaphores instead of events. This way the number of "calls/events" from thread B is kept, and the wait function in thread A can catch up, because it returns once for each time the semaphore has been released. Semaphore objects are about as fast as events.
Summary:
Have both threads on the same core/cpu by means of SetThreadAffinityMask.
Extend the SetEvent/WaitForSingleObject by another event to establish a Handshake.
Depending on the details of the processing you may also consider semaphore objects.
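A minimal sketch of the two-event handshake in Win32 C, assuming auto-reset events (the names work_ready and work_done are made up for the example):

#include <windows.h>

static HANDLE work_ready;   /* B -> A: "there is something to do" */
static HANDLE work_done;    /* A -> B: "I am finished, you may signal again" */

static DWORD WINAPI thread_a(LPVOID arg)
{
    (void)arg;
    for (;;) {
        WaitForSingleObject(work_ready, INFINITE);  /* sleep until B signals */
        /* ... handle the event ... */
        SetEvent(work_done);                        /* handshake back to B */
    }
    return 0;
}

/* Called by thread B whenever its event occurs. */
static void thread_b_signal(void)
{
    SetEvent(work_ready);                        /* wake thread A */
    WaitForSingleObject(work_done, INFINITE);    /* go idle until A is finished */
}

int main(void)
{
    /* auto-reset events (bManualReset = FALSE), initially non-signalled */
    work_ready = CreateEvent(NULL, FALSE, FALSE, NULL);
    work_done  = CreateEvent(NULL, FALSE, FALSE, NULL);
    CreateThread(NULL, 0, thread_a, NULL, 0, NULL);

    thread_b_signal();   /* main plays the role of thread B in this sketch */
    return 0;
}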

linux pthread_suspend

It looks like Linux doesn't implement pthread_suspend and pthread_continue, but I really need them.
I have tried cond_wait, but it is too slow. The work being threaded mostly executes in 50us but occasionally takes upwards of 500ms. The problem with cond_wait is two-fold: the mutex locking takes time comparable to the microsecond-scale executions, and I don't need locking. Second, I have many worker threads and I don't really want to create N condition variables just so they can be woken up.
I know exactly which thread is waiting for which work and could just pthread_continue that thread. A thread knows when there is no more work and can easily pthread_suspend itself. This would use no locking, avoid the stampede, and be faster. The problem is: there is no pthread_suspend or pthread_continue.
Any ideas?
Make the thread wait for a specific signal.
Use pthread_sigmask and sigwait.
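A small sketch of that approach: block SIGUSR1 in every thread (so it is only ever consumed via sigwait), have each worker park itself in sigwait, and wake a specific worker with pthread_kill. This is illustrative only; the worker here just loops forever.

#include <pthread.h>
#include <signal.h>

/* Worker parks itself in sigwait(); no mutex is involved in the wait. */
static void *worker(void *arg)
{
    sigset_t set;
    int sig;
    (void)arg;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    for (;;) {
        sigwait(&set, &sig);   /* the "pthread_suspend" */
        /* ... do the work queued for this particular thread ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    sigset_t set;

    /* Block SIGUSR1 before creating threads so they inherit the mask and the
       signal is only delivered through sigwait(). If the worker has not yet
       reached sigwait(), the signal simply stays pending for it. */
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&t, NULL, worker, NULL);
    pthread_kill(t, SIGUSR1);      /* the "pthread_continue": wake that specific thread */
    pthread_join(t, NULL);         /* never returns in this sketch */
    return 0;
}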
Have the threads block on a pipe read. Then dispatch the data through the pipe. The threads will awaken as a result of the arrival of the data they need to process. If the data is very large, just send a pointer through the pipe.
If specific data needs to go to specific threads you need one pipe per thread. If any thread can process any data, then all threads can block on the same pipe and they will awaken round robin.
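A sketch of the pipe version with one worker and one pipe; only a pointer travels through the pipe, the job data itself stays on the heap:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct job { int id; /* ... payload ... */ };

static int pipefd[2];   /* [0] = read end (worker), [1] = write end (dispatcher) */

/* Worker blocks in read() until a job pointer arrives. */
static void *worker(void *arg)
{
    struct job *j;
    (void)arg;
    while (read(pipefd[0], &j, sizeof j) == (ssize_t)sizeof j) {
        printf("worker got job %d\n", j->id);   /* ... process the job ... */
        free(j);
    }
    return NULL;   /* read() returned 0: write end closed, no more work */
}

int main(void)
{
    pthread_t t;
    pipe(pipefd);
    pthread_create(&t, NULL, worker, NULL);

    struct job *j = malloc(sizeof *j);
    j->id = 42;
    write(pipefd[1], &j, sizeof j);   /* send just the pointer */

    close(pipefd[1]);                 /* done dispatching */
    pthread_join(t, NULL);
    return 0;
}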
It seems to me that such a solution (that is, using "pthread_suspend" and "pthread_continue") is inevitably racy.
An arbitrary amount of time can elapse between the worker thread finishing work and deciding to suspend itself, and the suspend actually happening. If the main thread decides during that time that that worker thread should be working again, the "continue" will have no effect and the worker thread will suspend itself regardless.
(Note that this doesn't apply to methods of suspending that allow the "continue" to be queued, like the sigwait() and read() methods mentioned in other answers).
Maybe try pthread_cancel, but be careful about any locks that need to be released. Read the man page regarding the cancelability state.
Why do you care which thread does the work? It sounds like you designed yourself into a corner and now you need a trick to get yourself out of it. If you let whatever thread happened to already be running do the work, you wouldn't need this trick, and you would need fewer context switches as well.

Implementing a Priority queue with a Condition Variable in C

My current understanding of condition variables is that all blocked (waiting) threads are inserted into a basic FIFO queue, the first item of which is awakened when signal() is called.
Is there any way to modify this queue (or create a new structure) so it behaves as a priority queue instead? I've been thinking about it for a while, but most solutions I come up with end up being hampered by the queue structure inherent to condition variables and mutexes.
Thanks!
I think you should rethink what you're trying to do. If you're trying to optimize your performance, you're probably barking up the wrong tree.
pthread_cond_signal() isn't even guaranteed to unblock exactly one thread -- it's guaranteed to unblock at least one thread, so your code better be able to handle the situation where multiple threads are unblocked simultaneously. The typical way to do this is for each thread to re-check the condition after becoming unblocked, and, if false, return to waiting again.
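That re-check pattern looks like this in C; the only thing specific to this sketch is the ready flag, everything else is the standard idiom:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;   /* the actual condition, protected by `lock` */

void wait_until_ready(void)
{
    pthread_mutex_lock(&lock);
    while (!ready)                       /* "while", never "if": re-check after every wakeup */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

void set_ready(void)
{
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&cond);          /* may unblock one waiter, or more than one */
    pthread_mutex_unlock(&lock);
}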
You could implement a scheme where you keep your own priority queue of waiting threads: each thread adds itself to that queue immediately before it begins waiting and checks the queue when it unblocks. But this would add a lot of complexity and a lot of potential for serious problems (race conditions, deadlocks, etc.). It would also add a non-trivial amount of overhead.
Also, what happens if a higher-priority thread starts waiting on a condition variable at the same moment that condition variable is being signalled? Who gets unblocked, the newly arrived high-priority thread or the former highest priority thread?
The order that threads get unblocked in is entirely dependent on the kernel's thread scheduler, so you are at its mercy. I wouldn't even assume FIFO ordering, either.
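Purely for illustration, here is a rough sketch of the bookkeeping scheme described above: each waiter registers its priority under the mutex, waits until it is both the highest-priority waiter and the resource is free, and the releasing thread broadcasts. All names here are made up, and the caveats about complexity and overhead apply in full:

#include <pthread.h>

#define MAX_WAITERS 64

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int waiting_prio[MAX_WAITERS];   /* priorities of the current waiters */
static int nwaiting = 0;
static int resource_free = 1;

static int highest_waiting_prio(void)
{
    int best = -1;
    for (int i = 0; i < nwaiting; i++)
        if (waiting_prio[i] > best)
            best = waiting_prio[i];
    return best;
}

static void remove_one_waiter(int prio)
{
    for (int i = 0; i < nwaiting; i++)
        if (waiting_prio[i] == prio) {
            waiting_prio[i] = waiting_prio[--nwaiting];   /* remove one entry */
            return;
        }
}

void acquire(int my_prio)   /* assumes fewer than MAX_WAITERS concurrent waiters */
{
    pthread_mutex_lock(&lock);
    waiting_prio[nwaiting++] = my_prio;
    while (!resource_free || highest_waiting_prio() != my_prio)
        pthread_cond_wait(&cond, &lock);
    remove_one_waiter(my_prio);
    resource_free = 0;
    pthread_mutex_unlock(&lock);
}

void release(void)
{
    pthread_mutex_lock(&lock);
    resource_free = 1;
    pthread_cond_broadcast(&cond);   /* wake everyone; only the top-priority waiter proceeds */
    pthread_mutex_unlock(&lock);
}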
Since condition variables are basically just a barrier and you have no control over the queue of waiting threads there's no real way to apply priorities. It's invalid to assume waiting threads will act in a FIFO manner.
With a combination of atomics, additional condition variables, and pre-knowledge of the threads/priorities involved you could construct a solution where a signaled thread will re-signal the master CV and then re-block on a priority CV but it certainly wouldn't be a generic solution. That's also off the top of my head so might also have some other flaw.
It's the scheduler that determines which thread will run. You can look at pthread_setschedparam and pthread_getschedparam and fiddle with the policies (SCHED_OTHER, SCHED_FIFO, or SCHED_RR) and the priorities. But it probably won't get you to where I suspect you want to go.
It sounds as if you want to make something predictable from the inherently non-deterministic. As Andrew notes, you might hack something together, but my guess is that this will lead to heartache, or a lot of code you will hate yourself for writing in six months (or both).

Resources