gcc 4.4.2 c89
I have a function that has to run (config_relays). It makes a call to an API function called set_relay, and then the code has to block until the event for set_relay has completed before it can continue. set_relay is an asynchronous call.
i.e.
void run_processes()
{
    switch (event_type)
    {
        case EV_RELAY_SET:
            break;
    }
}

void config_relays()
{
    set_relay();
    /* Wait until EV_RELAY_SET has fired */
    /* Cannot call init_relay() until the set_relay event has fired - has to block here */
    init_relay();
}
I guess I could put the init_relay() in the switch. However, that event is used for other things, not just for initializing the relay. I would really like to handle everything in the config_relays function.
In C# you can do this by using an AutoResetEvent. Does C have anything like that?
Many thanks for any advice,
As Anders wrote, a condition wait is the solution. In the POSIX threads API you use pthread_cond_wait together with a mutex. It is quite easy; the following pattern works:
int ready_flag = 0;
pthread_mutex_t ready_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t ready_cond = PTHREAD_COND_INITIALIZER;

void wait_ready()
{
    pthread_mutex_lock(&ready_mutex);
    while (!ready_flag) {
        pthread_cond_wait(&ready_cond, &ready_mutex);
    }
    pthread_mutex_unlock(&ready_mutex);
}

void set_ready(int ready)
{
    pthread_mutex_lock(&ready_mutex);
    ready_flag = ready;
    pthread_cond_signal(&ready_cond);
    /* or use pthread_cond_broadcast(&ready_cond); */
    pthread_mutex_unlock(&ready_mutex);
}
The difference between pthread_cond_signal and pthread_cond_broadcast is that, if more than one thread is waiting for the flag to be set, pthread_cond_signal releases only one thread while pthread_cond_broadcast releases them all.
Note that the while loop around your condition is up to you: you can test multiple conditions or do whatever you want there. The pattern ensures that your tests are performed on protected variables, so race conditions can never cause problems. For example:

    while (resource_a_busy && resource_b_busy) ...

is a typical case where both resource states must be protected by a mutex.
The pthread_cond_wait call can be removed from the loop, but that would turn wait_ready into a polling loop, which consumes CPU; pthread_cond_wait itself does not consume any CPU while waiting.
There are porting libraries that provide a Win32-like API on top of pthreads, as well as libraries that give a pthread-like API on top of the Win32 event API; the latter is called Pthreads-w32.
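Applied to the question, the wiring might look like the sketch below. This assumes config_relays() runs on a different thread than the event loop (otherwise wait_ready() would block the very loop that delivers EV_RELAY_SET); set_relay, init_relay, EV_RELAY_SET, and run_processes are the question's names:

    void config_relays()
    {
        set_ready(0);   /* arm the flag before starting the async call */
        set_relay();
        wait_ready();   /* blocks until the event loop reports EV_RELAY_SET */
        init_relay();
    }

    void run_processes()
    {
        switch (event_type)
        {
            case EV_RELAY_SET:
                set_ready(1); /* wakes up config_relays() */
                break;
        }
    }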
1) You can use OS-specific APIs, like CreateEvent or a pthread condition variable, to wait on and to signal when set_relay() has completed.
2) Polling approach. Poll periodically to see whether set_relay() has completed; if not, sleep for a few seconds and retry (see the sketch below).
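A minimal sketch of the polling variant, where relay_is_set() is a hypothetical stand-in for however completion can be queried:

    #include <unistd.h>

    extern int relay_is_set(void); /* hypothetical completion query */

    void wait_relay_by_polling(void)
    {
        while (!relay_is_set())
            sleep(1); /* wastes a wakeup per second; a condition wait avoids this */
    }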
It depends on what threading library you are using and how the asynchronous method is called. If you are on Windows, there are auto-reset events in the Windows API that you can use; see the CreateEvent API. If you are on Unix/Linux, look into condition variables in pthreads. Unfortunately, pthreads doesn't have auto-reset events, because they are very hard to use in a race-condition-free way.
When choosing your waiting strategy, you also have to take into consideration how the asynchronous call is made. Is it on another thread? Is it done through signal handling? Some other async mechanism that "borrows" the main thread? If the call is made by "borrowing" the main thread, you have to make sure that your waiting strategy does not stop that thread from being used to perform the async call.
Related
The ASIO documentation for basic_deadline_timer::cancel() has the following remarks section:
If the timer has already expired when cancel() is called, then the handlers for asynchronous wait operations will:
have already been invoked; or
have been queued for invocation in the near future.
These handlers can no longer be cancelled, and therefore are passed an error code that indicates the successful completion of the wait operation.
The emphasis has been added by me. Normally when you call cancel() on a timer, the callback is run with an error code of "operation cancelled by user". But this says there is a small chance it could actually be called with a success error code. I think it is trying to say that the following could happen:
Thread A calls async_wait(myTimerHandler) on a timer, where myTimerHandler() is a user callback function.
Thread B calls io_context::post(cancelMyTimer) where cancelMyTimer() is a user callback function. This is now queued up to be called in thread A.
The timer deadline expires, so ASIO queues up the timer callback handler, with a success error code. It doesn't call it yet, but it is queued up to be called in thread A.
ASIO gets round to calling cancelMyTimer() in thread A, which calls cancel() on the timer. But the timer already fired, and ASIO doesn't check that the handler is still queued up and not executed, so this does nothing.
ASIO now calls myTimerHandler, and doesn't check that cancel() was called in the meantime, and so it still passes success as the error code.
Bear in mind this example only has a single thread calling io_context::run(), deadline_timer::async_wait or deadline_timer::cancel(). The only thing that happened in another thread was a call to post(), which happened in an attempt to avoid any race conditions. Is this sequence of events possible? Or is it referring to some multithreading scenario (that seems unlikely given that timers are not thread safe)?
Context: If you have a timer that you wish to repeat periodically, then the obvious thing to do is check the error code in the callback, and set the timer again if the code is success. If the above race is possible, then it would be necessary to have a separate variable saying whether you cancelled the timer, which you update in addition to calling cancel().
You don't even need a second thread to run into a situation where basic_waitable_timer::cancel() is invoked too late (because the timer's (completion) handler is already queued).
It's sufficient that your program executes some other asynchronous operations concurrently with the not-yet-resumed basic_waitable_timer::async_wait(). If you then rely only on basic_waitable_timer::cancel() for cancellation, the cancel() call from another asynchronous (completion) handler races with an already-scheduled async_wait() handler:
If the timer has already expired when cancel() is called, then the handlers for asynchronous wait operations will:
have already been invoked; or
have been queued for invocation in the near future.
These handlers can no longer be cancelled, and therefore are passed an error code that indicates the successful completion of the wait operation.
(basic_waitable_timer::cancel(), emphasis mine, i.e. the race condition is due to the second case)
A real-world example that is single-threaded (i.e. the program doesn't explicitly start any threads and only invokes io_server.run() once) and contains the described race:
void Fetch_Timer::resume()
{
    timer_.expires_from_now(std::chrono::seconds(1));
    timer_.async_wait([this](const boost::system::error_code &ec)
        {
            BOOST_LOG_FUNCTION();
            if (ec) {
                if (ec.value() == boost::asio::error::operation_aborted)
                    return;
                THROW_ERROR(ec);
            } else {
                print();
                resume();
            }
        });
}

void Fetch_Timer::stop()
{
    print();
    timer_.cancel();
}
(Source: imapdl/copy/fetch_timer.cc)
In this example, the obvious fix (i.e. also querying a boolean flag) doesn't even need to use any synchronization primitives (such as atomics), because the program is single-threaded. That means it executes (asynchronous) operations concurrently but not in parallel.
(FWIW, in the above example, the bug manifested itself only every 2 years or so, even under daily usage)
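For illustration, here is a sketch of that fix in the style of the example above. The stopped_ member is an addition of this sketch (a plain bool is enough, since the program is single-threaded), and the error handling is simplified:

    void Fetch_Timer::resume()
    {
        timer_.expires_from_now(std::chrono::seconds(1));
        timer_.async_wait([this](const boost::system::error_code &ec)
            {
                if (ec == boost::asio::error::operation_aborted)
                    return;               // cancel() won the race
                if (ec)
                    throw boost::system::system_error(ec);
                if (stopped_)             // handler was queued before stop() ran
                    return;
                print();
                resume();
            });
    }

    void Fetch_Timer::stop()
    {
        stopped_ = true;
        print();
        timer_.cancel();
    }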
Everything you stated is correct. So in your situation you would need a separate variable indicating that you don't want to continue the loop. I normally use an atomic_bool, and I don't bother posting a cancel routine: I just set the bool and call cancel() from whatever thread I am on.
UPDATE:
The source for my answer is mainly years of experience using ASIO, plus understanding the asio codebase well enough to fix problems and extend parts of it when required.
Yes, the documentation says that sharing instances of deadline_timer between threads is not safe, but the documentation is not the best (what documentation is...). If you look at the source for how cancel() works, you can see:
Boost Asio version 1.69: boost\asio\detail\impl\win_iocp_io_context.hpp
template <typename Time_Traits>
std::size_t win_iocp_io_context::cancel_timer(timer_queue<Time_Traits>& queue,
    typename timer_queue<Time_Traits>::per_timer_data& timer,
    std::size_t max_cancelled)
{
    // If the service has been shut down we silently ignore the cancellation.
    if (::InterlockedExchangeAdd(&shutdown_, 0) != 0)
        return 0;

    mutex::scoped_lock lock(dispatch_mutex_);

    op_queue<win_iocp_operation> ops;
    std::size_t n = queue.cancel_timer(timer, ops, max_cancelled);
    post_deferred_completions(ops);
    return n;
}
You can see that the cancel operation is guarded by a mutex lock, so cancel() is thread safe.
Calling most of the other operations on a deadline timer is not (with regard to calling them at the same time from multiple threads).
Also, I think you are correct about restarting timers in quick succession. I don't normally have a use case for stopping and starting timers in that fashion, so I've never needed to do it.
The first thing I was told when I started working with pthreads was: avoid forced thread cancellation, like pthread_cancel. Instead, use cancellation notification via the threads' communication channel.
If we have a really long task to run in the thread, we split the task into small chunks and check the cancellation flag after processing each chunk, like this:
for (;;) {
    process_chunk();
    if (check_cancel_flag())
        break;
}
But what is the best approach for implementing this check_cancel_flag() function?
With all my experience in C and Linux, I can remember only these methods:
(If you have only one working thread) You can use volatile sig_atomic_t as the type of the cancellation flag. Check it in the check_cancel_flag() function and set it to true in the thread's signal handler. Then just call pthread_kill from the main thread.
Use any POD type for the cancellation flag and protect it with a mutex. In this case you pay the overhead of taking the lock on every check.
Use a mutex itself as the cancellation flag: check it with a pthread_mutex_trylock call. If the main thread releases the mutex, it is time for the worker thread to shut down.
(For C11) Use the GCC __atomic built-in functions or C11 <stdatomic.h> (or another atomic library) to set and check the cancellation flag; see the sketch after this list.
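For the last option, a minimal sketch using C11 <stdatomic.h>; the function names mirror the pseudocode above, and set_cancel_flag() is the assumed main-thread side:

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool cancel_flag = ATOMIC_VAR_INIT(false);

    /* Worker side: called after each chunk; lock-free on common platforms. */
    bool check_cancel_flag(void)
    {
        return atomic_load_explicit(&cancel_flag, memory_order_acquire);
    }

    /* Main-thread side: request shutdown. */
    void set_cancel_flag(void)
    {
        atomic_store_explicit(&cancel_flag, true, memory_order_release);
    }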
I cannot remember anything else.
The question: how do I choose the correct approach?
Do you know of any benchmarks for this problem?
An alternative is to use a reader-writer lock (pthread_rwlock_t) to protect the flag, as your worker threads need to frequently read it but it is only written once.
As long as the chunk that is processed in between checks of the flag isn't too small, the overhead will be insignificant.
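A sketch of that reader-writer-lock variant (the helper names are illustrative):

    #include <pthread.h>

    static int cancel_flag = 0;
    static pthread_rwlock_t cancel_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Workers take the read lock, so concurrent checks don't serialize. */
    int check_cancel_flag(void)
    {
        int value;
        pthread_rwlock_rdlock(&cancel_lock);
        value = cancel_flag;
        pthread_rwlock_unlock(&cancel_lock);
        return value;
    }

    /* The controlling thread takes the write lock exactly once. */
    void request_cancel(void)
    {
        pthread_rwlock_wrlock(&cancel_lock);
        cancel_flag = 1;
        pthread_rwlock_unlock(&cancel_lock);
    }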
I'm trying to start a thread as soon as an interrupt occurs. However, I have realized that I can't start a thread from within an interrupt handler (or any function that is directly or indirectly called by the interrupt handler). So what I have decided to do is have the handler set a flag. A separate thread continuously monitors that flag, and if the flag is set it will in turn create (and start) a thread. Here's some pseudocode:
volatile int interrupt_flag = 0; /* volatile: shared with the interrupt handler */

void interrupt_handler(void)
{
    interrupt_flag = 1;
}

void monitoring_thread(void) /* this thread is started at the start of the program */
{
    while (1)
    {
        if (interrupt_flag)
        {
            interrupt_flag = 0;
            /* start the thread here */
        }
        sleep(/* some amount of time */); /* throttle the polling loop */
    }
}
I'm not really happy with having a dedicated while loop constantly monitoring a flag. The problem is that the loop significantly slows down the other threads in my program, which is why I'm calling the sleep function: to hand CPU time back to the other threads.
Question: Is there a way I can truly start a thread upon interrupt, without having a dedicated while loop? Is there a workaround for starting a thread from within an interrupt handler?
If it makes any difference, I'm using the POSIX library.
Thanks,
PS. This question is somewhat related to an earlier question posted here:
Sharing data between master thread and slave thread in interrupt driven environment in C
Instead of having your monitoring thread spin on a flag, it could wait until the interrupt handler provides notification that a thread should be spawned. One way to do this is with a semaphore:
#include <semaphore.h>

sem_t interrupt_sem; /* initialize with sem_init(&interrupt_sem, 0, 0) before use */

void interrupt_handler(void)
{
    sem_post(&interrupt_sem); /* async-signal-safe per POSIX */
}

void monitoring_thread(void)
{
    while (1)
    {
        sem_wait(&interrupt_sem);
        /* start the thread here */
    }
}
Previously I had a solution based on a condition variable, but your system is unlikely to operate correctly if the interrupt handler makes blocking calls: it could cause a deadlock or other undefined behavior, as the variables in the system may not have consistent values at the time the interrupt takes place.
As pointed out in comments by myself and others, your operating system should provide some kind of interface to explicitly wake up a waiting task. In the code above, we are assuming the monitoring thread is always active in the background.
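For completeness, a hypothetical sketch of the setup in main(), assuming monitoring_thread() is wrapped in a start routine with the signature pthread_create expects (error checking omitted):

    #include <pthread.h>
    #include <semaphore.h>

    extern sem_t interrupt_sem;
    extern void monitoring_thread(void);

    static void *monitoring_thread_main(void *arg)
    {
        (void)arg;
        monitoring_thread();
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        sem_init(&interrupt_sem, 0, 0); /* initial value 0: nothing pending yet */
        pthread_create(&tid, NULL, monitoring_thread_main, NULL);
        /* ... install the interrupt handler and run the rest of the program ... */
        pthread_join(tid, NULL);
        return 0;
    }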
You can use a POSIX semaphore too: have a thread wait on a semaphore whose initial value is 0, so the wait blocks it, and post that semaphore in your signal handler function. The waiting thread will then be woken up and can do what you want (create the thread).
gcc 4.4.3 c89
I have an event loop that runs in a separate thread.
My design is like the sample below; it's just sample code to help explain.
I need to somehow wait for the initialization to complete before I can make a call to get_device_params.
I did put a usleep for 3 seconds just before the call to get_device_params, but I don't really want to block there.
Many thanks for any suggestions,
void* process_events(void *data)
{
    /* ... receive the next event ... */
    switch (event_type)
    {
        case EVT_INITIALIZED:
            /* Device is now initialized */
            break;
    }
    return NULL;
}
int main(void)
{
    /* Create and start thread and process incoming events */
    process_events();

    /* Initialize device */
    initialize_device();

    /* Get device parameters */
    /* However, I cannot run this code until initialization is complete */
    get_device_params();

    return 0;
}
If this separate thread is a POSIX thread (i.e. you're on a typical UNIX platform), then you can use pthread condition variables.
You call pthread_cond_wait() in the waiting thread. When the init thread finishes its work, you call pthread_cond_signal(). In my opinion that's a canonical way to wait for initialization in another thread.
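A minimal sketch of that pattern, reusing the question's event name (the flag, mutex, condition variable, and helper names are additions of this sketch):

    #include <pthread.h>

    static int device_ready = 0;
    static pthread_mutex_t ready_mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t ready_cnd = PTHREAD_COND_INITIALIZER;

    /* Called from the event loop when EVT_INITIALIZED arrives. */
    void signal_device_ready(void)
    {
        pthread_mutex_lock(&ready_mtx);
        device_ready = 1;
        pthread_cond_signal(&ready_cnd);
        pthread_mutex_unlock(&ready_mtx);
    }

    /* Called from main() between initialize_device() and get_device_params(). */
    void wait_device_ready(void)
    {
        pthread_mutex_lock(&ready_mtx);
        while (!device_ready)
            pthread_cond_wait(&ready_cnd, &ready_mtx);
        pthread_mutex_unlock(&ready_mtx);
    }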
I need to somehow wait for the initialization to complete before I can make a call to get_device_params.
Since you apparently have some sort of FSM inside process_events(), and it runs in a separate thread (for whatever reason), you shouldn't touch the device from the main thread at all.
In other words, logically, the call to get_device_params() should be placed inside the FSM, on the EVT_INITIALIZED event signaling that the device is initialized, which I presume is triggered by initialize_device().
Alternatively, you can create a second FSM (possibly in another thread) and let process_events() (the first FSM) forward the EVT_INITIALIZED event to the second FSM after it has finished its own processing. (Or initialize_device() could send the event to both FSMs simultaneously.)
To me it seems (from the scarce code you have posted) that your problem is that you are trying to mix sequential code with event-based code. Rule of thumb: in an event/FSM-based application, all code should run inside the FSM, triggered by an event; there should be no code that can run on its own outside of the FSM.
If it were me, I would probably use a barrier. In main you can call pthread_barrier_init, indicating that you have 2 threads. Then, after calling your device initialization function, main calls pthread_barrier_wait to wait on the barrier you initialized. Finally, in the device thread, after you initialize your device, you call pthread_barrier_wait on the same barrier; once both threads are waiting, the barrier is satisfied and both threads continue. I find barriers easier to use than condition variables sometimes, but I'm sure that's a matter of preference.
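A sketch of that barrier variant under the stated assumptions (two participating threads; the function names are placeholders; error checking omitted):

    #include <pthread.h>

    static pthread_barrier_t init_barrier;

    /* In main(), before starting the device/event thread: */
    void setup_barrier(void)
    {
        pthread_barrier_init(&init_barrier, NULL, 2); /* two participants */
    }

    /* In the device thread, right after initialization completes: */
    void announce_initialized(void)
    {
        pthread_barrier_wait(&init_barrier);
    }

    /* In main(), after initialize_device() and before get_device_params(): */
    void wait_initialized(void)
    {
        pthread_barrier_wait(&init_barrier); /* both threads continue from here */
    }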
I am supposed to implement a user-level threads library in C. To do so, I need to implement the yield(), createThread() and destroyThread() functions. I believe I've got the basics right:
To keep track of threads, I'll use a queue of ThreadControlBlock elements (which resemble PCBs in an OS) that look like this:
struct ThreadControlBlock {
    int ThreadId;
    ucontext_t *context;
};
We can use the setcontext() family of functions to "save" and "load" the context.
Upon initialization of the program, initialize ThreadQueue with no elements.
Now the part I am not getting: when a thread calls yield(), I get the current context, save it in a ThreadControlBlock, and put it in the queue. Then I get the first element in the queue and load its context, and execution proceeds.
The question is: say I am a thread calling yield() and the next thread in the queue is myself. If I save the context and then load it again, upon re-entering wouldn't I be at the exact same spot as before calling yield()? And wouldn't this keep going on forever?
When a thread calls yield(), you have to save the state of a thread that's about to return from a yield() call. Don't save the context from immediately before the yield().
The same issue actually applies if you're switching to another task too, since that other task saved its context at the same point (where it was about to switch to a second task). Using getcontext() and setcontext(), you need to use a static variable to keep track of whether you're switching in or switching out:
static volatile int switched;

switched = 0;
getcontext(current->context);
if (!switched)
{
    switched = 1;
    setcontext(next->context);
}
Alternatively, you can just use swapcontext(current->context, next->context);
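Putting it together, a hypothetical yield() built on swapcontext() might look like this; current, dequeue_thread(), and enqueue_thread() are assumed helpers around the question's ThreadControlBlock queue:

    #include <ucontext.h>

    /* Assumed from the question's design: the running thread's TCB and
     * the queue helpers. */
    extern struct ThreadControlBlock *current;
    extern struct ThreadControlBlock *dequeue_thread(void);
    extern void enqueue_thread(struct ThreadControlBlock *tcb);

    void yield(void)
    {
        struct ThreadControlBlock *prev = current;
        struct ThreadControlBlock *next = dequeue_thread();

        if (next == NULL || next == prev)
            return; /* nothing else to run: treat yield as a no-op */

        enqueue_thread(prev); /* the caller goes to the back of the queue */
        current = next;
        swapcontext(prev->context, next->context); /* returns when prev resumes */
    }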
It's perfectly reasonable in your yield() implementation to check to see if the next thread is the current thread and treat that case as a no-op.
If there are no other threads to run besides the current thread then there is nothing else to do besides just return from yield. I wouldn't bother with calling swapcontext in this case, though -- just detect and return.
I think what you are actually dealing with is what to do when no threads (including the current one) are ready when yield is called. An easy way to deal with this is to have an idle thread, which is run only when the run queue (of ready threads) is empty. That thread will probably just do:
while (1) {
    yield();
    pause();
}
This allows your program to go to sleep (via pause) until a signal happens. Hopefully the signal will be some event that makes one of the other threads ready to run, so the next call to yield will run the other thread instead of running the idle thread again.