C pthreads: Start a Thread from main

I am using an ARM processor running embedded Linux, programming in C. My main program has to perform real-time operations, and after a certain time duration (determined by the main program) it has to send data over a Bluetooth connection to an Android tablet.
I thought I would hand the Bluetooth data transfer off to a POSIX thread. From the main function I then have to trigger the thread to send the data. So how can I 'restart' a thread? It should only send data via Bluetooth when the main function decides it is time. That is also why a loop in the thread function (combined with a sleep call) doesn't help: as I said, the timing is determined by the main function, not by the thread function itself.
So is there a way to restart the thread? Or, if anyone has a better idea for solving this, I am open to it. :)

The easiest way is with a synchronisation primitive; a semaphore comes to mind:
#include <semaphore.h>

sem_t semaphore;                 /* initialised once with sem_init(&semaphore, 0, 0) */

void *sender_thread(void *arg)
{
    for (;;) {
        sem_wait(&semaphore);    /* sleep until main posts the semaphore */
        send_data(...);          /* send the buffered data over Bluetooth */
    }
}

int main(void)
{
    get_data();
    put_data_in_a_buffer();
    sem_post(&semaphore);        /* wake the sender thread */
    // ...
}
But if all you need to do is asynchronously send data you may want to have a look at the asynchronous IO library aio(7).

You could simply have your thread loop: check whether the send buffer is ready to send, and pause briefly if it isn't. While not ideal for every circumstance, you can certainly let the thread keep looping, waiting for something to do.
Because you mention a real-time scenario, this may be your best option for maintaining peak performance, as semaphore operations cost some processing time. If you do not need that optimisation, you are probably safer using semaphores.
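A minimal sketch of that polling approach (buffer_ready and send_data() are hypothetical placeholders; real code should protect the flag with a mutex or C11 atomics rather than volatile):

#include <stdbool.h>
#include <unistd.h>

volatile bool buffer_ready = false;   /* hypothetical flag set by the main code */

void *poll_thread(void *arg)
{
    for (;;) {
        if (buffer_ready) {
            buffer_ready = false;
            send_data();              /* hypothetical Bluetooth send routine */
        } else {
            usleep(1000);             /* pause briefly (1 ms) before checking again */
        }
    }
    return NULL;
}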

Related

Non-blocking library API in C without threads

How can I implement a non-blocking library API in C, without the use of threads?
In short, I have a library that I wrote that issues some read/write calls via a serial controller in order to get data the client of the library needs. These calls to the serial device, via a proprietary driver, are blocking, and I can't change them.
Without the use of threads in my library, or writing a system service to co-exist with my library, is there any way to "wrap" my library API calls so they are non-blocking (i.e. like co-routines in python)? The desired end result is a simple, synchronous, non-blocking API call to query the status of the library with minimal wait involved.
Thank you.
Short answer: no.
You cannot change the delay, as it is inherent to the operation requested (sending data takes a certain amount of time), so you cannot make the call shorter.
Therefore your choice is either to wait for a call to complete or not wait for it.
You cannot avoid waiting for it without some form of threading, since threads are how the system exposes the abstraction of doing two things at once (i.e. sending data on the serial port while continuing to run more code)... thus you will need threads to achieve this.
I think that, first of all, you should change your library to make all its calls non-blocking. Here is a good explanation: Linux Blocking vs. non Blocking Serial Read
The closest technique to Python's coroutines in C is Protothreads. It implements simple cooperative multitasking without using threads.
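To illustrate the first suggestion (making the serial calls non-blocking), here is a minimal sketch, assuming the underlying driver honours O_NONBLOCK; the device path and helper names are only illustrative, and error handling is omitted:

#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

int open_serial_nonblocking(const char *path)     /* e.g. "/dev/ttyS0" */
{
    int fd = open(path, O_RDWR | O_NOCTTY | O_NONBLOCK);
    return fd;                                    /* -1 on error */
}

/* Poll for data without blocking the caller. */
ssize_t try_read(int fd, void *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;                                 /* no data available right now */
    return n;                                     /* bytes read, or -1 on a real error */
}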
It's probably better to do this with threads in general.
However, there is a way that will work, with limitations, for simple applications.
Please forgive the use of a C++11 lambda; I think it makes things a little clearer here.
#include <csetjmp>
#include <csignal>
#include <unistd.h>

namespace
{
    sigjmp_buf context;
} // namespace

void nonBlockingCall(int timeoutInSeconds)
{
    struct sigaction oldAction;
    struct sigaction newAction;
    if (sigsetjmp(::context, 1) == 0)   // save the signal mask so siglongjmp restores it
    {
        // install a simple lambda as signal handler for the alarm that
        // effectively makes the call time out
        // (e.g. if the call gets stuck inside something like poll() )
        newAction.sa_handler = [] (int) {
            siglongjmp(::context, 1);
        };
        sigemptyset(&newAction.sa_mask);
        newAction.sa_flags = 0;
        sigaction(SIGALRM, &newAction, &oldAction);
        alarm(timeoutInSeconds);        // time out by raising SIGALRM
        BLOCKING_LIBRARY_CALL
        alarm(0);                       // cancel the alarm
        // the call did not time out
    }
    else
    {
        // the timer expired during the call (SIGALRM was raised)
    }
    sigaction(SIGALRM, &oldAction, nullptr);   // restore the previous handler
}
Limitations:
This is unsafe for multi-threaded code.
If you have multi-threaded code it is better to have the timer in a
monitoring thread and then kill the blocked thread.
Timers and signals from elsewhere could interfere.
Unless BLOCKING_LIBRARY_CALL documents its behaviour very well you may be in undefined behaviour land.
It will not free resources properly if interrupted.
It might install signal handlers or masks or raise signals itself.
If using this idiom in C++ rather than C you must not allow any objects to be constructed or destroyed between setjmp and longjmp.
Others may find additional issues with this idiom.
I thought I'd seen this in Stevens somewhere, and indeed it is discussed in the signals chapter, where he discusses using alarm() to implement sleep().

Call functions of the created threads periodically (Manual scheduling)

I have created 10 threads (pthreads to be precise); each thread is registered with a callback function, say fn1, fn2 ... fn10. I am also assigning a different priority to each thread with the FIFO scheduling policy. The requirement of the application is that each of these functions has to be called periodically (the periodicity varies per thread). To implement the periodicity, I got the idea from other questions to use itimer and sigwait (I am not sure this is a good way to implement it; any other suggestions are welcome).
My question is: how do I handle SIGALRM so that these functions are called repeatedly in their respective threads when the periodicity varies for each thread?
Thanks in advance.
Using "Do sleep functions sleep all threads or just the one who call it?" as a reference, my advice would be to avoid SIGALRM. Signals are normally delivered to the process as a whole, not to a particular thread.
IMHO you have two ways to do this:
Implement a clever monitor that knows the periodicity of every thread. It computes the time at which it must next wake a thread, sleeps until that time, wakes the thread, and iterates on that continuously. Pro: threads only wait on a semaphore or other synchronisation object. Con: the monitor is too clever for my taste.
Each thread knows its own periodicity and stores its last start time. When it finishes its job, it computes how long it should wait until the next activation time and sleeps for that duration (see the sketch below). Pro: each thread is fully independent and the implementation looks easy. Con: you must ensure that, in your implementation, sleep calls block only the calling thread.
I would use the second solution, because the first looks like a user-level re-implementation of sleep in a threaded environment.
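A minimal sketch of the second option, using clock_nanosleep() with an absolute deadline so the period does not drift; the period value and do_work() are hypothetical placeholders for one of the callbacks:

#include <pthread.h>
#include <time.h>

void *periodic_thread(void *arg)
{
    const long period_ns = 100 * 1000 * 1000;     /* hypothetical 100 ms period */
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        do_work();                                /* hypothetical callback, e.g. fn1() */

        /* compute the next absolute activation time */
        next.tv_nsec += period_ns;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* sleep until the absolute deadline; only this thread blocks */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}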

Linux Scheduling: OS vs "virtual"

How does one implement a multithreaded, single-process model in Linux (Fedora) under C, where a single scheduler runs on a "main" core checking I/O availability (e.g. TCP/IP, UDP), and one thread per core (started at init), the "execution thread", parses the data and then writes a small amount of updated info to a shared memory space (it is my understanding that pthreads share data within a single process)?
I believe my options are:
pthreads or the Linux OS scheduler
I have a naive model in mind consisting of starting a certain number of these execution threads plus a single scheduler thread.
What is the best solution one could think of, given that I can use this sort of model?
To complete Benoit's answer: in order to communicate between your master and your worker threads, you could use a condition variable. The workers do something like:
while (true)
{
    pthread_mutex_lock(workQueueMutex);
    while (workQueue.empty())
        pthread_cond_wait(workQueueCond, workQueueMutex);
    /* if we get here then (a) we have work and (b) we hold workQueueMutex */
    work = pop(workQueue);
    pthread_mutex_unlock(workQueueMutex);
    /* do work */
}
and the master:
/* I/O received */
pthread_mutex_lock(workQueueMutex);
push(workQueue, work);
pthread_cond_signal(workQueueCond);
pthread_mutex_unlock(workQueueMutex);
This would wake up one idle worker to immediately process the request. If no worker is idle, the work item simply stays in the queue and is processed later.
Modifying the Linux scheduler is quite a lot of work; I would just forget about it. Pthreads are usually preferred. If I understand correctly, you want one core dedicated to the control plane and a pool of other cores dedicated to data-plane processing? Then create a pool of threads from your master thread and set the core affinity for these slave threads with pthread_setaffinity_np(...).
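A minimal sketch of pinning a worker thread to a single core with pthread_setaffinity_np() (a GNU-specific, non-portable call; the core index passed in is just an example):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the given thread to one core, e.g. core 1. Returns 0 on success. */
int pin_thread_to_core(pthread_t thread, int core)
{
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(core, &cpuset);
    return pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
}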
Indeed, threads of a process share the same address space, and global variables are accessible by any thread of that process.
It looks to me that you have a version of the producer-consumer problem with a single consumer aggregating the results of n producers. This is a pretty standard problem, so I definitely think that pthread is more than enough for you. You don't need to go and mess around with the scheduler.
As one of the answers states, a thread-safe queue like the one described here works nicely for this sort of issue. Your original idea of spawning a bunch of threads is a good one. You seem to be worried that the threads' ability to share global state will cause you problems. I don't think this is an issue if you keep shared state to a minimum and use a sane locking discipline. Sharing state is fine as long as you do so responsibly.
Finally, unless you really know what you're doing, I would advise against manually messing with thread affinity. Just spawn the threads and let the scheduler handle when and on what core a thread runs. The thing to optimize is the number of threads you use. One for each core may not actually be the fastest approach if other threads are running.
Generally speaking, this is more or less exactly what the POSIX select() and Linux-specific epoll functions are for.
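For reference, a minimal sketch of the epoll pattern on the "main" core, waiting for readability on a socket (listen_fd is a hypothetical, already-created socket; error handling omitted):

#include <sys/epoll.h>

void io_loop(int listen_fd)
{
    int epfd = epoll_create1(0);

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[16];
    for (;;) {
        int n = epoll_wait(epfd, events, 16, -1);   /* block until I/O is available */
        for (int i = 0; i < n; i++) {
            /* hand events[i].data.fd off to a worker, e.g. via the work queue above */
        }
    }
}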

When to use QueueUserAPC()?

I do understand what an APC is, how it works, and how Windows uses it, but I don't understand when I (as a programmer) should use QueueUserAPC instead of, say, a fiber, or thread pool thread.
When should I choose to use QueueUserAPC, and why?
QueueUserAPC is a neat tool that can often be a shortcut for some tasks that are otherwise handled with synchronization objects. It allows you to tell a particular thread to do something whenever it is convenient for that thread (i.e. when it finishes its current work and starts waiting on something).
Let's say you have a main thread and a worker thread. The worker thread opens a socket to a file server and starts downloading a 10GB file by calling recv() in a loop. The main thread wants to have the worker thread do something else in its downtime while it is waiting for net packets; it can queue a function to be run on the worker while it would otherwise be waiting and doing nothing.
You have to be careful with APCs because, as in the scenario I mentioned, you would not want to make another blocking WinSock call from the APC (which would result in undefined behavior). You really have to watch for good uses of this functionality, because you can usually do the same thing in other ways, for example by having the other thread check an event every time it is about to go to sleep, rather than giving it a function to run while it is waiting. Obviously the APC is simpler in this scenario.
It is like when you have a call desk employee sitting and waiting for phone calls, and you give that person little tasks to do during their downtime. "Here, solve this Rubik's cube while you're waiting." Although, when a phone call comes in, the person would not put down the Rubik's cube to answer the phone (the APC has to return before the thread can go back to waiting).
QueueUserAPC is also useful if there is a single thread (Thread A) that is in charge of some data structure, and you want to perform some operation on the data structure from another thread (Thread B), but you don't want to have the synchronization overhead / complexity of trying to share that data between two threads. By having Thread B queue the operation to run on Thread A, which solely maintains that structure, you are executing any arbitrary function you want on that data without having to worry about synchronization.
It is just another tool, like a thread pool. However, with a thread pool you cannot send a task to a particular thread: you have no control over where the work is done. When you queue up a task, it may end up creating a whole new thread, and if you queue two tasks they may get done simultaneously on two different threads. With QueueUserAPC, you are guaranteed that the tasks get done in order and on the thread you designate.
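A minimal sketch of the pattern: a worker waits alertably and the main thread queues an APC to it. This assumes Windows and the Win32 API; the work routine and its parameter are hypothetical:

#include <windows.h>
#include <stdio.h>

/* APC routine: runs on the worker thread when it is in an alertable wait. */
static VOID CALLBACK do_work(ULONG_PTR param)
{
    printf("APC running on worker, param = %lu\n", (unsigned long)param);
}

static DWORD WINAPI worker(LPVOID arg)
{
    for (;;)
        SleepEx(INFINITE, TRUE);   /* alertable wait: queued APCs run here */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    QueueUserAPC(do_work, h, 42);  /* ask the worker to run do_work(42) when convenient */
    Sleep(100);                    /* give the APC a chance to run before exiting */
    return 0;
}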

C functions invoked as threads - Linux userland program

I'm writing a Linux daemon in C which gets values from an ADC over the SPI interface (via ioctl). The SPI layer (spidev, userland) seems to be a bit unstable and freezes the daemon at random times.
I need better control over the calls to the functions getting the values, and I was thinking of running them in a thread that I could wait on: if it finishes, I get the return value; if it times out, I assume it froze and kill it, without this thread taking down the daemon itself. Then I could apply measures like resetting the ADC before restarting. Is this possible?
Pseudo example of what I want to achieve:
(function int get_adc_value(int adc_channel, float *value) )
pid = thread( get_adc_value(1, &value) );   // create a thread that calls the function
wait_until_finish(pid, timeout);            // wait until the function finishes or times out
if (timeout) kill pid, start over           // if the thread does not return in the given time, kill it (it is frozen)
else if return value sane, continue         // if successful, handle the returned value and continue
Thanks for any input on the matter, examples highly appreciated!
I would try looking at the pthreads library. I have used it for some of my C projects with good success, and it gives you pretty good control over what is running and when.
A pretty good tutorial can be found here:
http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
GLib also offers a way to check on threads, using GCond (look for it in the GLib documentation).
In summary, you periodically signal a GCond from the child thread and wait on it in the main thread with g_cond_timed_wait. The approach is the same whether you use GLib or pthreads.
Here is an example with the pthread:
http://koders.com/c/fidA03D565734AE2AD9F5B42AFC740B9C17D75A33E3.aspx?s=%22pthread_cond_timedwait%22#L46
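A minimal sketch of the pthread variant: the worker signals a condition variable when it has a value, and the main thread waits on it with a timeout. Names such as get_adc_value() follow the question's pseudocode; error handling is omitted:

#include <pthread.h>
#include <time.h>
#include <errno.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int             finished = 0;
static float           value;

static void *adc_thread(void *arg)
{
    float v;
    get_adc_value(1, &v);          /* the possibly-freezing call */
    pthread_mutex_lock(&lock);
    value = v;
    finished = 1;
    pthread_cond_signal(&done);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Returns 0 on success, -1 if the worker did not finish within timeout_s seconds. */
static int read_adc_with_timeout(int timeout_s, float *out)
{
    pthread_t tid;
    struct timespec deadline;

    finished = 0;
    pthread_create(&tid, NULL, adc_thread, NULL);

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_s;

    pthread_mutex_lock(&lock);
    while (!finished) {
        if (pthread_cond_timedwait(&done, &lock, &deadline) == ETIMEDOUT)
            break;
    }
    pthread_mutex_unlock(&lock);

    if (!finished) {
        pthread_cancel(tid);       /* may not help if the thread is stuck in ioctl() */
        pthread_detach(tid);
        return -1;
    }
    pthread_join(tid, NULL);
    *out = value;
    return 0;
}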
I'd recommend a different approach.
Write a program that takes samples and writes them to standard output. It simply needs to call alarm(TIMEOUT); before every sample collection; should it hang, the program will exit automatically, since the default action for SIGALRM terminates the process.
Write another program that runs that first program. If it exits, it runs it again. It looks something like this:
main(){for(;;){system("sampler");sleep(1);}}
Then, in your other program, use FILE *fp = popen("supervise_sampler", "r"); and read the samples from fp. Better still, have the program simply read samples from stdin and insist that users start your program like this:
(while true;do sampler;sleep 1; done)|program
Splitting up the task like this makes it easier to develop and easier to test. For example, you can collect samples, save them to a file, and then run your program on that file:
sampler > data
program < data
Then, as you make changes to program, you can simply run it again on the same data over and over again.
It's also trivial to enable data logging, so should you find a serious issue, you can run all your data through your program again to find the bugs.
Something very interesting happens to a thread when it executes an ioctl(): it goes into a very special kind of sleep, known as disk sleep (uninterruptible sleep), where it cannot be interrupted or killed until the call returns. This is by design and prevents the kernel from rotting from the inside out.
If your daemon is getting stuck in ioctl(), it's conceivable that it may stay that way forever (at least until the ADC is reset).
I'd advise dropping something, like a file with a timestamp, prior to calling ioctl() on a known-buggy interface. If your thread does not unlink that file within xx seconds, something else needs to restart the ADC.
I also agree with the use of pthreads; if you need example code, just update your question.
