I am implementing a thread library in C using makecontext(), getcontext() and swapcontext(). I need to implement a scheduler which is invoked every 5 ms in order to switch contexts with another thread (round robin). How could I implement this timer functionality? If I put the timer in the scheduler, then it will be impossible for time to be incremented while the scheduler is not running. Is there a way to associate a timer with a particular process that updates no matter what context is active?
A solution based on the setitimer() system call could do the job. It can be programmed to deliver a periodic SIGALRM signal, and a signal handler attached to SIGALRM can then invoke the scheduler.
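A minimal sketch of that approach, assuming a hypothetical thread table (threads[], current) that the library maintains elsewhere; note that swapcontext() is not formally async-signal-safe, although small thread libraries commonly take exactly this shortcut:

```c
/* Sketch: a periodic 5 ms SIGALRM drives a round-robin context switch.
 * The thread table (threads[], NTHREADS) and current index are hypothetical;
 * a real library must also block SIGALRM around its own non-reentrant
 * bookkeeping. */
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <ucontext.h>

#define NTHREADS 4
static ucontext_t threads[NTHREADS];
static volatile int current = 0;

static void scheduler(int sig)
{
    (void)sig;
    int prev = current;
    current = (current + 1) % NTHREADS;              /* round robin */
    swapcontext(&threads[prev], &threads[current]);  /* switch threads */
}

void start_timer(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = scheduler;
    sa.sa_flags = SA_RESTART;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it;
    memset(&it, 0, sizeof it);
    it.it_value.tv_usec    = 5000;   /* first expiry after 5 ms */
    it.it_interval.tv_usec = 5000;   /* then every 5 ms */
    setitimer(ITIMER_REAL, &it, NULL);
}
```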
Related: I wanted to execute a task on the calling thread only, but after some delay. I see the timer API as one option, but it creates another thread. What would be the best possible solution?
As the title suggests, is there a way in C to detect when a user-level thread running on top of a kernel-level thread (e.g., a pthread) has blocked (or is about to block) on I/O?
My use case is as follows: I need to execute tasks in a multithreaded environment (on top of kernel threads, e.g., pthreads). The tasks are basically user functions that can synchronize with each other and may use blocking operations internally. I need to hide latency in my implementation, so I am exploring the idea of implementing the tasks as user-level threads for better control of their execution context: when a task blocks or synchronizes, I context-switch to other ready tasks (i.e., I implement my own scheduler for the user-level threads). That way, nearly the full OS time quantum of each kernel thread can be put to use.
There used to be code that did this, for example GNU pth. It's generally been abandoned because it just doesn't work very well and we have much better options now. You have two choices:
1) If you have OS help, you can use the OS mechanisms. Windows provides OS help for this; IOCP dispatching uses it.
2) If you have no OS help, then you have to convert all blocking operations into non-blocking ones that call your dispatcher rather than blocking. So, for example, if someone calls socket, you intercept that call and set the socket non-blocking. When they call read, you intercept that call and if they get a "would block" indication, you arrange to resume when the operation might succeed and schedule another thread.
You can look at GNU pth to see how you might make option 2 work. But be warned, GNU pth is full of reported bugs that have never been fixed since it was abandoned. It will give you an idea of how to implement things like mutexes and sleeps in a cooperative user-space threading environment. But don't actually use the code.
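As a rough illustration of option 2, the interception might look something like the following; my_socket(), my_read() and schedule_until_readable() are made-up stand-ins for your own wrappers and dispatcher:

```c
/* Sketch of option 2: intercept read() so a "would block" result yields
 * to the user-level scheduler instead of blocking the kernel thread. */
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

void schedule_until_readable(int fd);   /* run other ready tasks; resume
                                           when poll/epoll says fd is ready */

int my_socket(int domain, int type, int protocol)
{
    int fd = socket(domain, type, protocol);
    if (fd >= 0)
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    return fd;
}

ssize_t my_read(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0 || (errno != EAGAIN && errno != EWOULDBLOCK))
            return n;                    /* data, EOF, or a real error */
        schedule_until_readable(fd);     /* would block: switch tasks */
    }
}
```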
Are there any conventions or design patterns for using signals and signal handlers in library code? Because signals are directed at the whole process and not at a specific thread or library, I feel there may be some issues.
Let's say I'm writing a shared library that will be used by other applications, and I want to use the alarm() and setitimer() functions and trap the SIGALRM signal to do some processing at specific times.
I see some problems with it:
1) If the application code (which I have no control over) also uses SIGALRM and I install my own signal handler for it, this may overwrite the application's signal handler and thus disable its functionality. Of course, I can make sure to call the previous signal handler (retrieved via signal()) from my own handler, but there is still the reverse problem: the application code can overwrite my signal handler and thus disable the functionality in my library.
2) Even worse, the application developer may link with another shared library from another vendor which also uses SIGALRM, and then neither I nor the application developer has any control over it.
3) Calling alarm() or setitimer() overwrites the previous timer used by the process, so the application could overwrite the timer I set in the library, or vice versa.
I'm something of a novice at this, so I'm wondering whether there is already some convention for handling this. (For example, if every piece of code were well behaved, it would call the previous signal handler from its own handler and structure its alarm code to honor previously set timers before overwriting them with its own.)
Or should I avoid using signal handlers and alarm() in a library altogether?
Yes. For the reasons you've identified, you can't depend on signal disposition for anything, unless you control all code in the application.
You could document that your library requires the application not to use SIGALRM or call alarm(), but the application developer may not have any control over that anyway, so it's in your best interest to avoid imposing such restrictions in the first place.
If your library can work without SIGALRM (perhaps with reduced functionality), you can also make this feature optional, perhaps controlled by some environment variable. Then, if it is discovered that there is some code that interferes with your signal handling, you can tell the end-user to set your environment variable to disable that part of your library (which beats having to rebuild and supply a new version of it).
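As a sketch of that idea (MYLIB_DISABLE_ALARM, mylib_on_alarm() and mylib_enable_alarm() are invented names, and the handler chaining shown only covers the plain sa_handler case, not SA_SIGINFO handlers):

```c
/* Sketch: install the library's SIGALRM handler unless the user disables
 * it via an (invented) environment variable, and chain to the handler
 * that was previously installed. */
#include <signal.h>
#include <stdlib.h>
#include <string.h>

static struct sigaction prev_sa;

static void mylib_on_alarm(int sig)
{
    /* ... do the library's periodic work here ... */

    /* Chain to the application's handler, if a plain one was installed. */
    if (prev_sa.sa_handler != SIG_DFL && prev_sa.sa_handler != SIG_IGN)
        prev_sa.sa_handler(sig);
}

int mylib_enable_alarm(void)
{
    if (getenv("MYLIB_DISABLE_ALARM") != NULL)
        return 0;                       /* run in reduced-function mode */

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = mylib_on_alarm;
    sigemptyset(&sa.sa_mask);
    return sigaction(SIGALRM, &sa, &prev_sa) == 0;
}
```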
P.S. Your question and this answer apply to any library, whether shared or archive.
I am experimenting with SCHED_FIFO and I am seeing some unexpected behaviour. The server I am using has 12 cores with hyper-threading disabled. All configurable interrupts have been set to run on CPU 0.
My program starts by creating a thread for lower-priority tasks using the pthreads library, without changing its scheduling policy, and with its CPU affinity set to core 0. The parent thread then sets its own CPU affinity to core 3 and its scheduling policy to SCHED_FIFO using sched_setscheduler() with a pid of zero and a priority of 1, and then starts running a non-blocking loop.
The program itself runs well. However, if I attempt to log into the server a second time while the program is running, the terminal is unresponsive until I stop my program. It is as if the scheduler is trying to run other processes on the same core as the real-time process.
What am I missing?
Will the scheduler still attempt to run other processes on a core running a real time process? If so, is there a way to prevent this?
Will setting the scheduling policy with sched_setscheduler() in the parent change the behaviour of the child thread that was created beforehand?
Thanks in advance.
sched_setscheduler sets the scheduler of the process, not the thread. See:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/sched_setscheduler.html
If you want to set the scheduler for a thread, you need to use the pthread_attr_setschedpolicy and pthread_attr_setschedparam functions on the attribute object for the new thread before you create it.
I'm not sure how conformant Linux is on honoring these requirements, but you should at least start out by making sure your code is correct to the specification, then adjust it as needed...
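A minimal sketch of creating such a thread (thread_fn is a placeholder; note that PTHREAD_EXPLICIT_SCHED is needed, otherwise the new thread simply inherits the creator's policy and the attributes below are ignored):

```c
/* Sketch: create a thread with an explicit SCHED_FIFO policy.
 * Error checking is abbreviated. */
#include <pthread.h>
#include <sched.h>

void *thread_fn(void *arg);   /* the thread's entry point (placeholder) */

int spawn_fifo_thread(pthread_t *tid, int priority)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = priority };

    pthread_attr_init(&attr);
    /* Without PTHREAD_EXPLICIT_SCHED the new thread inherits the
     * creator's policy and the attributes below are ignored. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    int rc = pthread_create(tid, &attr, thread_fn, NULL);
    pthread_attr_destroy(&attr);
    return rc;   /* may fail with EPERM without suitable privileges */
}
```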
I have a task which is basically a timer: it goes to sleep and is supposed to wake up periodically. The timer task sleeps for, say, 10 ms, but it is inconsistent in waking up and cannot be relied upon to wake on time.
In fact, in my runs there is a big difference in sleep times. The wake-up time can vary by 1-2 ms, and occasionally the task does not come back at all. I believe this is because the kernel scheduler puts all sleeping and waiting tasks in a queue and then polls (round robin, I think) to see which should be awakened, so sometimes the task's deadline has already passed by the time the scheduler polls again. And sometimes, when interrupts occur, the ISR takes control and delays the timer task from waking up.
What is the best solution to handle this kind of problem?
(Additional details: The task is a MAC timer for a wireless network; RTOS is a u-velOSity microkernel)
You should be using the timer API provided by the OS instead of relying on the scheduler. Here's an introduction to the timer API for Linux drivers.
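For illustration only, since your RTOS will expose a different but analogous service, a minimal sketch of the Linux kernel timer interface (kernel 4.15 and later) that such an introduction covers:

```c
/* Trivial kernel module whose timer re-arms itself every 10 ms. */
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/module.h>
#include <linux/timer.h>

static struct timer_list my_timer;

static void my_timer_cb(struct timer_list *t)
{
    /* periodic work here, kept short, then re-arm for 10 ms from now */
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(10));
}

static int __init my_init(void)
{
    timer_setup(&my_timer, my_timer_cb, 0);
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(10));
    return 0;
}

static void __exit my_exit(void)
{
    del_timer_sync(&my_timer);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```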
If you need hardcore timing, the OS scheduler is not likely to be good enough (as you've found).
If you can, use a separate timer peripheral, and use its ISR to do as little as you can get away with (timestamping some critical data or setting some flags, for example), then let your higher-jitter routine make use of that data with its less guaranteed timing.
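A rough sketch of that pattern, with timer_isr() and read_hw_counter() standing in for whatever hooks your timer peripheral and RTOS actually provide:

```c
/* The ISR only records a timestamp and raises a flag; the task-level
 * code does the real work later, with looser timing. */
#include <stdint.h>

static volatile uint32_t last_tick_time;
static volatile int tick_pending;

uint32_t read_hw_counter(void);          /* free-running hardware counter */

void timer_isr(void)                     /* registered with the peripheral */
{
    last_tick_time = read_hw_counter();  /* timestamp with low jitter */
    tick_pending = 1;                    /* signal the MAC timer task */
}

void mac_timer_task(void)
{
    for (;;) {
        if (tick_pending) {
            tick_pending = 0;
            /* use last_tick_time here; this part may be delayed by the
             * scheduler, but the timestamp itself stays accurate */
        }
        /* otherwise sleep or yield via the RTOS as usual */
    }
}
```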
Linux is not an RTOS, and that is probably the root of your problem.
You can render Linux more suited to real-time use in various ways and to various extent. See A comparison of real-time Linux approaches for some methods and an assessment of the level of real-time performance you can expect.