High performance computing pthread parameters

I'm working on a project where I need to use multiple threads using pthread (C++).
I have a question:
What is the best pthread parameter configuration for a thread that does high-performance computing, so that other threads don't interrupt it too much?
Currently I'm using something like this:
pthread_t thread;
struct sched_param param;
int sched = SCHED_FIFO;
memset(&param, 0, sizeof(param));
// Set priority to the max value for this policy
param.sched_priority = sched_get_priority_max(sched);
// Create my thread (hard_number_crunching must have the
// signature void *(*)(void *))
pthread_create(&thread, NULL, hard_number_crunching, (void *) &structure_passed_to_thread);
// Finally, set the thread's scheduling parameters
// (note: pthread_setschedparam takes the pthread_t by value)
pthread_setschedparam(thread, sched, &param);
I was switching "int sched" between SCHED_FIFO and SCHED_RR, but it didn't help me much.
Is it possible to keep this thread on the CPU for longer than it currently runs?

If you are creating one thread per core, you probably want to set the thread's affinity to prevent it from roaming between cores. This usually improves the performance by ensuring that each thread remains close to its cached data. See:
int sched_setaffinity(pid_t pid, size_t cpusetsize, const cpu_set_t *mask);
Note: you should not set the affinity if you are creating more threads than cores! That could cause all kinds of crazy things to happen, deadlocks to mention one.
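For example, here is a minimal sketch of pinning the calling thread to one core using pthread_setaffinity_np() (a non-portable GNU extension; sched_setaffinity() works similarly on a task ID). The core index is an arbitrary choice for illustration:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to the given core (0-based index).
// Returns 0 on success, an error number on failure.
int pin_self_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}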

Let's say your thread is going to take X cycles. It doesn't matter what the priority is, or what else the system is doing; it's still going to take X cycles. If the system is not busy doing other things, your thread will run nearly continuously until it finishes. If the system is busy, your thread will be stopped, perhaps even frequently, to let other things run. Eventually your thread will still get its X cycles.

Note that for your purposes here, there are two types of priorities: thread priorities (the priority of your one thread versus the priority of your other threads), and process priorities (the priority of your process/program versus the priority of other processes/programs and the operating system). You're setting the thread priority. Do you have other threads of equal or higher priority competing with this one? Are there system processes of higher priority competing with yours? If the answer is no, you'll pretty much get the CPU to yourself.

Also note that if you have multiple processors and/or cores, a single thread can only run on one of them at a time, so your overall system utilization may not be 100% if you haven't divided up your tasks well.
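As an illustration of the two levels, a minimal sketch (assuming Linux; negative nice values and real-time policies typically require root or CAP_SYS_NICE):

#include <sched.h>
#include <pthread.h>
#include <sys/resource.h>

void raise_priorities(pthread_t worker) {
    // Process priority: lower nice value = higher priority.
    setpriority(PRIO_PROCESS, 0, -10);

    // Thread priority: per-thread policy and priority.
    struct sched_param sp;
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
    pthread_setschedparam(worker, SCHED_FIFO, &sp);
}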

Both zr.'s and Mark's answers helped me.
I also found a way to set the affinity of an individual thread.
Check this link to see how: pthread_attr_setaffinity_np()
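For reference, a minimal sketch of that attribute-based approach (pthread_attr_setaffinity_np() is a GNU extension; thread_main and core 0 are placeholder choices):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

cpu_set_t set;
pthread_attr_t attr;
pthread_t thread;

CPU_ZERO(&set);
CPU_SET(0, &set);                                  // pin to core 0
pthread_attr_init(&attr);
pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
pthread_create(&thread, &attr, thread_main, NULL);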
Thanks everyone who helped.

Related

Priority based multithreading?

I have written code for two threads where one is assigned priority 20 (lower) and the other 10 (higher). Upon executing my code, 70% of the time I get the expected result, i.e. the high_prio thread (with priority 10) executes first and then low_prio (with priority 20).
Why does my code not give the correct result in 100% of executions? Is there any conceptual mistake that I am making?
void *low_prio(void *arg){
    // Something here
    return NULL;
}
void *high_prio(void *arg){
    // Something here
    return NULL;
}
int main(){
    // Thread with priority 10 calls high_prio
    // Thread with priority 20 calls low_prio
    return 0;
}
Is there any conceptual mistake that I am making?
Yes — you have an incorrect expectation regarding what thread priorities do. Thread priorities are not meant to force one thread to execute before another thread.
In fact, in a scenario where there is no CPU contention (i.e. where there are always at least as many CPU cores available as there are threads that currently want to execute), thread priorities will have no effect at all -- because there would be no benefit to forcing a low-priority thread not to run when there is a CPU core available for it to run on. In this no-contention scenario, all of the threads will get to run simultaneously and continuously for as long as they want to.
The only time thread priorities may make a difference is when there is CPU contention -- i.e. there are more threads that want to run than there are CPU cores available to run them. At that point, the OS's thread-scheduler has to make a decision about which thread will get to run and which thread will have to wait for a while. In this instance, thread priorities can be used to indicate to the scheduler which thread it should prefer to run.
Note that it's even more complicated than that, however -- for example, if your threads call printf() rather a lot, printf() invokes I/O, which means the calling thread may be temporarily put to sleep while the I/O (e.g. to your terminal window, or to a file if you have redirected stdout) completes. And while that thread is sleeping, the thread-scheduler can take advantage of the now-available CPU core to let another thread run, even if that other thread is of lower priority. Later, when the I/O operation completes, your high-priority thread will be re-awoken and re-assigned to a CPU core (possibly "bumping" a low-priority thread off of that core in order to get it).
Note that inconsistent results are normal for multithreaded programs -- threads are inherently non-deterministic, since their execution patterns are determined by the thread-scheduler's decisions, which in turn are determined by lots of factors (e.g. what other programs are running on the computer at the time, the system clock's granularity, etc).
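As an aside: if the priorities are meant to be real-time priorities, note that pthread_create() ignores the scheduling fields of a thread attribute unless the attribute explicitly disables inheritance of the creator's policy. A minimal sketch (assuming SCHED_FIFO, where larger numbers mean higher priority, and sufficient privileges for real-time scheduling):

pthread_attr_t attr;
struct sched_param sp;
pthread_t t;

pthread_attr_init(&attr);
// Without this, pthread_create() inherits the creator's policy and
// silently ignores the policy/priority set in the attribute.
pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
sp.sched_priority = 20;   // under SCHED_FIFO, larger = higher priority
pthread_attr_setschedparam(&attr, &sp);
pthread_create(&t, &attr, high_prio, NULL);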

Lock that handles a high-contention, high-frequency situation

I am looking for a lock implementation that degrades gracefully in the situation where you have two threads that constantly try to release and re-acquire the same lock, at a very high frequency.
Of course it is clear that in this case the two threads won't significantly progress in parallel. Theoretically, the best result would be achieved by running the whole of thread 1, and then the whole of thread 2, without any switching, because switching just creates massive overhead here. So I am looking for a lock implementation that would handle this situation gracefully by keeping the same thread running for a while before switching, instead of constantly switching.
Long version of the question
As I would myself be tempted to answer this question by "your program is broken, don't do that", here is some justification about why we end up in this kind of situation.
The lock is a "single global lock", i.e. a very coarse lock. (It is the Global Interpreter Lock (GIL) inside PyPy, but the question is about how to do it in general, say if you have a C program.)
We have the following situation:
- There is constantly contention. That's expected in this case: the lock is a global lock that needs to be acquired for most threads to progress. So we expect that a large fraction of them are waiting for the lock. Only one of these threads can progress.
- The thread that holds the lock might sometimes do bursts of short releases. A typical example would be if this thread does repeated calls to "something external", e.g. many short writes to a file. Each of these writes is usually completed very quickly. The lock still has to be released just in case this external thing turns out to take longer than expected (e.g. if the write actually needs to wait for disk I/O), so that another thread can acquire the lock in this case.
If we use some standard mutex for the lock, then the lock will often switch to another thread as soon as the owner releases it. But the problem arises when the program runs several threads that each want to do a long burst of short releases: the program ends up spending most of its time switching the lock between CPUs.
It is much faster to run the same thread for a while before switching, at least as long as the lock is released only for very short periods of time. (E.g. on Linux/pthread, a release immediately followed by an acquire will sometimes re-acquire the lock instantly even if there are other waiting threads; but we'd like this to happen in the large majority of cases, not just sometimes.)
Of course, as soon as the lock is released for a longer period of time, then it becomes a good idea to transfer ownership of the lock to a different thread.
So I'm looking for general ideas about how to do that. I guess it should exist already somewhere---in a paper, or in some multithreading library?
For reference, PyPy tries to implement something like this by polling: the lock is just a global variable, with synchronized compare-and-swap but no OS calls; one of the waiting threads is given the role of "stealer"; that "stealer" thread wakes up every 100 microseconds to check the variable. This is not horribly bad (it costs maybe 1-2% of CPU time in addition to the 100% consumed by the running thread). This actually implements what I'm asking for here, but the problem is that it's a hack that doesn't cleanly support more traditional uses of locks: for example, if thread 1 tries to send a message to thread 2 and waits for the answer, the two thread switches will take on average 100 microseconds each, which is far too much if the message is processed quickly.
For reference, let me describe how we finally implemented it. I was unsure about it as it still feels like a hack, but it seems to work for PyPy's use case in practice.
We did it as described in the last paragraph of the question, with one addition: the "stealer" thread, which checks some global variable every 100 microseconds, does so by calling pthread_cond_timedwait or WaitForSingleObject on a regular, system-provided mutex, with a timeout of 100 microseconds. This gives a "composite lock" made of both the global variable and the regular mutex. The "stealer" will succeed in stealing the "lock" either when it notices a value 0 in the global variable (checked every 100 microseconds), or immediately if the regular mutex is released by another thread.
It's then a matter of choosing how to release the composite lock on a case-by-case basis. Most external functions (writes to files, etc.) are expected to generally complete quickly, and so we release and re-acquire the composite lock by writing to the global variable. Only around a few specific functions, like sleep() or lock_acquire(), where we expect the calling thread to often block, do we release the composite lock by actually releasing the mutex instead.
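To make the idea concrete, here is a rough C sketch of such a composite lock, assuming C11 atomics; all names are illustrative, and initialization and error handling are elided:

#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

typedef struct {
    atomic_int      held;  // 1 = lock held, 0 = free
    pthread_mutex_t mu;    // backs the slow-path hand-off
    pthread_cond_t  cv;    // the "stealer" waits here with a timeout
} composite_lock;

// Fast release/re-acquire around short external calls: only the flag
// is touched, so the running thread usually wins it right back.
void release_fast(composite_lock *l) { atomic_store(&l->held, 0); }
void acquire_fast(composite_lock *l) {
    int expected = 0;
    while (!atomic_compare_exchange_weak(&l->held, &expected, 1))
        expected = 0;
}

// The designated "stealer" thread: poll the flag every 100 microseconds,
// or wake immediately if a slow-path release signals the condvar.
void steal(composite_lock *l) {
    pthread_mutex_lock(&l->mu);
    for (;;) {
        int expected = 0;
        if (atomic_compare_exchange_strong(&l->held, &expected, 1))
            break;                         // stole the lock
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_nsec += 100 * 1000;          // 100 microseconds
        if (ts.tv_nsec >= 1000000000L) { ts.tv_sec++; ts.tv_nsec -= 1000000000L; }
        pthread_cond_timedwait(&l->cv, &l->mu, &ts);
    }
    pthread_mutex_unlock(&l->mu);
}

// Slow release around calls expected to block (sleep(), lock_acquire()):
// signal the condvar so a waiter can take over immediately.
void release_slow(composite_lock *l) {
    atomic_store(&l->held, 0);
    pthread_mutex_lock(&l->mu);
    pthread_cond_signal(&l->cv);
    pthread_mutex_unlock(&l->mu);
}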
If I understand the problem statement, you are asking the kernel scheduler to make an educated guess about whether your userspace application's "hot" thread will try to reacquire the lock in the very near future, to avoid implicitly preempting it by allowing a "not-so-hot" thread to acquire the mutex.
I wouldn't know how the kernel could do that. The only two things that come to my mind:
Do not release the mutex unless the hot thread is actually transitioning to idle (an application-specific condition). On Linux you can use CLOCK_MONOTONIC_COARSE to reduce the overhead of checking the wall clock when implementing some sort of timer (a sketch follows after the code below).
Increase the hot thread's priority. This is more of a mitigation strategy, in an attempt to reduce the amount of preemption of the hot thread. If the "hot" thread can be identified, you could do something like:
pthread_t thread = pthread_self();
// Set max priority, FIFO policy
struct sched_param params;
params.sched_priority = sched_get_priority_max(SCHED_FIFO);
int rv = pthread_setschedparam(thread, SCHED_FIFO, &params);
if (rv != 0) {
    // Print error
    // ...
}
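As for the first point, a minimal sketch of a cheap elapsed-time check using CLOCK_MONOTONIC_COARSE (Linux-specific: lower resolution than CLOCK_MONOTONIC, but much cheaper to read):

#include <time.h>

// Milliseconds elapsed since *start, read from the coarse clock.
long elapsed_ms(const struct timespec *start) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC_COARSE, &now);
    return (now.tv_sec - start->tv_sec) * 1000L
         + (now.tv_nsec - start->tv_nsec) / 1000000L;
}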
A spinlock might work better in your case. Spinlocks avoid context switching and are highly efficient if the threads are likely to hold the lock for only a short duration.
For this very reason, they are widely used in OS kernels.
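For reference, a minimal sketch using POSIX spinlocks (error checking elided):

#include <pthread.h>

pthread_spinlock_t spin;

pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

pthread_spin_lock(&spin);
// ... short critical section ...
pthread_spin_unlock(&spin);

pthread_spin_destroy(&spin);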

Soft Real Time Linux Scheduling

I have a project with some soft real-time requirements. I have two processes (programs that I've written) that do some data acquisition. In either case, I need to continuously read in data that's coming in and process it.
The first program is heavily threaded, and the second one uses a library which is supposed to be threaded, but I have no clue what's going on under the hood. Each program is executed by the user and (by default) I see each with a priority of 20 and a nice value of 0. Each program uses roughly 30% of the CPU.
As it stands, both processes have to contend with a few background processes, and I want to give my two programs the best shot at the CPU possible. My main issue is that I talk to a device that has a 64-byte hardware buffer, and if I don't read from it in time, I get an overflow. I have noted this condition occurring once every 2-3 hours of run time.
Based on my research (http://oreilly.com/catalog/linuxkernel/chapter/ch10.html) there appear to be three ways of playing around with the priority:
1. Set the nice value to a lower number, and therefore give each process higher priority. I can do this without any modification to my code (or by using the system call) via the nice command.
2. Use sched_setscheduler() to set a particular scheduling policy for the entire process.
3. Use pthread_setschedparam() to individually set the policy and priority of each pthread.
I have run into the following roadblocks:
Say I go with choice 3: how do I prevent lower-priority threads from being starved? Is there also a way to ensure that shared locks cause lower-priority threads to be promoted to a higher priority? Say I have a real-time SCHED_RR thread that shares a lock with a default SCHED_OTHER thread. When the SCHED_OTHER thread gets the lock, I want it to execute at a higher priority to free the lock. How do I ensure this?
If a SCHED_RR thread creates another thread, is the new thread automatically SCHED_RR, or do I need to specify this? What if I have a process that I have set to SCHED_RR? Do all its threads automatically follow this policy? And if a SCHED_RR process spawns a child process, is it automatically SCHED_RR too?
Does any of this matter given that the code only uses about 60% of the CPU? Or are there still issues with the CPU being shared with background processes that I should be concerned with and that could be causing my buffer overflows?
Sorry for the long winded question, but I felt it needed some background info. Thanks in advance for the help.
(1) pthread_mutex_setprioceiling: a mutex with a priority ceiling (or priority inheritance) temporarily boosts whichever thread holds it, which addresses the promotion problem you describe (see the sketch after this answer).
(2) A newly created thread inherits the scheduling policy and priority of its creating thread unless its thread attributes (e.g. pthread_attr_setschedparam / pthread_attr_setschedpolicy) direct otherwise when you call pthread_create.
(3) Since you don't know what causes the overflows now, it is in fairness hard for anyone to say with assurance.
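As a sketch of point (1), creating a priority-ceiling mutex with the POSIX realtime mutex protocol (the ceiling value 50 is an arbitrary example; PTHREAD_PRIO_INHERIT is the inheritance-based alternative):

#include <pthread.h>

pthread_mutex_t lock;
pthread_mutexattr_t ma;

pthread_mutexattr_init(&ma);
// Any thread holding the mutex runs at least at the ceiling priority.
pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_PROTECT);
pthread_mutexattr_setprioceiling(&ma, 50);
pthread_mutex_init(&lock, &ma);
pthread_mutexattr_destroy(&ma);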

Sync thread by changing schedule priority

I found another way to synchronize threads in the source code of strongswan. It synchronizes threads by changing the thread's scheduling policy to SCHED_FIFO. Does this have any advantage over using a mutex?
The code:
int oldpolicy;
struct sched_param oldparams, params;
pthread_getschedparam(thread_id, &oldpolicy, &oldparams);
params.sched_priority = sched_get_priority_max(SCHED_FIFO);
pthread_setschedparam(thread_id, SCHED_FIFO, &params);
// ... critical section ...
pthread_setschedparam(thread_id, oldpolicy, &oldparams);
PS: strongswan uses a malloc hook to detect memory leaks. To support multithreading, it uses this approach to synchronize the threads.
PPS: It seems that they have since modified the code. That piece of code is from strongswan 4.5.0.
That does not synchronize anything!
What this does is prevent the thread from being scheduled off the CPU while the critical section is running. Since we now have multiple CPUs, and a different thread can run on another CPU, it does not exclude anything at all. And it does not even completely prevent preemption: the thread can still sleep while waiting on a page fault or other I/O.
The reason for it is to avoid starving other threads when something very important is being calculated without which the other threads can't continue. It does help that cause, but it's a very specialized case (search for "priority inversion").
It's broken if you have more than one core, unless you lock all threads that might conflict to the same core. And, even then, it's still broken if you block on I/O. (For example, a page fault.) Yuck.

Gracefully (i.e. eventually cooperatively) suspend thread execution

I have to develop an application that tries to emulate the execution flow of an embedded target. This target has 2 levels of priority: the highest one being preemptive over the lowest one. The low priority level is managed with a round-robin scheduler which gives 1ms of execution to each thread in turn.
My goal is to write a library that provide the thread_create, thread_start, and all the system calls that are available on my target and use POSIX functions to reproduce the behavior natively on a standard PC.
Thus, when a high-priority thread executes, low-priority threads should be suspended whatever they are doing at that very moment. It is the responsibility of the low-priority thread's implementation to ensure that it won't be perturbed.
I know it is usually unsafe to suspend a thread, which explains why I didn't find any "suspend(pid)" function.
I basically imagine two solutions to the problem:
- find a way to suspend the low-priority threads when a high-priority thread starts (and resume them when there is no more high-priority activity);
- periodically call a very small "suspend_if_necessary" function everywhere in my low-priority code, and whenever a high-priority thread must start, wait for all low-priority threads to call that function and be suspended, execute the high-priority work alone, then resume them all.
Even if it is not so clean, I quite like the second solution, but I still have one problem: how do I call the function everywhere without changing all my code?
I wonder if there is an easy way of doing that, somewhat like debugging code does: add a hook call at every line executed that checks a flag and runs some specific code when that flag changes?
I'd be very happy if there is an easy solution to that problem, since I really need to be representative with the behavior of the target execution flow...
Thanks in advance,
Goulou.
Unfortunately, it's not really possible to implement what you want with true threads: even if the high-prio thread is restarted, it can take arbitrarily long before it is scheduled back in and gets to suspend all the low-priority threads. Moreover, there is no reliable way to determine whether the high-priority thread is blocked or not using only POSIX threads; you could try tracking things manually, but this runs the risk of both false positives (the thread is blocked on something, but the low-prio threads think it's running and suspend themselves) and false negatives (you miss a resume annotation, or there's lag between when the thread is actually resumed and when it marks itself as running).
If you want to implement a thread priority system with pure POSIX, one option is to not use threads at all, but rather use setcontext() for cooperative multitasking. This lets you swap between contexts at user level, but you must explicitly yield the CPU. It also doesn't help with blocking syscalls, which would then block all the threads in your app; but since you're writing an emulator, this might not be an issue.
You may also be able to swap contexts using setcontext() from within a signal handler; I've not tested this myself, but it could be worth trying to schedule using swapcontext() in a SIGALRM handler.
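A minimal sketch of the setcontext()/swapcontext() approach, with two contexts and one explicit yield (the stack size and names are arbitrary):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void) {
    printf("task: running, yielding back\n");
    swapcontext(&task_ctx, &main_ctx);   // explicit yield
    printf("task: resumed, finishing\n");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof(task_stack);
    task_ctx.uc_link = &main_ctx;        // return here when task() ends
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);   // run task until it yields
    printf("main: task yielded, resuming it\n");
    swapcontext(&main_ctx, &task_ctx);   // run task to completion
    return 0;
}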
To suspend a thread, you put it to sleep. If you want to be able to wake it on command, sleep it using sigwait(), which puts the thread to sleep until it gets a signal. You can send a specific thread a signal with pthread_kill() (crazy name, but it actually just sends signals to a thread). This is a very fast way to sleep and wake up threads: around 40x faster than condition variables, and very easy.
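A minimal sketch of this technique (SIGUSR1 is an arbitrary choice; the one-second sleep is just a crude way to let the worker park before it is woken):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    sigset_t set;
    int sig;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigwait(&set, &sig);                 // park here until signalled
    printf("worker: woken by signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    pthread_t t;
    // Block SIGUSR1 in all threads so sigwait() can claim it.
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                            // crude: let the worker park
    pthread_kill(t, SIGUSR1);            // wake it on command
    pthread_join(t, NULL);
    return 0;
}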
