Are locked pages inherited by pthreads? - c

I have a little paging problem on my real-time system, and wanted to know exactly how Linux should behave in my particular case.
Among various other things, my application spawns 2 threads using pthread_create(), which operate on a set of shared buffers.
The first thread, let's call it A, reads data from a device, performs some calculations on it, and writes the results into one of the buffers.
Once that buffer is full, thread B will read all the results and send them to a PC via ethernet, while thread A writes into the next buffer.
I have noticed that each time thread A starts writing into a previously unused buffer, I miss some interrupts and lose data (there is an ID in the header of each packet, and if that increments by more than one, I have missed interrupts).
So if I use n buffers, I get exactly n bursts of missed interrupts at the start of my data acquisition (therefore the problem is definitely caused by paging).
To fix this, I used mlock() and memset() on all of the buffers to make sure they are actually paged in.
This fixed my problem, but I was wondering where in my code the correct place to do this would be: in my main application, or in one/both of the threads? (Currently I do it in both threads.)
According to the libc documentation (section 3.4.2 "Locked Memory Details"), memory locks are not inherited by child processes created using fork().
So what about pthreads? Do they behave the same way, or would they inherit those locks from my main process?
Some background information about my system, even though I don't think it matters in this particular case:
It is an embedded system powered by a SoC with a dual-core Cortex-A9 running Linux 4.1.22 with PREEMPT_RT.
The interrupt frequency is 4kHz
The thread priorities (as shown in htop) are -99 for the interrupt, -98 for thread A (both of which are higher than the standard priority of -51 for all other interrupts) and -2 for thread B
EDIT:
I have done some additional tests, calling my page-locking function from different threads (and in main).
If I lock the pages in main(), and then try to lock them again in one of the threads, I would expect to see a large number of page faults for main() but none for the thread itself (because the pages should already be locked). However, htop tells a different story: I see a large number of page faults (MINFLT column) for each and every thread that locks those pages.
To me, that suggests that pthreads actually do have the same limitation as child processes spawned using fork(). And if that is the case, locking them in both threads (but not in main) would be the correct procedure.

Threads share the same memory management context. If a page is resident for one thread, it's resident for all threads in the same process.
The implication of this is that memory locking is per-process, not per-thread.
You are probably still seeing minor faults on the first write because a fault is used to mark the page dirty. You can avoid this by also writing to each page after locking.

Related

How does a process know that a semaphore is available

I have a very basic doubt.
When a process is waiting on a semaphore, it goes into a sleep state, so there is no way it can poll the semaphore value.
Does the kernel poll the semaphore value and, if it is available, send a signal to all processes waiting for it? If so, wouldn't that be too much overhead for the kernel?
Or does the signal() call internally notify all the processes waiting for the semaphore?
Please let me know.
The operating system schedules the process once more when it is told by another process that that process is done with the semaphore.
Semaphores are just one of the ways of interacting with the OS scheduler.
The kernel doesn't poll the semaphore; it doesn't need to. Every time a process calls sem_post() (or equivalent), that involves interaction with the kernel. What the kernel does during the sem_post() is look up whatever processes have previously called sem_wait() on the same semaphore. If one or more processes have called sem_wait(), it picks the process with the highest priority and schedules it. This shows up as that sem_wait() finally returning and that process carries on executing.
How This is Implemented Under the Hood
Fundamentally the kernel needs to implement something called an "atomic test and set". That is an operation whereby the value of some variable can be tested and, if a certain condition is met (such as value == 0), the variable's value is altered (e.g. value = 1). If this succeeds, the kernel will do one thing (like schedule a process); if it does not (because the condition value == 0 was false), the kernel will do something different (like put the process on the do-not-schedule list). The 'atomic' part is that this decision is made without anything else being able to look at and change the same variable at the same time.
There's several ways of doing this. One is to suspend all processes (or at least all activity within the kernel) so that nothing else is testing the value of the variable at the same time. That's not very fast.
For example, the Linux kernel once had something called the Big Kernel Lock. I don't know if this was used to process semaphore interactions, but that's the kind of thing that OSes used to have for atomic test & sets.
These days CPUs have atomic test & set opcodes, which is a lot faster. The good ole' Motorola 68000 had one of these (TAS) a long time ago; other architectures provide their own equivalents, such as x86's LOCK-prefixed CMPXCHG or PowerPC's lwarx/stwcx. load-reserve/store-conditional pair.
If you root around inside Linux you'll find mention of futexes. A futex is a fast userspace mutex: it relies on the CPU's atomic test/set instruction to implement a mutex or semaphore that only involves the kernel when there is actual contention.
Post a Semaphore in Hardware
A variation is a mailbox semaphore. This is a special variation on a semaphore that is extremely useful in some system types where hardware needs to wake up a process at the end of a DMA transfer. A mailbox is a special location in memory which when written to will cause an interrupt to be raised. This can be turned into a semaphore by the kernel because when that interrupt is raised, it goes through the same motions as it would had something called sem_post().
This is incredibly handy; a device can DMA a large amount of data to some pre-arranged buffer, and top that off with a small DMA transfer to the mailbox. The kernel handles the interrupt, and if a process has previously called sem_wait() on the mailbox semaphore, the kernel schedules it. The process, which also knows about this pre-arranged buffer, can then process the data.
On real-time DSP systems this is very useful, because it's very fast and very low latency; it allows a process to receive data from some device with very little delay. The alternative, a full device driver stack that uses read() / write() to transfer data from the device to the process, is incredibly slow by comparison.
Speed
The speed of semaphore interactions depends entirely on the OS.
For OSes like Windows and Linux, the context switch time is fairly slow (in the order of several microseconds, if not tens of microseconds). Basically this means that when a process calls something like sem_post(), the kernel is doing a lot of different things whilst it has the opportunity before finally returning control to the process(es). What it's doing during this time could be, well, almost anything!
If a program has made use of a lot of threads, and they're all rapidly interacting between themselves using semaphores, quite a lot of time is lost to sem_post() and sem_wait(). This places an emphasis on doing a decent amount of work once a process has returned from sem_wait() before calling the next sem_post().
However, on OSes like VxWorks, the context switch time is lightning fast. That is, there's very little code in the kernel that gets run when sem_post() is called. The result is that a semaphore interaction is a lot more efficient. Moreover, an OS like VxWorks is written in such a way as to guarantee that the time taken to do all this sem_post() / sem_wait() work is constant.
This influences the architecture of one's software on these systems. On VxWorks, where a context switch is cheap, there's very little penalty in having a large number of threads all doing quite small tasks. On Windows / Linux there's more of an emphasis on the opposite.
This is why OSes like VxWorks are excellent for hard real time applications, and Windows / Linux are not.
The Linux PREEMPT_RT patch set in part aims to improve the latency of the Linux kernel during operations like this. For example, it pushes a lot of device interrupt handlers (device drivers) up into kernel threads; these are scheduled almost like any other thread. The idea is to reduce the amount of work the kernel does directly (and have more done by kernel threads), so that the work it still has to do itself (such as handling sem_post() / sem_wait()) takes less time and is more consistent about how long it takes. It's still not a hard guarantee of latency, but it's a pretty good improvement. This is what we call a soft real-time kernel. The impact, though, is that the overall throughput of the machine can be lower.
Signals
Signals are nasty, horrible things that really get in the way of using things like sem_post() and sem_wait(). I avoid them like the plague.
If you are on a Linux platform and you do have to use signals, take a serious long look at signalfd (man page). This is a far better way of dealing with signals, because you can choose to accept them at a convenient time (simply by calling read()), instead of having to handle them as soon as they occur. Certainly if you're using epoll() or select() anywhere at all in a program, then signalfd is the way to go.

Soft Real Time Linux Scheduling

I have a project with some soft real-time requirements. I have two processes (programs that I've written) that do some data acquisition. In either case, I need to continuously read in data that's coming in and process it.
The first program is heavily threaded, and the second one uses a library which should be threaded, but I have no clue what's going on under the hood. Each program is executed by the user and (by default) I see each with a priority of 20 and a nice value of 0. Each program uses roughly 30% of the CPU.
As it stands, both processes have to contend with a few background processes, and I want to give my two programs as good a shot at the CPU as possible. My main issue is that I have a device I talk to that has a 64-byte hardware buffer, and if I don't read from it in time, I get an overflow. I have noted this condition occurring about once every 2-3 hours of run time.
Based on my research (http://oreilly.com/catalog/linuxkernel/chapter/ch10.html) there appear to be three ways of playing around with the priority:
Set the nice value to a lower number, and therefore give each process more priority. I can do this without any modification to my code (or use the system call) using the nice command.
Use sched_setscheduler() for the entire process to a particular scheduling policy.
Use pthread_setschedparam() to individually set each pthread.
I have run into the following roadblocks:
Say I go with choice 3: how do I prevent lower-priority threads from being starved? Is there also a way to ensure that shared locks cause lower-priority threads to be promoted to a higher priority? Say I have a thread that's real-time, SCHED_RR, and it shares a lock with a default SCHED_OTHER thread. When the SCHED_OTHER thread gets the lock, I want it to execute at a higher priority to free the lock. How do I ensure this?
If a thread of SCHED_RR creates another thread, is the new thread automatically SCHED_RR, or do I need to specify this? What if I have a process that I have set to SCHED_RR, do all its threads automatically follow this policy? What if a process of SCHED_RR spawns a child process, is it too automatically SCHED_RR?
Does any of this matter given that the code only uses up 60% of the CPU? Or are there still issues with the CPU being shared with background processes that I should be concerned about, which could be causing my buffer overflows?
Sorry for the long winded question, but I felt it needed some background info. Thanks in advance for the help.
(1) pthread_mutex_setprioceiling
(2) A newly created thread inherits the scheduling policy and priority of its creating thread unless its thread attributes (e.g. pthread_attr_setschedparam / pthread_attr_setschedpolicy) direct otherwise when you call pthread_create.
(3) Since you don't yet know what causes it, it is in fairness hard for anyone to say with assurance.

Atomic writes in Linux

On Linux, when writing to a pipe, if the data is equal to or less than the memory page size (at least 4k on 64-bit RHEL), the OS guarantees that the whole write will either succeed or fail, and that there will be no corruption of the data when multiple processes are writing at the same time. This applies to writing to regular files as well.
My question is: is this atomicity a feature of the virtual memory of Linux? If yes, consider a shared memory scenario between two processes, where one process is swapped out in the middle of a write by the scheduler. Does the virtual memory subsystem ensure that the memory page to which the process was writing is also locked, so that the second process cannot write to the same page?
Is this page-level atomicity only applicable across processes, or also between threads of the same process?
No. If two processes are using shared memory, there is no implicit lock between the processes from this. You will have to arrange such a lock yourself (and if the owner of the lock is swapped out, then your other process will have to darn well wait until the owner gets swapped in and releases the lock after finishing whatever it was doing whilst holding the lock).
I don't believe there is any implicit (or explicit) rule that pages are different from other memory overall. The specific rules apply to writing to pipes and files: if all the data fits in one page, it can be written as one block by the OS. I think you'll find that the OS holds a lock on the resource it is writing to for one page at a time. If the data is bigger than a page, then when the lock is released, another process [or thread] may well be ready to run and thus "steal" the lock from the first process. Less than a page, and it does the whole write in one locked run.
But to be clear, there are no implicit locks on writes (or reads) of memory pages in general. The guarantee applies strictly to CERTAIN functions. Typically, a particular function will also hold a lock of some sort that prevents other processes from running in the same function on a given resource (e.g. a file descriptor or similar). It's perfectly possible for some other process to read from another file simultaneously with your process reading from or writing to your file, but YOUR file is atomic per some size of block that the lock is held for, not for your "write the entire works of Shakespeare at once" system call, as that could potentially block some other important process.

Synchronize two processes using two different states

I am trying to work out a way to synchronize two processes which share data.
Basically I have two processes linked using shared memory. I need process A to set some data in the shared memory area, then process B to read that data and act on it.
The sequence of events I am looking to have is:
B blocks waiting for data available signal
A writes data
A signals data available
B reads data
B blocks waiting for data not available signal
A signals data not available
All goes back to the beginning.
In other terms, B would block until it got a "1" signal, get the data, then block again until that signal went to "0".
I have managed to emulate it OK using purely shared memory, but either I block using a while loop, which consumes 100% of the CPU time, or I use a while loop with a nanosleep in it, which sometimes misses some of the signals.
I have tried using semaphores, but I can only find a way to wait for a zero, not for a one, and trying to use two semaphores just didn't work. I don't think semaphores are the way to go.
There will be numerous processes all accessing the same shared memory area, and all processes need to be notified when that shared memory has been modified.
It's basically trying to emulate a hardware data and control bus, where events are edge rather than level triggered. It's the transitions between states I am interested in, rather than the states themselves.
So, any ideas or thoughts?
Linux has its own eventfd(2) facility that you can incorporate into your normal poll/select loop. You can pass the eventfd file descriptor from process to process through a UNIX socket the usual way, or just inherit it with fork(2).
Edit 0:
After re-reading the question, I think one of your options is signals and process groups: start your "listening" processes under the same process group (setpgid(2)), then signal them all with a negative pid argument to kill(2) or sigqueue(2). Again, Linux provides signalfd(2) for polling and avoiding slow signal trampolines.
If 2 processes are involved you can use a file, shared memory, or even networking to pass the flag or signal. But if there are more processes, there may be more suitable solutions, possibly involving modifying the kernel. There is one shared memory area in your question, right? How are the signals passed now?
In Linux, all POSIX control structures (mutexes, conditions, read-write locks, semaphores) have an option such that they can also be used between processes if they reside in shared memory. For the scheme that you describe, a classic mutex/condition pair seems to fit the job well. Look into the man pages of the ..._init functions for these structures.
Linux has other proper utilities such as "futex" to handle this even more efficiently. But these are probably not the right tools to start with.
1 Single Reader & Single Writer
This can be implemented using semaphores.
In the POSIX semaphore API, sem_wait() blocks while the semaphore's count is zero; once the count is incremented using sem_post() from the other process, the wait finishes (and the count is decremented again).
In this case you have to use 2 semaphores for synchronization.
process 1 (reader):
    sem_wait(sem1);
    /* ... read the data ... */
    sem_post(sem2);
process 2 (writer):
    sem_wait(sem2);
    /* ... write the data ... */
    sem_post(sem1);
In this way you can achieve synchronization in shared memory.

Why are threads called lightweight processes?

A thread is "lightweight" because most of the overhead has already been accomplished through the creation of its process.
I found this in one of the tutorials.
Can somebody elaborate on what exactly it means?
The claim that threads are "lightweight" is - depending on the platform - not necessarily reliable.
An operating system thread has to support the execution of native code, e.g. written in C. So it has to provide a decent-sized stack, usually measured in megabytes. So if you started 1000 threads (perhaps in an attempt to support 1000 simultaneous connections to your server) you would have a memory requirement of 1 GB in your process before you even start to do any real work.
This is a real problem in highly scalable servers, so they don't use threads as if they were lightweight at all. They treat them as heavyweight resources. They might instead create a limited number of threads in a pool, and let them take work items from a queue.
As this means that the threads are long-lived and small in number, it might be better to use processes instead. That way you get address space isolation and there isn't really an issue with running out of resources.
In summary: be wary of "marketing" claims made on behalf of threads. Parallel processing is great (increasingly it's going to be essential), but threads are only one way of achieving it.
Process creation is "expensive", because it has to set up a complete new virtual memory space for the process, with its own address space. "Expensive" here means it takes a lot of CPU time.
Threads don't need to do this, just change a few pointers around, so creating one is much "cheaper" than creating a process. The reason threads don't need this is that they run in the address space and virtual memory of the parent process.
Every process must have at least one thread. So if you think about it, creating a process means creating the process AND creating a thread. Obviously, creating only a thread will take less time and work by the computer.
In addition, threads are "lightweight" because threads can interact without the need of inter-process communication. Switching between threads is "cheaper" than switching between processes (again, just moving some pointers around). And inter-process communication requires more expensive communication than threads.
Threads within a process share the same virtual memory space, but each has a separate stack, and possibly "thread-local storage" if implemented. They are lightweight because a context switch is simply a case of switching the stack pointer and program counter and restoring other registers, whereas a process context switch involves switching the MMU context as well.
Moreover, communication between threads within a process is lightweight because they share an address space.
process:
process id
environment
working directory
registers
stack
heap
file descriptor
shared libraries
instruments of interprocess communications (pipes, semaphores, queues, shared memory, etc.)
OS-specific resources
thread:
stack
registers
attributes (for the scheduler, like priority, policy, etc.)
specific thread data
OS-specific resources
A process contains one or more threads, and a thread can do anything a process can do. Threads within a process also share the same address space, so the cost of communication between threads is low: they use the same code section, data section, and OS resources. All of these features make a thread a "lightweight process".
Simply because threads share a common memory space: the memory allocated to the main thread is shared by all the other child threads.
In the case of processes, by contrast, each child process needs to allocate a separate memory space.