Context Switching to Specific Process after Specific ISR - c

Is there any (dirty) method to provoke a context switch to a specific process after a specific ISR?
In the normal situation, after an ISR, the process that was interrupted keeps running, and I have to wait for the scheduler to pick that specific process. I want to switch to the specific process right away after the ISR.
Any advice will be great. Thanks!

Construct your driver so that the process has a thread blocking on a suitable syscall (read(), ioctl()), with the ISR waking up that thread (because, say, at least one byte became available for read()).
Then, make sure that thread has the highest priority possible, and preferably uses a realtime scheduling policy (SCHED_FIFO or SCHED_RR). In practice, if your process does not run with root privileges, you'll need to start the service with root privileges, set up the thread, then drop privileges; or give the executable the CAP_SYS_NICE capability via e.g. setcap cap_sys_nice=pe binary.
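A minimal user-space sketch of that pattern, assuming a hypothetical device node /dev/mydev whose driver blocks read() until the ISR signals data (the device and priority value are assumptions, not a definitive implementation):

/* Worker thread blocks in read() on the (hypothetical) device until
 * the driver's ISR wakes it; the thread runs under SCHED_FIFO. */
#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *irq_worker(void *arg)
{
    char byte;
    int fd = open("/dev/mydev", O_RDONLY); /* hypothetical device node */
    if (fd < 0) { perror("open"); return NULL; }
    while (read(fd, &byte, 1) == 1) {
        /* The ISR made a byte available; react here with minimal delay. */
    }
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 }; /* SCHED_FIFO: 1..99 */

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    /* Fails with EPERM unless run as root or with CAP_SYS_NICE. */
    if (pthread_create(&tid, &attr, irq_worker, NULL) != 0)
        perror("pthread_create");
    else
        pthread_join(tid, NULL);
    return 0;
}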
It is technically possible for the driver to also mess with the scheduling, but I would not do that. Anything that time-critical should be done in the kernel ISR instead.
If you want to do it in userspace, because you don't want your code to be a derivative of the kernel and therefore GPL-licensed, you're on your own.

Related

Why does an OS require/maintain kernel-land threads?

Below are three threading models that I came across.
Based on the three architectures below, it is new to me that there also exists something called a kernel thread, apart from the user threads introduced as part of POSIX.1c.
[Figures omitted: the 1-1 model, the N-1 model, and the hybrid model.]
I have been through many questions on SO about kernel threads. This looks like the most relevant link for clarification.
At the process level: for every user process loaded by the Linux loader, the kernel does not allocate a corresponding kernel process to execute the machine instructions the user process contains. A user process only requests kernel-mode execution when it needs a facility from the kernel [like malloc()/fork()]. Scheduling of user processes is done by the OS scheduler, which assigns them a CPU core.
For example, a user process does not require kernel execution mode to execute an instruction like
a = a + 2; // a is a local variable in a user-level C function
My question:
1)
So, what is the purpose of a kernel-level thread? Why does the OS need to (additionally) maintain a kernel thread for the corresponding user thread of a user-level process? Does the user-mode programmer have any control over choosing any of the above three threading models for a given user process through programming?
Once I understand the answer to the first question, a relevant follow-up is:
2)
Is it the kernel thread that actually gets scheduled by the OS scheduler, rather than the user thread?
I think the use of the term kernel thread is a bit misleading in these figures. I know the figures from a book about operating system design, and if I remember correctly, they refer to the way work is scheduled by the operating system.
In the figures, each process has at least one kernel thread assigned, which is scheduled by the kernel.
The N-1 model shows multiple user-land threads that are not known to the kernel at all, because the kernel schedules only the process (or, as the figure calls it, a single kernel thread). So for the kernel, each process is one kernel thread. When the process is assigned a slice of processor time, it itself runs its multiple threads by scheduling them at its own discretion.
In the 1-1 model, the kernel is aware of the user-land threads and each thread is considered for processor time assignment by the scheduler. So instead of scheduling a whole process, the kernel switches between threads inside of processes.
The hybrid model combines both principles, where lightweight processes are actually threads known to the kernel and which are scheduled for execution by it. Additionally, they implement threads the kernel is not aware of and assign processor time in user-land.
And now, to be completely confusing: there are actual kernel threads in Linux. But as far as I understand the concept, these threads are used for kernel-space operations only, e.g. when kernel modules need to do things in parallel.
So, what is the purpose of a kernel-level thread?
To provide a vehicle for assignment of a set of resources provided by the OS. The set always includes CPU code execution on a core. Others may include disk, NIC, keyboard, mouse, or timers, as requested by syscalls from the thread. The kernel manages access to those resources as they become available and arbitrates between resource conflicts; e.g. a request for keyboard input when none is available will remove CPU execution from the thread until keyboard input becomes available.
Why do we need a kernel thread (additionally) for the corresponding user thread of a user-level process?
Without a kernel-level thread, the user thread would not be able to obtain execution; it would be dead code/stack. Note that with Linux, the concept of threads/processes can get somewhat muddied, but nevertheless the fundamental unit of execution is a thread. A process is a higher-level construct whose code must be run by at least one thread, e.g. the one raised by the OS loader to run code at the process entry point when it is first loaded.
Does the user-mode programmer have any control over choosing any of the above three threading models for a given user process through programming?
No, not without a syscall, which means leaving user mode.
Is it the kernel thread that actually gets scheduled by the OS scheduler, rather than the user thread?
Yes - it is the only thing that is given execution when it can use it, has execution removed when it cannot, and is subject to preemptive removal of the CPU if the OS scheduler requires it for something else.
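As an illustration of the 1-1 model: a minimal sketch, assuming Linux with NPTL (a 1:1 implementation), where every pthread is backed by a kernel-scheduled task with its own thread ID, distinct from the process PID:

#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *worker(void *arg)
{
    /* Each pthread gets its own kernel task ID (TID). */
    printf("pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* The main thread's TID equals the PID; the workers' TIDs differ,
     * showing that the kernel schedules each thread individually. */
    printf("main: pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
    return 0;
}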

Threads in User and kernel mode

What do we mean by a thread running in user mode versus running in kernel mode? Is this related to the thread executing instructions from user mode versus executing instructions from kernel mode? Kindly elaborate.
Also, is it possible that a thread executing in user mode is put into a suspended state and then starts executing in kernel mode? If yes, how is this possible? Until now I was only aware that a suspended thread is SUSPENDED completely, i.e. a context switch takes place so the CPU can schedule another thread.
What do we mean by a thread running in user mode versus running in kernel mode?
There is no way to know what a person means by a phrase without context. If I had to guess, I'd say they are talking about whether the thread is scheduled by a user-space scheduler or a kernel scheduler. But it's also possible they are actually asking whether the thread is running user code or kernel code.
Is this related to the thread executing instructions from user mode versus executing instructions from kernel mode? Kindly elaborate.
It could be. It also might not be. There's no way to know what a person means by a phrase without context.
Also, is it possible that a thread executing in user mode is put into a suspended state and then starts executing in kernel mode? If yes, how is this possible?
For implementations where the kernel schedules threads, the scheduler runs in kernel space. The code that actually suspends the thread typically runs in kernel space too, because it has to add the thread to the various kernel scheduler data structures. So the code that resumes the thread also runs in kernel space. Viewed at a higher level, the same flow of execution can "become" the kernel scheduler, choose a user-space thread to execute, and then "become" that thread.
Until now I was only aware that a suspended thread is SUSPENDED completely, i.e. a context switch takes place so the CPU can schedule another thread.
Right, and that's kernel code. So the same core is running user space code, then it's running kernel code, then it's running the user space code of another thread.
Modern operating systems have hardware support for separating user code from kernel code. On the x86 architecture you can set up memory pages that are not accessible to normal user code; accessing them triggers a page fault, so the OS can survive faulty programs.
Code running in kernel mode has higher privileges, but also more responsibilities, as not everything is as easily accessible as from user space. If user code gets stuck, the OS can clean it up. If kernel-mode code hangs, it might not be that easy, depending on how high the privilege level is.
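A minimal sketch of that hardware separation, under the assumption of x86-64 Linux: user code that touches an address in the kernel half of the address space takes a page fault, which the OS turns into SIGSEGV instead of letting the program damage the kernel:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_segv(int sig)
{
    /* Only async-signal-safe calls in a signal handler. */
    const char msg[] = "SIGSEGV caught: the OS blocked the access\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sigaction(SIGSEGV, &sa, NULL);

    /* 0xffff800000000000 lies in the kernel half on x86-64 Linux. */
    volatile char *kernel_addr = (volatile char *)0xffff800000000000UL;
    char c = *kernel_addr; /* page fault -> SIGSEGV, handler runs */
    printf("read %d (never reached)\n", c);
    return 0;
}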

System call in process

Suppose a process is running and it invokes a system call. Does that mean the process will now be blocked? Do all system calls block a process and change its state from running to blocked? Or does it depend on the scenario at the time?
No, it does not mean the process is blocked. Some system calls are blocking and some are not. Note, however, that for the duration of the kernel's processing of the system call, while the process continues to run, your own user code is not executing; the kernel code is executing on behalf of the process.
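A minimal sketch of the difference: the same read() syscall can block or return immediately, depending on how the file descriptor is configured:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[16];

    pipe(fds);

    /* Non-blocking: read() returns at once with EAGAIN; the process
     * keeps running (its state stays "running"). */
    fcntl(fds[0], F_SETFL, O_NONBLOCK);
    if (read(fds[0], buf, sizeof buf) < 0 && errno == EAGAIN)
        printf("non-blocking read returned immediately\n");

    /* Blocking: with O_NONBLOCK cleared, read() would put the process
     * to sleep until data arrives; we write first so it completes. */
    fcntl(fds[0], F_SETFL, 0);
    write(fds[1], "x", 1);
    read(fds[0], buf, 1);
    printf("blocking read completed\n");
    return 0;
}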
Some operating systems even have upcalls, where the user application registers some functions to be called by the kernel (back in userspace) on certain occasions. The Unix signal machinery is a very simple example, but some OSes have much more complex upcalls.
I think there are some OSes where a syscall triggers some kernel processing which may in turn trigger an upcall back into userspace, but I forget the details.
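A minimal sketch of the signal machinery as a simple upcall: the kernel calls back into a function the user program registered:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_alarm;

static void on_alarm(int sig)
{
    /* Invoked by the kernel, back in user space. */
    got_alarm = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);   /* ask the kernel to deliver SIGALRM in one second */
    pause();    /* sleep until a signal arrives */
    if (got_alarm)
        printf("upcall delivered\n");
    return 0;
}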

Any possible solution to capture process entry/exit?

I would like to capture process entry and exit, and maintain a log for the entire system (probably via a daemon process).
One approach is to read the /proc file system periodically and maintain the list, since I see no possibility of registering inotify on /proc. Also, for desktop applications, I could get the help of D-Bus: whenever a client registers to the desktop, I can capture that.
But for non-desktop applications, I don't know how to proceed apart from reading /proc periodically.
Kindly provide suggestions.
You mentioned /proc, so I'm going to assume you've got a linux system there.
Install the acct package. The lastcomm command shows all processes executed and their run duration, which is what you're asking for. Have your program "tail" /var/log/account/pacct (you'll find its structure described in acct(5)) and voila. It's only notification on termination, though. To detect start-ups, you'll need to dig through the system process table periodically, if that's what you really need.
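A minimal sketch of reading that log, assuming the kernel writes v3 accounting records (struct acct_v3 from <linux/acct.h>) and accounting is enabled; a real program would seek to the end and follow new records rather than re-read the whole file:

#include <linux/acct.h>
#include <stdio.h>

int main(void)
{
    struct acct_v3 rec;   /* assumes CONFIG_BSD_PROCESS_ACCT_V3 records */
    FILE *f = fopen("/var/log/account/pacct", "rb");
    if (!f) { perror("fopen"); return 1; }

    /* One fixed-size record per terminated process. */
    while (fread(&rec, sizeof rec, 1, f) == 1)
        printf("exited: %.16s (pid %u)\n", rec.ac_comm, rec.ac_pid);
    fclose(f);
    return 0;
}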
Maybe the safer way is to create a SuperProcess that acts as a parent and forks the children. Every time a child process stops, the parent can detect it. That is just a thought, in case that architecture fits your needs.
Of course, if the parent process is not doable then you must go to the kernel.
If you want to log really all process entries and exits, you'll need to hook into the kernel, which means modifying the kernel or at least writing a kernel module. The "Linux security modules" will certainly allow hooking into entry, but I am not sure whether it's possible to hook into exit.
If you can live with an occasional exit slipping past (if the binary is linked statically or somehow avoids your environment setting), there is a simple option: preloading a library.
The Linux dynamic linker has a feature: if the environment variable LD_PRELOAD (see this question) names a shared library, it will force-load that library into the starting process. So you can create a library that, in its static initialization, tells the daemon that a process has started, and arranges things so that the daemon will find out when the process exits.
Static initialization is most easily done by creating a global object with a constructor in C++. The dynamic linker will ensure the static constructor runs when the library is loaded.
It will also try to run the corresponding destructor when the process exits, so you could simply log the process in the constructor and destructor. But that won't work if the process dies from signal 9 (KILL), and I am not sure what other signals will do.
So instead, you should have a daemon, tell it about the process start in the constructor, and make sure it will notice when the process exits on its own. One option that comes to mind is opening a unix-domain socket to the daemon and leaving it open. The kernel will close it when the process dies, and the daemon will notice. You should take some precautions to use a high descriptor number for the socket, since some processes may assume the low descriptor numbers (3, 4, 5) are free and dup2() onto them. And don't forget to allow more file descriptors for the daemon and for the system in general.
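A minimal sketch of such a preload library, written in C with GCC's __attribute__((constructor)) instead of a C++ global object; the socket path /run/proclogd.sock and the daemon behind it are assumptions:

/* Build: gcc -shared -fPIC -o libproclog.so proclog.c
 * Use:   LD_PRELOAD=/path/to/libproclog.so some_program */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int sock = -1;

__attribute__((constructor))
static void proclog_start(void)
{
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/run/proclogd.sock", sizeof addr.sun_path - 1);

    int s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s < 0)
        return;
    /* Move to a high descriptor so programs that dup2() onto 3/4/5
     * don't clobber it, as recommended above. */
    int high = fcntl(s, F_DUPFD, 200);
    if (high >= 0) { close(s); s = high; }

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0) {
        sock = s;
        dprintf(sock, "start %d\n", getpid());
        /* The daemon sees EOF on this socket when the process dies,
         * even on SIGKILL. */
    } else {
        close(s);
    }
}

__attribute__((destructor))
static void proclog_exit(void)
{
    /* Best effort only: never runs on SIGKILL. */
    if (sock >= 0)
        dprintf(sock, "exit %d\n", getpid());
}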
Note that just by polling the /proc filesystem you would probably miss the great number of processes that only live for a split second. There are really many of them on Unix.
Here is an outline of the solution that we came up with.
We created a program that read a configuration file listing all the applications the system is able to monitor. Through a command-line interface you were able to start or stop programs. The program itself stored a table in shared memory that marked applications as running or not, and an interface that anybody could access exposed the status of these programs. The program also had an alarm system that could email/page someone or set off an alarm.
This solution does not require any changes to the kernel and is therefore a less painful solution.
Hope this helps.

Executing a user-space function from the kernel space

I'm writing a custom device driver in Linux that has to respond very rapidly to interrupts. Code to handle this already exists in a user-space implementation, but it is too slow, as it relies on software constantly checking the state of the interrupt line. After doing some research, I found that you can register these interrupt lines from a kernel module and execute a function given by a function pointer. However, the code we want to execute is in user space. Is there a way to call a user-space function from a kernel-space module?
You are out of luck with invoking user-space functions from the kernel. The kernel doesn't (and isn't supposed to) know about individual user-space application functions and logic, not to mention that each user-space application has its own memory layout, which no other process nor the kernel is allowed to invade in that way (shared objects are the exception here, but you still can't tap into them from kernel space). As for the security model: you aren't supposed to run user-space code (which is automatically considered unsafe code in the kernel context) in the kernel context in the first place, since doing so would break the kernel's security model right there. Considering all of the above, plus many other motives, you might want to reconsider your approach and focus on kernel <-> user-space IPC and interfaces, the file system, or the usermode-helper API (read below).
You can, however, invoke user-space applications from the kernel, using the usermode-helper API. The following IBM developerWorks article should get you started with it:
Kernel APIs, Part 1: Invoking user-space applications from the kernel
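A minimal kernel-side sketch of that API; the helper path /usr/local/bin/notify is a placeholder:

#include <linux/kmod.h>
#include <linux/module.h>

static int run_helper(void)
{
    char *argv[] = { "/usr/local/bin/notify", "event", NULL };
    char *envp[] = { "HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };

    /* UMH_WAIT_PROC: sleep until the user-space helper exits and
     * return its exit status. */
    return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
}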
I think the easiest way is to register a character device which becomes ready when the device has some data.
Any process that tries to read from this device is put to sleep until the device is ready, then woken up, at which point it can do the appropriate thing.
If you just want to signal readiness, a reader could just read a single null byte.
The userspace program would then just need to execute a blocking read() call, and would be blocked appropriately until you wake it up.
You will need to understand the kernel scheduler's wait queue mechanism to use this; a kernel-side sketch follows.
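A minimal kernel-side sketch of that wait-queue pattern (device registration, locking, and cleanup omitted; mydev is a placeholder name):

#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/uaccess.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(mydev_wq);
static int mydev_ready;

static irqreturn_t mydev_isr(int irq, void *dev_id)
{
    mydev_ready = 1;
    wake_up_interruptible(&mydev_wq);   /* unblock sleeping readers */
    return IRQ_HANDLED;
}

static ssize_t mydev_read(struct file *f, char __user *buf,
                          size_t len, loff_t *off)
{
    char byte = 0;

    /* Sleep here until the ISR sets mydev_ready. */
    if (wait_event_interruptible(mydev_wq, mydev_ready))
        return -ERESTARTSYS;
    mydev_ready = 0;

    /* Hand the single "readiness" byte to the blocked reader. */
    if (copy_to_user(buf, &byte, 1))
        return -EFAULT;
    return 1;
}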
Sounds like your interrupt line is already available to userspace via gpiolib? (/sys/class/gpio/...)
Have you benchmarked whether GPIO edge triggering and poll() are fast enough for you? That way you don't have to poll the status from the userspace application; edge triggering will report it via poll(). See Documentation/gpio.txt in the kernel source.
If the edge triggering via sysfs is not good enough, then the proper way is to develop a kernel driver that takes care of the time critical part and exports the results to userspace via a API (sysfs, device node, etc).
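A minimal user-space sketch of the sysfs edge-trigger approach, assuming the GPIO has already been exported and its edge file configured (e.g. echo rising > /sys/class/gpio/gpio42/edge; gpio42 is a placeholder):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[8];
    int fd = open("/sys/class/gpio/gpio42/value", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Dummy read to clear any pending state before polling. */
    read(fd, buf, sizeof buf);

    struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI)) {
        lseek(fd, 0, SEEK_SET);       /* rewind before re-reading */
        read(fd, buf, sizeof buf);
        printf("edge detected, value=%c\n", buf[0]);
    }
    close(fd);
    return 0;
}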
I am also facing the same problem. I read this document: http://people.ee.ethz.ch/~arkeller/linux/multi/kernel_user_space_howto-6.html, so I am planning to use signals. In my case there is no chance of losing signals, because:
1. the system is a closed loop: only after a signal has been handled will I get another signal, and
2. I am using POSIX real-time signals.
