How to get the received signal in kernel threads - kernel-module

I call the function signal_pending() in my kernel thread. When it returns true, I want to retrieve the signal. My code is as follows:
sigset = &current->signal->shared_pending.signal;
but the value is zero. Where is my mistake?

Related

timer_create function returning EINVAL

I am writing a sample program where my main() will create a thread and then start a timer. When the timer expires, the thread should get the signal. This is on Ubuntu 18.04.4 LTS.
My problem is that timer_create() is failing with errno set to EINVAL. My snippet of code for timer_create() is given below.
/* Create the timer */
sevp.sigev_notify = SIGEV_THREAD_ID;
sevp.sigev_signo = SIGALRM;
sevp.sigev_value.sival_int = somevalue;
sevp._sigev_un._tid = threadid;
retval = timer_create(CLOCK_MONOTONIC, &sevp, &timerid);
if (0 == retval)
{
    printf("Success in creating timer [%p]\n", timerid);
}
else
{
    printf("Error in creating timer [%s]\n", strerror(errno));
}
What am I doing wrong?
As per the Linux man page entry for timer_create(2) with SIGEV_THREAD_ID:
As for SIGEV_SIGNAL, but the signal is targeted at the thread
whose ID is given in sigev_notify_thread_id, which must be a
thread in the same process as the caller. The
sigev_notify_thread_id field specifies a kernel thread ID,
that is, the value returned by clone(2) or gettid(2). This
flag is intended only for use by threading libraries.
The thread ID (threadid in the question code) needs to be a kernel thread ID. That ID can be obtained with gettid.
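A minimal sketch of that fix, assuming glibc on Linux (the helper name create_thread_timer is mine, and the _sigev_un._tid spelling is a glibc-internal name; glibc 2.35+ also exposes it as sigev_notify_thread_id):

```c
#define _GNU_SOURCE
#include <signal.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Create a timer whose expiry signal is delivered to the calling
 * thread. The thread ID must be the kernel TID from gettid(),
 * not a pthread_t. */
static int create_thread_timer(timer_t *timerid)
{
    struct sigevent sevp;
    memset(&sevp, 0, sizeof sevp);
    sevp.sigev_notify = SIGEV_THREAD_ID;
    sevp.sigev_signo = SIGALRM;
    sevp._sigev_un._tid = gettid();   /* kernel thread ID of the caller */
    return timer_create(CLOCK_MONOTONIC, &sevp, timerid);
}
```

With a pthread_t (or any value that is not a live TID in the process) the same call fails with EINVAL, which matches the symptom in the question.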

Right way to delete a kthread that is waiting for a semaphore to be upped

I am writing a kernel module that uses kernel threads and semaphores.
I call the up(...) function on the semaphore from an interrupt handler, and then my kthread starts executing.
static int interrupt_handler_thread(void *data)
{
    /* duty cycle */
    while (!kthread_should_stop()) {
        /*
         * If the semaphore has been upped in the interrupt, we
         * acquire it here; otherwise the thread goes to sleep.
         */
        if (!down_interruptible(mysem)) {
            /* process gpio interrupt */
            dev_info(dev, "gpio interrupt detected\n");
        }
    }
    do_exit(0);
    return 0;
}
The semaphore and the thread are initialized in the module_init function. Error checking has been omitted.
...
sema_init(mysem, 0);
thread = kthread_create(interrupt_handler_thread, client, "my_int_handler");
wake_up_process(thread);
...
And during unloading a module the semaphore and the thread are removed:
/*
* After this call kthread_should_stop() in the thread will return TRUE.
* See https://lwn.net/Articles/118935/
*/
kthread_stop(thread);
/*
* Release the semaphore to return
* from down_interruptible() function
*/
up(mysem);
When I try to unload my module, it freezes in the thread's down_interruptible() call, because the thread is waiting for the semaphore to be upped by the interrupt handler, and my code never returns from kthread_stop().
It seems I need to disable the interrupt from my gpio, up the semaphore by hand, and only then call kthread_stop(). But that is a potential bug: after the semaphore is upped by hand, the thread starts executing and can call down_interruptible() again after its duty cycle.
Could anyone help me, please?
PS: I know about this question, but, it seems, this is not my case.
To operate correctly, your kthread would have to check the thread's "stop" status while it waits on the semaphore. Unfortunately, there is no "stoppable" version of the down function.
Instead of a kthread, use the workqueue mechanism. Work items already have all the features you need:
You can queue a work item from inside an interrupt handler (queue_work),
only a single work item runs at a time,
and with destroy_workqueue you can safely finalize all pending work.
Actually, workqueues are implemented on top of kthreads; see e.g. the implementation of the kthread_worker_fn function.
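A rough sketch of that pattern (kernel code, not compiled here; the function and queue names are placeholders, and the gpio-handling body and request_irq wiring are left out):

```c
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *wq;
static struct work_struct gpio_work;

static void gpio_work_fn(struct work_struct *work)
{
        pr_info("gpio interrupt detected\n");  /* process the interrupt */
}

static irqreturn_t gpio_irq_handler(int irq, void *data)
{
        queue_work(wq, &gpio_work);    /* safe from interrupt context */
        return IRQ_HANDLED;
}

static int __init mymod_init(void)
{
        wq = create_singlethread_workqueue("my_int_handler");
        if (!wq)
                return -ENOMEM;
        INIT_WORK(&gpio_work, gpio_work_fn);
        /* request_irq(...) for the gpio line goes here */
        return 0;
}

static void __exit mymod_exit(void)
{
        /* free_irq(...) first, so no new work can be queued */
        destroy_workqueue(wq);   /* drains and waits for queued work */
}

module_init(mymod_init);
module_exit(mymod_exit);
MODULE_LICENSE("GPL");
```

Because destroy_workqueue waits for queued work to finish, the shutdown race from the question (upping the semaphore by hand and hoping the thread notices the stop flag in time) disappears.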

Handling SIGINT in a C program that uses syslog() on ARM causes the program execution to jump backwards a few lines

Consider the following program. It busy-waits inside a while loop until the SIGINT signal handler unsets the loop's condition, leaving the loop and allowing main() to return normally instead of the signal just killing the process:
#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>
#include <stdbool.h>
#include <signal.h>
#include <syslog.h>

#define RES_ERROR -1
#define RES_OK 1
#define ARG_MAX_SIZE 30
#define MAX_BUFFER 64

static bool module_running = true;

static void SigHandlerIMU(int signal_number);
static int ProcessSignalConfig(void);

static void SigHandlerIMU(int signal_number)
{
    if(signal_number == SIGINT){
        module_running = false;
    }
    return;
}/*SigHandlerIMU*/

static int ProcessSignalConfig(void)
{
    int ret_value = RES_ERROR;
    struct sigaction signal_handler;

    syslog(LOG_USER | LOG_NOTICE, "Catching SIGINT...\n");
    signal_handler.sa_handler = SigHandlerIMU;
    if(sigaction(SIGINT, &signal_handler, NULL) == -1){
        syslog(LOG_USER | LOG_ERR, "can't catch SIGINT\n");
    }
    else{
        ret_value = RES_OK;
    }
    return ret_value;
}/*ProcessSignalConfig*/

int main(int argcount, char const *argvalue[])
{
    int main_return_val = RES_ERROR;
    struct sigaction signal_handler;

    (void)setlogmask(LOG_UPTO(LOG_DEBUG));
    openlog(NULL, LOG_PERROR | LOG_PID, LOG_USER);
    syslog(LOG_USER | LOG_NOTICE, "Starting program...\n");
    if(ProcessSignalConfig() < 0){
        syslog(LOG_USER | LOG_ERR, "Failed catching process signals\n");
        module_running = false;
    }
    syslog(LOG_USER | LOG_DEBUG, "Entering loop...\n");
    while(module_running == true){
    }
    syslog(LOG_USER | LOG_DEBUG, "Exiting...\n");
    closelog();
    return main_return_val;
} /*main*/
I am getting different behaviour depending on the target architecture.
Compiling with gcc signal_test.c -o signal_test, the program returns immediately on Ctrl+C, emitting the last call to syslog():
signal_test[4620]: Starting program...
signal_test[4620]: Catching SIGINT...
signal_test[4620]: Entering loop...
^Csignal_test[4620]: Exiting...
However, compiling with arm-linux-gnueabihf-gcc signal_test.c -o signal_test, it seems to jump back to the call to ProcessSignalConfig() and resume from there (observe the repeated traces):
signal_test[395]: Starting program...
signal_test[395]: Catching SIGINT...
signal_test[395]: Entering loop...
^Csignal_test[395]: Catching SIGINT...
signal_test[395]: Entering loop...
signal_test[395]: Exiting...
EDIT: I have been doing further tests and, if I use printf() everywhere instead of syslog(), the program also runs fine on ARM. I have updated the question title to reflect the current situation.
You say that your program "resumes" after the signal handler returns, but in fact the program never stops running, because it is busy-waiting. If you want to wait for a signal to arrive, you should use the function sigsuspend, which actually blocks the process until a signal is delivered to it; see the man page. In any case, the unexpected behaviour could be caused by the flag checked in the while loop: it is shared with the signal handler, so the variable should be atomic and declared as follows: static volatile sig_atomic_t module_running;.
Signal handlers are not magic. They run in user mode, so they must be called from user code; but a pending signal can only be detected in kernel mode, so the kernel marks the signal as pending for the process and arranges for the handler to be called once the process is running in user mode again.
That can only happen while the process is executing a system call, or when the kernel preempts the process because it has been running too long. Because the handler must run in user mode, the designers of UNIX arranged (and this design, I'm afraid, has prevailed until now) for signal handlers to be invoked just as the kernel is about to return to the user process: the kernel mangles the user stack so that the return from the system call jumps to the signal handler, and the mangled stack then returns to the interrupted code as if the interruption had been a true hardware interrupt. This allows everything to happen in user code without compromising the ability to run in kernel mode.
When the process is stopped in a system call, the mechanism is simple: the process is not executing user code, and the signal handler is called at a very specific point, just after the syscall returns (the system call returns -1 with errno set to EINTR, but you can actually check this only after the signal handler has already been called). When the process is preempted, however, it can be anywhere in its code. The stack mangling mentioned above has to deal with this and be prepared to restore the full CPU state (as happens on return from a hardware interrupt), so that the signal handler can run at any point in the user code and leave things intact. This is no problem, because the kernel saved that state when it interrupted the process; the only difference is that the full restore is postponed until the signal handler has finished, after the return to user mode.
The code that manages signal handlers is normally installed by the kernel in the user-mode memory map (on BSD systems it sits at the top of the main thread's stack, placed before the environment and the exec(2) argv and argc parameters), but it can be anywhere in the program's virtual address space.
Your case is pure execution in user code, so the process does not get the signal until the kernel preempts it. That can happen anywhere in the loop: the timer interrupt stops the process, the process is rescheduled, and just before the return to user mode the stack is mangled to force a jump to the signal-handler manager. The switch to user mode then makes the program jump to that manager, which calls your signal handler, restores the full CPU state, and returns to the place where the kernel interrupted the process, as if a hardware interrupt had caused the interruption.

Threads in C in Windows

The perfect way to run and terminate threads in Windows using C is mentioned in the answer below!
There are two problems I'm facing with the current implementation:
I can't forcibly stop the thread; for some reason it still continues. For example, I have a for loop that runs a function of which this thread is a part. When this function has been called 4-5 times, I see multiple animations on the screen, suggesting that the previous threads didn't stop even though I called the TerminateThread function at the end of my function.
At times the thread doesn't run at all and no animation is displayed on the screen. If my function's code runs really fast (or for some other reason), I suspect the thread is being killed before it has initialized. Is there a way to wait until the thread has initialized?
How do I fix these issues?
Correct way of terminating threads is to signal the thread and let it finish gracefully, i.e.:
(updated to use interlocked intrinsics instead of a volatile flag, as per #IInspectable's comment below)
#include <windows.h>

HANDLE eventHnd;
HANDLE threadHnd;
LONG isStopRequested = 0; // 1 = "stop requested"

static DWORD WINAPI thread_func(LPVOID lpParam)
{
    for (;;)
    {
        // wait until signalled from a different thread
        WaitForSingleObject(eventHnd, INFINITE);

        // end thread if stop requested
        if (InterlockedCompareExchange(&isStopRequested, 0, 0) == 1)
            return 0;

        // otherwise do some background work
        Sleep(500);
    }
}
The eventHnd variable is initialized using the CreateEvent function, and isStopRequested is a flag you set from your main program:
// this creates an auto-reset event, initially set to 'false'
eventHnd = CreateEvent(NULL, FALSE, FALSE, NULL);
InterlockedExchange(&isStopRequested, 0);
threadHnd = CreateThread(NULL, 0, thread_func, NULL, 0, NULL);
So, whenever you want to tell the thread to perform a task, you simply set the event:
SetEvent(eventHnd);
And when you want to end the thread, you set the flag, signal the event, and then wait for the thread to finish:
// request stop
InterlockedExchange(&isStopRequested, 1);
// signal the thread if it's waiting
SetEvent(eventHnd);
// wait until the thread terminates
WaitForSingleObject(threadHnd, 5000);

How to get rid of an error when quitting a pthread while it's in sleep()?

First of all, I'd like to apologize for the confusing title. Here's my question:
I have a main function which spawns another thread that only works from time to time, with sleep(3) calls in between.
Inside main.c I have a while loop that runs infinitely, so to cancel the program I have to press Ctrl+C. To catch that, I added a signal handler at the beginning of the main function:
signal(SIGINT, quitProgram);
This is my quitProgram function:
void quitProgram() {
    printf("CTRL + C received. Quitting.\n");
    running = 0;
    return;
}
So when running == 0, the loop is left.
It all seems to work, at least until the thread mentioned above has started. When I hit Ctrl+C after the thread has started, I get a strange error message:
*** longjmp causes uninitialized stack frame ***: ./cluster_control terminated
======= Backtrace: =========
/lib/i386-linux-gnu/libc.so.6(+0x68e4e)[0xb7407e4e]
/lib/i386-linux-gnu/libc.so.6(__fortify_fail+0x6b)[0xb749a85b]
/lib/i386-linux-gnu/libc.so.6(+0xfb70a)[0xb749a70a]
/lib/i386-linux-gnu/libc.so.6(__longjmp_chk+0x42)[0xb749a672]
./cluster_control[0x8058427]
[0xb76e2404]
[0xb76e2428]
/lib/i386-linux-gnu/libc.so.6(nanosleep+0x46)[0xb7454826]
/lib/i386-linux-gnu/libc.so.6(sleep+0xcd)[0xb74545cd]
./cluster_control[0x804c0e6]
./cluster_control[0x804ae61]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0xb73b8a83]
./cluster_control[0x804a331]
When I try to debug it using gdb I get the following:
(gdb) where
#0 0xb7fdd428 in __kernel_vsyscall ()
#1 0xb7d4f826 in nanosleep () at ../sysdeps/unix/syscall-template.S:81
#2 0xb7d4f5cd in __sleep (seconds=0) at ../sysdeps/unix/sysv/linux/sleep.c:137
#3 0x0804c0e6 in master_main (mastBroad_sock=3, workReady_ptr=0xbffff084) at master.c:150
#4 0x0804ae61 in main () at main.c:84
The line 150 in master.c is this:
sleep(PING_PERIOD);
So my guess at what's happening: the main thread exits while the master_main thread is sleeping, and this causes the error. But how can I fix this? Is there a better way to let the master_main thread run every few seconds, or to prevent the main thread from exiting while master_main is still sleeping?
I tried to use a mutex, but it didn't work (I locked the mutex before master_main sleeps and unlocked it afterwards, and the exiting main thread needed that mutex to exit).
Additionally, I passed a pointer with a state from main to master_main, so I could set the state to "exit" before exiting the main method, but that didn't work either.
Are there any other ideas? I'm running Linux and the programming language is C99.
Update 1
Sorry guys, I think I gave you wrong information. The method that causes trouble isn't even inside a thread. Here's an excerpt from my main method:
int main() {
    [...]
    signal(SIGINT, quitProgram);
    while (running)
    {
        // if system is the current master
        if (master_i)
        {
            master_main(mastBroad_sock, &workReady_condMtx);
            pthread_mutex_lock(&(timeCount_mtx_sct.mtx));
            master_i = 0;
            pthread_mutex_unlock(&(timeCount_mtx_sct.mtx));
        }
        [...]
    }
    return 0;
}
And here is an excerpt from master_main, which I guess is the problem:
int master_main(int mastBroad_sock, struct cond_mtx *workReady_ptr) {
    [...]
    while (master_i)
    {
        // do something
        sleep(5); // to perform this loop only every 5 seconds, this is line 150 in master.c
    }
}
Update 2
Forgot to add the code that catches Ctrl+C inside main.c:
void quitProgram() {
    printf("CTRL + C received. Quitting.\n");
    running = 0;
    return;
}
The simplest solution that comes to mind is to have a global flag that tells the thread the program is shutting down: when the main function wants to shut down, it sets the flag and then waits for the thread to terminate.
See Joining and Detaching Threads. Depending on what the thread is doing, you might also want to take a look at Condition Variables.
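A minimal sketch of that flag-plus-join pattern (the names worker, keep_running, and run_then_shutdown are mine): the worker checks the flag on every cycle and exits its loop normally, and the shutting-down side joins the thread instead of returning while the worker may still be inside a sleep.

```c
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t keep_running = 1;

static void *worker(void *arg)
{
    int *iterations = arg;
    while (keep_running) {          /* check the shutdown flag each cycle */
        ++*iterations;
        usleep(10000);              /* stand-in for the periodic work */
    }
    return NULL;                    /* exit gracefully, no forced teardown */
}

/* Run the worker briefly, then request shutdown and join, so the
 * caller cannot return while the worker is still sleeping. */
static int run_then_shutdown(void)
{
    int iterations = 0;
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, &iterations) != 0)
        return -1;
    usleep(100000);                 /* let the worker do a few cycles */
    keep_running = 0;               /* tell the thread to finish */
    pthread_join(tid, NULL);        /* wait until it has terminated */
    return iterations;
}
```

Because main only returns after pthread_join, the process cannot tear down libc while the worker is inside sleep(), which is the situation behind the longjmp backtrace in the question.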