How to detect a system shutdown event on Linux in C

I'm developing an application on Debian 10. My application needs to save its current data to a file before the system shuts down.
I can't catch the shutdown event; any ideas to help me resolve this problem?
Edit:
Here is my code:
struct sigaction action;
memset(&action, 0, sizeof(action));
action.sa_handler = proc_term;
sigaction(SIGTERM, &action, NULL);
signal(SIGINT, proc_end);
signal(SIGTERM, proc_term);
Below is the code for the proc_term function:
void proc_term(int sig) /* a signal handler must take the signal number */
{
    (void)sig;
    LOG_WARN("Process was force killed");
    is_forcedexit = 1;
}


Why is my signal handler not invoked more than once here?

jmp_buf functjmp;

void sigsegv_handler(int sig) {
    sio_printf("Caught sigsegv!\n");
    siglongjmp(functjmp, 2);
    return;
}

void foo(unsigned val) {
    assert(0);
    sio_printf("entered!\n");
}

int main() {
    struct sigaction action;
    action.sa_handler = sigsegv_handler;
    sigemptyset(&action.sa_mask); /* Block sigs of type being handled */
    sigaddset(&action.sa_mask, SIGSEGV);
    action.sa_flags = SA_RESTART; /* Restart syscalls if possible */
    if (sigaction(SIGSEGV, &action, NULL) < 0) {
        sio_fprintf(stderr, "handler error!\n");
    }
    sigset_t prev_mask;
    sigprocmask(SIG_BLOCK, NULL, &prev_mask);
    if (sigsetjmp(functjmp, 0) == 0) {
        foo(*(unsigned *)0x8);
    } {
        sigprocmask(SIG_BLOCK, &prev_mask, NULL);
        sio_printf("jump handled!\n");
        foo(*(unsigned *)0x8);
    }
    sio_fprintf(stderr, "how did it come here?!\n");
}
I've been debugging this code with gdb, and I cannot figure out why the program will not handle the second SIGSEGV signal with my own handler, assuming no other signals were received or sent by the program. All sio-prefixed functions are async-signal-safe variants of their stdio counterparts.
Currently, I surmise it has to do with something I'm missing about returning from the signal handler, which longjmp doesn't do at all.
Short answer: it is normally not possible to resume a C program after SIGSEGV. You might get more mileage with C++.
Long answer: see the discussion in Coming back to life after Segmentation Violation.
Assuming you are OK with the risk of undefined behavior:
It is possible to re-enable handling of SIGSEGV. The core issue is twofold. First, the code explicitly blocks SIGSEGV during the handler (the sigaddset on sa_mask). Second, by default the signal being handled is itself blocked until the handler returns, and in the OP's code the handler never returns (because of the siglongjmp). Since sigsetjmp was called with savesigs = 0, siglongjmp does not restore the signal mask either, so SIGSEGV stays blocked after the jump.
Both issues can be addressed by changing the original code.
// Zero-initialize all fields ({ } without the 0 is not valid C before C23).
struct sigaction action = { 0 };
action.sa_handler = sigsegv_handler;
sigemptyset(&action.sa_mask);
// Not needed: sigaddset(&action.sa_mask, SIGSEGV);
// Add SA_NODEFER to disable the deferred processing of SIGSEGV.
action.sa_flags = SA_RESTART | SA_NODEFER; /* Restart syscalls if possible */
// rest of code here
if (sigaction(SIGSEGV, &action, NULL) < 0) {
    sio_fprintf(stderr, "handler error!\n");
}
...

Change application core dump directory with c program

I have a scenario where I want to change the directory used for core dumps by the current application, from a C program.
One option is to chdir() to the specified directory, but that changes the application's current working directory. I'm looking for an API that changes the directory for core dumps only.
You can change the core dump pattern globally through /proc/sys/kernel/core_pattern.
But if you only want to change the core dump directory of one process, you can do what the Apache web server does: register a signal handler that changes the current directory right before the core is dumped:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/resource.h>

#define COREDUMP_DIR "/tmp"

static void sig_coredump (int sig)
{
    struct sigaction sa;

    // Change to the directory we want the core to be dumped in
    chdir (COREDUMP_DIR);

    // Clear the signal handler for the signal
    memset (&sa, 0, sizeof (sa));
    sa.sa_handler = SIG_DFL;
    sigemptyset (&sa.sa_mask);
    sigaction (sig, &sa, NULL);

    // Send the signal again
    raise (sig);
}

int main (int argc, char **argv)
{
    struct sigaction sa;

    // Set up the signal handler for all signals that
    // can cause the core dump
    memset (&sa, 0, sizeof (sa));
    sa.sa_handler = sig_coredump;
    sigemptyset (&sa.sa_mask);
    sigaction (SIGSEGV, &sa, NULL);
    sigaction (SIGBUS, &sa, NULL);
    sigaction (SIGABRT, &sa, NULL);
    sigaction (SIGILL, &sa, NULL);
    sigaction (SIGFPE, &sa, NULL);

    // Enable core dumps
    struct rlimit core_limit;
    core_limit.rlim_cur = RLIM_INFINITY;
    core_limit.rlim_max = RLIM_INFINITY;
    if (setrlimit (RLIMIT_CORE, &core_limit) == -1) {
        perror ("setrlimit");
    }

    // Trigger a core dump
    raise (SIGSEGV);
    return 0;
}
Note that because this relies on the crashing application itself setting up and being able to run the signal handler, it can't be 100% bullet-proof: the signal may be delivered before the handler is set up, or the crash may have corrupted the state needed to run the handler at all.

Signals in C while using swapcontext

I've tried to implement a user-level thread library using the makecontext(), swapcontext() and getcontext() functions. I've read that good practice is to drive the scheduler from a signal (with a cyclic timer), and to block that signal while a new thread is being added.
My problem is this: I can implement functions to block signals, but they do not work once I use context switching.
void initSignals()
{
    printf("initSignals\n");
    act.sa_handler = schedule;
    sigemptyset(&act.sa_mask);
    act.sa_flags = 0;
    sigaction(SIGALRM, &act, &oact);
}

void blockSignals()
{
    printf("blockSignals\n");
    //sigset_t sig_mask;
    sigemptyset(&sig_mask);
    sigaddset(&sig_mask, SIGALRM);
    sigprocmask(SIG_BLOCK, &sig_mask, NULL);
}

void unblockSignals()
{
    printf("unblockSignals\n");
    //sigset_t sig_mask;
    sigemptyset(&sig_mask);
    sigaddset(&sig_mask, SIGALRM);
    sigprocmask(SIG_UNBLOCK, &sig_mask, NULL);
    runTimer();
}
The sigset_t sig_mask is global now, but I also tried making it a local variable.
Moreover, I tried other signals (SIGPROF, SIGVTALRM), because my cyclic timer looks like this:
void runTimer()
{
    printf("runTimer\n");
    it.it_interval.tv_sec = 1;
    it.it_interval.tv_usec = 0;
    it.it_value.tv_sec = 1;
    it.it_value.tv_usec = 0;
    setitimer(ITIMER_REAL, &it, NULL);
}
And setitimer() supports these three types of alarm.
So the question is: how do I block signals while the code is using the context-switching functions? Is it possible? Is there perhaps another function that can do this?
I thought that signal settings applied to the whole process, so I was surprised that it is not working...

Custom system() which hangs if I restart an /etc/init.d script?

I have a multi-threaded application that can be reached via telnet and ssh. In the application, I restart one of the init scripts using the custom system() call below. It seems the child process stays active: if I log out of the telnet session, it hangs, i.e. it cannot log out. This happens only when I restart the script using this system() call. Is there something wrong with my system() function?
int system(const char *command)
{
    int wait_val, pid;
    struct sigaction sa, save_quit, save_int;
    sigset_t save_mask;

    syslog(LOG_ERR, "SJ.. calling this system function\r\n");
    if (command == 0)
        return 1;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = SIG_IGN;
    /* __sigemptyset(&sa.sa_mask); - done by memset() */
    /* sa.sa_flags = 0; - done by memset() */
    sigaction(SIGQUIT, &sa, &save_quit);
    sigaction(SIGINT, &sa, &save_int);

    __sigaddset(&sa.sa_mask, SIGCHLD);
    sigprocmask(SIG_BLOCK, &sa.sa_mask, &save_mask);

    if ((pid = vfork()) < 0) {
        perror("vfork fails: ");
        wait_val = -1;
        goto out;
    }
    if (pid == 0) {
        sigaction(SIGQUIT, &save_quit, NULL);
        sigaction(SIGINT, &save_int, NULL);
        sigprocmask(SIG_SETMASK, &save_mask, NULL);

        struct sched_param param;
        param.sched_priority = 0;
        sched_setscheduler(0, SCHED_OTHER, &param);
        setpriority(PRIO_PROCESS, 0, 5);

        execl("/bin/sh", "sh", "-c", command, (char *) 0);
        _exit(127);
    }

#if 0
    __printf("Waiting for child %d\n", pid);
#endif
    if (wait4(pid, &wait_val, 0, 0) == -1)
        wait_val = -1;

out:
    sigaction(SIGQUIT, &save_quit, NULL);
    sigaction(SIGINT, &save_int, NULL);
    sigprocmask(SIG_SETMASK, &save_mask, NULL);
    return wait_val;
}
Any ideas on how to debug whether this system() call is hanging?
I realized this happens because file descriptors are inherited across fork().
Since my custom system() is nothing but fork() and exec(), and there are plenty of sockets in my application, these socket file descriptors get inherited by the child process.
My assumption is that the child process can't exit because it is waiting for the parent to close the file descriptors, or those descriptors are in a state where they cannot be closed. Not sure what those states are, though.
So, here is the interesting link I found -
Call system() inside forked (child) process, when parent process has many threads, sockets and IPC
Solution -
linux fork: prevent file descriptors inheritance
I'm not sure I can do this in a big application where sockets are opened in thousands of places. So here is what I did.
My Solution -
I created a separate process/daemon that listens for the command from the parent application. This communication is based on socket. Since, it is a separate application/daemon it doesn't affect the main application which is running multiple threads and has a lot of opened sockets. This worked for me.
I believe that this problem will be fixed once I do:
fcntl(fd, F_SETFD, FD_CLOEXEC);
Any comments are welcome here.
Is this a fundamental property of Linux and C, i.e.
that all file descriptors are inherited by default?
Why does the Linux kernel allow this? What advantage do we get out of it?
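On the last question: descriptors are inherited by default because that is what makes shell redirection and pipelines work (the shell opens fds 0-2 and then exec's the child), so the kernel offers opt-out via FD_CLOEXEC rather than opt-in; new descriptors can also get the flag atomically at creation with O_CLOEXEC or SOCK_CLOEXEC. The fcntl() fix mentioned above can be wrapped in a small helper (helper name is illustrative):

```c
#include <fcntl.h>

/* Mark fd close-on-exec so it is not leaked into exec'ed children. */
static int set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}
```

Preserving the existing F_GETFD flags matters: calling F_SETFD with only FD_CLOEXEC would silently clear any other descriptor flags.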

reset sigaction to default

In Android, the bionic loader sets a default signal handler for every process on startup:
void debugger_init()
{
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    act.sa_sigaction = debugger_signal_handler;
    act.sa_flags = SA_RESTART | SA_SIGINFO;
    sigemptyset(&act.sa_mask);

    sigaction(SIGILL, &act, NULL);
    sigaction(SIGABRT, &act, NULL);
    sigaction(SIGBUS, &act, NULL);
    sigaction(SIGFPE, &act, NULL);
    sigaction(SIGSEGV, &act, NULL);
    sigaction(SIGSTKFLT, &act, NULL);
    sigaction(SIGPIPE, &act, NULL);
}
I would like to set these signals back to their defaults, meaning I want to drop these handlers so that the default action (core dump) takes place.
How do I revert what this function did? I want it to be as if the function had never been called.
Read signal(7), sigaction(2) and perhaps signal(2).
You could call
signal(SIGILL, SIG_DFL);
signal(SIGABRT, SIG_DFL);
and so on early in your main (which is entered after dynamic loading)
You could also use sigaction with sa_handler set to SIG_DFL
Of course, things are more tricky if you want to default handle these signals before your main, e.g. in some static constructor!
I found that it can lead to unexpected behavior when sigaction and signal are mixed to set handlers in one process.
From signal(2), linked above (it wouldn't surprise me if this warning wasn't there 8 years ago):
WARNING: the behavior of signal() varies across UNIX versions,
and has also varied historically across different versions of
Linux. Avoid its use: use sigaction(2) instead.
Looking at https://docs.oracle.com/cd/E19455-01/806-5257/tlib-49639/index.html
int pthread_sigmask(int how, const sigset_t *new, sigset_t *old);
When the value of new is NULL, the value of how is not significant and the signal mask of the thread is unchanged. So, to inquire about currently blocked signals, assign a NULL value to the new argument.
So I guess you could use that to get the current sigmask and just wipe each one
sigset_t tempSet;
pthread_sigmask(SIG_SETMASK, NULL, &tempSet);
sigdelset(&tempSet, /*Signal you don't want to handle*/);
sigdelset(&tempSet, /*repeat for each signal*/);
pthread_sigmask(SIG_SETMASK, &tempSet, NULL);
It's pretty much the same thing with sigaction, which can query the current action for a signal. From sigaction(2):
sigaction() can be called with a NULL second argument to query
the current signal handler.
It's not clear to me the ramifications of, in my case, having SIGKILL in the first call to sigaction (querying SIGKILL is permitted, but attempts to change its disposition fail):
struct sigaction sigAct;
sigaction(SIGKILL, NULL, &sigAct);
sigAct.sa_handler = SIG_DFL; // Ensure default handling of Kill signal
sigaction(/*Signal you don't want to handle*/, &sigAct, NULL);
sigaction(/*repeat for each signal*/, &sigAct, NULL);
siggetmask is obsoleted by sigprocmask, and sigprocmask is specified for single-threaded processes only; multi-threaded programs should use pthread_sigmask.
