I have a console-mode Windows application (ported from Unix) that was originally designed to do a clean exit when it received ^C (Unix SIGINT). A clean exit in this case involves waiting, potentially quite a long time, for remote network connections to close down. (I know this is not the normal behavior of ^C but I am not in a position to change it.) The program is single-threaded.
I can trap ^C with either signal(SIGINT) (as under Unix) or with SetConsoleCtrlHandler. Either works correctly when the program is run under CMD.EXE. However, if I use the "bash" shell that comes with MSYS (I am using the MinGW environment to build the program, as this allows me to reuse the Unix makefiles) then the program is forcibly terminated some random, short time (less than 100 milliseconds) after the ^C. This is unacceptable, since as I mentioned, the program needs to wait for remote network connections to close down.
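For reference, the handler installation looks roughly like this (a simplified sketch; the names are illustrative, and the real handler kicks off the network shutdown rather than just setting a flag):

#include <windows.h>
#include <signal.h>

static volatile sig_atomic_t got_sigint = 0;

static void posix_handler(int sig)              /* used with signal(SIGINT, ...) */
{
    (void)sig;
    got_sigint = 1;
}

static BOOL WINAPI console_handler(DWORD type)  /* used with SetConsoleCtrlHandler */
{
    if (type == CTRL_C_EVENT) {
        got_sigint = 1;
        return TRUE;    /* tell Windows the event was handled */
    }
    return FALSE;
}

int main(void)
{
    signal(SIGINT, posix_handler);                 /* Unix-style */
    /* or: */
    SetConsoleCtrlHandler(console_handler, TRUE);  /* Windows-style */
    /* ... main loop checks got_sigint and starts the clean shutdown ... */
    return 0;
}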
It is very likely that people will want to run this program under MSYS bash. Also, this effect breaks the test suite. I have not been able to find any way to work around the problem either from within the program (ideal) or by settings on the shell (acceptable). Can anyone recommend anything?
I had the exact same problem - I had written a program with a SIGINT/SIGTERM handler. That handler did clean-up work which sometimes took a while. When I ran the program from within msys bash, ctrl-c would cause my SIGINT handler to fire, but it would not finish - the program was terminated ("from the outside", as it were) before it could complete its clean-up work.
Building on phs's answer, and this answer to a similar question: https://stackoverflow.com/a/23678996/2494650, I came up with the following solution. It's insanely simple, and it might have some side-effects that I've yet to discover, but it fixed the problem for me.
Create a ~/.bashrc file with the following line:
trap '' SIGINT
That's it. This traps the SIGINT signal and prevents msys bash from terminating your program "from the outside". However, it somehow still lets SIGINT through to your program, allowing it to do its graceful cleanup/shutdown. I can't tell you exactly why it works this way, but it does - at least for me.
Good luck!
This could be due to the infamous mintty "Input/Output interaction with alien programs" problem (aka mintty issue #56). In this case it is manifesting as Ctrl-C abruptly killing the program rather than being passed down to the program as a signal to be caught and handled. Evidence for this theory is based on zwol's extensive explanation: "console-mode Windows application", "[application is] designed to do a clean exit when it received ^C", "[application] works correctly when the program is run under CMD.EXE" but "[when using the terminal] that comes with MSYS [...] program is forcibly terminated" (at the time of writing (2018) MSYS defaults to using mintty as its terminal).
Unfortunately mintty isn't a full Windows console replacement and various behaviours expected by "native" Windows programs are not implemented. However, you might have some joy wrapping such native programs in winpty when running them within mintty...
Other questions also describe this behaviour: see https://superuser.com/questions/606201/how-to-politely-kill-windows-process-from-cygwin and https://superuser.com/questions/1039098/how-to-make-mintty-close-gracefully-on-ctrl-c .
As a workaround, instead of trying to trap the CTRL-C event, which is also being propagated to the shell, I'd propose turning off ENABLE_PROCESSED_INPUT on stdin so that CTRL-C is reported as keyboard input instead of as a signal:
DWORD mode;
HANDLE hstdin = GetStdHandle(STD_INPUT_HANDLE);
GetConsoleMode(hstdin, &mode);
SetConsoleMode(hstdin, mode & ~ENABLE_PROCESSED_INPUT); /* disable CTRL-C processing as a signal */
You could then process keyboard input in your main thread while the rest of the program does its thing in a separate thread, and set an event to trigger cleanup when CTRL-C is received, as in the sketch below.
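For example (a rough sketch; error handling and the worker thread are omitted, and the cleanup event is only indicated in a comment):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hstdin = GetStdHandle(STD_INPUT_HANDLE);
    DWORD mode;
    GetConsoleMode(hstdin, &mode);
    SetConsoleMode(hstdin, mode & ~ENABLE_PROCESSED_INPUT);  /* CTRL-C becomes a key event */

    for (;;) {
        INPUT_RECORD rec;
        DWORD nread;
        if (!ReadConsoleInput(hstdin, &rec, 1, &nread) || nread == 0)
            break;
        if (rec.EventType == KEY_EVENT &&
            rec.Event.KeyEvent.bKeyDown &&
            rec.Event.KeyEvent.wVirtualKeyCode == 'C' &&
            (rec.Event.KeyEvent.dwControlKeyState &
             (LEFT_CTRL_PRESSED | RIGHT_CTRL_PRESSED))) {
            printf("CTRL-C pressed, signal the worker thread to clean up\n");
            /* e.g. SetEvent(hCleanupEvent); */
            break;
        }
    }
    return 0;
}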
When you run your program with MSYS bash, do you run the executable directly, or is there a wrapping (bash) shell script?
If so, it may be registering a custom Ctrl-C handler with the trap command (that does a sleep followed by a kill.) If such a thing exists, alter or remove it.
If there is no trap registered, or there is no wrapping script, consider making such a script and adding your own trap to override the default behavior. You can see an example of how to use it in bash's man page (in the SHELL BUILTINS section).
Ctrl-C is SIGINT? I thought Ctrl-Z was SIGINT, but Ctrl-C is SIGTERM. Check that.
Do you have a CYGWIN environment setting (in control panel/environment variables)? Try setting CYGWIN=notty, then open a new MSYS bash shell - does the problem persist?
I noticed that the Unix bc program does not print its usual prompt (the three symbols ">>> ") when started as a background process (as in "bc &"). This is confusing to me because, from my limited knowledge of Unix, starting a program as a background job keeps it running until it tries to read from stdin, at which point it receives a signal telling it to stop.
But running bc as a background job ("bc &") does not even print the ">>> " prompt before stopping itself, which tells me that the program handles this somehow. I am curious how it does this. When I wrote a naive program that only tries to emulate the input/output interaction, it still prints ">>> " before being suspended, which doesn't look very clean at all, and the behavior gets even more bizarre on certain shells.
I tried looking through the Unix bc source code and was able to trace the code to the parts where it prints the ">>> " prompt, but how it avoids printing the prompt when started as a background process was beyond me. I know that obviously you would never start an interactive program in the background, as that goes against its intended use and common sense, but I am more interested in the concepts behind it: whether this is implemented with signal handling, some more advanced input/output stream buffering, or some other Unix concept that I am not familiar with.
The first thing your version of bc does is call the tcsetattr function. This function, when called from a background process, causes the SIGTTOU signal to be sent to the process, which by default causes the process to stop.
Any program that manipulates terminal attributes (vim, bash, anything that uses readline or curses, ...) will probably behave exactly the same way.
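A tiny demonstration (a sketch; it assumes a shell with job control): run it as "./a.out &" and the shell should report it as stopped before any prompt appears, because tcsetattr() from a background process raises SIGTTOU.

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t;

    tcgetattr(STDIN_FILENO, &t);           /* read the current terminal attributes */
    tcsetattr(STDIN_FILENO, TCSANOW, &t);  /* writing them back from a background
                                              job raises SIGTTOU, stopping us here */
    printf(">>> ");                        /* only reached once brought to the foreground */
    fflush(stdout);
    return 0;
}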
I frequently work with PostgreSQL for debugging, and it uses SIGINT internally for some of its inter-backend signalling.
As a result, when running certain backends under gdb, execution tends to get interrupted a lot. One can use the handle command to make sure SIGINT is passed to the program and is not captured by gdb... but then gdb doesn't respond to control-C on the command line, since that sends SIGINT.
If you run:
handle SIGINT noprint nostop pass
gdb will complain
SIGINT is used by the debugger.
Are you sure you want to change it? (y or n) y
Is there any way to get gdb to use a different interrupt signal? Or any alternative method that'd let me have gdb ignore SIGINT?
(This isn't an issue for most PostgreSQL backend debugging, but it's a pain with background workers and autovacuum).
Readers who end up on this page (as I did) with a slightly different variation of this problem would perhaps be more interested in this question:
Debugging a segmentation fault when I do ctrl c
... and its answer, which is:
send SIGINT from inside gdb itself:
(gdb) signal 2
(Normally I would post the link as a simple comment under the OP's question on this page, but since there are already 7 comments, comments are being hidden/buried.)
If you read all the details of the OP's question here, then it is obvious that my answer is not correct for OP.
However, my answer is correct for many situations that could be described by the same title: "Debugging a program that uses SIGINT with gdb"
On UNIX-like systems, you can distinguish a tty-initiated SIGINT from one sent by kill by looking at the si_pid element in the siginfo struct. If the pid is 0, it came from a tty.
So you could do something like this:
catch signal SIGINT
commands
  if $_siginfo._sifields._kill.si_pid == 0
    print "Received SIGINT from tty"
  else
    printf "Received SIGINT from %d; continuing\n", $_siginfo._sifields._kill.si_pid
    signal SIGINT
  end
end
This part of gdb is a bit tricky, both due to its history and also due to the various modes of operation it supports.
One might think that running gdb in a separate terminal and only using attach would help it do the right thing, but I don't think it is that easy.
One way forward might be to only use async execution when debugging, and then use a command to interrupt the inferior. Something like:
(gdb) attach 5555
... attaches
(gdb) continue &
... lots of stuff happens
(gdb) interrupt -a
Depending on your version of gdb you might need to set target-async for this to work.
What is the best way to execute a command such as 'trap -p' directly from a program written in ANSI C?
I tried:
system("bash");
system("trap -p");
But when I add system("bash") the program disappears. How can I prevent it from disappearing, or what is a better way to execute such commands?
EDIT:
Thank you all for helping me.
More details about what I intended to achieve:
I want to be able to:
- add new traps inside my program (traps working only in my program)
- display the currently set traps (again, traps in my program)
Is that possible to achieve in a relatively easy way?
But when I add system("bash") the program disappears
Yes, bash is now running and your C program is waiting for it to terminate. It seems to have disappeared because you would be seeing a new shell running in your terminal. Try typing exit and your C program will continue. You can confirm this by adding a print statement after system("bash");.
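A quick way to see this for yourself (a tiny standalone test):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("starting a shell...\n");
    system("bash");                            /* your program waits here until you type "exit" */
    printf("the shell has exited, the C program continues\n");
    return 0;
}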
You can get trap -p to produce output by specifying the -i option to bash, which makes it an interactive shell:
system("bash -i -c 'trap -p'");
From this it would seem that trap requires a tty, which non-interactive bash doesn't have.
Or you could put the trap command in a script and run it like this:
system("bash script.sh");
The contents of script.sh:
echo Before setting trap...
trap -p
trap somecmd SIGINT
echo After setting a trap...
trap -p
In the output you should see that initially there were no traps set (assuming that none were inherited from the shell that ran your C program), and then trap should show the newly created trap.
I am guessing you are on Linux or some other POSIX system.
You should get a better picture of Linux programming by reading Advanced Linux Programming. It looks like you are misunderstanding processes and signals.
You cannot, from some shell (either your parent shell, or any child shell started with system(3)), catch a signal inside the process running your C program. So the output of trap -p from any shell is not relevant to your program (it relates to the shell running it). Hence even using popen(3) like FILE *fp = popen("trap -p", "r"); (or popen("bash -i -c 'trap -p'", "r") ...), then reading from fp (and finally pclose-ing it), is useless.
If you want to handle signals inside your C program, first read signal(7) carefully; then read the POSIX signal.h documentation (notice sig_atomic_t); read also sigaction(2), fork(2), execve(2).
I want to be able to: add new traps inside my program
This has no meaning for C programs running on Linux or POSIX. A C program can handle (with great care and caution!) some signals, which are not traps.
[I want to:] display currently set traps
Again, "trap" has no sense inside a C or C++ program, but signals do. You don't really need to display the currently set signal handlers, because you have set them before. And sigaction(2) accepts a third oldact pointer to hold the previous signal action.
Processor traps (which are only handled by kernel code, not by application code) are remotely and indirectly related to signals. For example, a page fault (for implementation of virtual memory) is often handled by the kernel to fill the page cache with a page from disk (file or swap zone) but may translate to a SIGSEGV signal (for segmentation fault) sent to the process, which often terminates with a core dump.
If you install some signal handler in your C program, be sure to understand what the async-signal-safe functions are (the only ones you are allowed to call from a signal handler; in particular, calling fprintf or malloc - even indirectly - is forbidden and is undefined behavior). A useful way of handling a signal is to declare some volatile sig_atomic_t variables and set them inside signal handlers (and test and reset them outside, e.g. in your event loop).
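A minimal sketch of that pattern (the names are illustrative, not your program):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signum)
{
    (void)signum;
    got_sigint = 1;                 /* the only thing the handler does */
}

int main(void)
{
    struct sigaction sa, old;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, &old);   /* &old receives the previous action */

    for (;;) {                      /* the event loop */
        pause();                    /* wait for a signal */
        if (got_sigint) {
            got_sigint = 0;
            printf("got SIGINT, doing the cleanup outside the handler\n");
            break;
        }
    }
    return 0;
}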
The shell trap builtin is used to manage some signals (and also exit and error conditions). To manage signals in C, use sigaction(2). To run something at exit(3) time, use atexit(3). To handle error conditions, be sure to test every individual system call (see syscalls(2)) and most library functions (like scanf(3) or malloc(3), etc.; see intro(3)), using errno(3).
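And a minimal illustration of the atexit(3) part:

#include <stdio.h>
#include <stdlib.h>

static void cleanup_at_exit(void)
{
    fputs("cleanup registered with atexit()\n", stderr);
}

int main(void)
{
    atexit(cleanup_at_exit);   /* runs on normal exit or return from main, not on a fatal signal */
    return 0;
}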
Instead of running an interactive bash, it seems that you are looking for a way to run trap -p in a noninteractive Bash shell. Here's how you do that.
system("bash -c 'trap -p'");
However, your C-level signal handlers will not be visible in the trap -p output. Bash can only know about trap handlers which were defined in Bash; and the shell you are starting will not have any (unless they are inherited from the shell you used to start your C program).
My UI is in a DLL. Right now, both the DLL and the EXE that uses it are compiled as console programs so I can use stdout and stderr for debugging and error reporting during development. One of the things is that I have an uninit() function that makes sure the DLL isn't leaking memory.
As a result, I have a control handler set up by the DLL such that CTRL_LOGOFF_EVENT and CTRL_SHUTDOWN_EVENT simulate the user clicking the Quit option from the File menu: it does PostQuitMessage(0), with the cleanup code happening after the message pump returns.
I know that normally CTRL_SHUTDOWN_EVENT cannot be ignored, and that the program will terminate after the handler routine returns, regardless of what it returns. But according to MSDN,
Note that a third-party library or DLL can install a console control handler for your application. If it does, this handler overrides the default handler, and can cause the application to exit when the user logs off.
If I am reading this correctly, this says that a control handler that is installed by a DLL overrides the handler that causes my program to quit when the handler function returns. Am I wrong about that? My DLL's handler function simply returns TRUE, which I assume will further stop any other defaults from running given the blurb above.
Why? I'm noticing weird behavior:
On Windows Vista, the program closes regardless of what I do. In this case, I'm wondering if the blurb is wrong and that the handler that terminates the process is still running. This happens regardless of whether I have called ShutdownBlockReasonCreate().
On Windows 7, however, it seems that my program's main window gets a WM_QUERYENDSESSION, and Windows responds to it accordingly. That means that if I say "no, don't quit yet (don't call PostQuitMessage(0))" in my Quit function, Windows pops up the "an application is preventing shutdown" screen saying my main window is preventing shutdown. In that case, the blurb above appears to be correct, as the program is not quitting on return from the console handler (if it's even being called!).
If I instead say "yes, call PostQuitMessage(0)", the program quits normally. However, I lose the debugging output on stdout and stderr, so I can't tell if it really is quitting normally or not. Invoking my program as
new.exe > out.txt 2> err.txt
on cmd.exe produces two empty files; I don't know why the output isn't saving on system shutdown (and Googling doesn't turn up any information).
So can someone help clear up my confusion so I can implement this (including ShutdownBlockReasonCreate()) properly? Thanks.
When you return TRUE from the handler you registered, Windows immediately terminates the process. When you return FALSE, the previous handler gets called. Ultimately that will be the default handler, it immediately terminates the process.
So what you have to do is not return and block until you are happy. That requires synching with the thread that's pumping the message loop. You'd use an event: the pumping thread can call SetEvent() after its message loop, and your handler can call WaitForSingleObject() to block after it has called PostQuitMessage().
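Roughly like this (a sketch with made-up names; error handling omitted). Note the handler runs on its own thread, and PostQuitMessage() only affects the calling thread, so the sketch posts WM_QUIT to the UI thread explicitly:

#include <windows.h>

static DWORD  g_ui_thread_id;    /* captured at startup with GetCurrentThreadId() */
static HANDLE g_cleanup_done;    /* CreateEvent(NULL, TRUE, FALSE, NULL) at startup */

static BOOL WINAPI ctrl_handler(DWORD type)
{
    if (type == CTRL_LOGOFF_EVENT || type == CTRL_SHUTDOWN_EVENT) {
        /* ask the UI thread's message loop to exit */
        PostThreadMessage(g_ui_thread_id, WM_QUIT, 0, 0);
        /* do not return yet: returning lets Windows terminate the process */
        WaitForSingleObject(g_cleanup_done, INFINITE);
        return TRUE;
    }
    return FALSE;
}

/* UI thread, after its message pump returns:
       run_cleanup();              // e.g. the DLL's uninit()
       SetEvent(g_cleanup_done);   // now the handler may return
*/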
It is however a threading race: your UI thread was probably started by main(), and the CRT is going to terminate the program when main() returns. Which one gets there first is unpredictable.
Having the feeling you are doing something wrong? Well, you are. A console window just isn't a very good way to display debug output. Not sure why you are doing this, but I know your tool-chain is unusual; I can never get any of your code snippets to compile and run. The proper way is OutputDebugString(). That function talks to your debugger and gets it to display text. Even if your debugger isn't capable of displaying such text, you can still fall back to SysInternals' DebugView utility.
You are probably using printf() and won't enjoy fixing all your debug statements; simply write your own version of printf() that links ahead of the CRT's, using vsnprintf() and OutputDebugStringA().
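Something along these lines (a sketch; with MinGW-style linking, a printf() defined in your own object files is picked before the one in the CRT library - if your CRT disagrees, give it another name and #define printf to it):

#include <windows.h>
#include <stdarg.h>
#include <stdio.h>

int printf(const char *fmt, ...)
{
    char buf[1024];                    /* arbitrary buffer size for debug text */
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);
    OutputDebugStringA(buf);           /* appears in the debugger or in DebugView */
    return n;
}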
Is there any way to make a program that cannot be interrupted (an uninterruptible program)? By that, I mean a process that can't be terminated by any signal, kill command, or key combination on any system: Linux, Windows, etc.
First, I am interested to know whether it's possible or not. And if yes, to what extent is it possible?
I mostly write code in C, C++, and Python, but I don't know of any such facility in these languages.
Is it possible in assembly language, and how? Or in a high-level language like C with embedded assembly code (inline assembly)?
I know some signals are catchable and some, like SIGKILL and SIGSTOP, are not.
I remember, when I used to work on Windows XP, that some viruses couldn't be terminated even from Task Manager. So I guess some solution is possible in low-level languages, maybe by overriding the interrupt vector table.
Can we write such a program using TSRs (hooking)? A TSR can only be removed when the computer is rebooted or when it is explicitly removed from memory. Am I correct?
I couldn't find anything on Google.
Well, possibly one can write a program which doesn't respond to most signals, like SIGQUIT, SIGHUP, etc. Each kind of "kill" is actually a signal sent to the program by the kernel; some signals mean, for the kernel, that the program is stuck and should be killed.
Actually the only unkillable program is the kernel itself; even init (PID 1) can be "killed" with HUP (which means reload).
Learn more about signal handling, starting with the kill -l (list signals) command.
Regarding Windows (based on the "antivirus" tag) - and this actually applies to Linux too - if you just need to run some antivirus the user is unable to skip or close, it's a permissions problem: a program started by the system cannot be closed by a non-administrative user who lacks permission to kill it. I guess users on Windows all over the world would start "solving" any problem they have by trying to close the antivirus first, if only it were possible :)
On Linux, it is possible to avoid being killed by one of two ways:
Become init (PID 1). init ignores all signals that it does not catch, even normally unblockable ones like SIGSTOP and SIGKILL.
Trigger a kernel bug, and get your program stuck in D (uninterruptible wait) state.
For 2., one common way to end up in D state is to attempt to access some hardware that is not responding. Particularly on older versions of Linux, the process would become stuck in kernel mode, and not respond to any signals until the kernel gave up on the hardware (which can take quite some time!). Of course, your program can't do anything else while it's stuck like this, so it's more annoying than useful, and newer versions of Linux are starting to rectify this problem by dividing D state into a killable state (where SIGKILL works) and an unkillable state (where all signals are blocked).
Or, of course, you could simply load your code as a kernel module. Kernel modules can't be 'killed', only unloaded - and only if they allow themselves to be unloaded.
You can catch pretty much any signal or input and stay alive through it, the main exception being SIGKILL. It is possible to prevent that from killing you, but you'd have to replace init (and reboot to become the new init). PID 1 is special on most Unixes, in that it's the only thing that can't be KILL'd.