Prevent glibc from showing extra abort information [duplicate] - c

Some C++ libraries call the abort() function in case of error (for example, SDL). No helpful debug information is provided in this case. It is not possible to catch the abort call and write some diagnostic log output. I would like to override this behaviour globally without rewriting/rebuilding these libraries. I would like to throw an exception and handle it. Is it possible?

Note that abort raises the SIGABRT signal, as if it called raise(SIGABRT). You can install a signal handler that gets called in this situation, like so:
#include <signal.h>

extern "C" void my_function_to_handle_aborts(int signal_number)
{
    /* Your code goes here. You can output debugging info.
       If you return from this function, and it was called
       because abort() was called, your program will exit or crash anyway
       (with a dialog box on Windows). */
}

/* Do this early in your program's initialization */
signal(SIGABRT, &my_function_to_handle_aborts);
If you can't prevent the abort calls (say, they're due to bugs that creep in despite your best intentions), this might allow you to collect some more debugging information. This is portable ANSI C, so it works on Unix and Windows, and other platforms too, though what you do in the abort handler will often not be portable.
Note that this handler is also called when an assert fails, or even by other runtime functions - say, if malloc detects heap corruption. So your program might be in a crazy state during that handler. You shouldn't allocate memory - use static buffers if possible. Just do the bare minimum to collect the information you need, get an error message to the user, and quit.
Certain platforms may allow their abort functions to be customized further. For example, on Windows, Visual C++ has a function _set_abort_behavior that lets you choose whether or not a message is displayed to the user, and whether crash dumps are collected.
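For example (a minimal sketch, assuming the _WRITE_ABORT_MSG and _CALL_REPORTFAULT flags from the MSVC documentation), suppressing both the message box and the crash-dump step might look like this:

#include <stdlib.h>   /* _set_abort_behavior() in the MSVC CRT */

int main(void)
{
    /* Assumption: clearing both flags suppresses the "abort() has been called"
       message and the crash-dump (Watson) invocation on abort(). */
    _set_abort_behavior(0, _WRITE_ABORT_MSG | _CALL_REPORTFAULT);

    abort();   /* still terminates the process, just more quietly */
}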

According to the man page on Linux, abort() generates a SIGABRT to the process that can be caught by a signal handler. EDIT: Ben's confirmed this is possible on Windows too - see his comment below.

You could try writing your own abort() and getting the linker to call yours in place of std::abort. I'm not sure whether that is possible, however.

Related

Can I execute free() or close() in a signal handler? [duplicate]

I have code that looks like this:
// global variables
void signal_handler(int sig) {
    // deallocation of global variables
    free(foo);
    close(foo_2);
    exit(0);
}

int main() {
    signal(SIGINT, signal_handler);
    // irrelevant code
}
As you can see, I changed the CTRL+C interrupt to execute the signal_handler function instead of killing the process right away. I read somewhere that some functions, like free, might not be async-safe and would NOT execute in the signal handler, but I'm not sure about that.
Can I execute functions like free, close, exit or even pthread_join in a signal handler?
No. Only functions listed in man 7 signal-safety are safe to call inside a signal handler.
close is listed and should be safe. free is not; for the reasons why, you would have to look at its source code (it contains locks). exit is not safe either, because it can call arbitrary cleanup handlers. You have _exit, which exits abruptly without the cleanup.
You technically can compile a program that calls such functions in a signal handler; nothing stops you from doing that. However, it will result in undefined behavior if the function you are trying to execute is not async-signal-safe. It's not that an unsafe function would just "NOT execute", as you say; it very well could, but that would still be undefined behavior.
A list of async-signal-safe functions is documented in man 7 signal-safety. The close() function is safe, while free() and pthread_join() are not. The exit() function is also not safe to call from a signal handler; if you wish to exit from such a context, you will have to do so using _exit() instead.
The only way to safely call a function that is not async-signal-safe when receiving a signal is to "remember" that you have to call it (for example setting a global variable) and then do so after returning from the signal handler.
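A minimal sketch of that pattern (the variable names are invented for illustration): the handler only sets a flag, and the cleanup happens in main(), outside signal context:

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;
static void *foo;        /* stand-ins for the global resources */
static int foo_2 = -1;

static void signal_handler(int sig)
{
    (void)sig;
    got_sigint = 1;      /* the only thing the handler does */
}

int main(void)
{
    signal(SIGINT, signal_handler);
    foo = malloc(128);

    while (!got_sigint) {
        /* normal work here */
        pause();         /* sleep until a signal arrives */
    }

    /* Back in normal context: now it is safe to clean up. */
    free(foo);
    if (foo_2 >= 0)
        close(foo_2);
    return 0;
}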
Short answer is no. From the C 2011 online draft, 7.1.4 Use of library functions:
4 The functions in the standard library are not guaranteed to be reentrant and may modify objects with static or thread storage duration. (footnote 188)
188) Thus, a signal handler cannot, in general, call standard library functions.
Real-world example of the consequences - I worked on a system that communicated with an Access database. There was a signal handler that tried to write an error message to the console with fprintf, but somehow during the signal handling process stderr got mapped to the .mdb file that stored the database, overwriting the header and ruining the database beyond repair.
There's honestly not a whole lot you can do in a signal handler other than set a flag to be checked elsewhere.
Can I execute free() or close() in a signal handler?
You definitely should not. See signal(7) and signal-safety(7)
In practice, it might work the way you want perhaps more than half of the time. IIRC, the GCC compiler does something similar to what you want to do, and it usually works.
A better approach is to write(2) to a pipe(7) from inside your signal handler and, from time to time, check that pipe in your main program with poll(2) or related calls (a sketch follows at the end of this answer).
Or you could set some volatile sig_atomic_t flag; (perhaps it should also be _Atomic) in your signal handler, and check that flag elsewhere (in the main program, outside of signal handlers).
The Qt documentation explains this better than I could in a few minutes.
On Linux, see also signalfd(2) and eventfd(2).
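A minimal sketch of the pipe-based approach mentioned above (the classic self-pipe trick; the event loop and names are invented for illustration):

#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static int sig_pipe[2];   /* [0] = read end, [1] = write end */

static void handler(int sig)
{
    unsigned char b = (unsigned char)sig;
    write(sig_pipe[1], &b, 1);   /* write() is async-signal-safe */
}

int main(void)
{
    if (pipe(sig_pipe) != 0)
        return 1;
    signal(SIGINT, handler);

    for (;;) {
        struct pollfd pfd = { .fd = sig_pipe[0], .events = POLLIN };
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            unsigned char b;
            read(sig_pipe[0], &b, 1);
            /* Normal context again: safe to call free(), fclose(), exit(), ... */
            printf("got signal %d, cleaning up\n", b);
            break;
        }
    }
    return 0;
}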

Should we use exit() in C?

There is a question about using exit in C++. The answer discusses that it is not a good idea, mainly because of RAII: e.g., if exit is called somewhere in the code, destructors of local objects will not be called, so if, for example, a destructor was meant to write data to a file, this will not happen because the destructor was not called.
I was interested in how this situation looks in C. Are similar issues applicable in C as well? I thought that since in C we don't use constructors/destructors, the situation might be different. So is it ok to use exit in C? For example, I have seen functions like the following sometimes used in C:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

void die(const char *message)
{
    if (errno) {
        perror(message);
    } else {
        printf("ERROR: %s\n", message);
    }
    exit(1);
}
Unlike abort(), the exit() function in C is considered to be a "graceful" exit.
From C11 (N1570) 7.22.4.4/p2 The exit function (emphasis mine):
The exit function causes normal program termination to occur.
The Standard also says in 7.22.4.4/p4 that:
Next, all open streams with unwritten buffered data are flushed, all
open streams are closed, and all files created by the tmpfile function
are removed.
It is also worth looking at 7.21.3/p5 Files:
If the main function returns to its original caller, or if the exit
function is called, all open files are closed (hence all output
streams are flushed) before program termination. Other paths to
program termination, such as calling the abort function, need not
close all files properly.
However, as mentioned in comments below you can't assume that it will cover every other resource, so you may need to resort to atexit() and define callbacks for their release individually. In fact it is exactly what atexit() is intended to do, as it says in 7.22.4.2/p2 The atexit function:
The atexit function registers the function pointed to by func, to be
called without arguments at normal program termination.
Notably, the C standard does not say precisely what should happen to objects of allocated storage duration (i.e. obtained with malloc()), thus requiring you to be aware of how this is handled on your particular implementation. For a modern, host-oriented OS, it is likely that the system will take care of it, but you might still want to handle this yourself in order to silence memory debuggers such as Valgrind.
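To see the quoted difference concretely, here is a tiny demonstration (the file name is arbitrary): buffered output written before exit() is flushed to the file, while the same output before abort() may never be written.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("out.txt", "w");
    if (!f)
        return 1;
    fprintf(f, "buffered text\n");   /* no fflush() on purpose */

    exit(0);     /* streams are flushed and closed: the text reaches out.txt */
    /* abort();     with abort() instead, the buffered data may be lost */
}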
Yes, it is ok to use exit in C.
To ensure that all buffers are flushed and that shutdown is graceful and orderly, it is recommended to register a cleanup function with atexit().
Example code would look like this:
#include <stdio.h>
#include <stdlib.h>

FILE *fp = NULL;     /* example global resources */
void *ptr = NULL;

void cleanup(void) {
    /* example of closing a file pointer and freeing memory */
    if (fp) fclose(fp);
    if (ptr) free(ptr);
}

int main(int argc, char **argv) {
    /* ... */
    atexit(cleanup);
    /* ... */
    return 0;
}
Now, whenever exit is called, the function cleanup will get executed, which can house graceful shutdown, clean up of buffers, memory etc.
In C you don't have constructors and destructors, but you can still have resources (e.g. files, streams, sockets), and it is important to close them correctly. A buffer might not be written synchronously, so exiting from the program without correctly closing the resource first could lead to corruption.
Using exit() is OK
Two major aspects of code design that have not yet been mentioned are 'threading' and 'libraries'.
In a single-threaded program, in the code you're writing to implement that program, using exit() is fine. My programs use it routinely when something has gone wrong and the code isn't going to recover.
But…
However, calling exit() is a unilateral action that can't be undone. That's why both 'threading' and 'libraries' require careful thought.
Threaded programs
If a program is multi-threaded, then using exit() is a dramatic action which terminates all the threads. It will probably be inappropriate to exit the entire program. It may be appropriate to exit the thread, reporting an error. If you're cognizant of the design of the program, then maybe that unilateral exit is permissible, but in general, it will not be acceptable.
Library code
And that 'cognizant of the design of the program' clause applies to code in libraries, too. It is very seldom correct for a general purpose library function to call exit(). You'd be justifiably upset if one of the standard C library functions failed to return just because of an error. (Obviously, functions like exit(), _Exit(), quick_exit(), abort() are intended not to return; that's different.) The functions in the C library therefore either "can't fail" or return an error indication somehow. If you're writing code to go into a general purpose library, you need to consider the error handling strategy for your code carefully. It should fit in with the error handling strategies of the programs with which it is intended to be used, or the error handling may be made configurable.
I have a series of library functions (in a package with header "stderr.h", a name which treads on thin ice) that are intended to exit as they're used for error reporting. Those functions exit by design. There are a related series of functions in the same package that report errors and do not exit. The exiting functions are implemented in terms of the non-exiting functions, of course, but that's an internal implementation detail.
I have many other library functions, and a good many of them rely on the "stderr.h" code for error reporting. That's a design decision I made and is one that I'm OK with. But when the errors are reported with the functions that exit, it limits the general usefulness the library code. If the code calls the error reporting functions that do not exit, then the main code paths in the function have to deal with error returns sanely — detect them and relay an error indication to the calling code.
The code for my error reporting package is available in my SOQ (Stack Overflow Questions) repository on GitHub as files stderr.c and stderr.h in the src/libsoq sub-directory.
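As an illustration of that exiting/non-exiting split (this is not the actual "stderr.h" code, just a minimal sketch with invented names):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Shared formatting helper. */
static void err_vreport(const char *fmt, va_list args)
{
    vfprintf(stderr, fmt, args);
    fputc('\n', stderr);
}

/* Non-exiting reporter: prints the message and returns to the caller. */
void err_remark(const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    err_vreport(fmt, args);
    va_end(args);
}

/* Exiting reporter: implemented in terms of the non-exiting machinery. */
void err_error(const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    err_vreport(fmt, args);
    va_end(args);
    exit(EXIT_FAILURE);
}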
One reason to avoid exit in functions other than main() is the possibility that your code might be taken out of context. Remember, exit is a kind of non-local control flow, like an uncatchable exception.
For example, you might write some storage management functions that exit on a critical disk error. Then someone decides to move them into a library. Exiting from a library is something that will cause the calling program to exit in an inconsistent state which it may not be prepared for.
Or you might run it on an embedded system, where there is nowhere to exit to: the whole thing runs in a while(1) loop in main(). exit might not even be defined in the standard library. (A sketch of the error-return alternative follows below.)
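A minimal sketch of the alternative discussed above (the function names are invented): the library reports failure through its return value, and the application decides whether exiting is appropriate.

#include <stdio.h>
#include <stdlib.h>

/* Library code: report failure to the caller instead of exiting. */
int storage_write(const char *path, const char *data)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;                /* the caller decides what a disk error means */
    if (fputs(data, f) == EOF) {
        fclose(f);
        return -1;
    }
    return fclose(f) == 0 ? 0 : -1;
}

/* Application code: here it is fine to exit on a critical error. */
int main(void)
{
    if (storage_write("state.txt", "hello\n") != 0) {
        perror("storage_write");
        exit(EXIT_FAILURE);
    }
    return 0;
}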
Depending on what you are doing, exit may be the most logical way out of a program in C. I know it's very useful for checking to make sure chains of callbacks work correctly. Take this example callback I used recently:
unsigned char cbShowDataThenExit(unsigned char *data, unsigned short dataSz, unsigned char status)
{
    printf("cbShowDataThenExit with status %X (dataSz %d)\n", status, dataSz);
    printf("status:%d\n", status);
    printArray(data, dataSz);
    cleanUp();
    exit(0);   /* never returns, so no return value is needed */
}
In the main loop, I set everything up for this system and then wait in a while(1) loop. It is possible to use a global flag to exit the while loop instead, but this is simple and does what it needs to do. If you are dealing with any open buffers, like files and devices, you should clean them up before exiting, for consistency.
In a big project it is terrible when any piece of code can call exit (short of a core dump). Traces are very important for maintaining an online server.

Confusion about CTRL_SHUTDOWN_EVENT handling in DLLs and WM_QUERYENDSESSION

My UI is in a DLL. Right now, both the DLL and the EXE that uses it are compiled as console programs so I can use stdout and stderr for debugging and error reporting during development. One of the things is that I have an uninit() function that makes sure the DLL isn't leaking memory.
As a result, I have a control handler set up by the DLL such that CTRL_LOGOFF_EVENT and CTRL_SHUTDOWN_EVENT simulate the user clicking the Quit option from the File menu: it does PostQuitMessage(0), with the cleanup code happening after the message pump returns.
I know that normally CTRL_SHUTDOWN_EVENT cannot be ignored, and that the program will terminate after the handler routine returns, regardless of what it returns. But according to MSDN,
Note that a third-party library or DLL can install a console control handler for your application. If it does, this handler overrides the default handler, and can cause the application to exit when the user logs off.
If I am reading this correctly, this says that a control handler that is installed by a DLL overrides the handler that causes my program to quit when the handler function returns. Am I wrong about that? My DLL's handler function simply returns TRUE, which I assume will further stop any other defaults from running given the blurb above.
Why? I'm noticing weird behavior:
On Windows Vista, the program closes regardless of what I do. In this case, I'm wondering whether the blurb is wrong and the handler that terminates the process is still being run. This happens regardless of whether I have called ShutdownBlockReasonCreate().
On Windows 7, however, it seems that my program's main window gets a WM_QUERYENDSESSION, and Windows responds to it accordingly. That means that if I say "no, don't quit yet (don't call PostQuitMessage(0))" in my Quit function, Windows pops up the "an application is preventing shutdown" screen saying my main window is preventing shutdown. In that case, the blurb above appears to be correct, as the program is not quitting on return from the console handler (if it's even being called!).
If I instead say "yes" (call PostQuitMessage(0)), the program quits normally. However, I lose the debugging output on stdout and stderr, so I can't tell whether it really is quitting normally or not. Invoking my program as
new.exe > out.txt 2> err.txt
on cmd.exe produces two empty files; I don't know why the output isn't saved on system shutdown (and Googling doesn't turn up any information).
So can someone help clear up my confusion so I can implement this (including ShutdownBlockReasonCreate()) properly? Thanks.
When you return TRUE from the handler you registered, Windows immediately terminates the process. When you return FALSE, the previous handler gets called. Ultimately that will be the default handler, it immediately terminates the process.
So what you have to do is not return, and block until you are happy. That requires synching with the thread that's pumping the message loop. You'd use an event: the pumping thread can call SetEvent() after its message loop, and your handler can call WaitForSingleObject() to block after it has called PostQuitMessage().
It is, however, a threading race: your UI thread was probably started by main(), and the CRT is going to terminate the program when main() returns. Which one gets there first is unpredictable.
Having the feeling you are doing something wrong? Well, you are. A console window just isn't a very good way to display debug output. Not sure why you are doing this, but I know your tool-chain is unusual; I can never get any of your code snippets to compile and run. The proper way is OutputDebugString(). That function talks to your debugger and gets it to display text. Even if your debugger isn't capable of displaying such text, you can still fall back to SysInternals' DebugView utility.
You are probably using printf() and won't enjoy fixing all your debug statements; simply write your own version that links ahead of the CRT, using vprintf() and OutputDebugStringA().
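A minimal sketch of that idea (shown as a separately named debug_printf() wrapper rather than an actual printf() replacement, and using vsnprintf() to format into a buffer before handing it to OutputDebugStringA()):

#include <stdarg.h>
#include <stdio.h>
#include <windows.h>

/* Formats like printf() but sends the text to the debugger / DebugView. */
int debug_printf(const char *fmt, ...)
{
    char buf[1024];
    va_list args;
    int n;

    va_start(args, fmt);
    n = vsnprintf(buf, sizeof buf, fmt, args);
    va_end(args);

    OutputDebugStringA(buf);
    return n;
}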

assert() safety in multithreaded context

So I cannot seem to find solid info on whether assert is usable in a multithreaded context.
Logically, it seems to me that if an assertion fails, the thread gets shut down, but not the other threads?
Or does the entire process get killed?
So, basically, my question: is it safe to use assert in a multithreaded environment without leaking resources?
If you look at the man page of assert(), it clearly states:
The purpose of this macro is to help the programmer find bugs in his
program. The message "assertion failed in file foo.c, function
do_bar(), line 1287" is of no help at all to a user.
This means it's only useful [and should be used] in a development environment, not in production software. IMO, at the development stage, you need not worry about leaks caused by assert(). YMMV.
Once you have finished debugging your code, you can simply switch off the assert() functionality by defining [#define] NDEBUG.
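For reference, a minimal example of that switch: when NDEBUG is defined before <assert.h> is included (or passed on the compiler command line), assert() expands to nothing.

#define NDEBUG              /* or compile with -DNDEBUG */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    assert(1 == 2);         /* compiled away because NDEBUG is defined */
    puts("still running");  /* this line is reached */
    return 0;
}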
I'd say more than yes. If I saw multithreaded code without asserts, I'd not trust it. If you simplify its implementation a bit, to something like:
#define assert(x) if( !(x) ) abort()
You'll see that it does nothing special for thread safety or anything thread-specific. It's your responsibility to provide a race-free condition, and if the assertion fails, the whole process is aborted.
The entire process gets killed. Assert will send the expression, source filename and line number to stderr and then call abort(). Abort() terminates the entire process.

detect program termination (C, Windows)

I have a program that has to perform certain tasks before it finishes. The problem is that sometimes the program crashes with an exception (like database cannot be reached, etc).
Now, is there any way to detect an abnormal termination and execute some code before it dies?
Thanks.
code is appreciated.
1. Win32
The Win32 API contains a way to do this via the SetUnhandledExceptionFilter function, as follows:
#include <windows.h>
#include <stdio.h>

LONG WINAPI myFunc(LPEXCEPTION_POINTERS p)
{
    printf("Exception!!!\n");
    return EXCEPTION_EXECUTE_HANDLER;
}

int main()
{
    SetUnhandledExceptionFilter(myFunc);
    // generate an exception!
    int x = 0;
    int y = 1 / x;
    return 0;
}
2. POSIX/Linux
I usually do this via the signal() function and then handle the SIGSEGV signal appropriately. You can also handle the SIGTERM and SIGINT signals, but not SIGKILL (by design). You can use backtrace() from <execinfo.h> to get a backtrace and see what caused the signal.
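A minimal sketch of that approach on Linux, using glibc's backtrace facility from <execinfo.h> (note that backtrace() is not formally async-signal-safe, so treat this as a best-effort, last-gasp report):

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void crash_handler(int sig)
{
    void *frames[64];
    int n = backtrace(frames, 64);

    /* backtrace_symbols_fd() writes straight to a file descriptor and
       avoids malloc(); do not use printf() here. */
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    _exit(128 + sig);
}

int main(void)
{
    signal(SIGSEGV, crash_handler);
    signal(SIGABRT, crash_handler);

    int *p = NULL;
    *p = 42;              /* force a SIGSEGV to demonstrate the handler */
    return 0;
}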
There are sysinternals forum threads about protecting against end-process attempts by hooking NT Internals, but what you really want is either a watchdog or peer process (reasonable approach) or some method of intercepting catastrophic events (pretty dicey).
Edit: There are reasons why they make this difficult, but it's possible to intercept or block attempts to kill your process. I know you're just trying to clean up before exiting, but as soon as someone releases a process that can't be immediately killed, someone will ask for a method to kill it immediately, and so on. Anyhow, to go down this road, see above linked thread and search some keywords you find in there for more. hook OR filter NtTerminateProcess etc. We're talking about kernel code, device drivers, anti-virus, security, malware, rootkit stuff here. Some books to help in this area are Windows NT/2000 Native API, Undocumented Windows 2000 Secrets: A Programmer's Cookbook, Rootkits: Subverting the Windows Kernel, and, of course, Windows® Internals: Fifth Edition. This stuff is not too tough to code, but pretty touchy to get just right, and you may be introducing unexpected side-effects.
Perhaps Application Recovery and Restart Functions could be of use? Supported by Vista and Server 2008 and above.
ApplicationRecoveryCallback is an application-defined callback function used to save data and application state information in the event the application encounters an unhandled exception or becomes unresponsive.
On using SetUnhandledExceptionFilter, an MSDN Social discussion advises that, to make this work reliably, patching that method in memory is the only way to be sure your filter gets called; it advises wrapping with __try/__except instead. Regardless, there is some sample code and discussion of filtering calls to SetUnhandledExceptionFilter in the article "SetUnhandledExceptionFilter" and VC8.
Also, see Windows SEH Revisited at The Awesome Factor for some sample code of AddVectoredExceptionHandler.
It depends on what you do with your "exceptions". If you handle them properly and exit from the program, you can register your function to be called on exit, using atexit().
It won't work in case of real abnormal termination, like segfault.
Don't know about Windows, but on a POSIX-compliant OS you can install a signal handler that will catch different signals and do something about them. Of course you cannot catch SIGKILL and SIGSTOP.
The signal API has been part of ANSI C since C89, so Windows probably supports it. See signal() for details.
If it's Windows-only, then you can use SEH (SetUnhandledExceptionFilter), or VEH (AddVectoredExceptionHandler, but it's only for XP/2003 and up)
Sorry, not a Windows programmer. But maybe
_onexit()
registers a function to be called when the program terminates.
http://msdn.microsoft.com/en-us/library/aa298513%28VS.60%29.aspx
First, though this is fairly obvious: You can never have a completely robust solution -- someone can always just hit the power cable to terminate your process. So you need a compromise, and you need to carefully lay out the details of that compromise.
One of the more robust solutions is putting the relevant code in a wrapper program. The wrapper program invokes your "real" program, waits for its process to terminate, and then -- unless your "real" program specifically signals that it has completed normally -- runs the cleanup code. This is fairly common for things like test harnesses, where the test program is likely to crash or abort or otherwise die in unexpected ways.
That still gives you the difficulty of what happens if someone does a TerminateProcess on your wrapper function, if that's something you need to worry about. If necessary, you could get around that by setting it up as a service in Windows and using the operating system's features to restart it if it dies. (This just changes things a little; someone could still just stop the service.) At this point, you probably are at a point where you need to signal successful completion by something persistent like creating a file.
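A minimal sketch of such a wrapper on Windows (real_program.exe and the cleanup routine are placeholders): launch the real program, wait for it to terminate, and run the cleanup if it did not exit cleanly.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmdline[] = "real_program.exe";   /* hypothetical "real" program */

    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);

    DWORD code = 0;
    GetExitCodeProcess(pi.hProcess, &code);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    if (code != 0) {
        /* The real program crashed, aborted, or was terminated:
           run the cleanup it could not do itself. */
        fprintf(stderr, "child exited with code %lu, running cleanup\n", code);
        /* run_cleanup();   hypothetical cleanup routine */
    }
    return 0;
}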
I published an article at ddj.com about "post mortem debugging" some years ago.
It includes sources for Windows and Unix/Linux to detect abnormal termination. In my experience, though, a Windows handler installed using SetUnhandledExceptionFilter is not always called. In many cases it is called, but I receive quite a few log files from customers that do not include a report from the installed handlers, where e.g. an ACCESS VIOLATION was the cause.
http://www.ddj.com/development-tools/185300443

Resources