I have just dug up a bug in some code we're working with* that was failing due to the following situation:
Assert(SomeVitalFunction(foo) == OK)
This worked fine as long as the DEBUG macro was #defined:
#ifdef DEBUG
#define Assert(x) if((x) == 0){/*some error handling*/}
#else
#define Assert(x)
#endif
But when we #undef'd DEBUG, it had the effect of deleting the vital function call from the code.
I can't for the life of me work out how that could ever have worked with DEBUG #undef'd, and it seems generally a bad idea to put any sort of function call inside an assert like this.
Have I missed something?
* = Edit to clarify following Carpetsmoker's comment: The code comes from a particularly backward cabal of Elbonian code slaves, our task has been to hack, slash, shave, polish, sanitize and apply lipstick to the thing.
You have missed nothing.
Asserts should always be written as if they could disappear at the flick of a compiler switch.
You can call functions that take a relatively long time to complete inside an assert (for example analysing the integrity of a data structure), because the function call will not be present in the release build. The flip side of this is that you cannot call functions that are necessary for correct operation.
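To make that concrete, here is a minimal sketch of the safe pattern, reusing the question's names (the Assert body and the stand-in SomeVitalFunction are illustrative): perform the vital call unconditionally and assert only on its result.

#include <stdio.h>

#ifdef DEBUG
#define Assert(x) do { if ((x) == 0) { fprintf(stderr, "assertion failed\n"); } } while (0)
#else
#define Assert(x) ((void)0)
#endif

#define OK 0

/* stand-in for the real function */
static int SomeVitalFunction(int foo) { (void)foo; return OK; }

int main(void)
{
    int foo = 42;
    int rc = SomeVitalFunction(foo); /* always executed, with or without DEBUG */
    Assert(rc == OK);                /* only the check disappears in release */
    return 0;
}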
It depends upon what SomeVitalFunction is doing. If it has no interesting side effects, it is OK to use it inside an assert. But if calling (or not calling) SomeVitalFunction is essential to the program's correct operation, it is a bug.
For instance, on POSIX, kill(2) with a 0 signal is only useful for testing whether a process is alive. I imagine you might sometimes be tempted to write
assert(kill(sompid, 0) == 0); // process sompid exists
on the assumption that the process sompid is always still running.
Likewise, you might use assert(hash_table_count(htbl)>0); to check that some hash table htbl is not empty.
BTW, notice that assert(3) is documented as being ignored when you compile with the -DNDEBUG preprocessor option (not when -DDEBUG is absent).
Related
I'm using the CUnit framework for the way it displays the testing results. (I'm a programming & S.O. newbie, so step-by-step answers are really appreciated.)
Is there any way I can use the same CUnit framework when testing functions that I expect to call exit()? It doesn't seem so to me, but I'm keen to ask anyway: it would display the pass/fail result along with my other CUnit tests, so it's ideal.
If not, I've been looking at other noob-friendly solutions (such as this SO post), but I cannot use GOTO/setjmp/longjmp. The solution also needs to be portable.
I'm using Mac & gcc command line to run this code.
EDIT
One of the suggested solutions is to use C preprocessor (CPP) directives for "mocking", which looks ideal. I have used the below code in my test.c file:
#define ERROR(PHRASE) {fprintf(stderr,"Fatal Error %s occurred in %s, line %d\n",PHRASE, __FILE__, __LINE__); exit(2);}
#ifdef ERROR(PHRASE)
#define ERROR(PHRASE) {printf("In test phase");}
#endif
#ifndef ERROR(PHRASE)
#define ERROR(PHRASE) {printf("Not In test phase");}
#endif
Here is the error message that the terminal gives me:
test.c:30:9: warning: 'ERROR' macro redefined [-Wmacro-redefined]
#define ERROR(PHRASE) {printf("In test phase");}
^
test.c:26:9: note: previous definition is here
#define ERROR(PHRASE) {fprintf(stderr,"Fatal Error %s occured in %s, lin...
^
test.c:32:14: warning: extra tokens at end of #ifndef directive
[-Wextra-tokens]
#ifndef ERROR(PHRASE) {printf("Not In test phase");}
Removing the (PHRASE) still gives the same errors.
EDIT
In case it's helpful for anyone else: mocking using #ifdef was the easiest way to solve this issue in the end. This website was helpful too.
Just so you know what to search for, what you want to do is "mock" the exit() call. The basic idea is to choose a different implementation for the exit function, generally at compile time. Frankly, C doesn't make this particularly easy, but there are some options with varying levels of portability and intrusiveness.
This article describes something that is pretty portable, but also fairly intrusive. Basically, you use macros and/or function pointers to toggle back and forth, which means modifying your code a bit, but honestly it's not that big of a deal.
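For illustration, here is a hedged sketch of the function-pointer flavor of that idea; every name here (app_exit, fake_exit, die_if_negative) is invented for the example, not taken from the article:

#include <stdio.h>
#include <stdlib.h>

/* production code calls app_exit() wherever it would have called exit() */
void (*app_exit)(int status) = exit; /* defaults to the real exit() */

static int recorded_status = -1;

static void fake_exit(int status)
{
    recorded_status = status; /* caveat: unlike exit(), this returns */
}

/* code under test */
static void die_if_negative(int v)
{
    if (v < 0)
        app_exit(2);
}

int main(void)
{
    app_exit = fake_exit;      /* swap in the mock for the test */
    die_if_negative(-1);       /* would have killed the process */
    printf("%s\n", recorded_status == 2 ? "PASS" : "FAIL");
    return 0;
}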
For something potentially less intrusive but also much less portable, this article has a couple of ideas (I believe both would work on MacOS). Here you get the linker to redirect the exit() call to another function, which you provide. The good news is that it doesn't require any modifications to your code. The bad news is that it requires you to gain the cooperation of the linker, and won't work everywhere (LD_PRELOAD won't work on Windows, and AFAIK --wrap requires GNU ld or something compatible).
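As a sketch of the --wrap approach (GNU ld or a compatible linker only; __wrap_exit and __real_exit are the names the linker convention imposes, while function_under_test is invented):

/* wrap_exit_test.c -- build with: gcc wrap_exit_test.c -Wl,--wrap=exit */
#include <stdio.h>
#include <stdlib.h>

void __real_exit(int status); /* resolved by the linker to the genuine exit() */

void __wrap_exit(int status)  /* every exit() call in the binary lands here */
{
    printf("caught exit(%d)\n", status);
    __real_exit(status == 3 ? 0 : 1); /* report PASS/FAIL via the process status */
}

static void function_under_test(void)
{
    exit(3); /* the behaviour we want to assert on */
}

int main(void)
{
    function_under_test();
    return 1; /* never reached: __wrap_exit fires first */
}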
One aspect worth considering, if testing is proving difficult or laborious, is whether there is any scope to change the program being tested in a way that helps with testing without significantly increasing the complexity of the code.
In this case, is there scope to replace the calls to exit() with error return codes from functions, such that the callers can do things such as tidy up, or log state, before actually exiting? If so, this both simplifies testing and is likely to simplify fault-finding when the code is actually used in release/production, as it can be quite tricky to work out why a program just ups and dies on you, especially if the code is tucked away in a library function!
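For example, a sketch of that refactor (load_config and its details are invented for illustration):

#include <stdio.h>

/* before: the function called exit(1) deep inside on failure.
   after: report failure to the caller and let it decide. */
static int load_config(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1; /* caller can tidy up, log, or exit */
    /* ... parse the file ... */
    fclose(f);
    return 0;
}

int main(void)
{
    if (load_config("missing.conf") != 0) {
        fprintf(stderr, "could not load config\n");
        return 1; /* the exit decision is made at the top level */
    }
    return 0;
}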
If you want to do something non-intrusive, you can run the function under test as a separate process. You start it with CreateProcess (Windows) or fork (and possibly execv) on Mac and Linux. Your test code then uses wait to check the exit code, and passes if the created process exits correctly.
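A minimal POSIX sketch of this idea (function_under_test and its expected status are invented):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void function_under_test(void)
{
    exit(2); /* the behaviour the test expects */
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {          /* child: run the code that should exit */
        function_under_test();
        _exit(0);            /* only reached if it failed to exit */
    }
    int status = 0;
    waitpid(pid, &status, 0);
    puts(WIFEXITED(status) && WEXITSTATUS(status) == 2 ? "PASS" : "FAIL");
    return 0;
}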
My product embeds the TCL VM to run TCL scripts. We basically take the TCL 8.4 source and integrate it into our product; the whole product is programmed in C.
Now I need to debug some issue, and it would help to have some insight into the TCL VM at run time. So I added some printf calls to the TCL source, but I cannot see any output. Note that printf calls added to our side of the code work as expected.
This leads me to suspect that somewhere in TCL the printf is disabled.
I see the following code snippet in TCL source:
#ifdef TCL_COMPILE_DEBUG
fprintf(stdout, " Starting stack top=%d\n", eePtr->stackTop);
fflush(stdout);
#endif
I rebuilt TCL with TCL_COMPILE_DEBUG enabled, but I still cannot see any output.
Any suggestions on how I should proceed from here?
It seems unlikely that the standard library's fprintf() is disabled. Instead, I see three main alternatives:
The fprintf() you have added is never being called. That could be because it's in the wrong place, because conditional compilation directives cause it to be omitted, or perhaps for some other reason.
The fprintf() being called is not the standard library's, and it does not do what you expect. It might instead be a local function in the TCL VM's code, or the TCL VM might #define it to something else altogether. Depending on exactly how you integrate TCL into your larger code, these possibilities might be limited in scope to just TCL.
stdout does not mean what you think it does inside the TCL code. This would almost surely be as a result of it being #defined to something else, for some reason important to the TCL VM. In that case, there might or might not be a way to get the real stdout in that scope.
I'd suggest you grep the TCL code you have integrated for the fprintf and stdout symbols, to look for macro definitions and alternative implementations. It would also be worthwhile to check the preprocessor output to make sure your call is still there (and is still the call you expected). If you are compiling with GCC, then you can preprocess your sources without compiling the result via gcc -E.
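As a contrived but runnable illustration of the third possibility (redefining stdout this way is technically not permitted by the standard, which is exactly why it produces such confusing symptoms; replacement_stream is an invented name):

#include <stdio.h>

static FILE *replacement_stream;
#define stdout replacement_stream /* what a stray #define can do */

int main(void)
{
    replacement_stream = fopen("/dev/null", "w");
    if (replacement_stream == NULL)
        return 1;
    fprintf(stdout, "this never reaches the terminal\n");
    fflush(stdout);
    return 0;
}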
I am writing a stateful scanner and I want to have a debugging symbol for every state change.
In my code I call a macro SETSTATE(ST_xxx) for instance, which does some nasty things, BUT I could easily also tell GCC to emit at that point a specific debugging symbol based on that name ST_xxx.
What I need to accomplish is setting a breakpoint in gdb.
I suppose it should be a #pragma or something.
If I only knew how ...
Though I might misunderstand the question: how about making a dummy function, calling it in SETSTATE, and then setting a breakpoint on that function?
For example:
void dummy_breakpoint() {}
#define SETSTATE(st) dummy_breakpoint(); ...usual process...
Putting break dummy_breakpoint in .gdbinit might save some labor.
EDIT:
How about setting a watch-point in SETSTATE like the following, and
setting watch dummy_variable in .gdbinit?
char dummy_variable; /* global variable */
#define SETSTATE(st) ++ dummy_variable; ...usual process...
However, this might slow the program's execution if your environment doesn't provide hardware watchpoints...
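Putting both suggestions together into one runnable sketch (the do/while wrapper is an addition of mine, to keep the macro safe inside if/else; the states are invented):

#include <stdio.h>

void dummy_breakpoint(void) {} /* "break dummy_breakpoint" in .gdbinit */
char dummy_variable;           /* or "watch dummy_variable" */

#define SETSTATE(st) do { dummy_breakpoint(); ++dummy_variable; state = (st); } while (0)

enum { ST_START, ST_SCAN, ST_DONE };

int main(void)
{
    int state = ST_START;
    SETSTATE(ST_SCAN); /* gdb stops on every state change */
    SETSTATE(ST_DONE);
    printf("final state = %d\n", state);
    return 0;
}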
If you want debugging symbols as a point of reference, you can use labels to create them (just make sure they aren't stripped out of the debug info if unreferenced), though having never used gdb I'm not sure whether it will pick up labels the way OllyDbg does with obj scanning/analysis. But seeing as it's breakpoints you're after, why not just use a debug trap, like MSVC's __debugbreak()? Something from here might be of use for the gcc variant: Is there a portable equivalent to DebugBreak()/__debugbreak?
At compile time, use -D ST_xxx. I use this for enabling debugging messages via macros; it defines the constant ST_xxx with the value 1.
A programmer I respect said that in C code, #if and #ifdef should be avoided at all costs, except possibly in header files. Why would it be considered bad programming practice to use #ifdef in a .c file?
Hard to maintain. It is better to use interfaces to abstract platform-specific code than to abuse conditional compilation by scattering #ifdefs all over your implementation.
E.g.
void foo() {
#ifdef WIN32
    // do Windows stuff
#else
    // do Posix stuff
#endif
    // do general stuff
}
That is not nice. Instead, have files foo_w32.c and foo_psx.c:
foo_w32.c:
void foo() {
// windows implementation
}
foo_psx.c:
void foo() {
// posix implementation
}
foo.h:
void foo(); // common interface
Then have two makefiles [1]: Makefile.win and Makefile.psx, each compiling the appropriate .c file and linking against the right object.
Minor amendment:
If foo()'s implementation depends on some code that appears on all platforms, e.g. common_stuff() [2], simply call that in your foo() implementations.
E.g.
common.h:
void common_stuff(); // May be implemented in common.c, or maybe has multiple
// implementations in common_{A, B, ...} for platforms
// { A, B, ... }. Irrelevant.
foo_{w32, psx}.c:
void foo() { // Win32/Posix implementation
    // Stuff
    ...
    if (bar) {
        common_stuff();
    }
}
While you may be repeating a function call to common_stuff(), you can't parameterize your definition of foo() per platform unless it follows a very specific pattern. Generally, platform differences require completely different implementations and don't follow such patterns.
[1] Makefiles are used here illustratively. Your build system may not use make at all, e.g. if you use Visual Studio, CMake, SCons, etc.
[2] Even if common_stuff() actually has multiple implementations, varying per platform.
(Somewhat off the asked question)
I saw a tip once suggesting the use of #if(n)def/#endif blocks for debugging/isolating code instead of commenting it out.
It was suggested to help avoid situations in which the section to be commented already had documentation comments and a solution like the following would have to be implemented:
/* <-- begin debug cmnt
if (condition) /* comment */
/* <-- restart debug cmnt
{
    ....
}
*/ <-- end debug cmnt
Instead, this would be:
#ifdef IS_DEBUGGED_SECTION_X
if (condition) /* comment */
{
....
}
#endif
Seemed like a neat idea to me. Wish I could remember the source so I could link it :(
Because then, when you search the code, you don't know whether what you find is actually compiled in or out without reading it.
Because they should be used for OS/platform dependencies, and therefore that kind of code should be in files like io_win.c or io_macos.c.
My interpretation of this rule:
Your (algorithmic) program logic should not be influenced by preprocessor defines; the functioning of your code should always be clear from the code itself. Any other form of logic (platform, debug) should be abstracted away in header files.
This is more a guideline than a strict rule, IMHO.
But I agree that C-syntax-based solutions are preferable to preprocessor magic.
Conditional compilation is hard to debug. One has to know all the settings in order to figure out which block of code the program will execute.
I once spent a week debugging a multi-threaded application that used conditional compilation. The problem was that the identifier was not spelled the same. One module used #if FEATURE_1 while the problem area used #if FEATURE1 (Notice the underscore).
I am a big proponent of letting the makefile handle the configuration by including the correct libraries or objects. It makes the code more readable. Also, the majority of the code becomes configuration-independent, and only a few files are configuration-dependent.
A reasonable goal but not so great as a strict rule
The advice to try and keep preprocessor conditionals in header files is good, as it allows you to select interfaces conditionally but not litter the code with confusing and ugly preprocessor logic.
However, there is lots and lots and lots of code that looks like the made-up example below, and I don't think there is a clearly better alternative. I think you have cited a reasonable guideline but not a great gold-tablet-commandment.
#if defined(SOME_IOCTL)
case SOME_IOCTL:
...
#endif
#if defined(SOME_OTHER_IOCTL)
case SOME_OTHER_IOCTL:
...
#endif
#if defined(YET_ANOTHER_IOCTL)
case YET_ANOTHER_IOCTL:
...
#endif
CPP is a separate (non-Turing-complete) macro language on top of (usually) C or C++. As such, it's easy to get mixed up between it and the base language if you're not careful. That's the usual argument against macros as opposed to, e.g., C++ templates, anyway. But #ifdef? Just go and try to read someone else's code you've never seen before that has a bunch of #ifdefs.
e.g. try reading these Reed-Solomon multiply-a-block-by-a-constant-Galois-value functions:
http://parchive.cvs.sourceforge.net/viewvc/parchive/par2-cmdline/reedsolomon.cpp?revision=1.3&view=markup
If you didn't have the following hint, it would take you a minute to figure out what's going on: there are two versions, one simple, and one using a pre-computed lookup table (LONGMULTIPLY). Even so, have fun tracing the #if BYTE_ORDER == __LITTLE_ENDIAN. I found it a lot easier to read when I rewrote that bit to use a le16_to_cpu function (whose definition was inside #if clauses), inspired by Linux's byteorder.h stuff.
If you need different low-level behaviour depending on the build, try to encapsulate that in low-level functions that provide consistent behaviour everywhere, instead of putting #if stuff right inside your larger functions.
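For example, a sketch of such a helper in the spirit of le16_to_cpu; the __BYTE_ORDER__ test relies on GCC/Clang predefined macros, so other compilers need their own detection:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#if !defined(__BYTE_ORDER__)
#error "add byte-order detection for this compiler"
#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
static uint16_t le16_to_cpu(uint16_t v) { return v; } /* host already little-endian */
#else
static uint16_t le16_to_cpu(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8)); /* byte swap for big-endian hosts */
}
#endif

int main(void)
{
    unsigned char wire[2] = { 0x34, 0x12 }; /* 0x1234 stored little-endian */
    uint16_t raw;
    memcpy(&raw, wire, sizeof raw);
    printf("0x%04x\n", le16_to_cpu(raw)); /* prints 0x1234 on any host */
    return 0;
}

Callers just write le16_to_cpu(x) and never see the #if.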
By all means, favor abstraction over conditional compilation. As anyone who has written portable software can tell you, however, the number of environmental permutations is staggering. Some design discipline can help, but sometimes the choice is between elegance and meeting a schedule. In such cases, a compromise might be necessary.
Consider the situation where you are required to provide fully tested code, with 100% branch coverage etc. Now add in conditional compilation.
Each unique symbol used to control conditional compilation doubles the number of code variants you need to test: with one symbol you have two variants; with two symbols, four different ways to compile your code; and so on.
And this only applies for boolean tests such as #ifdef. You can easily imagine the problem if a test is of the form #if VARIABLE == SCALAR_VALUE_FROM_A_RANGE.
If your code will be compiled with different C compilers, and you use compiler-specific features, then you may need to determine which predefined macros are available.
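A small sketch of that kind of dispatch, keyed on predefined macros that do exist (_MSC_VER for MSVC, __GNUC__ for GCC/Clang); the FORCE_INLINE name is invented, and the point is to confine the test to one header so the rest of the code stays clean:

#include <stdio.h>

#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#elif defined(__GNUC__) /* also defined by Clang */
#define FORCE_INLINE inline __attribute__((always_inline))
#else
#define FORCE_INLINE inline
#endif

static FORCE_INLINE int add(int a, int b) { return a + b; }

int main(void)
{
    printf("%d\n", add(2, 3));
    return 0;
}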
It's true that #if/#endif does complicate reading the code. However, I have seen a lot of real-world code that uses it with no issues and is still going strong. So there may be better ways to avoid #if/#endif, but using it is not that bad if proper care is taken.
I have been thinking about the difficulty involved in C error handling... like, who actually does
if(printf("hello world")==-1){exit(1);}
But you break common conventions by not writing such verbose, and usually useless, code. Well, what if you had a wrapper around the libc, so you could do something like this:
//main...
error_catchall(my_errors);
printf("hello world"); //this will automatically call my_errors on an error of printf
ignore = 1; //this makes it so the function will return like normal and we can check error values ourself
if(fopen.... //we want to know if the file opened or not and handle it ourself.
}

int my_errors(){
    if(ignore == 0){
        _exit(1); //exit if we aren't handling this error by flagging ignore
    }
    return 0;
    //this is called when there is an error anywhere in the libc
}
...
I am considering making such a wrapper, as I am synthesizing my own BSD-licensed libc (so I already have to touch the untouchable...), but I would like to know what people think about it.
Would this actually work in real life and be more useful than returning -1?
Over the years I've seen several attempts to mimic try/catch in ANSI C:
http://simgrid.gforge.inria.fr/doc/group__XBT__ex.html
http://llg.cubic.org/trycatch/
I think the try/catch approach is simpler than yours.
But how would you catch an error when it was expected? For example, I might expect a file open to fail and want to deal with it in code rather than in the generic error catcher.
To do this you would need two versions of every function: one that traps errors and one that returns them.
I did something like this long ago without modifying the library. I just created wrapper functions for common calls that did error checking. So my errchk_malloc call checked the return and raised an error if the allocation failed. Then I just used this version everywhere in place of the built-in malloc.
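Something along these lines; the original errchk_malloc isn't shown in the post, so the details here are assumptions:

#include <stdio.h>
#include <stdlib.h>

static void *errchk_malloc_impl(size_t size, const char *file, int line)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "%s:%d: allocation of %zu bytes failed\n", file, line, size);
        exit(EXIT_FAILURE); /* or hand off to your error handler */
    }
    return p;
}

/* use this everywhere in place of the built-in malloc */
#define errchk_malloc(size) errchk_malloc_impl((size), __FILE__, __LINE__)

int main(void)
{
    char *buf = errchk_malloc(64); /* never returns NULL */
    free(buf);
    return 0;
}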
If the goal is to exit cleanly as soon as you encounter an error, that's OK... but if you want to do even a minimum of error recovery, I can't see how your approach is useful.
To avoid this kind of problem, I sometimes use LD_PRELOAD to slot in my own error management (only for my own projects, since this is not really good practice...).
Do you really want to change the standard behavior of your libc? You could instead add a few extensions around common functions.
For example, Gnome uses g_malloc and g_try_malloc. The former will abort on failure, while the latter will simply yield a null pointer, like malloc.
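A sketch of that two-variant style (the names here are invented, mirroring Gnome's pair):

#include <stdio.h>
#include <stdlib.h>

/* aborts on failure, so callers never have to check for NULL */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fputs("out of memory\n", stderr);
        abort();
    }
    return p;
}

/* behaves like plain malloc: returns NULL and lets the caller recover */
static void *try_xmalloc(size_t size)
{
    return malloc(size);
}

int main(void)
{
    char *a = xmalloc(16);     /* never NULL */
    char *b = try_xmalloc(16); /* may be NULL; the caller decides */
    if (b == NULL)
        fputs("recovering...\n", stderr);
    free(a);
    free(b);
    return 0;
}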