Should input values outside of contract be unit tested? - c

I'm refactoring a huge C library with legacy code, where many functions take pointers in their argument lists. I also write unit tests for the newly created functions to make sure I haven't broken anything (aside from all the good things that come with unit tests, that's my primary motivation). I'm also not allowed to change the library's API, only the code below it.
Usually the result of my work looks like this (it's proprietary code, so I can't post actual examples):
externalApi.h:
/**
* Documentation1
*/
bool someExportedFunction1(uint8_t* buffer, size_t len);
/**
* Documentation2
*/
bool someExportedFunction2();
refactoredCode.h:
/**
* Documentation of internal function1
*/
bool internalFunction1(uint8_t* buffer, size_t len);
/**
* Documentation of internal function2
*/
bool internalFunction2(uint8_t* buffer, size_t len);
externalApi.c:
bool someExportedFunction1(uint8_t* buffer, size_t len)
{
if (NULL == buffer)
{
ERROR("Meaningful error log");
return false;
}
if (!internalFunction1(buffer, len))
{
ERROR("Other error log");
return false;
}
if (!internalFunction2(buffer, len))
{
ERROR("Yet another error log");
return false;
}
return true;
}
bool someExportedFunction2()
{
uint8_t lBuffer[10] = {0};
if (!internalFunction1(lBuffer, sizeof(lBuffer)))
{
ERROR("Interesting error log");
return false;
}
uint8_t* ptr = malloc(10);
if (NULL == ptr)
{
ERROR("Malloc error");
return false;
}
if (!internalFunction2(ptr, 10))
{
free(ptr);
ERROR("Boring error log");
return false;
}
free(ptr);
return true;
}
refactoredCode.c:
bool internalFunction1(uint8_t* buffer, size_t len)
{
if (NULL == buffer)
{
ERROR("Guess what, a meaningful error log");
return false;
}
// Do stuff
return true;
}
bool internalFunction2(uint8_t* buffer, size_t len)
{
if (NULL == buffer)
{
ERROR("Last meaningful error log");
return false;
}
// Do stuff
return true;
}
Those are just simple examples to illustrate the point; there are countless other versions of the same problem.
Now, all the unit tests I write include checking what happens if I pass NULL as an argument, no matter how low-level the function I'm testing is (even if I'm 100% sure there is no way NULL could ever be passed as an argument).
One of my coworkers, however, disagrees, saying that NULL is outside the function's contract and that the proper approach would be to write assertions instead of ifs. Such assertions should not be unit tested (not even with an EXPECT_DEATH macro), because unit tests are also documentation of the proper usage of a function, and using NULL is not allowed.
We can sum up arguments from both sides like this:
Pro "write if and unit test it" approach:
With ERROR logs we can easily find not only which function detected NULL, but also where it was called from (a pseudo-stacktrace), making debugging significantly easier
If we don't test the NULL checks, how can we be sure that we covered all the cases (with assertions or ifs, it doesn't matter)?
People generally either don't read documentation until it's too late, or sometimes they code on a Friday afternoon - we can't expect them to be perfect programmers all the time
We can't predict what changes will be made to the code in the future, so it's better to have tests warn us about every change, so we can decide whether it was intentional or not
If an assertion fires in a test (and we didn't expect it, so it's purely due to an error in the code), the whole testing application is killed and other, unrelated tests are not performed until we fix the problem.
Pro "assert and don't test" approach:
Maintaining such tests may cost us a lot of time, because we have no standard way of handling programming errors like passing NULL
Passing a NULL pointer should be outside the function's contract - handling NULL changes the contract, so we would always have to check for it, which means a lot of future work adding those ifs
Assertions don't go into release code - which is good, because by the time we make a release the code should be tested sufficiently to make sure that no mistake slipped through
Passing NULL is usually undefined behaviour - upper layers of the application usually can't handle such an error properly anyway, so it's safer to simply kill the process
Unit tests should be an example of how the function should be used, and passing NULL is exactly the opposite of that
In the end neither of us could convince the other; the debate finished without a conclusion and we fell back on argumentum ab auctoritate, because the other guy is simply far more experienced than I am.
But I'm still not convinced, so I'm asking the question here:
Taking into consideration all the arguments for both approaches that I presented, and all those I'm not aware of:
Should arguments outside a function's contract be checked with an if and unit tested, or asserted and not tested?

If a certain section of your code is performance sensitive, avoiding these checks in those sections makes sense.
On all other parts of the code, it makes sense to have all the checks and create log files to help with tracking problems down. Defensive programming like that will save your back many times.
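For what it's worth, the two approaches are not mutually exclusive either. Here is a minimal sketch (not taken from the library in the question, and using fprintf in place of the ERROR macro) of a function that asserts the contract for developers but still defends itself in release builds, where NDEBUG removes the assert:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

bool internalFunction1(uint8_t* buffer, size_t len)
{
    assert(buffer != NULL);    /* documents the contract, trips in debug builds */
    if (NULL == buffer)        /* still protects release builds */
    {
        fprintf(stderr, "internalFunction1: NULL buffer\n");
        return false;
    }
    (void)len;
    /* Do stuff */
    return true;
}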


if/else strive for logical completeness in single return function

I'm attempting to exist at the crossroads of MISRA C and CERT C and setting the lofty goal of no exceptions. The two rules most against my normal patterns are (paraphrased):
MISRA : A function should only have one return
CERT : Strive for logical completeness
The CERT rule keeps catching me when I have nothing to say in an else. For example:
static int32_t SomeFunc()
{
int32_t retval = PROJECT_ERROR_GENERIC;
retval = ChildFuncOne();
if (retval == PROJECT_SUCCESS)
{
retval = ChildFuncTwo();
}
//Common cleanup
return retval;
}
Assuming there is no specific cleanup for the failure of ChildFuncOne, I have nothing to say in else. Am I missing another way to lay out this function?
Option 1 is an else with an empty body:
static int32_t SomeFunc(void)
{
int32_t retval = PROJECT_ERROR_GENERIC;
retval = ChildFuncOne();
if (retval == PROJECT_SUCCESS)
{
retval = ChildFuncTwo();
}
else
{
// yes, I did consider every possible retval
}
//Common cleanup
return retval;
}
Option 2 is to add a second variable, and then set that variable in the if and the else. Note that I reversed the sense of the if, since that order makes more sense to me. YMMV.
static int32_t SomeFunc2(void)
{
int32_t retval = PROJECT_ERROR_GENERIC;
int32_t finalretval = PROJECT_ERROR_GENERIC;
retval = ChildFuncOne();
if (retval != PROJECT_SUCCESS)
{
finalretval = retval;
}
else
{
finalretval = ChildFuncTwo();
}
//Common cleanup
return finalretval;
}
The problem with option 2 is that it's easy to mix up the two variables that have similar names and uses. Which is where these coding standards make your code more likely to have bugs, rather than less.
The only problem with your code I can find is the obsolescent-style function definition. static int32_t SomeFunc() should be static int32_t SomeFunc(void). The former has been an obsolescent form in C for some 25-30 years and also violates MISRA 8.2 for that reason. Other than that, the code is fine and MISRA compliant.
A style comment beyond MISRA is that I would use an enum over int32_t.
Regarding CERT I assume you refer to MSC01-C. This is a fuzzy and unclear rule, with just a bunch of non-related code examples. I would just ignore it.
The corresponding MISRA C:2012 15.7 and 16.1 are much clearer and can be summarized as:
All else if should end with an else.
All switch should contain default.
The rationale is defensive programming and self-documenting code: "yes, I have definitely considered this outcome". Likely this is what CERT C tried (and failed) to say as well. They refer to various "else if with else" rules in their crossref.
This isn't applicable to your code. Nobody requires that every if ends with an else, that would be a ridiculous requirement.
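To illustrate what those two rules do ask for, compliant code looks roughly like this (names and enum values here are made up, not from your project):

typedef enum { GEAR_PARK, GEAR_DRIVE, GEAR_REVERSE } gear_t;

static void handle_gear(gear_t gear)
{
    if (gear == GEAR_PARK)
    {
        /* hold the vehicle */
    }
    else if (gear == GEAR_DRIVE)
    {
        /* drive */
    }
    else
    {
        /* yes, every remaining value was considered and lands here */
    }

    switch (gear)
    {
        case GEAR_PARK:
            break;
        case GEAR_DRIVE:
            break;
        case GEAR_REVERSE:
            break;
        default:
            /* unreachable if the enum is used correctly, but required anyway */
            break;
    }
}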
Am I missing another way to lay out this function?
It's sound practice to set the result variable to a known default state at the beginning of a function and later overwrite it if needed. So your function is not only compliant, but also fairly canonical style as safety-related programming goes.
So other than the () to (void), there's no need to change a thing.
My attempt:
static int32_t SomeFunc()
{
int32_t retval = ChildFuncOne();
retval = (retval == PROJECT_SUCCESS)? ChildFuncTwo() : retval;
retval = (retval == PROJECT_SUCCESS)? ChildFuncThree() : retval;
return retval;
}
Basically, the retval is set by the first function, and only if that result is PROJECT_SUCCESS, will the second function get called and set the retval.
If the retval is anything other than success, it remains unchanged, the second function is never called, and retval is returned.
I even show how it can be chained for an arbitrary number of functions.
I'm a bit unclear on what you mean by "common cleanup", so if you need different cleanup operations depending on which functions succeeded and failed, that will take extra work.
You should stop trying to appease the MISRA and CERT gods and just write clear code. The code that you posted is fine. Keep it that way.

In C, what is the best practice for handling errors in your own functions? [duplicate]

What do you consider "best practice" when it comes to handling errors in a consistent way in a C library?
There are two ways I've been thinking of:
Always return error code. A typical function would look like this:
MYAPI_ERROR getObjectSize(MYAPIHandle h, int* returnedSize);
The "always provide an error pointer" approach:
int getObjectSize(MYAPIHandle h, MYAPI_ERROR* returnedError);
When using the first approach it's possible to write code like this where the error handling check is directly placed on the function call:
int size;
if(getObjectSize(h, &size) != MYAPI_SUCCESS) {
// Error handling
}
Which looks better than the error handling code here.
MYAPIError error;
int size;
size = getObjectSize(h, &error);
if(error != MYAPI_SUCCESS) {
// Error handling
}
However, I think using the return value for returning data makes the code more readable. It's obvious that something was written to the size variable in the second example.
Do you have any ideas on why I should prefer any of those approaches or perhaps mix them or use something else? I'm not a fan of global error states since it tends to make multi threaded use of the library way more painful.
EDIT:
C++-specific ideas on this would also be interesting to hear about, as long as they don't involve exceptions, since exceptions aren't an option for me at the moment...
I've used both approaches, and they both worked fine for me. Whichever one I use, I always try to apply this principle:
If the only possible errors are programmer errors, don't return an error code, use asserts inside the function.
An assertion that validates the inputs clearly communicates what the function expects, while too much error checking can obscure the program logic. Deciding what to do for all the various error cases can really complicate the design. Why figure out how functionX should handle a null pointer if you can instead insist that the programmer never pass one?
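As a minimal sketch of that principle (the function name is just an illustration, not from any particular codebase):

#include <assert.h>
#include <string.h>

/* NULL here would be a programmer error, so it is asserted, not reported. */
size_t name_length(const char *name)
{
    assert(name != NULL);
    return strlen(name);
}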
I like the error as return-value way. If you're designing the api and you want to make use of your library as painless as possible think about these additions:
store all possible error-states in one typedef'ed enum and use it in your lib. Don't just return ints or even worse, mix ints or different enumerations with return-codes.
provide a function that converts errors into something human readable. Can be simple. Just error-enum in, const char* out.
I know this idea makes multithreaded use a bit difficult, but it would be nice if the application programmer could set a global error callback (a rough sketch follows below). That way they would be able to put a breakpoint in the callback during bug-hunt sessions.
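A minimal sketch of that last idea, the global error callback; all names below are illustrative, not from any particular library:

typedef enum { MYLIB_OK = 0, MYLIB_ERROR_NOMEM, MYLIB_ERROR_IO } mylib_error_t;

typedef void (*mylib_error_cb)(mylib_error_t code, const char *msg);

static mylib_error_cb g_error_cb = 0;

void mylib_set_error_callback(mylib_error_cb cb)
{
    g_error_cb = cb;
}

/* Called internally by the library whenever it is about to return an error. */
static void mylib_report_error(mylib_error_t code, const char *msg)
{
    if (g_error_cb)
    {
        g_error_cb(code, msg);   /* put a breakpoint here during bug hunts */
    }
}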
There's a nice set of slides from CMU's CERT with recommendations for when to use each of the common C (and C++) error handling techniques. One of the best slides is this decision tree:
I would personally change two things about this flowchart.
First, I would clarify that sometimes objects should use return values to indicate errors. If a function only extracts data from an object but doesn't mutate the object, then the integrity of the object itself is not at risk and indicating errors using a return value is more appropriate.
Second, it's not always appropriate to use exceptions in C++. Exceptions are good because they can reduce the amount of source code devoted to error handling, they mostly don't affect function signatures, and they're very flexible in what data they can pass up the callstack. On the other hand, exceptions might not be the right choice for a few reasons:
C++ exceptions have very particular semantics. If you don't want those semantics, then C++ exceptions are a bad choice. An exception must be dealt with immediately after being thrown and the design favors the case where an error will need to unwind the callstack a few levels.
C++ functions that throw exceptions can't later be wrapped to not throw exceptions, at least not without paying the full cost of exceptions anyway. Functions that return error codes can be wrapped to throw C++ exceptions, making them more flexible. C++'s new gets this right by providing a non-throwing variant.
C++ exceptions are relatively expensive but this downside is mostly overblown for programs making sensible use of exceptions. A program simply shouldn't throw exceptions on a codepath where performance is a concern. It doesn't really matter how fast your program can report an error and exit.
Sometimes C++ exceptions are not available. Either they're literally not available in one's C++ implementation, or one's code guidelines ban them.
Since the original question was about a multithreaded context, I think the local error indicator technique (what's described in SirDarius's answer) was underappreciated in the original answers. It's threadsafe, doesn't force the error to be immediately dealt with by the caller, and can bundle arbitrary data describing the error. The downside is that it must be held by an object (or I suppose somehow associated externally) and is arguably easier to ignore than a return code.
I use the first approach whenever I create a library. There are several advantages of using a typedef'ed enum as a return code.
If the function returns a more complicated output such as an array and its length you do not need to create arbitrary structures to return.
rc = func(..., int **return_array, size_t *array_length);
It allows for simple, standardized error handling.
if ((rc = func(...)) != API_SUCCESS) {
/* Error Handling */
}
It allows for simple error handling in the library function.
/* Check for valid arguments */
if (NULL == return_array || NULL == array_length)
return API_INVALID_ARGS;
Using a typedef'ed enum also allows for the enum name to be visible in the debugger. This allows for easier debugging without the need to constantly consult a header file. Having a function to translate this enum into a string is helpful as well.
The most important issue regardless of approach used is to be consistent. This applies to function and argument naming, argument ordering and error handling.
Returning error code is the usual approach for error handling in C.
But recently we experimented with the outgoing error pointer approach as well.
It has some advantages over the return value approach:
You can use the return value for more meaningful purposes.
Having to write out that error parameter reminds you to handle the error or propagate it. (You never forget to check the return value of fclose, do you?)
If you use an error pointer, you can pass it down as you call functions. If any of the functions set it, the value won't get lost.
By setting a data breakpoint on the error variable, you can catch where the error first occurred. By setting a conditional breakpoint you can catch specific errors too.
It makes it easier to automate checking whether you handle all errors. The coding convention may force you to call your error pointer err and require it to be the last argument, so a script can match the string err); and then check whether it's followed by if (*err. In practice we made macros called CER (check err return) and CEG (check err goto), sketched below, so you don't need to type that out every time you just want to return on error, and it reduces the visual clutter.
Not all functions in our code have this outgoing parameter, though.
This outgoing parameter is used for cases where you would normally throw an exception.
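The answer above doesn't show what CER/CEG actually look like, but macros in that spirit might be sketched like this (read_file and parse_file are stand-in helpers invented for the example, not real API):

#include <stddef.h>

/* Stand-in helpers that report failure through an int *err out-parameter. */
static void read_file(const char *path, int *err)  { if (path == NULL) { *err = 1; } }
static void parse_file(const char *path, int *err) { (void)path; (void)err; }

/* "Check err return" / "check err goto" style convenience macros. */
#define CER(err) do { if (*(err)) { return; } } while (0)
#define CEG(err) do { if (*(err)) { goto cleanup; } } while (0)

void load_config(const char *path, int *err)
{
    read_file(path, err);  CER(err);   /* bail out early; the error stays in *err */
    parse_file(path, err); CER(err);
}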
Here's a simple program to demonstrate the first 2 bullets of Nils Pipenbrinck's answer here.
His first 2 bullets are:
store all possible error-states in one typedef'ed enum and use it in your lib. Don't just return ints or even worse, mix ints or different enumerations with return-codes.
provide a function that converts errors into something human readable. Can be simple. Just error-enum in, const char* out.
Assume you have written a module named mymodule. First, in mymodule.h, you define your enum-based error codes, and you write some error strings which correspond to these codes. Here I am using an array of C strings (char *), which only works well if your first enum-based error code has value 0, and you don't manipulate the numbers thereafter. If you do use error code numbers with gaps or other starting values, you'll simply have to change from using a mapped C-string array (as I do below) to using a function which uses a switch statement or if / else if statements to map from enum error codes to printable C strings (which I don't demonstrate). The choice is yours.
mymodule.h
/// @brief Error codes for library "mymodule"
typedef enum mymodule_error_e
{
/// No error
MYMODULE_ERROR_OK = 0,
/// Invalid arguments (ex: NULL pointer where a valid pointer is required)
MYMODULE_ERROR_INVARG,
/// Out of memory (RAM)
MYMODULE_ERROR_NOMEM,
/// Make up your error codes as you see fit
MYMODULE_ERROR_MYERROR,
// etc etc
/// Total # of errors in this list (NOT AN ACTUAL ERROR CODE);
/// NOTE: that for this to work, it assumes your first error code is value 0 and you let it naturally
/// increment from there, as is done above, without explicitly altering any error values above
MYMODULE_ERROR_COUNT,
} mymodule_error_t;
// Array of strings to map enum error types to printable strings
// - see important NOTE above!
const char* const MYMODULE_ERROR_STRS[] =
{
"MYMODULE_ERROR_OK",
"MYMODULE_ERROR_INVARG",
"MYMODULE_ERROR_NOMEM",
"MYMODULE_ERROR_MYERROR",
};
// To get a printable error string
const char* mymodule_error_str(mymodule_error_t err);
// Other functions in mymodule
mymodule_error_t mymodule_func1(void);
mymodule_error_t mymodule_func2(void);
mymodule_error_t mymodule_func3(void);
mymodule.c contains my mapping function to map from enum error codes to printable C strings:
mymodule.c
#include "mymodule.h"
#include <stdio.h>
/// @brief Function to get a printable string from an enum error type
/// @param[in] err a valid error code for this module
/// @return A printable C string corresponding to the error code input above, or NULL if an invalid error code
/// was passed in
const char* mymodule_error_str(mymodule_error_t err)
{
const char* err_str = NULL;
// Ensure error codes are within the valid array index range
if (err >= MYMODULE_ERROR_COUNT)
{
goto done;
}
err_str = MYMODULE_ERROR_STRS[err];
done:
return err_str;
}
// Let's just make some empty dummy functions to return some errors; fill these in as appropriate for your
// library module
mymodule_error_t mymodule_func1(void)
{
return MYMODULE_ERROR_OK;
}
mymodule_error_t mymodule_func2(void)
{
return MYMODULE_ERROR_INVARG;
}
mymodule_error_t mymodule_func3(void)
{
return MYMODULE_ERROR_MYERROR;
}
main.c contains a test program to demonstrate calling some functions and printing some error codes from them:
main.c
#include "mymodule.h"
#include <stdio.h>
int main()
{
printf("Demonstration of enum-based error codes in C (or C++)\n");
printf("err code from mymodule_func1() = %s\n", mymodule_error_str(mymodule_func1()));
printf("err code from mymodule_func2() = %s\n", mymodule_error_str(mymodule_func2()));
printf("err code from mymodule_func3() = %s\n", mymodule_error_str(mymodule_func3()));
return 0;
}
Output:
Demonstration of enum-based error codes in C (or C++)
err code from mymodule_func1() = MYMODULE_ERROR_OK
err code from mymodule_func2() = MYMODULE_ERROR_INVARG
err code from mymodule_func3() = MYMODULE_ERROR_MYERROR
References:
You can run this code yourself here: https://onlinegdb.com/ByEbKLupS.
My answer I frequently reference to see this type of error handling: STM32 how to get last reset status
I personally prefer the former approach (returning an error indicator).
Where necessary the return result should just indicate that an error occurred, with another function being used to find out the exact error.
In your getSize() example I'd consider that sizes must always be zero or positive, so returning a negative result can indicate an error, much like UNIX system calls do.
I can't think of any library that I've used that goes for the latter approach with an error object passed in as a pointer. stdio, etc all go with a return value.
The UNIX approach is most similar to your second suggestion. Return either the result or a single "it went wrong" value. For instance, open will return the file descriptor on success or -1 on failure. On failure it also sets errno, an external global integer to indicate which failure occurred.
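A sketch of that pattern, checking errno via strerror after a failed open (the path is illustrative):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/no/such/file", O_RDONLY);
    if (fd == -1)
    {
        /* errno says why the call failed */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }
    close(fd);
    return 0;
}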
For what it's worth, Cocoa has also been adopting a similar approach. A number of methods return BOOL, and take an NSError ** parameter, so that on failure they set the error and return NO. Then the error handling looks like:
NSError *error = nil;
if ([myThing doThingError: &error] == NO)
{
// error handling
}
which is somewhere between your two options :-).
Use setjmp.
http://en.wikipedia.org/wiki/Setjmp.h
http://aszt.inf.elte.hu/~gsd/halado_cpp/ch02s03.html
http://www.di.unipi.it/~nids/docs/longjump_try_trow_catch.html
#include <setjmp.h>
#include <stdio.h>
jmp_buf x;
void f()
{
longjmp(x,5); // throw 5;
}
int main()
{
// output of this program is 5.
int i = 0;
if ( (i = setjmp(x)) == 0 )// try{
{
f();
} // } --> end of try{
else // catch(i){
{
switch( i )
{
case 1:
case 2:
default: fprintf( stdout, "error code = %d\n", i); break;
}
} // } --> end of catch(i){
return 0;
}
#include <stdio.h>
#include <setjmp.h>
#define TRY do{ jmp_buf ex_buf__; if( !setjmp(ex_buf__) ){
#define CATCH } else {
#define ETRY } }while(0)
#define THROW longjmp(ex_buf__, 1)
int
main(int argc, char** argv)
{
TRY
{
printf("In Try Statement\n");
THROW;
printf("I do not appear\n");
}
CATCH
{
printf("Got Exception!\n");
}
ETRY;
return 0;
}
When I write programs, during initialization, I usually spin off a thread for error handling, and initialize a special structure for errors, including a lock. Then, when I detect an error, through return values, I enter in the info from the exception into the structure and send a SIGIO to the exception handling thread, then see if I can't continue execution. If I can't, I send a SIGURG to the exception thread, which stops the program gracefully.
I have done a lot of C programming in the past, and I really appreciated the error-code return value. But it has several possible pitfalls:
Duplicate error numbers; this can be solved with a global errors.h file.
Forgetting to check the error code; this should be solved with a cluebat and long debugging hours. But in the end you will learn (or you will know that someone else will do the debugging).
I ran into this Q&A a number of times, and wanted to contribute a more comprehensive answer. I think the best way to think about this is how to return errors to the caller, and what you return.
How
There are 3 ways to return information from a function:
Return Value
Out Argument(s)
Out of Band, which includes non-local goto (setjmp/longjmp), file- or global-scoped variables, the file system, etc.
Return Value
You can only return a single value (object); however, it can be an arbitrarily complex value. Here is an example of an error returning function:
enum error hold_my_beer(void);
One benefit of return values is that it allows chaining of calls for less intrusive error handling:
!hold_my_beer() &&
!hold_my_cigarette() &&
!hold_my_pants() ||
abort();
This is not just about readability, but may also allow processing an array of such function pointers in a uniform way.
Out Argument(s)
You can return more than one object via arguments, but best practice suggests keeping the total number of arguments low (say, <= 4):
void look_ma(enum error *e, char *what_broke);
enum error e;
char what_broke[80];
look_ma(&e, what_broke);
if(e == FURNITURE) {
reorder(what_broke);
} else if(e == SELF) {
tell_doctor(what_broke);
}
This forces the caller to pass in an object, which may make it more likely that it gets checked. If you have a set of calls that all return errors, and you decide to allocate a new variable for each, then it adds some clutter in the caller.
Out of Band
The best known example is probably the (thread-local) errno variable, which the called function sets. It's very easy for the caller to not check this variable, and you only get one, which may be an issue if your function is complicated (for instance, two parts of the function returning the same error code).
With setjmp() you define a place and how you want to handle an int value, and you transfer control to that location via a longjmp(). See Practical usage of setjmp and longjmp in C.
What
Indicator
Code
Object
Callback
Indicator
An error indicator only tells you that there is a problem but nothing about the nature of said problem:
struct foo *f = foo_init();
if(!f) {
/// handle the absence of foo
}
This is the least powerful way for a function to communicate error state; however, it's perfect if the caller cannot respond to the error in a graduated manner anyways.
Code
An error code tells the caller about the nature of the problem, and may allow for a suitable response (from the above). It can be a return value or, like the look_ma() example above, an error argument.
Object
With an error object, the caller can be informed about arbitrarily complicated issues. For example, an error code and a suitable human-readable message. It can also inform the caller that multiple things went wrong, or an error per item when processing a collection:
struct collection friends;
enum error *reason = malloc(friends.size * sizeof(enum error));
...
ask_for_favor(friends, reason);
for(int i = 0; i < friends.size; i++) {
if(reason[i] == NOT_FOUND) find(friends[i]);
}
Instead of pre-allocating the error array, you can also (re)allocate it dynamically as needed of course.
Callback
Callback is the most powerful way to handle errors, as you can tell the function what behavior you would like to see happen when something goes wrong. A callback argument can be added to each function, or, if customization is only required per instance, stored in a struct like this:
struct foo {
...
void (*error_handler)(char *);
};
void default_error_handler(char *message) {
assert(message);
printf("%s", message);
}
void foo_set_error_handler(struct foo *f, void (*eh)(char *)) {
assert(f);
f->error_handler = eh;
}
struct foo *foo_init() {
struct foo *f = malloc(sizeof(struct foo));
foo_set_error_handler(f, default_error_handler);
return f;
}
struct foo *f = foo_init();
foo_something(f);
One interesting benefit of a callback is that it can be invoked multiple times, or not at all in the absence of errors, in which case there is no overhead on the happy path.
There is, however, an inversion of control. The calling code does not know if the callback was invoked. As such, it may make sense to use an indicator as well.
I was pondering this issue recently as well, and wrote up some macros for C that simulate try-catch-finally semantics using purely local return values. Hope you find it useful.
Here is an approach which I think is interesting, while requiring some discipline.
This assumes a handle-type variable is the instance on which all API functions operate.
The idea is that the struct behind the handle stores the previous error as a struct with the necessary data (code, message...), and the user is provided with a function that returns a pointer to this error object. Each operation will update the pointed-to object, so the user can check its status without even calling functions. As opposed to the errno pattern, the error code is not global, which makes the approach thread-safe, as long as each handle is properly used.
Example:
MyHandle * h = MyApiCreateHandle();
/* first call checks for pointer nullity, since we cannot retrieve error code
on a NULL pointer */
if (h == NULL)
return 0;
/* from here h is a valid handle */
/* get a pointer to the error struct that will be updated with each call */
MyApiError * err = MyApiGetError(h);
MyApiFileDescriptor * fd = MyApiOpenFile(h, "/path/to/file.ext");
/* we want to know what can go wrong */
if (err->code != MyApi_ERROR_OK) {
fprintf(stderr, "(%d) %s\n", err->code, err->message);
MyApiDestroy(h);
return 0;
}
MyApiRecord record;
/* here the API could refuse to execute the operation if the previous one
yielded an error, and eventually close the file descriptor itself if
the error is not recoverable */
MyApiReadFileRecord(h, &record, sizeof(record));
/* we want to know what can go wrong, here using a macro checking for failure */
if (MyApi_FAILED(err)) {
fprintf(stderr, "(%d) %s\n", err->code, err->message);
MyApiDestroy(h);
return 0;
}
The first approach is better IMHO:
It's easier to write the function that way. When you notice an error in the middle of the function, you just return an error value. With the second approach you need to assign the error value to one of the parameters and then return something... but what would you return - you don't have a correct value and you don't return an error value.
It's more popular, so it will be easier to understand and maintain.
I definitely prefer the first solution :
int size;
if(getObjectSize(h, &size) != MYAPI_SUCCESS) {
// Error handling
}
I would slightly modify it to:
int size;
MYAPIError rc;
rc = getObjectSize(h, &size);
if ( rc != MYAPI_SUCCESS) {
// Error handling
}
In addition, I would never mix a legitimate return value with an error value, even if the current scope of the function allows you to do so; you never know which way the function's implementation will go in the future.
And since we're already talking about error handling, I would suggest goto Error; for the error handling code, unless some undo function can be called to handle the error correctly.
What you could do instead of returning your error, and thus forbidding you from returning data with your function, is using a wrapper for your return type:
typedef struct {
enum {SUCCESS, ERROR} status;
union {
int errCode;
MyType value;
} ret;
} MyTypeWrapper;
Then, in the called function:
MyTypeWrapper MYAPIFunction(MYAPIHandle h) {
MyTypeWrapper wrapper;
// [...]
// If there is an error somewhere:
wrapper.status = ERROR;
wrapper.ret.errCode = MY_ERROR_CODE;
// Everything went well:
wrapper.status = SUCCESS;
wrapper.ret.value = myProcessedData;
return wrapper;
}
Please note that with this method, the wrapper will have the size of MyType plus one byte (on most compilers), which is quite profitable; and you won't have to push another argument on the stack when you call your function (returnedSize or returnedError in both of the methods you presented).
In addition to what has been said, prior to returning your error code, fire off an assert or similar diagnostic when an error is returned, as it will make tracing a lot easier. The way I do this is to have a customised assert that still gets compiled in at release but only gets fired when the software is in diagnostics mode, with an option to silently report to a log file or pause on screen.
I personally return error codes as negative integers with no_error as zero, but it does leave you open to the following possible bug:
if (MyFunc())
DoSomething();
An alternative is to have failure always returned as zero, and use a LastError() function to provide details of the actual error.
EDIT: If you only need access to the last error, and you don't work in a multithreaded environment:
You can return only true/false (or some kind of #define if you work in C and don't support bool variables), and have a global Error buffer that will hold the last error:
int getObjectSize(MYAPIHandle h, int* returnedSize);
MYAPI_ERROR LastError;
MYAPI_ERROR* getLastError() {return &LastError;}
#define FUNC_SUCCESS 1
#define FUNC_FAIL 0
if(getObjectSize(h, &size) != FUNC_SUCCESS ) {
MYAPI_ERROR* error = getLastError();
// error handling
}
The second approach lets the compiler produce more optimized code, because when the address of a variable is passed to a function, the compiler cannot keep its value in registers during subsequent calls to other functions. The completion code is usually used only once, just after the call, whereas the "real" data returned from the call may be used more often.
I prefer error handling in C using the following technique:
struct lnode *insert(char *data, int len, struct lnode *list) {
struct lnode *p, *q;
uint8_t good;
struct {
uint8_t alloc_node : 1;
uint8_t alloc_str : 1;
} cleanup = { 0, 0 };
// allocate node.
p = (struct lnode *)malloc(sizeof(struct lnode));
good = cleanup.alloc_node = (p != NULL);
// good? then allocate str
if (good) {
p->str = (char *)malloc(sizeof(char)*len);
good = cleanup.alloc_str = (p->str != NULL);
}
// good? copy data
if(good) {
memcpy ( p->str, data, len );
}
// still good? insert in list
if(good) {
if(NULL == list) {
p->next = NULL;
list = p;
} else {
q = list;
while(q->next != NULL && good) {
// duplicate found--not good
good = (strcmp(q->str,p->str) != 0);
q = q->next;
}
if (good) {
p->next = q->next;
q->next = p;
}
}
}
// not-good? cleanup.
if(!good) {
if(cleanup.alloc_str) free(p->str);
if(cleanup.alloc_node) free(p);
}
// good? return list or else return NULL
return (good ? list : NULL);
}
Source: http://blog.staila.com/?p=114
In addition to the other great answers, I suggest that you try to separate the error flag and the error code in order to save one line on each call, i.e.:
if( !doit(a, b, c, &errcode) )
{ /* handle */
  /* thine  */
  /* error  */
}
When you have lots of error-checking, this little simplification really helps.
I have seen five main approaches used in error reporting by functions in C:
return value with no error code reporting or no return value
return value that is an error code only
return value that is a valid value or an error code value
return value indicating an error with some way of fetching an error code possibly with error context information
function argument that returns a value with an error code possibly with error context information
In addition to the choice of function error return mechanism there is also the consideration of error code mnemonics and ensuring that the error code mnemonics do not clash with any other error code mnemonics being used. Typically this requires the use of a Three Letter Prefix approach to the naming of mnemonics, defining them with #define, enum, or static const int. See this discussion: "static const" vs "#define" vs "enum".
There are a couple of different outcomes once an error is detected, and that may be a consideration in how functions provide error codes and error information. These outcomes are really divided into two camps, recoverable errors and unrecoverable errors:
document the system state and then abort
wait and retry the failed action
notify a human being and request assistance
continue execution in a degraded state
An error type may use more than one of these outcomes depending on the context of the error. For instance, a file open that fails because the file doesn't exist may be retried with a different file name, may notify a user and ask for assistance, or may let execution continue in a degraded state.
Details on Five Main Approaches
Some functions do not provide an error code. The functions either can't fail or, if they fail, they fail silently. Examples of this type of function are the various character test functions such as isdigit(), which indicates whether a character value is a digit or not. A character value either is or is not a digit or an alphabetic character. Similarly with the strcmp() function: comparing two strings results in a value indicating which one is higher in the collating sequence than the other, should they not be the same.
In some cases an error code is not necessary because a value indicating failure is a valid result. For example the strchr() function from the Standard Library returns a pointer to the searched-for character if it is found in the string to be scanned, or NULL if it is not found. In this case a failure to find the character is a valid and useful indicator. A function using strchr() may even require that the character searched for not be in the string in order to succeed, in which case finding the character is the error condition.
Other functions do not return an error code but instead report an error through an external mechanism. This is used by most of the math library functions in the Standard Library which require the user to set errno to a value of zero, call the function, and then check that the value of errno is still zero. The range of output values from many of the math functions do not allow a special return value to be used to indicate an error and they do not have an error reporting argument in their interfaces.
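For example (treat this as a sketch: whether the math library reports errors through errno depends on the implementation's math_errhandling):

#include <errno.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    errno = 0;                  /* clear before the call */
    double r = log(-1.0);       /* domain error */
    if (errno != 0)
    {
        fprintf(stderr, "log failed: %s\n", strerror(errno));
    }
    else
    {
        printf("log(-1.0) = %f\n", r);
    }
    return 0;
}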
Some functions perform an action and return an error code value with one of the possible error code values indicating success and the rest of the range of values indicating an error code. For example a function may return a value of 0 if successful or a positive or negative non-zero value indicating an error with the value returned being the error code.
Some functions may perform an action and return either a value from a range of valid values if successful or a value from a range of invalid values indicating an error code. A simple approach is to use a positive value (0, 1, 2, ...) for valid values and a negative value for error codes allowing a check such as if(status < 0) return error;.
Some functions return a valid value or an invalid value indicating an error requiring the additional step of fetching the error code by some means. For example the fopen() function returns either a pointer to a FILE object or it returns an invalid pointer value of NULL and sets errno to an error code indicating the reason for the failure. A number of Windows API functions that return a HANDLE value to reference a resource may also return a value of INVALID_HANDLE_VALUE and the function GetLastError() is used to obtain the error code. The OPOS Control Objects standard requires an OPOS Control Object to provide two functions, GetResultCode() and GetResultCodeExtended(), to allow for the retrieval of error status information in the event a COM object method call fails.
This same approach is used in other APIs that use a handle or reference to a resource in which there is a range of valid values with one or more values outside of that range used to indicate an error. A mechanism is then provided to fetch additional error information such as an error code.
A similar approach is used with functions that return a boolean value of true to indicate the function was successful or false to indicate an error. The programmer must then examine other data to determine an error code such as GetLastError() with the Windows API.
Some functions have a pointer argument containing the address of a memory area for the function called to provide an error code or error information. Where this approach really shines is when in addition to a simple error code there is additional, error context information that helps to pin point the error. For example a JSON string parsing function may not only return an error code but also a pointer to where in the JSON string the parsing failed.
I have also seen functions where the function returned an error indicator such as a boolean value with the argument used for error information. I recall that the error information argument could in some cases be NULL indicating the caller didn't want to know the specifics of a failure.
This approach to returning error code or error information seems to be uncommon in my experience though for some reason I think I've seen it used in the Windows API from time to time or perhaps with an XML parser.
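A sketch of that shape of interface; this is a toy, not a real JSON parser, and all names are made up:

#include <stddef.h>
#include <stdio.h>

typedef struct {
    int    code;     /* 0 on success, nonzero otherwise */
    size_t offset;   /* where in the input the problem was found */
} parse_error_t;

/* Toy "parser": the input merely has to start with '{'. */
static int parse_json(const char *text, parse_error_t *err)
{
    if (text == NULL || text[0] != '{')
    {
        err->code = 1;
        err->offset = 0;
        return -1;
    }
    err->code = 0;
    err->offset = 0;
    return 0;
}

int main(void)
{
    parse_error_t err;
    if (parse_json("not json", &err) != 0)
    {
        fprintf(stderr, "parse error %d at offset %zu\n", err.code, err.offset);
    }
    return 0;
}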
Considerations for multi-threading
When using the approach of an additional error code access through a mechanism as in checking a global such as errno or using a function such as GetLastError() there is the problem of sharing the global across multiple threads.
Modern compilers and libraries deal with this by using thread-local storage to ensure that each thread has its own storage that is not shared by other threads. However, there is still the issue of multiple functions sharing the same thread-local storage location for status information, which may require some accommodation. For instance, a function that uses several files may need to work around the issue that all of the fopen() calls that may fail share a single errno in the same thread.
If the API uses some type of handle or reference then error code storage can be made handle specific. The fopen() function could be wrapped in another function which performs the fopen() and then sets an API control block with both the FILE * returned by the fopen() as well as the value of errno.
The approach I prefer
My preference is for an error code to be returned as a function return value so that I can either check it at the point of call or save it for later. In most cases, an error is something to be dealt with immediately which is why I prefer this approach.
An approach I have used with functions is to have the function return a simple struct which contains two members, a status code and the return value. For example:
struct FuncRet {
short sStatus; // status or error code
double dValue; // calculated value
};
struct FuncRet Func(double dInput)
{
struct FuncRet FuncRet = {0, 0}; // sStatus == 0 indicates success
// calculate return value FuncRet.dValue and set
// status code FuncRet.sStatus in the event of an error.
return FuncRet;
}
// ... source code before using our function.
{
struct FuncRet s;
if ((s = Func(aDble)).sStatus == 0) {
// do things with the valid value s.dValue
} else {
// error so deal with the error reported in s.sStatus
}
}
This allows me to do an immediate check for an error. Many functions end up returning a status without returning an actual value as well because the data returned is complex. One or more arguments may be modified by the function but the function doesn't return a value other than a status code.

How to write unit tests in plain C?

I've started to dig into the GLib documentation and discovered that it also offers a unit testing framework.
But how can you do unit tests in a procedural language? Or does it require programming OO-style in C?
Unit testing only requires "cut-planes" or boundaries at which testing can be done. It is quite straightforward to test C functions which do not call other functions, or which call only other functions that are also tested. Some examples of this are functions which perform calculations or logic operations, and are functional in nature. Functional in the sense that the same input always results in the same output. Testing these functions can have a huge benefit, even though it is a small part of what is normally thought of as unit testing.
More sophisticated testing, such as the use of mocks or stubs is also possible, but it is not nearly as easy as it is in more dynamic languages, or even just object oriented languages such as C++. One way to approach this is to use #defines. One example of this is this article, Unit testing OpenGL applications, which shows how to mock out OpenGL calls. This allows you to test that valid sequences of OpenGL calls are made.
Another option is to take advantage of weak symbols. For example, all MPI API functions are weak symbols, so if you define the same symbol in your own application, your implementation overrides the weak implementation in the library. If the symbols in the library weren't weak, you would get duplicate symbol errors at link time. You can then implement what is effectively a mock of the entire MPI C API, which allows you to ensure that calls are matched up properly and that there aren't any extra calls that could cause deadlocks. It is also possible to load the library's weak symbols using dlopen() and dlsym(), and pass the call on if necessary. MPI actually provides the PMPI symbols, which are strong, so it is not necessary to use dlopen() and friends.
You can realize many of the benefits of unit testing for C. It is slightly harder, and it may not be possible to get the same level of coverage you might expect from something written in Ruby or Java, but it's definitely worth doing.
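As a rough sketch of the weak-symbol trick with GCC/Clang (the function names are invented for illustration; MPI's own profiling interface differs in detail):

/* library.c -- the library marks its implementation as a weak symbol */
__attribute__((weak)) int transport_send(const char *msg)
{
    /* ... real implementation would go here ... */
    (void)msg;
    return 0;
}

/* test.c -- a strong definition with the same name wins at link time,
   so every call made by the code under test lands in this mock */
static int send_calls = 0;

int transport_send(const char *msg)
{
    (void)msg;
    send_calls++;    /* the test can later assert on how often it was called */
    return 0;
}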
At the most basic level, unit tests are just bits of code that execute other bits of code and tell you if they worked as expected.
You could simply make a new console app, with a main() function, that executed a series of test functions. Each test would call a function in your app and return a 0 for success or another value for failure.
I'd give you some example code, but I'm really rusty with C. I'm sure there are some frameworks out there that would make this a little easier too.
You can use libtap which provides a number of functions which can provide diagnostics when a test fails. An example of its use:
#include <mystuff.h>
#include <tap.h>
int main () {
plan(3);
ok(foo(), "foo returns 1");
is(bar(), "bar", "bar returns the string bar");
cmp_ok(baz(), ">", foo(), "baz returns a higher number than foo");
done_testing;
}
It's similar to TAP libraries in other languages.
Here's an example of how you would implement multiple tests in a single test program for a given function that might call a library function.
Suppose we want to test the following module:
#include <stdlib.h>
int my_div(int x, int y)
{
if (y==0) exit(2);
return x/y;
}
We then create the following test program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <setjmp.h>
// redefine assert to set a boolean flag
#ifdef assert
#undef assert
#endif
#define assert(x) (rslt = rslt && (x))
// the function to test
int my_div(int x, int y);
// main result return code used by redefined assert
static int rslt;
// variables controling stub functions
static int expected_code;
static int should_exit;
static jmp_buf jump_env;
// test suite main variables
static int done;
static int num_tests;
static int tests_passed;
// utility function
void TestStart(char *name)
{
num_tests++;
rslt = 1;
printf("-- Testing %s ... ",name);
}
// utility function
void TestEnd()
{
if (rslt) tests_passed++;
printf("%s\n", rslt ? "success" : "fail");
}
// stub function
void exit(int code)
{
if (!done)
{
assert(should_exit==1);
assert(expected_code==code);
longjmp(jump_env, 1);
}
else
{
_exit(code);
}
}
// test case
void test_normal()
{
int jmp_rval;
int r;
TestStart("test_normal");
should_exit = 0;
if (!(jmp_rval=setjmp(jump_env)))
{
r = my_div(12,3);
}
assert(jmp_rval==0);
assert(r==4);
TestEnd();
}
// test case
void test_div0()
{
int jmp_rval;
int r;
TestStart("test_div0");
should_exit = 1;
expected_code = 2;
if (!(jmp_rval=setjmp(jump_env)))
{
r = my_div(2,0);
}
assert(jmp_rval==1);
TestEnd();
}
int main()
{
num_tests = 0;
tests_passed = 0;
done = 0;
test_normal();
test_div0();
printf("Total tests passed: %d\n", tests_passed);
done = 1;
return !(tests_passed == num_tests);
}
By redefining assert to update a boolean variable, you can continue on if an assertion fails and run multiple tests, keeping track of how many succeeded and how many failed.
At the start of each test, set rslt (the variable used by the assert macro) to 1, and set any variables that control your stub functions. If one of your stubs gets called more than once, you can set up arrays of control variables so that the stubs can check for different conditions on different calls.
Since many library functions are weak symbols, they can be redefined in your test program so that they get called instead. Prior to calling the function to test, you can set a number of state variables to control the behavior of the stub function and check conditions on the function parameters.
In cases where you can't redefine like that, give the stub function a different name and redefine the symbol in the code to test. For example, if you want to stub fopen but find that it isn't a weak symbol, define your stub as my_fopen and compile the file to test with -Dfopen=my_fopen.
In this particular case, the function to be tested may call exit. This is tricky, since exit can't return to the function being tested. This is one of the rare times when it makes sense to use setjmp and longjmp. You use setjmp before entering the function to test, then in the stubbed exit you call longjmp to return directly back to your test case.
Also note that the redefined exit has a special variable that it checks to see if you actually want to exit the program and calls _exit to do so. If you don't do this, your test program may not quit cleanly.
This test suite also counts the number of attempted and failed tests and returns 0 if all tests passed and 1 otherwise. That way, make can check for test failures and act accordingly.
The above test code will output the following:
-- Testing test_normal ... success
-- Testing test_div0 ... success
Total tests passed: 2
And the return code will be 0.
There is nothing intrinsically object-oriented about testing small pieces of code in isolation. In procedural languages you test functions and collections thereof.
If you are desperate, and you'd have to be desperate, I banged together a little C preprocessor and gmake based framework. It started as a toy, and never really grew up, but I have used it to develop and test a couple of medium sized (10,000+ line) projects.
Dave's Unit Test is minimally intrusive yet it can do some tests I had originally thought would not be possible for a preprocessor based framework (you can demand that a certain stretch of code throw a segmentation fault under certain conditions, and it will test it for you).
It is also an example of why making heavy use of the preprocessor is hard to do safely.
The simplest way of doing a unit test is to build a simple driver that gets linked with the other code, call each function for each case... and assert the values of the results of those functions, building it up bit by bit... that's how I do it anyway.
#include <assert.h>

int main(int argc, char **argv){
// call some function
int x = foo();
assert(x > 1);
// and so on....
}
Hope this helps.
With C it must go further than simply implementing a framework on top of existing code.
One thing I've always done is make a testing module (with a main) that you can run little tests from to test your code. This allows you to do very small increments between code and test cycles.
The bigger concern is writing your code to be testable. Focus on small, independent functions that do not rely on shared variables or state. Try writing in a "Functional" manner (without state), this will be easier to test. If you have a dependency that can't always be there or is slow (like a database), you may have to write an entire "mock" layer that can be substituted for your database during tests.
The principle unit testing goals still apply: ensure the code under test always resets to a given state, test constantly, etc...
When I wrote code in C (back before Windows) I had a batch file that would bring up an editor, then when I was done editing and exited, it would compile, link, execute tests and then bring up the editor with the build results, test results and the code in different windows. After my break (a minute to several hours depending on what was being compiled) I could just review results and go straight back to editing. I'm sure this process could be improved upon these days :)
I use assert. It's not really a framework though.
You can write a simple minimalistic test framework yourself:
// test_framework.h
#include <string.h>

#define BEGIN_TESTING int main(int argc, char **argv) {
#define END_TESTING return 0;}
#define TEST(TEST_NAME) if (run_test(TEST_NAME, argc, argv))
int run_test(const char* test_name, int argc, char **argv) {
// we run every test by default
if (argc == 1) { return 1; }
// else we run only the test specified as a command line argument
for (int i = 1; i < argc; i++) {
if (!strcmp(test_name, argv[i])) { return 1; }
}
return 0;
}
Now in the actual test file do this:
#include <assert.h>
#include "test_framework.h"
BEGIN_TESTING
TEST("MyPassingTest") {
assert(1 == 1);
}
TEST("MyFailingTest") {
assert(1 == 2);
}
END_TESTING
If you want to run all tests, execute ./binary without command line arguments, if you want to run just a particular test, execute ./binary MyFailingTest

how to deal with error return in c

How does one deal with error return of a routine in C, when function calls go deep?
Since C does not provide an exception-throwing mechanism, we have to check return values for each function. For example, the "a" routine may be called by "b", and "b" may be called by many other routines, so if "a" returns an error, we then have to check for it in "b" and in all the other routines calling "b".
It can make the code complicated if "a" is a very basic routine. Is there any solution for such a problem?
Actually, what I want here is a quick return path when this kind of error happens, so that we only need to deal with the error in one place.
You can use setjmp() and longjmp() to simulate exceptions in C.
http://en.wikipedia.org/wiki/Setjmp.h
There are several strategies, but the one I find the most useful is that every function returns zero on success and nonzero for an error, where the specific value indicates the specific error.
This combined with early return logic actually makes the functions quite easy to read:
int
func (int param)
{
int rc;
rc = func2 (param);
if (rc)
return rc;
rc = func3 (param);
if (rc)
return rc;
// do something else
return 0;
}
I'm afraid that's the way it is. Without exceptions, you have to check the return value of every function in the call chain.
In the general case, no. You'll want to make sure your function calls worked as expected. Return codes are your main mechanism for ensuring this (although setting a global error number or error flag may also be appropriate, depending on context - not that it simplifies things much).
Adopting one of the techniques others have suggested should allow you to make your error checking uniform and easier to read. This will go a long way towards keeping things maintainable.
For some basic functions though, the odds of failure may be low enough not to bother, eg.
int sum(int a, int b) {
return a + b;
}
really doesn't need to be checked. But that system call to create a new window really should be.
The best way is to design functions, whenever possible, in ways that cannot fail. This is impossible if they do I/O or memory allocation or other things with side effects, so avoid those. For example, instead of having a function that allocates memory and copies a string, have a function that gets pre-allocated memory to which it copies a string. Or you might have only one place where I/O happens, the rest of the program just manipulates data in memory.
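A sketch of the string-copy example from the paragraph above: because the caller supplies the buffer, the function has no allocation failure of its own to report (at worst it truncates).

#include <string.h>

void copy_string(char *dst, size_t dst_size, const char *src)
{
    if (dst_size == 0)
    {
        return;
    }
    strncpy(dst, src, dst_size - 1);   /* never overflows; may truncate */
    dst[dst_size - 1] = '\0';
}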
Alternatively, you may decide that certain kinds of errors warrant killing the process. For example, if you're out of memory, it is hard to recover from that, so you might as well crash. (But do that in a way that is user-friendly: checkpoint relevant data to disk continuously so the user may recover.) This way, functions can pretend they never fail.
The setjmp suggestion from Murali VP is also worth checking out.
You make a list of error codes (I use an enum for that) and use them "flat" throughout your app.
So if b calls a and gets one of the error codes, you can decide whether to go on or to return the original error code.
The user/programmer should have a list of all error codes...
You can use an ugly if pyramid like:
if (getting resource 1 succeeds) {
if (getting resource 2 succeeds) {
if (getting resource 3 succeeds) {
do something;
return success;
}
free resource 2;
}
free resource 1;
}
return failure;
or the equivalent with goto (which looks much nicer):
if (getting resource 1 failed) goto err1;
if (getting resource 2 failed) goto err2;
if (getting resource 3 failed) goto err3;
do something;
return success;
err3:
free resource 2;
err2:
free resource 1;
err1:
return failure;
AFAIK C is a structural programming language.
If this is the problem, the same would apply to RTL functions like fopen, fscanf etc ...
So I guess it is better to propagate errors.
You could use a macro.
#define FAIL_FUNC( funcname, ... ) if ( !funcname( __VA_ARGS__ ) ) \
    return false;
This way you maintain the same system but without having to write the same code each time ...
There's a way similar to what R.. GitHub STOP HELPING ICE suggests. It's possible to reduce the number of labels using the fact that free(NULL) does nothing.
// initialize all resources to be empty at the beginning
resource1 = NULL;
resource2 = NULL;
resource3 = NULL;
err = SUCCESS;

// allocate resources
// in case of error simply jump to the end
err1 = get_resource_1(&resource1);
if (err1) {
    err = FAIL1;
    goto end;
}
err2 = get_resource_2(&resource2);
if (err2) {
    err = FAIL2;
    goto end;
}
err3 = get_resource_3(&resource3);
if (err3) {
    err = FAIL3;
    goto end;
}

do_something();

// assignment to the output parameter must come at the end
// where it's known there were no errors
*out_resource2 = resource2;
// if some of the resources are needed outside of the function
// don't forget to assign its local variables to NULL so that
// they don't get freed
resource2 = NULL;

end:
// execution comes here in any case
// all the resources that are still owned need to be freed here
free(resource3);
free(resource2);
free(resource1);

// in case of success err will be SUCCESS
// in case of error err will hold corresponding error
return err;
In order to reduce error-handling boilerplate it's possible to use a macro as Goz suggested, or a function that converts between the external error type and the internal one, in which case there is no need to manually assign err in each branch.
#define E1 convert_error_1
#define E2 convert_error_2
#define E3 convert_error_3
my_error convert_error_1(error1 err) {
    switch (err) {
    case ERROR1_INVALID_ARGUMENT:
        // it's our responsibility not to pass an invalid
        // argument to get_resource_1; this error means we did,
        // so it's a bug in our code and it's hard to handle
        // in a way other than aborting
        abort();
    case ERROR1_SOMETHING_SOMETHING:
        return MYERROR_SOMETHING_SOMETHING;
    ...
    }
}
...
// allocate resources
// in case of error simply jump to the end
err = E1(get_resource_1(&resource1));
if (err) goto end;
err = E2(get_resource_2(&resource2));
if (err) goto end;
err = E3(get_resource_3(&resource3));
if (err) goto end;
...
Decide what kind of errors are worth dealing with.
In some cases, printing an error message on stderr and then calling exit with a non-zero argument is the best way to go.
This is often done when wrapping malloc: a wrapper xmalloc is written which calls malloc and, in case of failure, prints an error message and then exits. You can find a real example of this here: https://github.com/sailfishos-mirror/readline/blob/master/xmalloc.c
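A minimal sketch of the general shape of such a wrapper (this is not the readline implementation linked above, just an illustration of the idea):
#include <stdio.h>
#include <stdlib.h>

/* On allocation failure, report and exit, so callers never have to
 * check the result for NULL. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}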

C : How do you simulate an 'exception'?

I come from a C# background, but I'm learning C at the moment. In C#, when one wants to signal that an error has occurred, you throw an exception. But what do you do in C?
Say for example you have a stack with push and pop functions. What is the best way to signal that the stack is empty during a pop ? What do you return from that function?
double pop(void)
{
if(sp > 0)
return val[--sp];
else {
printf("error: stack empty\n");
return 0.0;
}
}
K&R's example from page 77 (code above) returns a 0.0. But what if the user pushed a 0.0 earlier on the stack, how do you know whether the stack is empty or whether a correct value was returned?
Exception-like behavior in C is accomplished via setjmp/longjmp. However, what you really want here is an error code. If all values are potentially returnable, then you may want to take in an out-parameter as a pointer, and use that to return the value, like so:
int pop(double* outval)
{
    if(outval == 0) return -1;

    if(sp > 0)
        *outval = val[--sp];
    else {
        printf("error: stack empty\n");
        return -1;
    }
    return 0;
}
Not ideal, obviously, but such are the limitations of C.
Also, if you go this road, you may want to define symbolic constants for your error codes (or use some of the standard ones), so that a user can distinguish between "stack empty" and "you gave me a null pointer, dumbass".
You could build an exception system on top of longjmp/setjmp: Exceptions in C with Longjmp and Setjmp. It actually works quite well, and the article is a good read too. Here's what your code could look like if you used the exception system from the linked article:
TRY {
...
THROW(MY_EXCEPTION);
/* Unreachable */
} CATCH(MY_EXCEPTION) {
...
} CATCH(OTHER_EXCEPTION) {
...
} FINALLY {
...
}
It's amazing what you can do with a few macros, right? It's equally amazing how hard it is to figure out what the heck is going on if you don't already know what the macros do.
longjmp/setjmp are portable: C89, C99, and POSIX.1-2001 specify setjmp().
Note, however, that exceptions implemented in this way will still have some limitations compared to "real" exceptions in C# or C++. A major problem is that only your code will be compatible with this exception system. As there is no established standard for exceptions in C, system and third party libraries just won't interoperate optimally with your homegrown exception system. Still, this can sometimes turn out to be a useful hack.
I don't recommend using this in serious code which programmers other than yourself are supposed to work with. It's just too easy to shoot yourself in the foot with this if you don't know exactly what is going on. Threading, resource management, and signal handling are problem areas which non-toy programs will encounter if you attempt to use longjmp "exceptions".
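To give a rough idea of the mechanics, here is a deliberately minimal, single-level sketch of such macros; it is not the linked article's implementation, has no nesting, no FINALLY, and is not thread-safe:
#include <setjmp.h>
#include <stdio.h>

/* One global jump target; real implementations keep a stack of jmp_bufs. */
static jmp_buf ex_buf;

#define TRY        switch (setjmp(ex_buf)) { case 0:
#define CATCH(x)   break; case (x):
#define ETRY       break; }
#define THROW(x)   longjmp(ex_buf, (x))

enum { MY_EXCEPTION = 1, OTHER_EXCEPTION };

int main(void)
{
    TRY {
        THROW(MY_EXCEPTION);
        /* Unreachable */
    } CATCH(MY_EXCEPTION) {
        printf("caught MY_EXCEPTION\n");
    } CATCH(OTHER_EXCEPTION) {
        printf("caught OTHER_EXCEPTION\n");
    } ETRY
    return 0;
}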
You have a few options:
1) Magic error value. Not always good enough, for the reason you describe. I guess in theory for this case you could return a NaN, but I don't recommend it.
2) Define that it is not valid to pop when the stack is empty. Then your code either just assumes it's non-empty (and goes undefined if it is), or asserts.
3) Change the signature of the function so that you can indicate success or failure:
int pop(double *dptr)
{
if(sp > 0) {
*dptr = val[--sp];
return 0;
} else {
return 1;
}
}
Document it as "If successful, returns 0 and writes the value to the location pointed to by dptr. On failure, returns a non-zero value."
Optionally, you could use the return value or errno to indicate the reason for failure, although for this particular example there is only one reason.
4) Pass an "exception" object into every function by pointer, and write a value to it on failure. Caller then checks it or not according to how they use the return value. This is a lot like using "errno", but without it being a thread-wide value.
5) As others have said, implement exceptions with setjmp/longjmp. It's doable, but requires either passing an extra parameter everywhere (the target of the longjmp to perform on failure), or else hiding it in globals. It also makes typical C-style resource handling a nightmare, because you can't call anything that might jump out past your stack level if you're holding a resource which you're responsible for freeing.
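Going back to option 4, here is a hedged sketch reusing the question's sp/val globals; the err_t type and its field names are invented for this example:
#include <stdio.h>

/* sp and val as in the question's stack code */
#define MAXVAL 100
static double val[MAXVAL];
static int sp = 0;

/* Error object passed in by the caller and inspected afterwards,
 * instead of relying on a global errno-style value. */
typedef struct {
    int  code;                /* 0 = success, non-zero = failure */
    char message[64];         /* human-readable detail */
} err_t;

double pop(err_t *err)
{
    if (sp > 0) {
        err->code = 0;
        return val[--sp];
    }
    err->code = 1;
    snprintf(err->message, sizeof err->message, "stack empty");
    return 0.0;
}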
One approach is to specify that pop() has undefined behaviour if the stack is empty. You then have to provide an is_empty() function that can be called to check the stack.
Another approach is to use C++, which does have exceptions :-)
This actually is a perfect example of the evils of trying to overload the return type with magic values and just plain questionable interface design.
One solution I might use to eliminate the ambiguity (and thus the need for "exception like behaviour") in the example is to define a proper return type:
#include <stdint.h>

struct stack{
    double* pData;
    uint32_t size;
};
struct popRC{
    double value;
    uint32_t size_before_pop;
};
struct popRC pop(struct stack* pS){
    struct popRC rc;
    rc.size_before_pop = pS->size;
    if(rc.size_before_pop){
        --pS->size;
        rc.value = pS->pData[pS->size];
    }
    return rc;
}
Usage of course is:
struct popRC rc = pop(&stack);
if(rc.size_before_pop != 0){
    ....use rc.value
This happens ALL the time, but in C++ to avoid such ambiguities one usually just returns a
std::pair<something,bool>
where the bool is a success indicator - look at some of:
std::set<...>::insert
std::map<...>::insert
Alternatively add a double* to the interface and return a(n UNOVERLOADED!) return code, say an enum indicating success.
Of course one did not have to return the size in struct popRC. It could have been
enum{FAIL,SUCCESS};
But since size might serve as a useful hint to the pop'er you might as well use it.
BTW, I heartily agree that the struct stack interface should have
int empty(struct stack* pS){
return (pS->size == 0) ? 1 : 0;
}
In cases such as this, you usually do one of
Leave it to the caller, e.g. it's up to the caller to know if it's safe to pop() (e.g. call a stack->is_empty() function before popping the stack), and if the caller messes up, it's their fault and good luck.
Signal the error via an out parameter, or return value.
e.g. you either do
double pop(int *error)
{
    if(sp > 0) {
        *error = 0;
        return val[--sp];
    } else {
        *error = 1;
        printf("error: stack empty\n");
        return 0.0;
    }
}
or
int pop(double *d)
{
if(sp > 0) {
*d = val[--sp];
return 0;
} else {
return 1;
}
}
There is no equivalent to exceptions in straight C. You have to design your function signature to return error information, if that's what you want.
The mechanisms available in C are:
Non-local gotos with setjmp/longjmp
Signals
However, none of these has semantics remotely resembling C# (or C++) exceptions.
1) You return a flag value to show it failed, or you use a TryGet syntax where the return is a boolean for success while the value is passed through an output parameter.
2) If this is under Windows, there is an OS-level, pure C form of exceptions, called Structured Exception Handling, using syntax like "__try". I mention it, but I do not recommend it for this case.
setjmp, longjmp, and macros. It's been done any number of times—the oldest implementation I know of is by Eric Roberts and Mark vanderVoorde—but the one I use currently is part of Dave Hanson's C Interfaces and Implementations and is free from Princeton.
You can return a pointer to double:
non-NULL -> valid
NULL -> invalid
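A sketch of that idea using the question's sp/val stack; note the returned pointer stays valid only until the next push overwrites the slot:
/* sp and val as in the question's stack code */
#define MAXVAL 100
static double val[MAXVAL];
static int sp = 0;

/* Returns a pointer to the popped value, or NULL when the stack is empty. */
double *pop(void)
{
    if (sp > 0)
        return &val[--sp];
    return NULL;
}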
There are already some good answers here; I just wanted to mention that something close to an "exception" can be done with the use of a macro, as is done in the awesome MinUnit (this only returns the "exception" to the caller function).
Something that nobody has mentioned yet (it's pretty ugly, though):
int ok = 0;
do
{
    /* Do stuff here */

    /* If there is an error */
    break;

    /* If we got to the end without an error */
    ok = 1;
} while(0);

if (ok == 0)
{
    printf("Fail.\n");
}
else
{
    printf("Ok.\n");
}

Resources