I have a DLL with a C interface, the functions of which will return error codes. I will also provide an additional function that returns the last error. Does this sound sensible? Can anyone point me at any examples that I can use as a template, please?
"The last error" is not a very useful or reliable concept in the context of a DLL. What if the DLL is being used by several processes or threads?
I will also provide an additional function that returns the last error
That would entail having an errno-style global variable holding the last error, right? I'd advise against that, as it would make your library hard to use in a multithreaded application, unless you use thread-local storage. Still, if you want to do this, then the standard C library with its errno variable/macro would be a good example.
A simpler and, IMHO, better approach is to just return error codes and if necessary provide some functions that operate on your error codes; e.g., you might want to have a mylib_strerror to convert them to human-readable string representations. So, the usage would look like
int err = mylib_operation_that_might_fail();
if (err != 0) {
    fprintf(stderr, "%s\n", mylib_strerror(err));
    exit(1);
}
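A sketch of what the library side might look like (the error names here are made up for illustration):

enum {
    MYLIB_OK = 0,
    MYLIB_ERR_NOMEM,
    MYLIB_ERR_IO
};

const char *mylib_strerror(int err)
{
    switch (err) {
    case MYLIB_OK:        return "success";
    case MYLIB_ERR_NOMEM: return "out of memory";
    case MYLIB_ERR_IO:    return "I/O error";
    default:              return "unknown error";
    }
}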
A good example of this style is the getaddrinfo API specified in RFC 3493.
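For reference, the getaddrinfo style looks roughly like this (the host name is just a placeholder):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo *res = NULL;

    /* getaddrinfo returns 0 on success or a nonzero error code;
       gai_strerror converts that code into a human-readable message */
    int err = getaddrinfo("example.invalid", "80", NULL, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    freeaddrinfo(res);
    return 0;
}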
I am using get_nprocs() and get_nprocs_conf() to get the number of online and configured processor cores on my machine.
How do I check for errors with these functions? Are there any specific error values?
Do they even report errors? I'm not sure.
I would really like to check for errors with these calls, as my program depends heavily on the returned values.
FYI: as these functions are available from the GNU C library, I prefer them over
sysconf(_SC_NPROCESSORS_ONLN) and sysconf(_SC_NPROCESSORS_CONF),
basically because I want to avoid including an extra file.
Also, I see that these are declared in sys/sysinfo.h, but I wasn't able to find the definition. Any idea where I can get that?
get_nprocs and get_nprocs_conf are GNU extensions which are not documented to return errors. These functions are very unlikely to fail because they parse interfaces provided by the kernel in /sys or /proc. Still, failures can happen, either due to a kernel misconfiguration, a bug in the parser, or (most likely) a lack of resources causing open() to fail. In that case, the current implementation of both functions returns 1 without setting an error flag.
In other words, you are expected to use the return value as if the functions cannot fail. Since the fallback value returned in the unlikely case of error is fairly reasonable, doing just that does not seem like it will cause problems.
Here's a copy of the appropriate manual page: http://www.unix.com/man-page/linux/3/get_nprocs/
No error indicators are documented, though it follows from the function descriptions that if either function ever returns a value less than 1, it must have failed (otherwise it could not even have run).
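A defensive caller might still sanity-check the results along these lines (a sketch, not something required in practice):

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    int online     = get_nprocs();       /* processors currently online */
    int configured = get_nprocs_conf();  /* processors the kernel knows about */

    /* the current glibc implementation falls back to 1 on internal failure,
       so values below 1 should never be seen in practice */
    if (online < 1 || configured < 1) {
        fprintf(stderr, "unexpected processor count\n");
        return 1;
    }

    printf("%d of %d configured processors online\n", online, configured);
    return 0;
}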
Are there any standards for error code handling? What I mean by this is not how a particular error event is handled but what is actually returned in the process. An example of this would be when an error in opening a file is identified. While the open() function may return its own value, the function that called the open() function may return a different value.
I don't think there's a standard; all errors must be detected and handled (the caller should always handle errors).
In Unix in general:
The standard C library, for example, always returns -1 on failure and sets the global variable errno to the appropriate value.
Some libraries return NULL for a nonexistent field rather than aborting.
You should always return as much useful information as possible.
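As a small sketch of the errno convention mentioned above (the file name is just a placeholder):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* typical POSIX convention: -1 on failure, errno describes the cause */
    int fd = open("does-not-exist.txt", O_RDONLY);
    if (fd == -1) {
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }
    close(fd);
    return 0;
}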
Hope this helps.
Regards.
It sounds entirely context dependent to me. In some cases it's even advisable to just abort() the whole process. The failing function is called from a program or library with its own coding standards; you should probably adhere to those.
I know many questions have been asked previously about error handling in C but this is specifically about errno stuff.
I want to ask whether we should use the errno/perror functionality to handle errors gracefully at runtime. I am asking this because MSVC uses it, and the Win32 API also uses it heavily. I don't know anything about gcc or the Linux API. Today both gcc and MSVC say that errno/perror can be used safely in a multithreaded environment. So what's your view?
Thanks.
Note that using errno alone is a bad idea: standard library functions invoke other standard library functions to do their work. If one of the called functions fails, errno will be set to indicate the cause of the error, yet the library function you called might still succeed if it has been programmed to fall back to other mechanisms.
Consider malloc(3) -- it might be programmed to try mmap(.., MAP_PRIVATE|MAP_ANONYMOUS) as a first attempt, and if that fails fall back to sbrk(2) to allocate memory. Or consider execvp(3) -- it may probe a dozen directories when attempting to execute a program, and many of them might fail first. The 'local failure' doesn't mean a larger failure. And the function you called won't set errno back to 0 before returning to you -- it might have a legitimate but irrelevant value left over from earlier.
You cannot simply check the value of errno to see if you have encountered an error. errno only makes sense if the standard library function involved also returned an error return. (Such as NULL from getcwd(3) or -1 from read(2), or "a negative value" from printf(3).)
But in the cases when standard library functions do fail, errno is the only way to discover why they failed. When other library functions (not supplied by the standard libraries) fail, they might use errno or they might provide similar but different tools (see e.g. ERR_print_errors(3ssl) or gai_strerror(3).) You'll have to check the documentation of the libraries you're using for full details.
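For example, the usual pattern is to check the call's return value first and only then consult errno; a small sketch using getcwd:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* check the return value first; errno is only meaningful
       once the call has actually reported failure */
    if (getcwd(buf, sizeof buf) == NULL) {
        fprintf(stderr, "getcwd failed: %s\n", strerror(errno));
        return 1;
    }
    printf("current directory: %s\n", buf);
    return 0;
}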
I don't know if it is really a question of "should", but if you are programming in C and using the low-level C/POSIX API, there really is no other option. Of course you can wrap it up if this offends your stylistic sensibilities, but under the hood that is how it has to work (at least as long as POSIX is a standard).
On Linux, errno is safe to read and write from multiple threads or processes, but perror() is not: it is a standard library function that is not re-entrant.
I am fairly comfortable coding in languages like Java and C#, but I need to use C for a project (because of low level OS API calls) and I am having some difficulty dealing with pointers and memory management (as seen here)
Right now I am basically typing up code and feeding it to the compiler to see if it works. That just doesn't feel right for me. Can anyone point me to good resources for me to understand pointers and memory management, coming from managed languages?
K&R - http://en.wikipedia.org/wiki/The_C_Programming_Language_(book)
nuff said
One of the good resources you've found already: SO.
Of course you are compiling with all warnings on, aren't you?
Learning by doing largely depends on the quality of your compiler and the warnings/errors it feeds you. The best in that respect that I have found in the Linux/POSIX world is clang. It traces the origin of errors nicely and tells you about missing header files quite well.
Some tips:
By default, local variables are stored on the stack.
Variables are passed into functions by value.
Stick to the same process for allocating and freeing memory, e.g., allocate and free in the same function.
C's equivalent of
Integer i = new Integer(0);
i = 5;
is
int *p;
p = malloc(sizeof(int));
*p = 5;
Memory allocation (malloc) can fail, so check the pointer for NULL before you use it.
OS functions can fail, and this can be detected from their return values (see the sketch after these tips).
Learn to use gdb to step through your code and print variable values (compile with -g to enable debugging symbols).
Use valgrind to check for memory leaks and other related problems (like heap corruption).
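A small sketch tying the last few tips together (the file name is just a placeholder):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* malloc can fail: check the pointer before using it */
    int *p = malloc(sizeof *p);
    if (p == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    *p = 5;

    /* OS/library functions report failure through their return values */
    FILE *f = fopen("example.txt", "r");
    if (f == NULL) {
        perror("fopen");
        free(p);
        return 1;
    }

    fclose(f);
    free(p);   /* free what you allocate, ideally in the same function */
    return 0;
}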
The C language doesn't do anything you don't explicitly tell it to do.
There are no destructors automatically called for you, which is both good and bad (since bugs in destructors can be a pain).
A simple way to get somewhat automatic destructor behavior is to use scoping to construct and destruct things. This can get ugly since nested scopes move things further and further to the right.
if ((var = malloc(SIZE)) != NULL) {  // try to keep this line...
    use_var(var);
    free(var);                       // ...and this line close together, with easy to
                                     // comprehend code between them
} else {
    error_action();
}
return;  // try to limit the number of return statements so that you can
         // ensure resources are freed for all code paths
Trying to make your code look like this as much as possible will help, though it's not always possible.
Making a set of macros or inline functions that initialize your objects is a good idea. Also make another set of functions that allocate your objects' memory and pass it to your initializer functions. This allows both local and dynamically allocated objects to be initialized easily. A similar set of destructor-like functions is also a good idea.
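For example, a sketch of that init/alloc split (the point type and helper names are made up for illustration):

#include <stdlib.h>

struct point { double x, y; };

/* initializer works on memory the caller already owns (stack or heap) */
static void point_init(struct point *p, double x, double y)
{
    p->x = x;
    p->y = y;
}

/* allocator obtains the memory, then hands it to the initializer */
static struct point *point_new(double x, double y)
{
    struct point *p = malloc(sizeof *p);
    if (p != NULL)
        point_init(p, x, y);
    return p;
}

/* destructor-like counterpart to point_new */
static void point_free(struct point *p)
{
    free(p);
}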
Using OO techniques is good practice in many instances, and doing so in C just requires a little more typing (but allows for more control). Setters, getters, and other helper functions can help keep objects in consistent states and decrease the changes you have to make when you find an error, if you can keep the interface the same.
You should also look into the perror function and the errno "variable".
Usually you will want to avoid using anything like exceptions in C. I generally try to avoid them in C++ as well, and only use them for really bad errors -- ones that aren't supposed to happen. One of the main reasons for avoiding them is that there are no destructor calls magically made in C, so non-local GOTOs will often leak (or otherwise screw up) some type of resource. That being said, there are things in C which provide a similar functionality.
The main exception-like mechanism in C is the pair of setjmp and longjmp functions. setjmp is called from one location in the code and is passed an (opaque) variable (a jmp_buf) which can later be passed to longjmp. When a call to longjmp is made, it doesn't actually return to its caller; instead, control returns as if from the earlier setjmp call made with that jmp_buf. setjmp then returns the value specified by the call to longjmp; regular calls to setjmp return 0.
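A minimal sketch of how the two pair up (note that nothing is cleaned up along the way, which is exactly the caveat mentioned earlier):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;

static void do_work(int fail)
{
    if (fail)
        longjmp(on_error, 42);   /* "throw": control reappears at the setjmp */
    puts("work succeeded");
}

int main(void)
{
    switch (setjmp(on_error)) {  /* returns 0 the first time through */
    case 0:
        do_work(1);
        return 0;
    case 42:                     /* the value passed to longjmp */
        fprintf(stderr, "caught error 42\n");
        return 1;
    default:
        fprintf(stderr, "caught some other error\n");
        return 1;
    }
}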
Other exception like functionality is more platform specific, but includes signals (which have their own gotchas).
Other things to look into are:
The assert macro, which can be used to cause program exit when its parameter (a logical test of some sort) fails. Calls to assert go away when you #define NDEBUG before you #include <assert.h>, so after testing you can easily remove the assertions. This is really good for testing for NULL pointers before dereferencing them, as well as several other conditions. If a condition fails, assert attempts to print the source file name and line number of the failed test (a small sketch follows below).
The abort function causes the program to exit with failure without doing all of the clean up that calling exit does. This may be done with a signal on some platforms. assert calls abort.
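A tiny sketch of assert in use (safe_strlen is just a made-up example):

#include <assert.h>
#include <string.h>

size_t safe_strlen(const char *s)
{
    /* aborts and reports file/line if s is NULL;
       compiled out entirely when NDEBUG is defined */
    assert(s != NULL);
    return strlen(s);
}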
I have been thinking about the difficulty involved in C error handling... like, who actually does
if (printf("hello world") == -1) { exit(1); }
But you break common standards by not doing such verbose, and usually useless, coding. Well, what if you had a wrapper around the libc, so that you could do something like this:
// main ...
error_catchall(my_errors);
printf("hello world"); // this will automatically call my_errors on an error of printf
ignore = 1;            // this makes it so the function will return like normal and we
                       // can check error values ourselves
if (fopen....          // we want to know if the file opened or not and handle it ourselves
}

int my_errors() {      // this is called when there is an error anywhere in the libc
    if (ignore == 0) {
        _exit(1);      // exit if we aren't handling this error by flagging ignore
    }
    return 0;
}
...
I am considering making such a wrapper, as I am synthesizing my own BSD-licensed libc (so I already have to touch the untouchable...), but I would like to know what people think about it.
Would this actually work in real life and be more useful than returning -1?
Over the years I've seen several attempts to mimic try/catch in ANSI C:
http://simgrid.gforge.inria.fr/doc/group__XBT__ex.html
http://llg.cubic.org/trycatch/
I think that the try/catch approach is simpler than yours.
But how would you be able to catch the error when it was expected? For example I might expect a file open to fail and want to deal with it in code instead of the generic error catcher.
To do this you would need two versions of every function: one that traps errors and one that returns errors.
I did something like this long ago without modifying the library. I just created wrapper functions for common calls that did error checking. So my errchk_malloc call checked the return value and raised an error if the allocation failed. Then I just used this version everywhere in place of the built-in malloc.
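A sketch of that kind of checking wrapper might look like this:

#include <stdio.h>
#include <stdlib.h>

/* checking wrapper: never returns NULL; on failure it reports and exits */
void *errchk_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "errchk_malloc: out of memory (%zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}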
If the goal is to exit cleanly as soon as you encounter an error, that's OK... but if you want to do even a minimum of error recovery, I can't see how your approach is useful...
To avoid this kind of problem, I sometimes use LD_PRELOAD to integrate my own error management (only for my own projects, since this is not really good practice...).
Do you really want to change the standard behavior of your libc? You could add a few extensions around common functions instead.
For example, Gnome (via GLib) uses g_malloc and g_try_malloc. The former will abort on failure, while the latter will simply return a null pointer, like malloc.
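A minimal sketch contrasting the two, assuming GLib is available:

#include <glib.h>

int main(void)
{
    /* g_malloc aborts the whole program if the allocation fails... */
    gpointer small = g_malloc(64);

    /* ...while g_try_malloc returns NULL and leaves recovery to the caller */
    gpointer big = g_try_malloc((gsize)1024 * 1024 * 1024);  /* deliberately large request */
    if (big == NULL)
        g_printerr("large allocation failed, continuing without it\n");

    g_free(small);
    g_free(big);   /* g_free(NULL) is a no-op */
    return 0;
}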