I am using get_nprocs() and get_nprocs_conf() to get the number of online and configured processor cores on my machine.
How do I check these functions for errors? Are there any specific error values?
Do they even report errors? I'm not sure.
I would really like to check the calls for errors, since my program depends heavily on the returned values.
FYI: since these functions are available in the GNU C library, I prefer them over
sysconf(_SC_NPROCESSORS_ONLN) and sysconf(_SC_NPROCESSORS_CONF),
basically because I want to avoid including an extra header file.
Also, I see that they are declared in sys/sysinfo.h, but I wasn't able to find the definitions. Any idea where I can find those?
get_nprocs and get_nprocs_conf are GNU extensions which are not documented to return errors. These functions are very unlikely to fail because they parse interfaces provided by the kernel in /sys or /proc. Still, failures can happen, either due to a kernel misconfiguration, a bug in the parser, or (most likely) a lack of resources causing open() to fail. In that case, the current implementation of both functions returns 1 without setting an error flag.
In other words, you are expected to use the return value as if the functions cannot fail. Since the fallback value returned in the unlikely case of error is fairly reasonable, doing just that does not seem like it will cause problems.
Here's a copy of the appropriate manual page: http://www.unix.com/man-page/linux/3/get_nprocs/
No error indicators are documented, though it follows from the function descriptions that if either function ever returns a value less than 1, it must have failed (otherwise there would have been no processor to run it on).
Related
I want to get the return address of the caller function. I'm using __builtin_return_address(), but if I pass an index value greater than 0 it returns NULL.
Please help me with this, or tell me about another function that does the same.
See this answer to a related question.
__builtin_return_address is GCC- and processor-specific (it is also available in some versions of Clang, on some processors, and only with certain optimizations disabled), and it is documented as follows:
On some machines it may be impossible to determine the return address of any function other than the current one
The compiler might optimize a function (e.g. when it is compiled with -fomit-frame-pointer, or for tail-calls, or by function inlining) without the relevant information.
So probably you are getting NULL because the information is not available!
In addition to compiler optimisation reasons (which IMO is the most likely reason for the issue you're facing), the GCC documentation states quite plainly:
Calling this function with a nonzero argument can have unpredictable effects, including crashing the calling program. As a result, calls that are considered unsafe are diagnosed when the -Wframe-address option is in effect. Such calls should only be made in debugging situations.
As Basile said, since it's a compiler builtin (read: very processor specific and a bad idea to use) the behaviour is exceptionally loosely defined (as it is not required by any standards and does not have to make any guarantees).
Just use backtrace(3); it's provided by glibc (and by libexecinfo on the BSDs) and doesn't rely on compiler builtins.
Are there any standards for error code handling? What I mean by this is not how a particular error event is handled but what is actually returned in the process. An example of this would be when an error in opening a file is identified. While the open() function may return its own value, the function that called the open() function may return a different value.
I don't think there's a standard; all errors must be detected and handled (the caller should always handle errors).
In Unix in general:
The standard C library, for example, typically returns -1 on failure and sets the global variable errno to the appropriate value.
Some libraries, for example, return NULL for a nonexistent field rather than aborting.
You should always return as much useful information as possible.
Hope this helps.
Regards.
It sounds entirely context-dependent to me. In some cases it's even advisable to just abort() the whole process. The failing function is called from a program or library with its own coding standards; you should probably adhere to those.
I have the following C code that uses sqlite3:
if (SQLITE_OK == sqlite3_initialize()) {
    self->db_open_result = sqlite3_open(self->db_uri, &(self->db));
} else {
    self->db_open_result = SQLITE_ERROR;
}
Obviously I have a pretty high confidence that the code is correct and will behave as expected. However, I am measuring code coverage of my unit tests using gcov/lcov and I'm curious about how I might get my coverage number to 100% in this case. Under normal circumstances sqlite3_initialize() is never going to fail, so the else clause will never execute.
Is there a way to cause this to fail that isn't totally disruptive?
You want your unit-tests to test your code. But you also want to know that all of your test code has been properly exercised. One way to do that is to use "mocking", i.e. you replace your actual libraries (such as SQLite) with fake, or "mock" libraries and then run your programs against these fake libraries.
Whether this library replacement is done at compile time or at runtime is really incidental, but in C it's easier to do at compile time. You can do this mocking by hand, or you can use a tool such as CMock.
In the faked library, you then provoke various errors and failures. Notably, the faked library doesn't even have to do anything or even keep track of much or any state, you can often get quite far by returning "OK" or "FAIL".
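As a sketch, a hand-rolled link-time mock of sqlite3_initialize() might look like this; you would compile this file instead of linking the real SQLite library in the failure-path test build. mock_set_initialize_result() is an invented test hook, not part of SQLite:

```c
/* Minimal hand-written mock of sqlite3_initialize().
   The SQLITE_OK/SQLITE_ERROR values match sqlite3.h (0 and 1). */
#define SQLITE_OK    0
#define SQLITE_ERROR 1

static int fake_init_result = SQLITE_OK;

/* Test hook (our own invention): choose what the next call returns. */
void mock_set_initialize_result(int rc) {
    fake_init_result = rc;
}

/* Replaces the real function; performs no initialization at all. */
int sqlite3_initialize(void) {
    return fake_init_result;
}
```

A test can now call mock_set_initialize_result(SQLITE_ERROR) before exercising the database-open code, forcing the else branch to run and pushing coverage to 100%.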
Is there a way to cause this to fail that isn't totally disruptive?
For portability reasons you should verify that the function succeeded. What happens if the SQLite library isn't installed correctly? You cannot initialize the library if that happens.
"If for some reason, sqlite3_initialize() is unable to initialize the library (perhaps it is unable to allocate a needed resource such as a mutex) it returns an error code..."
So, if you want portability check the error.
I know many questions have been asked previously about error handling in C but this is specifically about errno stuff.
I want to ask whether we should use the errno/perror functionality to handle errors gracefully at runtime. I am asking this because MSVC uses it and the Win32 API also uses it heavily. I don't know anything about gcc or the Linux API. Today both gcc and MSVC say that errno/perror can be used safely in a multithreaded environment. So what's your view?
Thanks.
Note that using errno alone is a bad idea: standard library functions invoke other standard library functions to do their work. If one of the called functions fails, errno will be set to indicate the cause of the error, and the library function might still succeed, if it has been programmed in a manner that it can fall back to other mechanisms.
Consider malloc(3) -- it might be programmed to try mmap(.., MAP_PRIVATE|MAP_ANONYMOUS) as a first attempt, and if that fails fall back to sbrk(2) to allocate memory. Or consider execvp(3) -- it may probe a dozen directories when attempting to execute a program, and many of them might fail first. The 'local failure' doesn't mean a larger failure. And the function you called won't set errno back to 0 before returning to you -- it might have a legitimate but irrelevant value left over from earlier.
You cannot simply check the value of errno to see if you have encountered an error. errno only makes sense if the standard library function involved also returned an error return. (Such as NULL from getcwd(3) or -1 from read(2), or "a negative value" from printf(3).)
But in the cases when standard library functions do fail, errno is the only way to discover why they failed. When other library functions (not supplied by the standard libraries) fail, they might use errno or they might provide similar but different tools (see e.g. ERR_print_errors(3ssl) or gai_strerror(3).) You'll have to check the documentation of the libraries you're using for full details.
I don't know if it is really a question of "should", but if you are programming in C and using the low-level C/POSIX API, there really is no other option. Of course you can wrap it up if this offends your stylistic sensibilities, but under the hood that is how it has to work (at least as long as POSIX is a standard).
On Linux, errno is safe to read and write from multiple threads or processes, but be careful with perror(): it's a standard library function that is not re-entrant.
I have an ANSI C program that dynamically loads a .so file using dlopen() passing RTLD_LAZY. I receive
Undefined symbol "_nss_cache_cycle_prevention_function"
warnings whenever the .so file is accessed on FreeBSD 7.2. nss_cache_cycle_prevention_function() is not one of my program's functions, and I imagine it must be coming from FreeBSD. This might be a problem on Linux too, although I am not experiencing the issue there. I would prefer not to include FreeBSD-specific header files in my program. I would like to either provide this function in a portable way or suppress these warnings.
What do you mean by "I receive warnings"? Does your program examine the value returned by dlerror() and print it if it is not NULL?
The _nss_cache_cycle_prevention_function is a marker symbol which is used by nsdispatch(3) on FreeBSD to determine whether to employ the services of nscd(8), the name service caching daemon. It is perfectly normal that it does not exist in an executable or a shared library.
But when nsdispatch(3) executes dlsym(3), and the symbol is not found, the error will be set. And dlerror(3) returns the description of the last error, and not the description of the error of the last call. I suspect that's what you are hitting.
The solution (quite portable) would be to:
for dlopen(3), check its return value before using dlerror() to see whether there was an error at all;
for dlsym(3), since NULL is a valid return value, call dlerror() in a void context before the call to dlsym(3); that clears any previous error, so that whatever dlerror(3) returns after the dlsym(3) call can be trusted.
In general, it does no harm to call dlerror() (discarding its result) before any other dl* call.