I am well aware of the ugliness of this question, but I'd still like to know if it's possible:
When a program tries to read from or write to an invalid pointer (NULL, an unallocated block, etc.), Windows crashes the application with an access violation exception.
The question is: is there a way to check whether a pointer will generate this exception before trying to use it, or failing that, is there a way to catch the exception?
If you must use raw pointers, your best bet is to maintain the invariant that every pointer is either valid or NULL. Then a simple comparison against NULL tells you whether it is safe to use.
But to answer your question, you can catch these kinds of things with structured exception handling (SEH).
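For illustration, here is a minimal sketch of that approach, assuming MSVC (where __try/__except is available even in C); the function name is mine, just for the example:

#include <windows.h>

/* Probe a pointer with SEH. MSVC-specific: __try/__except is a
   Microsoft extension, not standard C. */
int is_readable_int(const int* p)
{
    __try {
        volatile int v = *p;   /* attempt the read */
        (void)v;
        return 1;              /* no fault */
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        return 0;              /* the read faulted */
    }
}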
That being said, SEH is a bad idea here: catching this kind of error addresses the symptoms, not the root cause.
The following idioms will work on any platform (a short sketch follows the list):
initialize all pointers to NULL
if you cannot guarantee a pointer is valid, check that it is non-NULL before dereferencing it
when deleting objects, set the pointer to NULL after deletion
be careful of object ownership issues when passing pointers to other functions
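A minimal sketch of those idioms in plain C (the buffer size and names are arbitrary):

#include <stdlib.h>

void demo(void)
{
    char* buf = NULL;          /* 1. initialize all pointers to NULL */

    buf = malloc(64);
    if (buf != NULL) {         /* 2. check before dereferencing */
        buf[0] = '\0';
    }

    free(buf);                 /* free(NULL) is a safe no-op */
    buf = NULL;                /* 3. reset the pointer after deletion */
}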
There are Windows functions, IsBadReadPtr and IsBadWritePtr, that might do what you want. There is also an article that explains why you shouldn't use them.
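For completeness, a sketch of what a call looks like; the linked article's warnings still apply (the check is racy in multithreaded code and can swallow guard-page faults):

#include <windows.h>

/* Discouraged: shown only for illustration. IsBadReadPtr returns
   nonzero if any byte in the range is unreadable at this instant. */
int looks_readable(const void* p, UINT_PTR cb)
{
    return p != NULL && !IsBadReadPtr(p, cb);
}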
You can certainly test if the pointer is NULL!
if ( ptr == NULL ) {
// don't use it
}
That is the only portable test you can do. Windows does provide various APIs to test pointers, but as others have pointed out, using them can be problematic.
IsBadReadPtr / IsBadWritePtr
I think you're looking for something like IsBadReadPtr.
Documentation:
http://msdn.microsoft.com/en-us/library/aa366713(VS.85).aspx
I don't know about Windows, but in *nix you could install a SIGSEGV signal handler to intercept the exception.
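A sketch of that approach, assuming POSIX; note that jumping out of a SIGSEGV handler is only well-defined in narrow circumstances, so treat this as a probe, not production error handling:

#include <setjmp.h>
#include <signal.h>

static sigjmp_buf probe_env;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(probe_env, 1);   /* jump back out of the faulting read */
}

int is_readable(const volatile char* p)
{
    struct sigaction sa, old;
    int ok = 0;

    sa.sa_handler = on_segv;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);

    if (sigsetjmp(probe_env, 1) == 0) {
        (void)*p;                    /* may fault */
        ok = 1;
    }
    sigaction(SIGSEGV, &old, NULL);  /* restore the previous handler */
    return ok;
}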
Related
In looking at wlanapi examples I recently saw the following pattern a few times:
if (ptr) {
WlanFreeMemory(ptr);
}
I wrote a small program to call WlanFreeMemory on a null pointer, and it executed without any errors (that I could observe), but I'm still not convinced.
I first assumed this was the common case of a programmer adding an unnecessary NULL check before delete, free, etc. However, I don't see anything on the MSDN page to confirm that the function is safe to call with NULL. Perhaps someone more experienced in Windows programming knows the answer?
The pointer is not allowed to be null. If null were allowed, the parameter would have been annotated as _In_opt_ rather than _In_.
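Given that annotation, a cautious wrapper like the following seems reasonable (the wrapper name is mine, not part of the API):

#include <windows.h>
#include <wlanapi.h>

/* Guard the call ourselves instead of relying on NULL being tolerated. */
static void SafeWlanFree(PVOID p)
{
    if (p != NULL)
        WlanFreeMemory(p);
}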
I'm making a data structures and algorithms library in C for learning purposes (so this doesn't necessarily have to be bullet-proof), and I'm wondering how void functions should handle errors on preconditions. If I have a function for destroying a list as follows:
void List_destroy(List* list) {
/*
...
free()'ing pointers in the list. Nothing to return.
...
*/
}
This function has a precondition that list != NULL; otherwise it will blow up in the caller's face with a segfault.
So as far as I can tell I have a few options. One: throw in an assert() statement to check the precondition, but then the function still blows up in the caller's face (which, as far as I have been told, is a big no-no when it comes to libraries), though at least I could provide an error message. Two: check the precondition, and if it fails, jump to an error block and just return;, silently chugging along, but then the caller doesn't know the List* was NULL.
Neither of these options seem particularly appealing. Moreover, implementing a return value for a simple destroy() function seems like it should be unnecessary.
EDIT: Thank you everyone. I settled on implementing (in all my basic list functions, actually) consistent behavior for NULL List* pointers being passed to the functions. All the functions jump to an error block and exit(1) as well as report an error message to stderr along the lines of "Cannot destroy NULL list." (or push, or pop, or whatever). I reasoned that there's really no sensible reason why a caller should be passing NULL List* pointers anyway, and if they didn't know they were then by all means I should probably let them know.
Destructors (in the abstract sense, not the C++ sense) should indeed never fail, no matter what. Consistent with this, free is specified to return without doing anything if passed a null pointer. Therefore, I would consider it reasonable for your List_destroy to do the same.
However, a prompt crash would also be reasonable, because in general the expectation is that C library functions crash when handed invalid pointers. If you take this option, you should crash by going ahead and dereferencing the pointer and letting the kernel fire a SIGSEGV, not by assert, because assert has a different crash signature.
Absolutely do not change the function signature so that it can potentially return a failure code. That is the mistake made by the authors of close() for which we are still paying 40 years later.
Generally, you have several options if a constraint of one of your functions is violated:
Do nothing, successfully
Return some value indicating failure (or set something pointed-to by an argument to some error code)
Crash randomly (i.e. introduce undefined behaviour)
Crash reliably (i.e. use assert or call abort or exit or the like)
Here is what I consider a good rule of thumb (though this is my personal opinion):
the first option is the right choice if you think it's OK not to obey the constraints (i.e. they aren't real constraints); a good example of this is free.
the second option is the right choice if the caller can't know in advance whether the call will succeed; a good example is fopen.
the third and fourth options are a good choice if the former two don't apply. A good example is memcpy. I prefer assert (a form of the fourth option) because it enables both: crashing reliably for people unwilling to read your documentation, and undefined behaviour for people who do read it (they will avoid it by obeying your constraints), depending on whether they compile with NDEBUG defined or not. Dereferencing a pointer argument can itself serve as an assert, because it will make your program crash (which is the right thing; people not reading your documentation should crash as early as possible) when an invalid pointer is passed.
So, in your case, I would make it similar to free and would succeed without doing anything.
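A minimal sketch of that, assuming the asker's List type (the body stays elided just as in the question):

void List_destroy(List* list)
{
    if (list == NULL)
        return;                /* like free(NULL): succeed, do nothing */

    /*
    ...
    free()'ing pointers in the list. Nothing to return.
    ...
    */
}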
HTH
If you wish not to return any value from the function, then it is a good idea to add one more argument for an error code.
void List_destroy(List* list, int* ErrCode) {
*ErrCode = ...
}
Edit:
Changed & to * as the question is tagged C.
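A sketch of how that pattern might look in full; the error codes here are illustrative, not from any standard:

enum { LIST_OK = 0, LIST_ERR_NULL = 1 };   /* illustrative codes */

void List_destroy(List* list, int* errCode)
{
    if (list == NULL) {
        if (errCode) *errCode = LIST_ERR_NULL;
        return;
    }
    /* ... free()'ing pointers in the list ... */
    if (errCode) *errCode = LIST_OK;
}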
I would say that simply returning when the list is NULL makes sense, as this would indicate that the list is empty (not an error condition). If the list is an invalid pointer, you can't detect that; let the kernel handle it for you by giving a segfault, and let the programmer fix it.
I'm writing a small library that takes a FILE * pointer as input.
If I immediately check this FILE * pointer and find that using it leads to a segfault, is it more correct to handle the signal, set errno, and exit gracefully, or to do nothing and defer to the caller's installed signal handler, if he has one?
The prevailing wisdom seems to be "libraries should never cause a crash." But my thinking is that, since this particular signal is certainly the caller's fault, I shouldn't attempt to hide that information from him. He may have his own handler installed to react to the problem in his own way. The same information CAN be retrieved with errno, but the default disposition for SIGSEGV was set for a good reason, and passing the signal up respects this philosophy by either forcing the caller to handle his errors, or by crashing and protecting him from further damage.
Would you agree with this analysis, or do you see some compelling reason to handle SIGSEGV in this situation?
Taking over signal handlers is not library business; I'd say it's somewhat offensive of a library unless explicitly asked for. To minimize crashes, a library may validate its input to some extent. Beyond that: garbage in, garbage out.
The prevailing wisdom seems to be "libraries should never cause a crash."
I don't know where you got that from - if they pass an invalid pointer, you should crash. Any library will.
I would consider it reasonable to check for the special case of a NULL pointer. But beyond that, if they pass junk, they violated the function's contract and they get a crash.
This is a subjective question, and possibly not fit for SO, but I will present my opinion:
Think about it this way: If you have a function that takes a nul-terminated char * string and is documented as such, and the caller passes a string without the nul terminator, should you catch the signal and slap the caller on the wrist? Or should you let it crash and make the bad programmer using your API fix his/her code?
If your code takes a FILE * pointer, and your documentation says "pass any open FILE *", and they pass a closed or invalidated FILE * object, they've broken the contract. Checking for this case would slow down the code of people who properly use your library to accommodate people who don't, whereas letting it crash will keep the code as fast as possible for the people who read the documentation and write good code.
Do you expect someone who passes an invalid FILE * pointer to check for and correctly handle an error? Or are they more likely to blindly carry on, causing another crash later, in which case handling this crash may just disguise the error?
Kernels shouldn't crash if you feed them a bad pointer, but libraries probably should. That doesn't mean you should do no error checking; a good program dies immediately in the face of unreasonably bad data. I'd much rather a library call bail with assert(f != NULL) than to just trundle on and eventually dereference the NULL pointer.
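For instance, a hypothetical library entry point might do no more than this (the function is made up for the example):

#include <assert.h>
#include <stdio.h>

/* Check the one thing that is cheap to check; trust the contract
   for everything else. */
size_t lib_read_block(FILE* f, char* buf, size_t n)
{
    assert(f != NULL);     /* die immediately on obviously bad input */
    return fread(buf, 1, n, f);
}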
Sorry, but people who say a library should crash are just being lazy (perhaps lazy in thinking as well as in development effort). Libraries are collections of functions. Library code should not "just crash" any more than other functions in your software should "just crash".
Granted, libraries may have some issues around how to pass errors across the API boundary, if multiple languages or (relatively) exotic language features like exceptions would normally be involved, but there's nothing TOO special about that. Really, it's just part of the burden of writing libraries, as opposed to in-application code.
Except where you really can't justify the overhead, every interface between systems should implement sanity checking, or better, design by contract, to prevent security issues, as well as bugs.
There are a number of ways to handle this. What you should probably do, in order of preference, is one of:
Use a language that supports exceptions (or better, design by contract) within libraries, and throw an exception on or allow the contract to fail.
Provide an error-handling signal/slot or hook/callback mechanism, and call any registered handlers (a sketch follows this list). Require that, when your library is initialised, at least one error handler is registered.
Support returning some error code in every function that could possibly fail, for any reason. But this is the old, relatively insane way of doing things from C (as opposed to C++) days.
Set some global "an error has occurred" flag, and allow clearing that flag before calls. This is also old and completely insane, mostly because it moves the error-status maintenance burden to the caller, AND is unsafe when it comes to threading.
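A sketch of the second option in C; all names are illustrative:

/* A registered error callback, set once at library initialisation. */
typedef void (*lib_error_fn)(int code, const char* msg);

static lib_error_fn g_on_error = 0;

void lib_set_error_handler(lib_error_fn fn)
{
    g_on_error = fn;
}

static void lib_report(int code, const char* msg)
{
    if (g_on_error)
        g_on_error(code, msg);   /* caller decides: log, abort, ... */
}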
I have a fairly complex multithreaded application (server) that from time to time will crash due to an assert:
/usr/include/boost/smart_ptr/shared_ptr.hpp:418: T* boost::shared_ptr< <template-parameter-1-1> >::operator->() const [with T = msg::Player]: Assertion `px != 0' failed.
I have been unable to pinpoint the cause and was wondering whether this is a problem with boost::shared_ptr or with my code.
I tried g++ 4.4.3-4ubuntu5 and llvm-g++ (GCC) 4.2.1 with optimization and without optimization and libboost1.40-dev (= 1.40.0-4ubuntu4).
There should be no problem with using boost::shared_ptr as long as you initialize your shared pointers correctly and use the same memory management context for all your shared object libraries.
In your case you are trying to use an uninitialized shared pointer.
// Uninitialized: holds a null pointer
boost::shared_ptr<Obj> obj;
obj->Something(); // assertion failed

// Initialized at declaration
boost::shared_ptr<Obj> obj(new Obj);
obj->Something(); // ok
I would advise initializing them right at declaration whenever possible. Exception handling can create a lot of "invisible" paths for the code to take, and it can be quite difficult to spot the uninitialized shared pointers.
PS: There are other issues if you load/unload modules where shared_ptrs are in use, leading to chaos. That is very hard to solve, but in that case you would have a non-zero pointer, so it is not what is happening to you right now.
PPS: The pattern used is called RAII (Resource Acquisition Is Initialization)
You might want to make sure that you always use a named smart pointer variable to hold the result of new, as recommended here: boost::shared_ptr - Best Practices
Regards,
Jonny
Here's to reviving an ancient question. I just hit this, and it was due to a timing issue. I was trying to use the shared_ptr from one thread before I'd finished initializing it in another.
So if someone hits the above message, check your timing to ensure your shared_ptr has been initialized.
We are using DevPartner's BoundsChecker for detecting memory leak issues. It does a wonderful job, though it does not find string overflows like the following:
char szTest[1] = "";        /* deliberately too small */
int i;
for (i = 0; i < 100; i++) {
    strcat(szTest, "hi");   /* overflows szTest on the very first call */
}
Question 1: Is there any way I can make BoundsChecker detect this?
Question 2: Is there any other tool that can detect such issues?
I tried it in my DevPartner (MSVC 6.6, DevPartner 7.2.0.372) and I confirm the behavior you observed: I get an access violation after about 63 passes of the loop. What does Compuware have to say about the issue?
CppCheck will detect this issue.
One option is to simply ban the use of string functions that don't have information about the destination buffer. A set of macros like the following in a universally included header can be helpful:
#define strcpy strcpy_is_banned_use_strlcpy
#define strcat strcat_is_banned_use_strlcat
#define strncpy strncpy_is_banned_use_strlcpy
#define strncat strncat_is_banned_use_strlcat
#define sprintf sprintf_is_banned_use_snprintf
So any attempted uses of the 'banned' routines will result in a linker error that also tells you what you should use instead. MSVC has done something similar that can be controlled using macros like _CRT_SECURE_NO_DEPRECATE.
The drawback to this technique is that if you have a large set of existing code, it can be a huge chore to get things moved over to using the new, safer routines. It can drive you crazy until you've gotten rid of the functions considered dangerous.
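For reference, a sketch of the replacement idiom; note strlcat is a BSD extension, not part of MSVC's CRT, so you may need to ship your own implementation:

#include <string.h>

char szTest[16] = "";
int i;
for (i = 0; i < 100; i++) {
    /* strlcat never writes past sizeof(szTest) and always terminates;
       a return value >= the buffer size signals truncation */
    if (strlcat(szTest, "hi", sizeof(szTest)) >= sizeof(szTest)) {
        break;   /* handle truncation instead of corrupting the stack */
    }
}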
valgrind will detect writing past dynamically allocated data, but I don't think it can do so for automatic arrays like in your example. If you are using strcat, strcpy, etc., you have to make sure that the destination is big enough.
Edit: I was right about valgrind, but there is some hope:
Unfortunately, Memcheck doesn't do bounds checking on static or stack arrays. We'd like to, but it's just not possible to do in a reasonable way that fits with how Memcheck works. Sorry.
However, the experimental tool Ptrcheck can detect errors like this. Run Valgrind with the --tool=exp-ptrcheck option to try it, but beware that it is not as robust as Memcheck.
I haven't used Ptrcheck.
You may find that your compiler can help. For example, in Visual Studio 2008, check the project properties under C/C++, Code Generation. There's a "Buffer Security Check" option.
My guess would be that it reserves a bit of extra memory and writes a known sequence in there. If that sequence gets modified, it assumes a buffer overrun. I'm not sure, though - I remember reading this somewhere, but I don't remember for certain if it was about VC++.
Given that you've tagged this C++, why use a pointer to char at all?
// needs <sstream>, <algorithm>, <iterator>, <string>
std::stringstream test;
std::fill_n(std::ostream_iterator<std::string>(test), 100, "hi");
If you enable the /RTCs compiler switch, it may help catch problems like this. With this switch on, the test caused an access violation after running strcat only once.
Another useful utility that helps with problems like this (more heap-oriented than stack, but extremely helpful) is Application Verifier. It is free and can catch a lot of heap-overflow problems.
An alternative: our Memory Safety Checker.
I think it will handle this case.
The problem was that by default, the API Validation subsystem is not enabled, and the messages you were interested in come from there.
I can't speak for older versions of BoundsChecker, but version 10.5 has no particular problems with this test. It reports the correct results and BoundsChecker itself does not crash. The test application does, however, because this particular test case completely corrupts the call stack that led to the function where the test code was, and as soon as that function terminated, the application did too.
The results: 100 messages about write overrun to a local variable, and 99 messages about the destination string not being null terminated. Technically, that second message is not right, but BoundsChecker only searches for the null termination within the bounds of the destination string itself, and after the first strcat call, it no longer contains a zero byte within its bounds.
Disclaimer: I work for MicroFocus as a developer working on BoundsChecker.