Need help with buffer overrun - C

I've got a buffer overrun I absolutely can't seem to figure out (in C). First of all, it only happens maybe 10% of the time or so. The data that it is pulling from the DB each time doesn't seem to be all that much different between executions... at least not different enough for me to find any discernible pattern as to when it happens. The exact message from Visual Studio is this:
A buffer overrun has occurred in
hub.exe which has corrupted the
program's internal state. Press
Break to debug the program or Continue
to terminate the program.
For more details please see Help topic
'How to debug Buffer Overrun Issues'.
If I debug, I find that it is broken in __report_gsfailure(), which I'm pretty sure comes from the /GS flag on the compiler and signifies that this is an overrun on the stack rather than the heap. I can also see the function it threw this on as it was leaving, but I can't see anything in there that would cause this behavior. The function has also existed for a long time (10+ years, albeit with some minor modifications) and, as far as I know, this has never happened before.
I'd post the code of the function, but it's decently long and references a lot of proprietary functions/variables/etc.
I'm basically just looking for either some idea of what I should be looking for that I haven't or perhaps some tools that may help. Unfortunately, nearly every tool I've found only helps with debugging overruns on the heap, and unless I'm mistaken, this is on the stack. Thanks in advance.

You could try putting some local variables on either end of the buffer, or even sentinels into the (slightly expanded) buffer itself, and trigger a breakpoint if those values aren't what you think they should be. Obviously, using a pattern that is not likely in the data would be a good idea.
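A minimal sketch of the idea, with an arbitrary pattern and a hypothetical function name; note that the compiler is free to reorder locals, so the guards are a cheap tripwire rather than a guarantee that they physically bracket the buffer:

#include <assert.h>
#include <string.h>

#define SENTINEL 0xDEADBEEFu   /* pattern unlikely to occur in the data */

void suspect_function(const char *input)   /* hypothetical */
{
    volatile unsigned guard_before = SENTINEL;
    char buf[64];
    volatile unsigned guard_after = SENTINEL;

    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    /* ... rest of the original function body ... */

    /* If either guard changed, something wrote outside buf; set a
       breakpoint here, or let the assert fire. */
    assert(guard_before == SENTINEL && guard_after == SENTINEL);
}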

While it won't help you in Windows, Valgrind is by far the best tool for detecting bad memory behavior.
If you are debugging the stack, you need to reach for low-level tools: place canaries in the stack frame (perhaps buffers filled with a pattern like 0xA5) around any potential suspects. Run the program in a debugger and see which canaries no longer contain the expected pattern. You will gobble up a large chunk of stack doing this, but it may help you spot exactly what is occurring.

One thing I have done in the past to help narrow down a mystery bug like this was to create a variable with global visibility named checkpoint. Inside the culprit function, I set checkpoint = 0; as the very first line. Then I added ++checkpoint; statements before and after function calls or memory operations that I even remotely suspected might cause an out-of-bounds memory reference (plus peppering the rest of the code so that I had a checkpoint at least every 10 lines or so). When your program crashes, the value of checkpoint narrows down the range you need to focus on to a handful of lines of code. This may be a bit of overkill; I do this sort of thing on embedded systems (where tools like Valgrind can't be used), but it should still be useful.
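A minimal sketch of the technique, with made-up names standing in for the real suspects:

#include <string.h>

volatile int checkpoint;  /* global; volatile so the increments survive optimization */

static char buf[16];

static void suspicious_call(void) { /* stand-in for a real call */ }

void culprit(const char *input)   /* the function under suspicion */
{
    checkpoint = 0;               /* very first line */

    suspicious_call();
    ++checkpoint;                 /* reached 1: suspicious_call returned */

    strcpy(buf, input);           /* a memory operation worth bracketing */
    ++checkpoint;                 /* reached 2: the copy completed */

    /* ... more of the function, with a checkpoint every ~10 lines ... */
}

/* After a crash, inspect checkpoint in the debugger (or a core dump):
   its value tells you the last bracketed region that completed. */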

Wrap it in an exception handler and dump out useful information when it occurs.
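In the Visual C++ context that would be structured exception handling. A minimal sketch follows (run_suspect_code is hypothetical; also note that /GS failures in recent CRTs call __fastfail and terminate without reaching SEH filters, so this helps mostly with plain access violations):

#include <windows.h>
#include <stdio.h>

void run_suspect_code(void);   /* hypothetical: the code under test */

static int dump_info(unsigned int code, EXCEPTION_POINTERS *ep)
{
    /* Log the exception code and faulting address before terminating. */
    fprintf(stderr, "exception 0x%08x at address %p\n",
            code, ep->ExceptionRecord->ExceptionAddress);
    return EXCEPTION_EXECUTE_HANDLER;
}

int main(void)
{
    __try {
        run_suspect_code();
    } __except (dump_info(GetExceptionCode(), GetExceptionInformation())) {
        fputs("terminating after dump\n", stderr);
        return 1;
    }
    return 0;
}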

Does this program recurse at all? If so, I'd check there to ensure you don't have an infinite recursion bug. If you can't see it manually, sometimes you can catch it in the debugger by pausing frequently and observing the stack.

Related

Are memcheck errors ever acceptable?

The valgrind quickstart page mentions:
Try to make your program so clean that Memcheck reports no errors. Once you achieve this state, it is much easier to see when changes to the program cause Memcheck to report new errors. Experience from several years of Memcheck use shows that it is possible to make even huge programs run Memcheck-clean. For example, large parts of KDE, OpenOffice.org and Firefox are Memcheck-clean, or very close to it.
This block left me a little perplexed. Given the way the C standard works, I would assume most (if not all) practices that produce memcheck errors invoke undefined behavior in the program, and should therefore be avoided like the plague.
However, the last sentence in the quoted block implies there are in fact "famous" programs that run in production with memcheck errors. After reading this, I thought I'd put it to the test and ran VLC under valgrind, getting a bunch of memcheck errors right after starting it.
This led me to this question: are there ever good reasons not to eliminate such errors from a program in production? Is there ever anything to be gained from releasing a program that contains such errors, and, if so, how do the developers keep it safe, given that a program containing such errors can, to my knowledge, act unpredictably, so that no assumptions can be made about its behavior in general? If so, can you provide real-world examples of cases where the program is better off running with those errors than without?
There has been a case where fixing the errors reported by Valgrind actually led to security flaws; see e.g. https://research.swtch.com/openssl . The intention of using uninitialised memory was to increase entropy by mixing in some random bytes, but the fix led to more predictable random numbers, actually weakening security.
In case of VLC, feel free to investigate ;-)
One instance is when you are deliberately writing non-portable code to take advantage of system-specific optimizations. Your code might be undefined behavior with respect to the C standard, but you happen to know that your target implementation does define the behavior in a way that you want.
A famous example is optimized strlen implementations such as those discussed at vectorized strlen getting away with reading unallocated memory. You can design such algorithms more efficiently if they are allowed to potentially read past the terminating null byte of the string. This is blatant UB for standard C, since this might be past the end of the array containing the string. But on a typical real-life machine (say for instance x86 Linux), you know what will actually happen: if the read touches an unmapped page, you will get SIGSEGV, and otherwise the read will succeed and give you whatever bytes happen to be in that region of memory. So if your algorithm checks alignment to avoid crossing page boundaries unnecessarily, it may still be perfectly safe for x86 Linux. (Of course you should use appropriate ifdef's to ensure that such code isn't used on systems where you can't guarantee its safety.)
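For illustration, here is a hedged sketch of such an algorithm for a 64-bit target (not any particular library's actual code). Every wide read is 8-byte aligned, so it never crosses a page boundary, even though it may read up to 7 bytes past the NUL; that read is exactly the deliberate UB the paragraph above describes:

#include <stddef.h>
#include <stdint.h>

size_t strlen_words(const char *s)
{
    const char *p = s;

    /* Scan byte-by-byte up to the first 8-byte boundary. */
    while ((uintptr_t)p % 8 != 0) {
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }

    /* Scan a word at a time; the classic "has a zero byte" bit trick. */
    const uint64_t *w = (const uint64_t *)p;
    for (;;) {
        uint64_t v = *w;    /* may read past the NUL, never past the page */
        if ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL)
            break;          /* some byte in this word is zero */
        w++;
    }

    /* Locate the exact zero byte within that word. */
    p = (const char *)w;
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}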
Another instance, more relevant to memcheck, might be if you happen to know that your system's malloc implementation always rounds up allocation requests to, say, multiples of 32 bytes. If you have allocated a buffer with malloc(33) but now find that you need 20 more bytes, you could save yourself the overhead of realloc() because you know that you were actually given 64 bytes to play with.
memcheck is not perfect. Here are some problems and possible reasons for a higher false positive rate:
memcheck's shadow-bit propagation rules, which are designed to decrease overhead but affect the false positive rate
imprecise representation of flag registers
higher optimization levels
From the memcheck paper (published at USENIX 2005), though things may well have changed since then:
A system such as Memcheck cannot simultaneously be free of false
negatives and false positives, since that would be equivalent to
solving the Halting Problem. Our design attempts to almost completely
avoid false negatives and to minimise false positives. Experience in
practice shows this to be mostly successful. Even so, user feedback
over the past two years reveals an interesting fact: many users have
an (often unstated) expectation that Memcheck should not report any
false positives at all, no matter how strange the code being checked
is.
We believe this to be unrealistic. A better expectation is to accept
that false positives are rare but inevitable. Therefore it will
occasionally be necessary to add dummy initialisations to code to make
Memcheck be quiet. This may lead to code which is slightly more
conservative than it strictly needs to be, but at least it gives a
stronger assurance that it really doesn't make use of any undefined
values.
A worthy aim is to achieve Memcheck-cleanness, so that new errors are
immediately apparent. This is no different from fixing source code to
remove all compiler warnings, even ones which are obviously harmless.
Many large programs now do run Memcheck-clean, or very nearly so. In
the authors' personal experience, recent Mozilla releases come close
to that, as do cleaned-up versions of the OpenOffice.org-680
development branch, and much of the KDE desktop environment. So this
is an achievable goal.
Finally, we would observe that the most effective use of Memcheck
comes not only from ad-hoc debugging, but also when routinely used on
applications running their automatic regression test suites. Such
suites tend to exercise dark corners of implementations, thereby
increasing their Memcheck-tested code coverage.
Here's a section on avoiding false positives:
Memcheck has a very low false positive rate. However, a few hand-coded assembly sequences, and a few very
rare compiler-generated idioms can cause false positives.
You can find the origin of an error using the --track-origins=yes option; with it you may be able to see what's going on.
If a piece of code is running in a context that would never cause uninitialized storage to contain confidential information that it must not leak, some algorithms may benefit from a guarantee that reading uninitialized storage will have no side effects beyond yielding likely-meaningless values.
For example, suppose it's necessary to quickly set up a hash map which will often have only a few items placed in it before it's torn down, but might sometimes have many. A useful approach is to have an array which holds data items in the order they were added, along with a hash table that maps hash values to storage slot numbers. If the number of items stored into the table is N, an item's hash is H, and reading hashTable[H] is guaranteed to yield a value I that is either the number stored there, if any, or else an arbitrary number, then one of three things will happen (a sketch follows the list below):
I might be greater than or equal to N. In that case, the table does not contain a value with a hash of H.
I might be less than N, but items[I].hash != H. In that case, the table does not contain a value with a hash of H.
I might be less than N, and items[I].hash == H. In that case, the table rather obviously contains at least one value (the one in slot I) with a hash of H.
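A minimal C sketch of that scheme, assuming the hypothesized guarantee (standard C does not give it, and Memcheck will duly report the uninitialised reads); collision handling and bounds checks are omitted to keep it short:

#include <stdlib.h>

#define TABLE_SIZE 1024

struct item { unsigned hash; int value; };

struct lazymap {
    unsigned hashTable[TABLE_SIZE]; /* deliberately left uninitialised */
    struct item items[TABLE_SIZE];  /* slots, in insertion order */
    unsigned n;                     /* number of items added */
};

struct lazymap *lazymap_new(void)
{
    struct lazymap *m = malloc(sizeof *m); /* hashTable stays uninitialised */
    if (m != NULL)
        m->n = 0;
    return m;
}

void lazymap_insert(struct lazymap *m, unsigned h, int value)
{
    m->items[m->n].hash = h;
    m->items[m->n].value = value;
    m->hashTable[h % TABLE_SIZE] = m->n++;
}

int lazymap_lookup(const struct lazymap *m, unsigned h, int *out)
{
    unsigned i = m->hashTable[h % TABLE_SIZE]; /* may be garbage */
    if (i >= m->n)              /* garbage, or never written: not present */
        return 0;
    if (m->items[i].hash != h)  /* written, but for a different hash */
        return 0;
    *out = m->items[i].value;   /* genuinely stored */
    return 1;
}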
Note that if the uninitialized hash table could contain confidential data, an adversary who can trigger hashing requests may be able to use timing attacks to gain some information about its contents. The only situations where the value read from a hash table slot could affect any aspect of function behavior other than execution time, however, would be those in which the hash table slot had been written.
To put things another way, the hash table would contain a mixture of initialized entries that would need to be read correctly, and meaningless uninitialized entries whose contents could not observably affect program behavior, but the code might not be able to determine whether the contents of an entry might affect program behavior until after it had read it.
For a program to read uninitialized data when it's expecting initialized data would be a bug, and since most places where a program reads data expect initialized data, most attempts to read uninitialized data are bugs. If a language included a construct to explicitly request that an implementation either read the data if it had been written, or otherwise yield some arbitrary value with no side effects, it would make sense to regard attempts to read uninitialized data without such a construct as defects. In a language without such a construct, however, the only way to avoid warnings about reading uninitialized data would be to forgo some useful algorithms that could otherwise benefit from the aforementioned guarantee.
My experience of posts concerning Valgrind on Stack Overflow is that there is often a misplaced sense of overconfidence, a lack of understanding of what the compiler and Valgrind are doing, or both [neither of these observations is aimed at the OP]. Ignoring errors for either of these reasons is a recipe for disaster.
Memcheck false positives are quite rare. I've used Valgrind for many years and I can count the types of false positives that I've encountered on one hand. That said, there is an ongoing battle between the Valgrind developers and the code that optimising compilers emit. For instance, see this link (if anyone is interested, there are plenty of other good presentations about Valgrind on the FOSDEM web site). In general, the problem is that optimizing compilers can make changes so long as there is no observable difference in behaviour. Valgrind has baked-in assumptions about how executables work, and if a new compiler optimization steps outside of those assumptions, false positives can result.
False negatives usually mean that Valgrind has not correctly encapsulated some behaviour. Usually this will be a bug in Valgrind.
What Valgrind won't be able to tell you is how serious the error is. For instance, you may have a printf that is passed a pointer to a character array that contains some uninitialized bytes but is always NUL-terminated. Valgrind will detect an error, and at runtime you might get some random rubbish on the screen, which may be harmless.
One example that I've come across where a fix is probably not worth the effort is the use of the putenv function. If you need to put a dynamically allocated string into the environment then freeing that memory is a pain. You either need to save the pointer somewhere or save a flag that indicates that the env var has been set, and then call some cleanup function before your executable terminates. All that just for a leak of around 10-20 bytes.
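For comparison, a sketch of what the "proper" cleanup might look like (MYVAR and the helper are hypothetical); the point is that all of this machinery buys you a fix for a 10-20 byte leak:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *env_string;   /* saved so the "leak" can be freed at exit */

static void free_env_string(void)
{
    free(env_string);      /* runs at the very end of the program */
}

int set_myvar(const char *value)   /* call once */
{
    size_t len = strlen("MYVAR=") + strlen(value) + 1;
    char *s = malloc(len);
    if (s == NULL)
        return -1;
    snprintf(s, len, "MYVAR=%s", value);
    if (putenv(s) != 0) {  /* on success the environment references s */
        free(s);
        return -1;
    }
    env_string = s;
    atexit(free_env_string);
    return 0;
}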
My advice is
Aim for zero errors in your code. If you allow large numbers of errors then the only way to tell if you introduce new errors is to use scripts that filter the errors and compare them with some reference state.
Make sure that you understand the errors that Valgrind generates. Fix them if you can.
Use suppression files. Use them sparingly: for errors in third-party libraries that you cannot fix, for harmless errors where the fix is worse than the error, and for any false positives (see the example entry after this list).
Use the -s/-v Valgrind options and remove unused suppressions when you can (this will probably require some scripting).
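For reference, a suppression entry looks something like this (the name and library here are invented); valgrind --gen-suppressions=all prints ready-made entries you can paste into a file and pass with --suppressions=my.supp:

{
   libfoo-uninitialised-conditional
   Memcheck:Cond
   ...
   obj:*/libfoo.so*
}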

How to debug an issue where memory is changed randomly

My application is a multi-thread program that runs on Solaris.
Recently, I found it may crash, and the reason is that one member in a pointer array is changed from a valid value to NULL, so when the program accesses it, it crashes.
Because the occurrence rate is very low (in the past 2 months it has only occurred twice), and the changed members in the array aren't the same each time, I can't find steps to reproduce it, and a code review has turned up no valuable clues.
Could anyone give some advice on how to debug this kind of random memory corruption?
Since you aren't able to reproduce the crash, debugging it isn't going to be easy.
However, there are some things you can do:
Go through the code and make a list of all of the places in the code that write to that variable--particularly the ones that could write a NULL to it. It's likely that one of them is your culprit.
Try to develop some kind of torture test that makes the fault more likely to occur (e.g. running through simulated or random transactions at top speed). If you can reproduce the crash this way you'll be in a much better situation, as you can then analyze the actual cause of the crash instead of just speculating.
If possible, run the program under Valgrind or Purify or similar. If they give any warnings, track down what is causing those warnings and fix it; it's possible that your program is, for example, accessing memory that has been freed, which might seem to work most of the time (if the freed memory hasn't been reused for anything when it is accessed) but fail occasionally (when something is reusing it).
Add a memory checker like Electric Fence to your code, or just replace free() with a custom version that overwrites the freed memory with garbage, in the hope that this will make the crash more likely to occur (a sketch of that idea follows this answer).
Recompile your program using different compilers (especially new/fancy ones like clang++ with the static analyzer enabled) and fix whatever they warn about. This may point you to your problem.
Run the program under different hardware and OS's; sometimes an obscure problem under one OS gives really obvious symptoms on another.
Review the various machines where the crash is known to have occurred. Do they all have anything in common? What about the machines where it hasn't crashed? Is there something different about them?
Step 2 is really the most important one, because even if you think you have fixed the problem, you won't be able to prove it unless you can reproduce the crash in the old code, and cannot reproduce it with the fixed code. Without being able to reproduce the fault, you're just guessing about whether a particular code change actually helps or not.
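On the free()-scribbling suggestion above, a sketch of what that could look like (the wrapper names are mine; real code would also have to route allocations through the wrappers, e.g. via macros):

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Store the size in a header so debug_free() knows how much to poison. */
union header {
    size_t size;
    max_align_t align;   /* keep the payload suitably aligned */
};

void *debug_malloc(size_t size)
{
    union header *h = malloc(sizeof *h + size);
    if (h == NULL)
        return NULL;
    h->size = size;
    return h + 1;
}

void debug_free(void *ptr)
{
    if (ptr == NULL)
        return;
    union header *h = (union header *)ptr - 1;
    memset(ptr, 0xDD, h->size);  /* poison: stale reads now see garbage */
    free(h);
}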

Float value suddenly becoming huge

I would rather not dump code, but explain my problem. After hours of debugging I managed to understand that at some point in my code, a float value that is not explicitly modified turns HUGE (more than 1e15). I do use a lot of memory in my program (a string array containing 800+ words), other than that though, I have no idea what could cause this.
If anyone has any ideas regarding this, please share. Otherwise, I'll post a pastebin of the code soon.
EDIT:
Here is the code: http://pastebin.com/vgiZweNq. The problem rests in the next_generation() function, where the sumfit variable goes nuts at random times in the loop.
Also, I've compiled this on linux using -fno-stack-limit and -fstack-check, to avoid stack overflows.
EDIT 2:
I've changed the program to use a dynamically allocated linked list, to further avoid stack overflows. Still, sumfit gets changed to Floatzilla at random points, usually pretty early on.
Cheers!
Since the variable is obviously being modified from an unexpected point, you might want to check some possibilities:
Is it being modified from a different thread or from an interrupt / event handler? If so, is the access properly synchronized to prevent a data race?
Are you doing pointer arithmetic that might be buggy and cause access outside the intended buffer?
Are you casting pointers between types of different sizes?
Especially if you are working on an embedded device: Maybe the memory is full and your stack is overlapping the heap, or the global variables.
More information about the platform this happens on would be helpful.
You're using strcpy on the chrom array, but I don't see where the strings ever get null-terminated.
Maybe I'm just missing it, though.
You've got a huge string array. I reckon you're probably going off the end of it. Keep track of the size of data going into that array.

C code on Linux under gdb runs differently if run standalone?

I have built a plain C program on Linux (Fedora) using the CodeSourcery toolchain. This is for an ARM Cortex-A8 target. The code runs on a Cortex-A8 board, running embedded Linux.
When I run this code for some test case, which does dynamic memory allocation (malloc) for some large size (10MB), it crashes after some time giving error message as below:
select 1 (init), adj 0, size 61, to kill
select 1030 (syslogd), adj 0, size 64, to kill
select 1032 (klogd), adj 0, size 74, to kill
select 1227 (bash), adj 0, size 378, to kill
select 1254 (ppp), adj 0, size 1069, to kill
select 1255 (TheoraDec_Corte), adj 0, size 1159, to kill
send sigkill to 1255 (TheoraDec_Corte), adj 0, size 1159
Program terminated with signal SIGKILL, Killed.
Then, when I debug this code for the same test case using gdb built for the target, at the point where the dynamic memory allocation happens, the code fails to allocate the memory and malloc returns NULL. But during a normal stand-alone run, I believe malloc must also be failing to allocate, yet strangely it doesn't seem to return NULL; instead the program crashes and the OS kills my process.
Why is this behaviour different when run under gdb and when without debugger?
Why would malloc fail yet not return NULL? Could this be possible, or is there some other reason for the error message I am getting?
How do I fix this?
thanks,
-AD
So, for this part of the question, there is a surefire answer:
Why would malloc fail yet not return NULL? Could this be possible, or is there some other reason for the error message I am getting?
In Linux, by default the kernel interfaces for allocating memory almost never fail outright. Instead, they set up your page table in such a way that on the first access to the memory you asked for, the CPU will generate a page fault, at which point the kernel handles this and looks for physical memory that will be used for that (virtual) page. So, in an out-of-memory situation, you can ask the kernel for memory, it will "succeed", and the first time you try to touch that memory it returned back, this is when the allocation actually fails, killing your process. (Or perhaps some other unfortunate victim. There are some heuristics for that, which I'm not incredibly familiar with. See "oom-killer".)
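A small sketch that shows the difference (behaviour depends on the kernel's overcommit settings, so treat this as illustrative): with overcommit on, the mallocs below keep "succeeding" and the process dies inside memset; with strict accounting, malloc eventually returns NULL instead:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (10 * 1024 * 1024)   /* 10 MB, as in the question */

int main(void)
{
    for (size_t total = 0;; total += 10) {
        char *p = malloc(CHUNK);
        if (p == NULL) {           /* what you see under strict accounting */
            fprintf(stderr, "malloc failed after %zu MB\n", total);
            return 1;
        }
        memset(p, 1, CHUNK);       /* faulting the pages in is what can
                                      summon the OOM killer */
        printf("%zu MB touched\n", total + 10);
    }
}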
Some of your other questions, the answers are less clear for me.
Why is this behaviour different when run under gdb and when without debugger?
It could be (just a guess, really) that GDB has its own malloc and is tracking your allocations somehow. On a somewhat related point, I've frequently found that heap bugs in my code often aren't reproducible under debuggers. This is frustrating and makes me scratch my head, but it's basically something I've pretty much figured one has to live with...
How do I fix this?
This is a bit of a sledgehammer solution (that is, it changes the behavior for all processes rather than just your own, and it's generally not a good idea to have your program alter global state like that), but you can write the string 2 to /proc/sys/vm/overcommit_memory. See this link that I got from a Google search.
Failing that... I'd just make sure you're not allocating more than you expect to.
By definition, running under a debugger is different from running standalone. Debuggers can and do hide many bugs. If you compile for debugging you can add a fair amount of code, similar to compiling completely unoptimized (allowing you to single-step or watch variables, for example). Compiling for release, on the other hand, can remove debugging options and remove code that you needed; there are many optimization traps you can fall into. I don't know from your post who is controlling the compile options or what they are.
Unless you plan to deliver the product to be run under the debugger you should do your testing standalone. Ideally do your development without the debugger as well, saves you from having to do everything twice.
It sounds like a bug in your code. Slowly re-read your code with fresh eyes, as if you were explaining it to someone, or perhaps actually explain it to someone, line by line. There may be something right there that you cannot see because you have been looking at it the same way for too long. It is amazing how many times, and how well, that works.
It could also be a compiler bug. Doing things like printing out the return value, or not, can cause the compiler to generate different code. Adding another variable and saving the result to that variable can kick the compiler into doing something different. Try changing the compiler options: reduce or remove any optimization options, reduce or remove the debugger compiler options, etc.
Is this a proven system, or are you developing on new hardware? Try running without any of the caches enabled, for example. If it's not a compiler bug, code that works in a debugger but not standalone can be a timing issue: single-stepping flushes the pipeline, mixes up the cache differently, and gives the cache and memory system an eternity to come up with a result that it doesn't have in real time.
In short, there is a very long list of reasons why running under a debugger hides bugs that you cannot find until you test in an environment like the final deliverable; I have only touched on a few. Having it work in the debugger and not standalone is not unexpected; it is simply how the tools work. Based on the description you have given so far, it is likely your code, the hardware, or your tools.
The fastest way to eliminate it being your code or the tools is to disassemble the section and inspect how the passed values and return values are handled. If the return value is optimized out there is your answer.
Are you compiling for a shared C library or static? Perhaps compile for static...

Bizarre bug in C [closed]

So I have a C program. And I don't think I can post any code snippets due to complexity issues. But I'll outline my error, because it's weird, and see if anyone can give any insights.
I set a pointer to NULL. If, in the same function where I set the pointer to NULL, I printf() the pointer (with "%p"), I get 0x0, and when I print that same pointer a million miles away at the end of my program, I get 0x0. If I remove the printf() and make absolutely no other changes, then when the pointer is printed later, I get 0x1, and other random variables in my structure have incorrect values as well. I'm compiling with GCC at -O2, but it has the same behavior if I turn off optimization, so that's not the problem.
This sounds like a Heisenbug, and I have no idea why it's happening, nor how to fix it. Does anyone who has dealt with something like this in the past have advice on how they approached this kind of problem? I know this may sound kind of vague.
EDIT: Somehow, it works now. Thank you, all of you, for your suggestions.
The debugger told me interesting things - that my variable was getting optimized away. So I rewrote the function so it didn't need the intermediate variable, and now it works with and without the printf(). I have a vague idea of what might have been happening, but I need sleep more than I need to know what was happening.
Are you using multiple threads? I've often found that the act of printing something out can be enough to effectively suppress a race condition (i.e. not remove the bug, just make it harder to spot).
As for how to diagnose/fix it... can you move the second print earlier and earlier until you can see where it's changing?
Do you always see 0x1 later on when you don't have the printf in there?
One way of avoiding the delay/synchronization of printf would be to copy the pointer value into another variable at the location of the first printf and then print out that value later on - so you can see what the value was at that point, but in a less time-critical spot. Of course, as you've got odd value "corruption" going on, that may not be as reliable as it sounds...
EDIT: The fact that you're always seeing 0x1 is encouraging. It should make it easier to track down. Not being multithreaded does make it slightly harder to explain, admittedly.
I wonder whether it's something to do with the extra printf call making a difference to the size of stack. What happens if you print the value of a different variable in the same place as the first printf call was?
EDIT: Okay, let's take the stack idea a bit further. Can you create another function with the same sort of signature as printf and with enough code to avoid it being inlined, but which doesn't actually print anything? Call that instead of printf, and see what happens. I suspect you'll still be okay.
Basically I suspect you're screwing with your stack memory somewhere, e.g. by writing past the end of an array on the stack; changing how the stack is used by calling a function may be disguising it.
If you're running on a processor that supports hardware data breakpoints (like x86), just set a breakpoint on writes to the pointer.
Do you have a debugger available to you? If so, what do the values look like in that? Can you set any kind of memory/hardware breakpoint on the value? Maybe there's something trampling over the memory elsewhere, and the printf moves things around enough to move or hide the bug?
Probably worth looking at the asm to see if there's anything obviously wrong there. Also, if you haven't already, do a full clean rebuild. If the definition of the struct has changed recently, there's a vague chance that the compiler could be getting it wrong if the dependency checking failed to correctly rebuild everything it needed to.
Have you tried setting a condition in your debugger which notifies you when that value is modified? Or running it through Valgrind? These are the two major things that I would try, especially Valgrind if you're using Linux. There's no better way to figure out memory errors.
Without code, it's a little hard to help, but I understand why you don't want to foist copious amounts on us.
Here's my first suggestion: use a debugger and set a watchpoint on that pointer location.
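With gdb that might look like the following (my_ptr stands in for the real variable); on x86 this uses a hardware debug register, so the program runs at full speed until the write happens:

(gdb) watch my_ptr
Hardware watchpoint 2: my_ptr
(gdb) continue
Continuing.

Hardware watchpoint 2: my_ptr

Old value = (void *) 0x0
New value = (void *) 0x1
(gdb) backtrace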
If that's not possible, or the bug disappears again, here's my second suggestion.
1/ Start with the buggy code, the one where you print the pointer value and you see 0x1.
2/ Insert another printf a little way back from there (in terms of code execution path).
3/ If it's still 0x1, go back to step 2, moving a little back through the execution path each time.
4/ If it's 0x0, you know where the problem lies.
If there's nothing obvious between the 0x0 printf and the 0x1 printf, it's likely to be corruption of some sort. Without a watchpoint, that'll be hard to track down - you need to check every single stack variable to ensure there's no possibility of overrun.
I'm assuming that pointer is a global since you set it and print it "a million miles away". If it is, look at the variables you define on either side of it (in the source). They're the ones most likely to be causing overrun.
Another possibility is to turn off the optimization to see if the problem still occurs. We've occasionally had to ship code like that in cases where we couldn't fix the bug before deadlines (we'll always go back and fix it later, of course).
