MPIR coding error - c

Hi, I've been using MPIR (a library) in my code. I've changed my code and converted everything to work with MPIR. My code consists of a series of nested loops and equations that depend on each other, so it's incredibly difficult to spot a mistake. I ran the code after debugging and it worked fine for the first 500 iterations of a certain loop, then I got the following message:
GNU MP: Cannot allocate memory (size=24)
Press any key to continue . . .
I have no idea what is causing this problem. Is it related to memory? If it worked fine for the initial iterations, why would there be a problem now if it isn't memory?
I built the code again and it ran further this time: it went through the first 2000 iterations before giving the message:
GNU MP: Cannot allocate memory (size=16)
Press any key to continue . . .
Does anyone have any idea what the problem could be?

It seems as though you already know. It's most likely a memory leak.
See section 3.7 of the manual for MPIR:
mpz_t and mpq_t variables never reduce their allocated space. Normally
this is the best policy, since it avoids frequent reallocation.
Applications that need to return memory to the heap at some particular
point can use mpz_realloc2, or clear variables no longer needed.
Valgrind, a tool for debugging memory leaks, may also be helpful. Good luck.
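As a minimal sketch (not the asker's code) of the pattern that usually causes this with MPIR/GMP: an mpz_t initialised inside a loop but never cleared leaks its limb storage on every iteration, until the library aborts with "Cannot allocate memory". The variable names below are made up for illustration:

/* Sketch: leak-free use of mpz_t inside a loop. */
#include <mpir.h>      /* or <gmp.h> if building against GNU MP */

int main(void)
{
    mpz_t acc, tmp;
    mpz_init_set_ui(acc, 0);

    for (unsigned long i = 1; i <= 100000; ++i) {
        mpz_init(tmp);             /* allocates storage for tmp */
        mpz_set_ui(tmp, i);
        mpz_add(acc, acc, tmp);
        mpz_clear(tmp);            /* without this line the heap grows until
                                      GNU MP aborts with
                                      "Cannot allocate memory" */
    }

    /* Better still: init tmp once before the loop and clear it once after;
       an mpz_t reuses its allocation between iterations, as the manual
       quote above explains. */
    mpz_clear(acc);
    return 0;
}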

Related

Running time in C program on gcc - with and without valgrind

I have written a program in C, compiled it with gcc, and I am now running it under Valgrind (the program that detects memory leaks).
The thing is, when I run it without Valgrind it works much faster than with Valgrind. I have tried it on several inputs, and when the input is fairly large it cannot even finish under Valgrind, while without it the run takes only a few seconds.
My program has a lot of calls to malloc in it. Could that be related?
Unfortunately I cannot post my code, because it is part of an assignment and I have to keep it private. The assignment will probably be checked with Valgrind, so I have to solve this.
A general answer and possible solutions could help very much.
Thanks
This is completely normal. Valgrind emulates your code, keeping track of allocations, frees, memory accesses and so on.
From The Valgrind Quick Start Guide:
Your program will run much slower (eg. 20 to 30 times) than normal, and use a lot more memory.
Among other things, Valgrind intercepts calls to malloc and free to gather its statistics. This interception slows down every call. There is nothing you can do about it.
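As a tiny illustration (not the assignment code), here is a stand-alone malloc-heavy loop: run natively it finishes almost instantly, while the very same binary under Valgrind takes many times longer, because every allocation call is intercepted and tracked:

#include <stdlib.h>

int main(void)
{
    /* A malloc-heavy loop: each call is intercepted when run under
       Valgrind, which is where the slowdown comes from. */
    for (int i = 0; i < 1000000; ++i) {
        char *p = malloc(64);
        if (p == NULL)
            return 1;
        p[0] = (char)i;   /* touch the block so the access is checked too */
        free(p);
    }
    return 0;
}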

How to debug an issue where memory is changed randomly

My application is a multi-threaded program that runs on Solaris.
Recently I found that it can crash, and the reason is that one member of a pointer array changes from a valid value to NULL, so the program crashes when it accesses it.
The occurrence rate is very low: in the past two months it has only happened twice, and the corrupted members of the array weren't the same ones. I can't find steps to reproduce it, and reviewing the code hasn't turned up any useful clues.
Could anyone give me some advice on how to debug this kind of random memory corruption?
Since you aren't able to reproduce the crash, debugging it isn't going to be easy.
However, there are some things you can do:
1. Go through the code and make a list of all of the places in the code that write to that variable, particularly the ones that could write a NULL to it. It's likely that one of them is your culprit.
2. Try to develop some kind of torture test that makes the fault more likely to occur (e.g. running through simulated or random transactions at top speed). If you can reproduce the crash this way you'll be in a much better situation, as you can then analyze the actual cause of the crash instead of just speculating.
3. If possible, run the program under Valgrind or Purify or similar. If they give any warnings, track down what is causing those warnings and fix it; it's possible that your program is, e.g., accessing memory that has been freed, which might seem to work most of the time (if the freed memory hasn't been reused for anything when it is accessed) but would fail occasionally (when something is reusing it).
4. Add a memory checker like Electric Fence to your code, or just replace free() with a custom version that overwrites the freed memory with random garbage, in the hope that this will make the crash more likely to occur (a sketch of this follows the list).
5. Recompile your program using different compilers (especially new/fancy ones like clang++ with the static analyzer enabled) and fix whatever they warn about. This may point you to your problem.
6. Run the program under different hardware and OSes; sometimes an obscure problem under one OS gives really obvious symptoms on another.
7. Review the various machines where the crash is known to have occurred. Do they all have anything in common? What about the machines where it hasn't crashed? Is there something different about them?
Step 2 is really the most important one, because even if you think you have fixed the problem, you won't be able to prove it unless you can reproduce the crash in the old code, and cannot reproduce it with the fixed code. Without being able to reproduce the fault, you're just guessing about whether a particular code change actually helps or not.
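As a rough sketch of the free()-poisoning suggestion in item 4 (xmalloc/xfree are made-up wrapper names, assuming your allocations are routed through them): store each block's size in a small header so the payload can be overwritten before it is released, which makes a use-after-free far more likely to crash immediately.

#include <stdlib.h>
#include <string.h>

/* Header keeps the block size and forces a conservative alignment. */
typedef union {
    size_t      size;
    long double align;
} header_t;

void *xmalloc(size_t n)
{
    header_t *h = malloc(sizeof *h + n);
    if (h == NULL)
        return NULL;
    h->size = n;
    return h + 1;                     /* hand the caller the payload */
}

void xfree(void *p)
{
    if (p == NULL)
        return;
    header_t *h = (header_t *)p - 1;
    memset(p, 0xAB, h->size);         /* poison the payload so a stale
                                         pointer is likely to fail fast */
    free(h);
}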

finding which function caused "Address out of bounds" in gdb

I have a critical bug in my project. When I use gdb to open the .core file it shows me something like this (I haven't included all of the gdb output, for ease of reading):
This is the very, very suspicious, newly written part of the code:
0x00000000004579fe in http_chunk_count_loop
(f=0x82e68dbf0, pl=0x817606e8a Address 0x817606e8a out of bounds)
This is a very mature part of the code, which has worked for a long time without problems:
0x000000000045c8a5 in packet_handler_http
(f=0x82e68dbf0, pl=0x817606e8a Address 0x817606e8a out of bounds)
OK, now what messes with my mind is the pl=0x817606e8a Address 0x817606e8a out of bounds: gdb shows the pointer was already out of bounds before it reached the newly written code. This makes me think the problem is caused by the function that calls packet_handler_http.
But packet_handler_http is very mature and has been working for a long time without problems, which makes me think I am misunderstanding the gdb output.
My guess is that the problem is with packet_handler_http, but because this code was already working I am confused. Am I right with my guess, or am I missing something?
To detect "memory errors" you might like to run the program under Valgrind: http://valgrind.org
If you have compiled the program with symbols (-g for gcc), it can quite reliably detect "out of bounds" conditions down to the line of code where the error occurs, as well as the line of code that allocated the memory (if any).
My guess is that the problem is with packet_handler_http
That guess is unlikely to be correct: if packet_handler_http really is receiving an invalid pointer, then the corruption happened "upstream" of it.
This is a very mature part of the code, which has worked for a long time without problems
I routinely find bugs in code that worked "without problem" for 10+ years. Also, the corruption may be happening in newly-added code, but causing problems elsewhere. Heap and stack buffer overflows are often just like that.
As alk already suggested, run your executable under Valgrind, or Address Sanitizer (also included in GCC-4.8), and fix any problems they find.
Thanks guys for your contributions. Even though gdb said otherwise, it turned out the pointer was good.
There was a part of the new code that caused the out-of-bounds problem.
There was a line like (goodpointer + offset), where the offset was the HTTP chunk size and I was taking it from the network (data sniffing). There was a kind of attack in which this offset was extremely big, which caused an integer overflow, and that resulted in the out-of-bounds problem.
My conclusions: never trust parameters that come from the network, AND gdb may not always report parameters correctly in a core dump, because at the moment of the crash things can get messy on the stack.
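For illustration, a hedged sketch of that lesson (the function and parameter names here are made up, not taken from the real code): compare an attacker-controlled chunk size against the bytes actually remaining in the packet before doing any pointer arithmetic with it.

#include <stddef.h>
#include <stdint.h>

/* Returns a pointer just past the chunk, or NULL if the advertised
   size does not fit in what remains of the packet. */
const uint8_t *skip_chunk(const uint8_t *pl, const uint8_t *end,
                          uint64_t chunk_size)
{
    /* Never compute pl + chunk_size first: with an attacker-supplied
       size the addition itself can overflow or land far out of bounds. */
    if (end < pl || chunk_size > (uint64_t)(end - pl))
        return NULL;
    return pl + chunk_size;
}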

Float value suddenly becoming huge

I would rather not dump code, but explain my problem. After hours of debugging I managed to understand that at some point in my code, a float value that is not explicitly modified turns HUGE (more than 1e15). I do use a lot of memory in my program (a string array containing 800+ words), other than that though, I have no idea what could cause this.
If anyone has any ideas regarding this, please share. Otherwise, I'll post a pastebin of the code soon.
EDIT:
Here is the code: http://pastebin.com/vgiZweNq. The problem rests in the next_generation() function, where the sumfit variable goes nuts at random times in the loop.
Also, I've compiled this on linux using -fno-stack-limit and -fstack-check, to avoid stack overflows.
EDIT 2:
I've changed the program to use a dynamically allocated linked list, to further avoid stack overflows. Still, sumfit gets changed to Floatzilla at random points, usually pretty early on.
Cheers!
Since the variable is obviously being modified from an unexpected point, you might want to check some possibilities:
Is it being modified from a different thread or from an interrupt / event handler? If so, is the access properly synchronized to prevent a data race?
Are you doing pointer arithmetic that might be buggy and cause access outside the intended buffer?
Are you casting pointers between types of different sizes?
Especially if you are working on an embedded device: Maybe the memory is full and your stack is overlapping the heap, or the global variables.
More information about the platform this happens on would be helpful.
You're using strcpy on the chrom array, but I don't see where those strings ever get null-terminated.
Maybe I'm just missing it, though.
You've got a huge string array. I reckon you're probably going off the end of it. Keep track of the size of data going into that array.
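For illustration, a small sketch of both suggestions above (chrom, POP_SIZE and WORD_LEN are placeholder names, not taken from the pastebin): bound every copy into the array and terminate it explicitly.

#include <string.h>

#define POP_SIZE 800
#define WORD_LEN 32

static char chrom[POP_SIZE][WORD_LEN];

void set_word(size_t i, const char *src)
{
    if (i >= POP_SIZE)
        return;                              /* never write past the array */
    strncpy(chrom[i], src, WORD_LEN - 1);    /* bounded copy */
    chrom[i][WORD_LEN - 1] = '\0';           /* guarantee null termination */
}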

Methods/Tools for solving a Mystery Segfault while running on condor

I'm writing a C application which is run across a compute cluster (using condor). I've tried many methods to reveal the offending code but to no avail.
Clues:
On average, when I run the code on 15 machines for 2 days, I get two or three segfaults (signal 11).
When I run the code locally I do not get a segfault. I ran it for nearly 3 weeks on my home machine.
Attempts:
I ran the code under Valgrind for four days locally with no memory errors.
I captured the segfault signal by defining my own signal handler so that I can output some of the program state.
Now when a segfault happens I can print out the current stack using backtrace (a sketch of this pattern appears below).
I can print out variable values.
I created a variable which is set to the current line number.
I have also tried commenting out chunks of the code, hoping that if the problem goes away I will have found the area causing the segfault.
Sadly, the line number that gets output is fairly random. I'm not entirely sure what I can do with the stack trace. Am I correct in assuming that it only records the address of the function in which the segfault occurs?
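For reference, the usual shape of such a handler, sketched with the glibc backtrace() facilities; this is only the common pattern, not the asker's actual code, and readable symbol names usually require compiling with -g and -rdynamic:

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void segv_handler(int sig)
{
    void *frames[64];
    int n = backtrace(frames, 64);

    /* backtrace_symbols_fd writes straight to a file descriptor, so it
       avoids calling malloc from inside the signal handler. */
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    _exit(128 + sig);
}

int main(void)
{
    signal(SIGSEGV, segv_handler);
    /* ... rest of the program ... */
    return 0;
}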
Suspicions:
I suspect that the checkpointing system which condor uses to move jobs across machines is more sensitive to memory corruption, and this is why I don't see it locally.
That indices are being corrupted by the bug, and that these indices are causing the segfault. This would explain the fact that the segfaults are occurring on fairly random line numbers.
UPDATE
Researching this some more I've found the following links:
LibSegFault - a library for automatically catching and printing state data about segfaults.
Stack unwinding (stack trace) with GCC - a tutorial on catching segfaults and getting the line numbers of the offending instructions.
UPDATE 2
Greg suggested looking at the condor log and correlating 'the segfaults to when condor restarts the executable from a checkpoint'. Looking at the logs, the segfaults all occur immediately after a restart. All of the failures appear to occur when a job switches from one type of machine to another.
UPDATE 3
The segfault was being caused by differences between hosts; by setting the 'requirements' field in the condor submit file, the problem completely disappeared.
One can set individual machines:
requirements = machine == "hostname1" || machine == "hostname2"
or an entire class of machines:
requirements = classOfMachinesName
See requirements example here
If you can, compile with debugging symbols and run under gdb.
Alternatively, get a core dump and load that into the debugger.
MPICH has a built-in debugger, or you can buy a commercial parallel debugger.
Then you can step through the code in the debugger to see what is happening.
http://nmi.cs.wisc.edu/node/1610
http://nmi.cs.wisc.edu/node/1611
Can you create a core dump when your segfault happens? You can then debug this dump to try to figure out the state of the code when it crashed.
Look at what instruction caused the fault. Was it even a valid instruction, or are you trying to execute data? If valid, what memory is it trying to access? Where did this pointer come from? You need to narrow down the location of your fault (stack corruption, heap corruption, uninitialized pointer, accessing invalid memory). If it's a corruption, see if there's any tell-tale data in the corrupted area (pointers to symbols, data that looks like something in your structures, ...). Your memory allocator may already have built-in features to debug some corruption (see MALLOC_CHECK_ on Linux or MallocGuardEdges on Mac OS). A common case for these is using memory that has been free()'d, so logging your malloc() / free() pairs might help.
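As a rough sketch of that last logging idea (the wrapper names and log format are illustrative, not a real library API): route malloc and free through wrappers that record each call site, so freed-then-reused blocks can be matched up after a crash.

#include <stdio.h>
#include <stdlib.h>

static void *log_malloc(size_t n, const char *file, int line)
{
    void *p = malloc(n);
    fprintf(stderr, "malloc(%zu) -> %p at %s:%d\n", n, p, file, line);
    return p;
}

static void log_free(void *p, const char *file, int line)
{
    fprintf(stderr, "free(%p) at %s:%d\n", p, file, line);
    free(p);
}

/* Route later calls through the loggers; real code would put these
   macros in a header included by every translation unit. */
#define malloc(n) log_malloc((n), __FILE__, __LINE__)
#define free(p)   log_free((p), __FILE__, __LINE__)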
If you have used the condor_compile tool to relink your code with the condor checkpointing code, it does a few things differently than a normal link. Most importantly, it statically links your code and uses its own malloc. Another big difference is that condor will then run it on a foreign machine, where the environment may be different enough from what you expect to cause problems.
The executable generated by condor_compile is runnable as a standalone binary outside of the condor system. If you run the binary emitted from condor_compile locally, outside of condor, do you still see the segfaults?
If it doesn't, can you correlate the segfaults to when condor restarts the executable from a checkpoint (the user log will tell you when this happens).
You've tried most of what I'd think of. The only other thing I'd suggest is to start adding a lot of logging code and hope you can narrow down where the error is happening.
The one thing you do not say is how much flexibility you have to solve the problem.
Can you, for example, have the system come to a halt and just run your application?
Also, how important is it to solve these crashes?
I am assuming that, for the most part, it is. This may require a lot of resources.
The short-term step is to put tons of "asserts" (semi-handwritten) on each variable
to make sure it hasn't changed when you don't want it to; a sketch follows below. This can continue to work as you go through the long-term process.
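As a tiny sketch of that idea (all names made up): snapshot a value that should not change across a call and assert on it immediately afterwards, so the program fails close to the corruption instead of segfaulting much later.

#include <assert.h>

struct state { int index; double value; };   /* stand-in for the real state */

static void do_work(struct state *s)         /* the code under suspicion */
{
    s->value *= 2.0;
}

int main(void)
{
    struct state s = { 7, 1.0 };

    int saved_index = s.index;        /* this value must not change here */
    do_work(&s);
    assert(s.index == saved_index);   /* abort loudly, near the corruption */

    return 0;
}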
Long term: try running it on a cluster of two (maybe your home computer and a VM).
Do you still see the segfaults? If not, increase the cluster size until you start seeing them.
Run it on a minimal configuration (one that still produces segfaults) and record all your inputs up until a crash. Automate running the system with the inputs you recorded, tweaking them until you can consistently get a crash with minimal input.
At that point, look around. If you still can't find the bug, then you will have to ask again with the extra data you gathered from those runs.

Resources