I'm trying to debug a simple cross-platform command-line program (a C parser, itself written in C) and running into something strange.
On Windows, when I run it on a small dataset (the source code of glib) it completes successfully, and when I run it on a large dataset (the source code of the Linux kernel) it exits with an out-of-memory error. I'm not sure whether the latter is a bug in my code or just a consequence of not yet having optimized memory consumption, so I've been trying to run it on Linux to get some feedback from Valgrind.
On Linux (Ubuntu 11.04 x64 in VirtualBox), when I run my program on a small dataset it completes successfully, but when I run it on a large dataset, Linux locks up hard enough that I have to reset the entire virtual machine (the mouse pointer still moves, but other than that it's completely unresponsive; the Windows task manager says the VirtualBox process is using 100% of a CPU core but not allocating memory).
I wouldn't have expected a bug in my code to crash Linux unless I was writing something like a device driver, and when I try simple test cases that allocate too much memory, go into an infinite loop, or both, Linux handles them just fine. What kind of bug should I be looking for, or what am I missing?
On Linux (Ubuntu 11.04 x64 in VirtualBox)
You probably haven't assigned enough memory to your virtual machine.
This is most likely an infinite loop (easily done in a parser), which could easily eat 100% CPU, and 100% RAM too if it allocates as it goes.
Attach a debugger!
e.g. gdb
http://www.gnu.org/s/gdb/
On Ubuntu and most other distributions, gdb is available from the standard package repositories alongside gcc.
Here's a how-to: http://www.unknownroad.com/rtfm/gdbtut/gdbtoc.html
EDIT: just saw you already tried gdb. So, try running strace on it, it might give you a hint.
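For example (myparser and big_input.c are placeholder names for your binary and its input):
strace -f -o trace.log ./myparser big_input.c
That writes every system call to trace.log, so you can see what the program was doing just before things went wrong.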
Further to that, try adding log messages to see how far the program gets (primitive, but it'll work eventually!)
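A minimal sketch of what I mean, with a made-up TRACE macro (the fflush matters, so the last message survives a crash or a hard lockup):
#include <stdio.h>

/* Print file, line and a message to stderr, then flush immediately
   so the last message is visible even if the program dies right after. */
#define TRACE(msg) do { \
    fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, (msg)); \
    fflush(stderr); \
} while (0)

int main(void)
{
    TRACE("starting parse");
    /* ... parsing work goes here ... */
    TRACE("finished parse");
    return 0;
}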
Related
I wrote a program in C to measure the read and write speed of a flash drive. I have a big text file that gets written to the flash drive. The program outputs the time it took to write the file, then reads the newly written file back and outputs the time it took to read it.
I know that a computer I run the program on will be running other things in the background while the program runs, which will make the times inaccurate.
To make the times more accurate, I want the computer to devote all its resources to my C program while it runs. Is there a way to make a C program run in real time like this?
I will test this program on Linux, Mac, and Windows.
What you are asking is impossible. Think about it for a while: if the computer were to do nothing other than execute your program, who would do the memory management while your program is running? You would have to be your own operating system!
There is another flaw in your test: the file is most likely not read from the flash drive anyway. On any modern operating system it would come from the disk cache, i.e. from RAM. To defeat this, you would have to clear that cache, for example by ejecting and re-inserting the drive.
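On Linux there is also a kernel knob for this: as root, you can flush the page cache between runs (this is standard /proc functionality, not specific to your program):
sync
echo 3 > /proc/sys/vm/drop_caches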
No. Neither (normal, consumer-used) Linux, nor MacOS, nor Windows supports real-time processing. (Note that existing benchmarking suites ask you nicely not to run other programs while they're doing their thing.)
With that said, all of these OSes prioritize running foreground processes, so the results you get aren't likely to be too far off unless you've got a lot of other stuff running at the same time. You should use multiple trials to get the most precise results.
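Here is a minimal POSIX-only (C99) sketch of the multiple-trials idea; the file path and buffer size are placeholders, and on older glibc you may need to link with -lrt for clock_gettime:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { TRIALS = 5, BUF_SIZE = 1 << 20 };
    char *buf = calloc(1, BUF_SIZE);
    if (buf == NULL)
        return 1;

    for (int t = 0; t < TRIALS; t++) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);

        /* Placeholder path: point this at a file on the flash drive. */
        FILE *f = fopen("/media/flash/testfile", "wb");
        if (f == NULL)
            return 1;
        fwrite(buf, 1, BUF_SIZE, f);
        fclose(f); /* beware: the data may still only be in the OS cache */

        clock_gettime(CLOCK_MONOTONIC, &end);
        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("trial %d: wrote %d bytes in %.3f s\n", t + 1, BUF_SIZE, secs);
    }
    free(buf);
    return 0;
}
Averaging the trials (or taking the minimum) smooths out background noise, and CLOCK_MONOTONIC protects the measurement from wall-clock adjustments mid-run.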
The absolute best you can do in one of those OSes is to run your benchmark as a kernel process. That's a very difficult thing to do, however, and makes it absolutely impossible to get any sort of cross-platform compatibility.
Alternatively, you can run your benchmark on an actual real-time OS.
In order to accomplish your goal, you will most likely have to interface with the hardware directly, without the assistance of the operating system. It can be done if you are willing to write a kernel module that also acts as a driver for the flash drive. As a kernel module, there are ways to force your code to run to completion before yielding the processor.
I have the following setup. I need to program for an embedded device whose spec says it runs Linux (although when you turn the device on, the small display certainly doesn't show anything Linux-related).
The embedded device has its own SDK.
Now, I am thinking of using Valgrind to check memory management/allocation.
Can I use valgrind to check a program written for my device?
The problem I see is that the program might contain some device-specific SDK calls, so it might not run on the ordinary Fedora Linux that I run on my desktop, for example.
What are my options?
Running valgrind on embedded devices can be quite challenging, if not impossible.
What you can do is to create unit tests, and execute them using valgrind on the host platform. That is a way to at least check memory problems of part of the code.
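For example, if the unit tests build into an ordinary host binary (unit_tests is a placeholder name):
valgrind --leak-check=full ./unit_tests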
Another option is to use platform emulation and run the programs in an emulator (again on the host system). QEMU is a well-known open-source emulator.
Perhaps.
Make sure you really run Linux, of course.
Figure out the hardware platform; Valgrind supports quite a few platforms but not all.
Consider whether your platform has resources (memory and CPU speed) to spare; running Valgrind is quite costly.
If all of those check out OK, then you should be able to run Valgrind, assuming you can get it onto the target machine; you might need to build and install it yourself.
I assume you have some form of terminal/console access, i.e. over serial port, telnet, or something that you can use to run programs on the target.
UPDATE: Based on feedback in comments, I'm starting to doubt the possibility for you to run Valgrind on your particular device.
I'm using Xcode to develop a C command-line utility, and I'm attempting to use Xcode's profiling capabilities to track the program's allocations and possible memory leaks. I can attach an allocation tracker to the program just fine, and it works; the problem is that I cannot interact with the program from that point, and it just sits in its initialized, waiting state (the program is definitely running in the background somewhere, I just cannot get to it). I've tried tweaking various settings to no avail; any ideas would be greatly appreciated. Thanks.
If you launch Instruments outside of Xcode, you should be able to use the pull-down list above 'Target' and 'Attach to Process' to profile any already-running program. So one option — given that the way you describe your program makes it sound interactive — is to launch your utility in a terminal then to attach Instruments to it.
E.g. vi isn't symbolicated but running Instruments against it has just revealed that when in insert mode, it spends about 14% of its time in write and 4.5% in strcmp (albeit that with something like vi the processing is so minuscule that there's bound to be sampling error in there).
I developed a command-line (non GUI) C program on Linux using QT Creator, which internally uses gdb as its debugger. When I debugged the program on Windows using Visual Studio, it reported that it was writing outside the bounds of allocated memory (although it did not report the violation at the exact time it occurred, so it was still hard to track down). I eventually managed to find a place in the code where a malloc call was allocating too little memory and that solved the problem.
However, it bothers me that this problem was never detected on the Linux side. Are there any switches or something that would enable this detection feature on Linux?
There are many in-code memory validators that work both for Windows and Linux. Check Wikipedia for their list. However, most Linux users use Valgrind as the ultimate tool for memory debugging.
Hi, I am currently working on a Linux project written in C.
The app has several processes that share a block of shared memory. After the app has run for several hours, one process collapses without leaving any trace, so it's very difficult to know what the problem was or where to start reviewing the code.
It could be a memory overflow or pointer misuse, but I don't know exactly.
Do you have any tools or methods to detect this kind of problem?
Any advice on getting this resolved would be much appreciated. Thanks.
Before you start the program, enable core dumps:
ulimit -c unlimited
(and make sure the working directory of the process is writeable by the process)
After the process crashes, it should leave behind a core file, which you can then examine with gdb:
gdb /some/bin/executable core
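Once gdb has loaded the core file, a few stock commands will usually show where it died:
(gdb) bt
(gdb) info threads
(gdb) frame 0
bt prints the call stack at the moment of the crash; frame N selects a frame so you can inspect its locals with print.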
Alternatively, you can run the process under gdb when you start it - gdb will wake up when the process crashes.
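For example (the executable path and arguments are placeholders):
gdb --args /some/bin/executable arg1 arg2
(gdb) run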
You could also run gdb under gdb-many-windows if you are running Emacs, which gives you better debugging options and lets you examine things like the stack. This is much like the Visual Studio IDE.
Here is a useful link
http://emacs-fu.blogspot.com/2009/02/fancy-debugging-with-gdb.html
Valgrind is where you need to go next. Chances are that you have a memory misuse problem which is benign -- until it isn't. Run the programs under valgrind and see what it says.
I agree with bmargulies -- Valgrind is absolutely the best tool out there to automatically detect incorrect memory usage. Almost all Linux distributions should have it, so just emerge valgrind or apt-get install valgrind or whatever your distro uses.
However, Valgrind is hardly the least cryptic thing in existence, and it usually only helps you tell where the program eventually ended up accessing memory incorrectly -- if you stored an incorrect array index in a variable and then accessed it later, then you will still have to figure that out. Especially when paired with a powerful debugger like GDB, however (the backtrace or bt command is your friend), Valgrind is an incredibly useful tool.
Just remember to compile with the -g flag (if you are using GCC, at least), or Valgrind and GDB will not be able to tell you where in the source the memory abuse occurred.
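For example (myprog is a placeholder name):
gcc -g -O0 myprog.c -o myprog
valgrind --leak-check=full --track-origins=yes ./myprog
-O0 isn't strictly required, but it keeps the line numbers Valgrind reports closely matched to the source; --track-origins=yes tells Memcheck where uninitialised values came from.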