I am developing an application using the OpenSSL API. As is well known, OpenSSL uses myriads of global variables, which Valgrind flags as errors ("Conditional jump or move..." etc.). Thus Valgrind's output gets clogged with errors from the shared libraries. This is very inconvenient for debugging, because every time I get:
More than X total errors detected. I'm not reporting any more.
Final error counts will be inaccurate. Go fix your program!
The questions are:
Can I disable Valgrind's memory checks for third-party libraries (-lssl and -lcrypto in my case)?
OR can I focus only on "definitely lost" errors?
Thank you.
Adding the option
--undef-value-errors=no
works for me (it hides all "Conditional jump or move depends on uninitialised value(s)" reports).
For more information see Valgrind's man page.
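For example (with ./myapp standing in for your program):
valgrind --undef-value-errors=no --error-limit=no ./myapp
--error-limit=no also removes the "More than X total errors detected" cutoff quoted in the question. To focus on "definitely lost" leaks, newer Valgrind versions additionally accept --show-leak-kinds=definite together with --leak-check=full.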
Valgrind can be configured to suppress errors in libraries.
You can find details here: http://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
From the web-page linked above:
Note: By far the easiest way to add suppressions is to use the --gen-suppressions=yes option described in Core Command-line Options. This generates suppressions automatically. For best results, though, you may want to edit the output of --gen-suppressions=yes by hand, in which case it would be advisable to read through this section.
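A run that emits ready-made suppression blocks might look like this (./myapp is a placeholder for your program):
valgrind --gen-suppressions=yes ./myapp
With =yes Valgrind asks before printing each suppression; --gen-suppressions=all prints them all without asking.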
You need to compile OpenSSL with the PURIFY flag (-DPURIFY in CFLAGS) to get rid of the errors. Do not use a version compiled that way in your final application, only for debugging purposes, because it decreases the entropy used in various places.
For example, compile OpenSSL in debug mode with :
./config -d no-static shared zlib -Wa,--noexecstack -DPURIFY -O0 -ggdb3
Please note that you might also hide warnings generated by your own faulty code if you disable/suppress all checks in OpenSSL. For example, passing not-fully-initialized structures to OpenSSL functions can also result in "conditional jump or move..." errors, and you probably want to see those.
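As a minimal illustration of that failure mode (this is deliberately buggy demo code; SHA256 and SHA256_DIGEST_LENGTH come from openssl/sha.h, and the program links with -lcrypto):
#include <openssl/sha.h>

int main(void)
{
    unsigned char msg[16];                      /* never initialised: the bug */
    unsigned char digest[SHA256_DIGEST_LENGTH];

    /* Hashing uninitialised bytes taints the digest... */
    SHA256(msg, sizeof msg, digest);

    /* ...so this branch is reported as "Conditional jump or move
       depends on uninitialised value(s)" against main, not OpenSSL. */
    return digest[0] == 0;
}
Suppressing everything that mentions libcrypto would hide exactly this kind of report.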
Related
I'm running OS X 10.12 and I'm developing a basic text-based operating system. I have developed a boot loader and that seems to run fine. My only problem is that when I attempt to compile my kernel into pure binary, the linker won't work. I have done some research and I think this is because OS X uses the Darwin linker rather than the GNU linker. Because of this, I downloaded and installed the GNU binutils. However, it still won't work...
Here is my kernel:
void main() {
// Create pointer to a character and point it to the first cell of video
// memory (i.e. the top-left)
char* video_memory = (char*) 0xb8000;
// At that address, put an x
*video_memory = 'x';
}
And this is when I attempt to compile it:
Hazims-MacBook-Pro:32 bit root# gcc -ffreestanding -c kernel.c -o kernel.o
Hazims-MacBook-Pro:32 bit root# ld -o kernel.bin -T text 0x1000 kernel.o --oformat binary
ld: unknown option: -T
Hazims-MacBook-Pro:32 bit root#
I would love to know how to solve this issue. Thank you for your time.
-T is a GNU ld option (it supplies a linker script), and the Darwin ld that ships with OS X doesn't understand it. You want a proper cross-toolchain, letting the cross-gcc drive the link. Have a look at this:
With these components you can now actually build the final kernel. We use the compiler as the linker as it allows it greater control over the link process. Note that if your kernel is written in C++, you should use the C++ compiler instead.
You can then link your kernel using:
i686-elf-gcc -T linker.ld -o myos.bin -ffreestanding -O2 -nostdlib boot.o kernel.o -lgcc
Note: Some tutorials suggest linking with i686-elf-ld rather than the compiler, however this prevents the compiler from performing various tasks during linking.
The file myos.bin is now your kernel (all other files are no longer needed). Note that we are linking against libgcc, which implements various runtime routines that your cross-compiler depends on. Leaving it out will give you problems in the future. If you did not build and install libgcc as part of your cross-compiler, you should go back now and build a cross-compiler with libgcc. The compiler depends on this library and will use it regardless of whether you provide it or not.
This is all taken directly from OSDev, which documents the entire process, including a bare-bones kernel, very clearly.
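For reference, the linker.ld used above is an ordinary GNU ld script. A minimal sketch in the spirit of the OSDev bare-bones tutorial looks roughly like this (the 1M load address and the section list are that tutorial's conventions, not requirements):
ENTRY(_start)
SECTIONS
{
    . = 1M;                               /* load the kernel at 1 MiB */
    .text   : { *(.multiboot) *(.text) }  /* multiboot header first, then code */
    .rodata : { *(.rodata) }
    .data   : { *(.data) }
    .bss    : { *(COMMON) *(.bss) }
}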
You're correct in that you probably want binutils for this, especially if you're coding bare-metal; while clang as-is purports to be a cross-compiler, it's far from optimal or usable here, for various reasons. I infer you're developing for ARM; if so, you want this:
https://developer.arm.com/open-source/gnu-toolchain/gnu-rm
Aside from the fact that gcc does this markedly better than clang, there's also the issue that ld does not build on OS X from the binutils package; in some configurations it silently fails, so you may in fact never have actually installed it despite watching libiberty etc. build. It will sometimes even go through the motions of compiling the source for that target and then just refuse to link it. To the fellow with the lousy tone blaming the OP: if you had relevant experience, i.e. had ever built this under these conditions, you would know that blaming them is patently obnoxious. It would be nice if you'd refrain from discouraging people from asking legitimate questions.
In the cxxfilt package they mumble about apple-darwin not being a target; try changing FAKE_TARGET from mn10003000-whatever (or whatever they used) to apple-rhapsody some time.
You're still in much better shape just building them from current sources if, say, you need to strip relocations from something or want to work on restoring static linkage to the system (which is missing by default from that clang installation as well). Anyhow, it's not really that ld couldn't work with Mach-O; it's all there, code-wise. That I am sure of.
Regarding locating things in memory, you may want to refer to a linker script:
http://svn.screwjackllc.com/?p=noid.git;a=blob_plain;f=new_mbed_bs.link_script.ld
I have some code in there that places things directly in memory; rather than doing it on the command line, it is more reproducible to go with the linker script. It's a little complex, but what it is doing is setting up a couple of regions of memory to be used with my memory allocators. You can use malloc, but you should prefer not to use actual malloc; dynamic memory is fine when it isn't dynamic... heh.
The script also sets markers for the stack and heap locations. Although they are just markers, not loaded until go time, the stack and heap actually get placed by the startup code, which is in assembly and rather readable and well commented (hard to believe, I know). Neat trick: you have some persistence in that volatile memory, so I set aside a very tiny bit to flip, and you can do things like have it control which bootloader runs on the next power cycle. Again, you are 100% correct regarding the linker; it seems you are headed in the right direction. Incidentally, there are a ton of other ways to modify objects prior to loading them and to preload things in memory, similar to this method; check out objcopy and objdump. You can use gdb to dump S-records of structures in memory, note the address, and then, before linking but after assembly, use dd to insert the records you extracted with gdb back into the extracted sections. It is one of my favorite ways just because it is the smartass route :D Also, if you are ever tight on memory and need to precalculate constants, it's one way to optimize things. That way is actually closer to what ld is doing, just done by hand. The path of least resistance now, though, is probably the linker script.
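If you want to experiment with the objcopy/objdump route mentioned above, a typical sequence looks like this (the arm-none-eabi- prefix and the file names are placeholders for whatever your toolchain and project use):
arm-none-eabi-objdump -d kernel.elf > kernel.lst      # disassemble to inspect the layout
arm-none-eabi-objcopy -O binary kernel.elf kernel.bin # strip down to a raw binary image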
1) First, I want to know: how do I decode such variables?
I know the usual solutions to this problem (remove the optimization flag, make the variable volatile), but I don't want to do any of that. Is there a solution that works without compiling the source again? The problem is that whenever I make any change it takes ages to compile, so I don't want to rebuild with different optimization flags. Also, I once tried changing the optimization flag and the program crashed just because of the change in compilation flags, for reasons I can't fathom.
Also, I am not able to find documentation on interpreting the various registers shown by "info reg". I was expecting to spot some variable (whose value I knew), but info reg shows me entirely different values. I am missing something here. The architecture I am working on is x86_64.
2) I want to know what restrictions gdb faces in decoding such register variables, and whether this problem has already been tackled by someone. I have read in many places that by going through the assembly code you can find out which variable is in which register. If that's true, why can't it be built into gdb? Please point me to the relevant pages if there are solutions to this problem.
If you don't have the source, or the code was not compiled with debug info and no optimizations (i.e. 3rd-party code), the best you can do is disassemble the code and try to determine how the variables are stored.
In gdb, the disassemble command will dump the assembly for a given function:
disassemble <function name>
Or, if symbols have been stripped:
disassemble <address>
where <address> is the entry point to the function.
You may also have to inspect where the function is called to determine the calling conventions used.
Once you've figured out the structure of the functions and the variable layout (stack variables or registers), you can step through each instruction with nexti and stepi while debugging, and watch how the variables change by dumping the contents of the relevant registers or memory locations.
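A session of that kind might look roughly like this (the function name and the address are placeholders):
(gdb) disassemble main
(gdb) break *0x400512            # break on a specific instruction
(gdb) run
(gdb) stepi                      # execute one instruction
(gdb) info registers rdi rsi rax
(gdb) x/4gx $rsp                 # dump four giant words at the stack pointer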
I don't know any good primers or tutorials myself, but this question and its answers may be of use to you. Personally, I find myself referencing the Intel manuals the most; they can be downloaded as PDFs from Intel's website. I don't have a link handy at the moment, but if someone else does, perhaps they can update my answer.
Have you looked at compiling your code un-optimized?
Try one of these in your gcc options:
-Og
Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience.
-O0
Reduce compilation time and make debugging produce the expected results. This is the default.
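So, assuming a source file named app.c (a placeholder), a debugger-friendly rebuild would be something like:
gcc -Og -g -o app app.c
-g keeps the debug info that lets gdb map registers and stack slots back to named variables.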
I am using a GCC cross-compiler to compile for an ARM platform. I have a problem where using the optimization flag -O3 gives me "bad immediate value for offset (4104)" in a temp file, ccm4baaa.s. I can't find this file either.
How do I debug this, or find the source of the error? I know it's somewhere in hyper.c, but it's impossible to locate because no errors are reported against hyper.c itself, only the cryptic message above.
Best Regards
Mr Gigu
There have been similar known bugs in previous releases of GCC, so it might just be a matter of updating your GCC toolchain. Which version are you using currently?
In order to debug the problem and find the offending source in cases like this, it helps to add the gcc option -save-temps to the compilation. The effect is that the compiler keeps the intermediate assembly files (and the preprocessor output) for you to examine.
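For example, re-running just the failing file (substitute your actual cross-compiler name and your usual flags):
gcc -O3 -save-temps -c hyper.c
This keeps hyper.i (the preprocessed source) and hyper.s (the assembly that was previously written to the temporary ccm4baaa.s), so you can search hyper.s for the offending immediate.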
By default, gcc adds a symbol table to the executable, so gdb can produce a readable stack trace.
The documentation for the -ggdb1 option in the gcc man page says:
Level 1 produces minimal information, enough for making backtraces in parts of the program that you don't plan to debug. This includes descriptions of functions and external variables, but no information about local variables and no line numbers.
...which looks to me the same as just calling gcc without any debug-related arguments. But there are clearly extra debug sections emitted (.debug_frame, .debug_str, .debug_loc).
So what exactly is the difference, and is there any benefit of compiling with -ggdb1 as opposed to simply not stripping the executable?
Find the dwarfdump utility (part of libdwarf) and see what debug info is emitted in those sections. Then decide for yourself whether there is any difference between compiling with level-1 debug info and not stripping the executable. The DWARF specification is also freely available.
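For example, to compare the two builds side by side (app.c and the output names are placeholders; readelf from binutils works as well if you don't have dwarfdump installed):
gcc -ggdb1 -o app_g1 app.c
gcc -o app_plain app.c
readelf --debug-dump=info app_g1
readelf --debug-dump=info app_plain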
Is it possible to tell valgrind to ignore some set of libraries?
Specifically, the glibc libraries..
Actual Problem:
I have some code that runs fine in normal execution. No leaks etc.
When I try to run it through valgrind, I get core dumps and program restarts/stops.
The core usually points to glibc functions (usually fseek, mutex operations, etc.).
I understand that there might be an issue with incompatible glibc/Valgrind versions.
I tried various Valgrind releases and glibc versions, but no luck.
Any suggestions?
This probably doesn't answer your question, but it will give you the specifics of how to suppress certain errors (which others have alluded to but have not described in detail):
First, run valgrind as follows:
valgrind --gen-suppressions=all --log-file=valgrind.out ./a.out
Now the output file valgrind.out will contain some automatically-generated suppression blocks like the following:
{
stupid sendmsg bug: http://sourceware.org/bugzilla/show_bug.cgi?id=14687
Memcheck:Param
sendmsg(mmsg[0].msg_hdr)
fun:sendmmsg
obj:/usr/lib/libresolv-2.17.so
fun:__libc_res_nquery
obj:/usr/lib/libresolv-2.17.so
fun:__libc_res_nsearch
fun:_nss_dns_gethostbyname4_r
fun:gaih_inet
fun:getaddrinfo
fun:get_socket_fd
fun:main
}
Where "stupid sendmsg bug" and the link are the name that I added to refer to this block. Now, save that block to sendmsg.supp and tell valgrind about that file on the next run:
valgrind --log-file=valgrind.out --suppressions=sendmsg.supp ./a.out
And valgrind will graciously ignore that stupid upstream bug.
As noted by unwind, valgrind has an elaborate mechanism for controlling which procedures are instrumented and how. But both valgrind and glibc are complicated beasts, and you really, really, really don't want to do this. The easy way to get a glibc and valgrind that are mutually compatible is to get both from the Linux distro of your choice. Things should "just work", and if they don't, you have somebody to complain to.
Yes, look into Valgrind's suppression system.
You probably want to ask about this on the Valgrind users' mailing list (which is extremely helpful). You can suppress output from certain calls; however, suppressing the noise is all you are doing. The calls still go through Valgrind.
To accomplish what you need, you would (ideally) match Valgrind appropriately with glibc, or use the macros in valgrind/valgrind.h to work around the problem spots. Using those, yes, you can tell Valgrind not to touch certain things. I'm not sure which calls are borking everything; however, you can also (selectively) skip bits of code in your own program when it runs under Valgrind. See the RUNNING_ON_VALGRIND macro in valgrind/valgrind.h.
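A minimal sketch of that macro in use (the guarded function is a hypothetical stand-in for whatever code misbehaves under instrumentation):
#include <valgrind/valgrind.h>

void do_risky_io(void);  /* hypothetical: the code that crashes when instrumented */

void maybe_do_risky_io(void)
{
    /* RUNNING_ON_VALGRIND evaluates to non-zero when the program is
       running under any Valgrind tool, and to 0 otherwise. */
    if (RUNNING_ON_VALGRIND) {
        return;  /* skip the problematic path under Valgrind */
    }
    do_risky_io();
}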
The other thing that comes to mind is to make sure that Valgrind was compiled correctly to deal with threads. Keep in mind that, if Valgrind is not properly configured, atomic operations under it could cause your program to crash during racy operations where it otherwise might not.
If you have been swapping versions of Valgrind and glibc, there's a chance you found a match but incorrectly configured Valgrind at build time.