Instruct GDB 6.5 to use source embedded in object file

I've been trying to make GNU gdb 6.5-14 use the source code embedded in the object file when debugging, instead of scanning some directories for it.
The main reason is that I develop for an embedded platform and I cross-compile, which means that all the source is on my computer.
I read about the -ggdb3 flag, which includes a lot of extra info, including the source code. So I started compiling with that flag.
Running objdump -S src/lib/libfoo.so does print all the source intermixed with the assembly, so I'm guessing the file really does contain that info.
The only thing is that GDB does not print it, unless I run from an NFS-mounted version of my workspace that contains the source.
Does anyone know how I can instruct gdb to look in the object file for the code instead of relying on external files?

Employed Russian is correct -- gcc never embeds source code in object files.
What it does do (with any -g setting) is add paths to where the source file can be found.
GDB can use these paths to find the source files. And if you happen to set up the exact same paths on your embedded file system as the paths where you keep your source code on the host system, you can trick gdb into finding them.
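If mirroring the paths is not an option, you can also point gdb at the host copy of the sources explicitly. A rough sketch of the usual commands (the paths here are made up, substitute your own; set substitute-path was only added around gdb 6.6, so it may not exist in 6.5):

(gdb) directory /home/me/workspace/src/lib
(gdb) set substitute-path /build/workspace /home/me/workspace

directory adds an extra place to search for source files, and set substitute-path rewrites the compile-time prefix recorded in the debug info into the directory where the sources actually live on the host.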

Your guess about what -ggdb3 does is totally incorrect; the object files do not contain the source. You can prove that by running 'strings -a libfoo.so'. objdump -S only appears to show the source because it reads the source files from disk via the paths recorded in the debug info, exactly as gdb does.
Your best bet is to learn how to use remote debugging -- you can then use GDB from host (which has access to all the sources); with an added advantage that you need much less memory on target. See gdbserver in "info gdb".
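For reference, a minimal remote-debugging session looks roughly like this (program name, IP address and port are placeholders):

On the target:
gdbserver :2345 ./myprog

On the host:
gdb ./myprog
(gdb) target remote 192.168.0.42:2345
(gdb) break main
(gdb) continue

Only the small gdbserver stub runs on the target; the host gdb reads the symbols from its local copy of myprog and finds the sources on the host, which is exactly the situation you want.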

Related

How can I compile and run an ESQL/C program on Linux?

I have checked the official IBM site for the ESQL/C programming guide, but I didn't find the exact commands to compile and run a program. Do I need to install any packages? Can anyone tell me the commands to do this on Ubuntu?
ESQL/C (Embedded SQL in C) is mostly plain C, but uses special markers (either $ in Informix ESQL/C, or EXEC SQL in both standard and Informix ESQL/C) to indicate where SQL statements need preprocessing into the appropriate C library function calls, C variable definitions, and so on. The esql script is the compiler driver that converts Informix ESQL/C source first into C and then into object code and an executable (its options are mainly the same as the C compiler's options, most of which are passed to the C compiler unchanged).
You need to have the Informix ClientSDK (CSDK) installed to be able to compile ESQL/C programs. That is installed by default when the server is installed, so the chances are you're OK if you're on a machine with a working Informix server on it (if it also has a working C compiler and development environment). It is also available as a separate standalone product which you could install if you don't have, and don't want, an Informix server on your machine. There are advantages for testing if the server is local (quicker access, and less danger of damaging production systems, amongst others).
Assuming you have CSDK installed, you need to set the environment variable INFORMIXDIR to point to where the software is installed (unless you chose to install it in /usr/informix, or you create a symlink /usr/informix that points to where CSDK is installed). You'll also want to add $INFORMIXDIR/bin to your PATH. You're now ready to compile:
Compile .ec (ESQL/C source) files to object with the esql command:
esql -c esqlc_source.ec
Add other C compiler options as needed. Note that -g is intercepted by the esql script and you have to work hard to get it passed to the C compiler.
Consider compiling .c (C source) files that use an ESQL/C header with the esql script too. This will pass the correct directory for the headers to the C compiler automatically. More likely, you'll use:
${CC} -c c_source.c -I${INFORMIXDIR}/incl/esql
For linking, use the esql script as well. It supplies the correct libraries (and object files) to the compiler, which it invokes as the linker:
esql -o program c_source.o esqlc_source.o
You can add other libraries and library directories as usual.
You have the program compiled; now you need to run it. The chances are that the runtime linker won't find the Informix libraries automatically. You will need to add some directories to LD_LIBRARY_PATH, modify /etc/ld.so.conf to pick up the extra directories, or create symlinks to the Informix libraries from a place that is searched automatically (e.g. /usr/lib or /usr/lib64, or perhaps /usr/local/lib).
You need to add at minimum:
$INFORMIXDIR/lib
$INFORMIXDIR/lib/esql
Under some circumstances, you may need to add other library directories found under $INFORMIXDIR/lib as well, but usually not.
You should then be able to run the program. Running ldd program will let you know when you've got the settings right.
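Pulling the environment pieces together, a typical session might look roughly like this (the install directory is only an example):

export INFORMIXDIR=/opt/informix
export PATH=$INFORMIXDIR/bin:$PATH
export LD_LIBRARY_PATH=$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$LD_LIBRARY_PATH
esql -c esqlc_source.ec
esql -o program esqlc_source.o
ldd program

If ldd still shows 'not found' entries, the library directories above are the first thing to re-check.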

What is the point of using `-L` when there is `LD_LIBRARY_PATH`?

After reading this question, my first reaction was that the user is not seeing the error because he specifies the location of the library with -L.
However, apparently, the -L option only influences where the linker looks, and has no influence over where the loader looks when you try to run the compiled application.
My question then is what's the point of -L? Since you won't be able to run your binary unless you have the proper directories in LD_LIBRARY_PATH anyway, why not just put them there in the first place, and drop the -L, since the linker looks in LD_LIBRARY_PATH automatically?
It might be the case that you are cross-compiling and the linker is targeting a system other than your own. For instance, MinGW can be used to compile Windows binaries on Linux. Here -L will point to the DLLs needed for linking, and LD_LIBRARY_PATH will point to any libraries the linker itself needs in order to run. This allows compiling and linking for a different architecture, OS ABI, or processor type.
It's also helpful when trying to build special targets. It might be the case that one links a static version of a program against a different static library. This is the first step in Linux From Scratch, where one creates a separate mini-environment on the main system that later becomes a chroot jail.
Setting LD_LIBRARY_PATH will affect all the commands you run to build your code (including the compiler itself).
That's not desirable in general (e.g. you might not want your compiler to run debug/instrumented libraries while it compiles - it might even go as far as breaking your compiles).
Use -L to tell the compiler where to look, LD_LIBRARY_PATH to influence runtime linking.
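A small illustration of the split (library name and directories invented for the example):

gcc -o prog main.c -L./libs -lfoo
LD_LIBRARY_PATH=./libs ./prog

The -L option answers "where does the linker find libfoo at build time?"; LD_LIBRARY_PATH answers "where does the dynamic loader find it at run time?". They are separate questions, which is why one does not replace the other.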
Building the binary and running the binary are two completely independent and unrelated processes. You seem to suggest that the running environment should affect the building environment, i.e. you seem to be making an assumption that the code build in some setup (account, machine) will be later run in the same setup. I find this assumption rather strange. I'd even say that in most cases the building and the running are done in different environments. I would actually prefer my compilers not to derive any assumptions about future running environment from the environment these compilers are invoked in. Looking onto the LD_LIBRARY_PATH of the building environment would be a major no-no.
The other answers are all good, but one thing nobody has mentioned yet is static libraries. Most of the time when you use -L, it's with a static library built locally in your build tree that you don't intend to install, and it has nothing to do with LD_LIBRARY_PATH.
Compilers on Solaris support -R /runtime/path/to/some/libs, which adds to the path where libraries are searched by the run-time linker. On Linux the same can be achieved with -Wl,-rpath,/runtime/path/to/some/libs, which passes the -rpath /runtime/path/to/some/libs option to ld. GNU ld also accepts -R /path/to/libs for compatibility with other ELF linkers, but this should be avoided, as -R is normally used to specify symbol files to GNU ld.
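For example, to bake a runtime search path into the binary so that neither LD_LIBRARY_PATH nor /etc/ld.so.conf needs touching (paths are illustrative):

gcc -o prog main.c -L/opt/foo/lib -lfoo -Wl,-rpath,/opt/foo/lib

You can confirm the result with readelf -d prog and look for the RPATH (or RUNPATH) entry.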

How to tell whether an executable was compiled for the present machine?

I have some C code that I compile and run in a directory that is accessible from many different Unix computers (various Linux and Mac machines, occasionally others), with different OSes obviously needing different executables.
I have a simple shell script that invokes the appropriate executable, prog.$OSTYPE.$MACHTYPE, compiling it first if necessary. This is very simple (although it requires using csh in order to have $OSTYPE and $MACHTYPE be reliably defined) and it almost works.
However, it turns out that even $OSTYPE and $MACHTYPE are not enough: for example, compiling on OSX 10.5 yields an executable prog.darwin.i386 which, when invoked on OSX 10.4, crashes instantly.
Yes, recompiling every time I want to run the program is one way to solve this, but it seems excessive. I know having a bin directory on every machine is a standard solution, but a non-root user may not have much write access outside their home directory (which is common to all the machines).
So my question is, is there a better approach? The compiler (often gcc) obviously knows what kind of system it is compiling for -- is there a good portable way to find out what "kind of system" my script is running on, so it can invoke the correct executable, instead of one with undefined behavior?
You could use gcc -v to figure out what the installed/runnable gcc thinks the target arch is for hosted compiling (something like $(gcc -v 2>&1 | grep Target: | sed 's/.*: *//') in bash).
edit
If you really want to be able to do this without having anything in particular installed, you could extract the config.guess script from gcc (it's in the top-level directory of any gcc source package) and run that. Unfortunately this won't work for all systems and might not exactly match what the system gcc package uses on some distributions, but it is the script used to configure gcc for building unless you explicitly override it...
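As a sketch of how the gcc suggestion could be wired into the wrapper script (prog is a hypothetical base name; gcc -dumpmachine prints the compiler's target triplet, with the gcc -v parsing from above as a fallback):

#!/bin/sh
# Pick the executable matching the compiler's idea of the target system.
target=$(gcc -dumpmachine 2>/dev/null)
[ -n "$target" ] || target=$(gcc -v 2>&1 | sed -n 's/^Target: *//p')
exec "./prog.$target" "$@"

A triplet such as i686-apple-darwin9 is finer-grained than $OSTYPE.$MACHTYPE, though it describes the local compiler's target rather than guaranteeing the binary runs everywhere.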
Try the file command from the shell prompt.
Download some open source packages written in C and have a look at the file ./configure which is basically a large shell script that collects info from many sources, often by compiling and running short C programs. This will tell you everything that you need to know.
Since you are dealing with recent Mac OS X make sure you choose a package that is currently being maintained and supports OS X versions.

GNAT - GVD: not in executable format: File format not recognized

I'm on an XP Virtual Machine running the GNU Visual Debugger 1.2.6, trying to open an Ada file (.adb), but keep getting the following error:
not in executable format: File format not recognized
I should also mention that I've installed both the Ada compiler kit and win32 tools for GNAT 3.14p.
I've since tried opening other .adb files from GVD, and even .c files, but always with the same happy response as above.
Any idea why this is happening?
GVD does not take a source file as an argument; it takes an executable program. Skipping a lot of "if this and if that", to debug foo.adb you probably want to pass foo.exe to the debugger.
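In practice that means something along these lines (file name only an example, assuming the GNAT tools are on your PATH):

gnatmake -g foo.adb
gvd foo.exe

gnatmake -g builds the executable with debug information, and it is that executable, not the .adb source, that you hand to GVD.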
But this is Ada, and you shouldn't be here. ;-) If you got your source program to compile and produce an executable, you very seldom need to run the debugger. I can remember the last time I used the debugger with GNAT, and why. (A bug in Solaris; the workaround was to change a constant to a variable, because Solaris was overwriting the value passed in instead of using a temporary.) But that was what? Five years ago?
It is much easier to put in some debugging code (see pragma Debug in the GNAT documentation) and then run the program with the debug flag if necessary.
Oh, most important. You may need to look in C:\GNAT\2010\share\doc\ to find all the documentation that came with GNAT. Read it. Or at least figure out how to search it for what you need. ;-)

Debugging a Static Library with the Eclipse CDT

I'm working on getting set up with Eclipse CDT for some embedded development, and I'm having difficulty getting source-level debugging working for static libraries.
I'm using my own Makefiles, so that is my first suspect right now, especially since gdb claims that no symbol table info is available for the functions with no source. When using a static library, is the debugging information from the library usually included in the ELF file from the final link stage? Right now I can see the full source/assembly mix if I point objdump -S at the .a file, but none of the debug info makes it into the .elf. The debugging info/source is present for the main application. Am I missing some switch to tell ld to include this?
Otherwise, what is the best way to get gdb to tell me what it is looking for (and failing to find) with regard to debug information for a specific function?
Figured it out.
The lesson is very simple: always, always, triple-check your makefiles.
I was still linking in an old copy of the static library, built without debugging information.
I would guess that GDB is simply not finding the source files that go with that debug information. See http://web.mit.edu/gnu/doc/html/gdb_9.html#SEC51 for documentation on how to tell it where to find source files.
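A few quick checks that usually narrow this down (names are placeholders):

readelf -S prog.elf | grep debug

then, inside gdb:

(gdb) info sources
(gdb) info line some_function
(gdb) directory /path/to/library/src

readelf shows whether any .debug_* sections survived the final link at all; info sources lists the files gdb has debug info for, info line reports file and line for a specific function if its debug info is present, and directory adds an extra place for gdb to search for the source files.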
