gdb get preprocessor macro info from file in different directory - c

I'm trying to debug some additions I made to a fairly large C program using gdb. The program I'm trying to debug makes extensive use of #define statements to set different values that are used throughout the code. I need to be able to see what these values are in order to help my debugging (as they include some very important information).
After some digging around I found that the info macro FOO and macro expand FOO commands should be able to print these values if the -g3 option is passed to the compiler (I also tried the -gdwarf-2 and -ggdb3 flags), as discussed here. However, whenever I try using this I get
The symbol `FOO' has no definition as a C/C++ preprocessor macro
at <user-defined>:-1
Now, I'm sure that the macro is defined, otherwise the previous line of code would not have been able to run. In addition, I'm certain that I have passed the -g3 flag to the compiler. I have one idea as to where the issue might be, and that is the location where the macro is defined. Currently the macro is defined in a header file that is not in the same directory as the rest of the files (i.e. if the source files are in /foo/bar/blam/.. then the macro is defined in /def/mac/here/). Given this, I thought maybe the problem was that gdb didn't know to look in this directory, so I tried issuing the directory command in gdb and gave it the path to the directory containing the header file (based on this). This still did not solve the problem.
Does anyone know how I can get the values of these macros? If it is pertinent, I'm running gdb version 7.11 and compiling the program using cc and gcc, both of which are Apple LLVM version 7.0.2 (clang-700.1.81). Also, gdb was installed/built using Homebrew.
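For reference, here is a minimal, self-contained sketch of the workflow I would expect to work (the file name and macro are made up for illustration, and this assumes a compiler that actually emits macro debug information, such as GNU gcc with -g3):

/* macro_demo.c - toy program whose macro we want to inspect from gdb */
#include <stdio.h>

#define FOO 42

int main(void)
{
    printf("FOO is %d\n", FOO);  /* break here, then: info macro FOO */
    return 0;
}

Built with gcc -g3 -o macro_demo macro_demo.c and run under gdb with a breakpoint inside main, info macro FOO should report the #define along with the file and line where it appears, and macro expand FOO should show the expansion. That is exactly the behaviour I am not getting with my real program.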

Related

Lack of debugging information in, well, debugger

Currently I am using the CLion IDE plus the latest version of the Open Watcom v2 Windows 32-bit compiler to develop a 16-bit MS-DOS application. The problem I have is that I don't see all the required debugging information when using the Watcom Windows debugger (wdw.exe).
To be specific, I see global variables, global functions and any other type of function, even those imported from asm files. But the local variables list is empty all the time. More importantly, the only C code I can see is the little test.c file, which contains only the main() function and nothing else except for includes.
What do I need to do to finally get C-level debugging for the whole project? What am I missing?
I would be grateful for any help.
All source files are located in one directory, so they should all be visible to the debugger. But it sees only the main C file.
Of course I am compiling with the -d2 switch, as well as -hw. DEBUG WATCOM ALL is also present in the linker config file before any FILE directives. I have been reading the compiler and linker manuals; it's nice that I've found many interesting things in them, but nothing has helped with exactly this issue so far :)
The compiler switches I am currently using:
WCC.EXE:
CALL WCC.EXE -dTEST -bt=dos -0 -za99 -wx -we -mc -zp2 -hw -d2
%SRC_FULL_NAME%
WLINK:
CALL WLINK.EXE #..\CC.LK
CC.LK:
SYSTEM DOS
DEBUG WATCOM ALL
FILE TEST.OBJ
FILE LUTILS.OBJ
FILE LGL.OBJ
NAME TEST.EXE
OPTION ELIMINATE
...

autoconf configure results in C std lib header related compile errors

I am attempting to build a project that comes with an automake/autoconf build system. This is a well-used project, so I doubt there is a problem with the configure scripts, makefiles, or code as I received them. It is more likely some kind of environment, path, or flag problem - something on my end with simply running the right commands with the right parameters.
The configuration step seems to complete in a satisfactory way. When I run make, I'm shown a set of errors primarily of these types:
error: ‘TRUE’ undeclared here (not in a function)
error: ‘struct work’ has no member named ‘version’
error: expected ‘)’ before ‘PRIu64’
Let's focus on the last one, which I have spent time researching - and I suspect all the errors are related to missing definitions. Apparently the print-friendly format macros from the C standard library header inttypes.h are not being found. However, in the configure step everything is claimed to be in order:
configure:4930: checking for inttypes.h
configure:4930: /usr/bin/x86_64-linux-gnu-gcc -c -g -O2 conftest.c >&5
configure:4930: $? = 0
configure:4930: result: yes
All the INTTYPES flags are set correctly if I look in confdefs.h, config.h, config.log Output Variables, etc:
HAVE_INTTYPES_H='1'
#define HAVE_INTTYPES_H 1
The problem is the same whether doing a native build, or cross-compiling (for arm-linux-gnueabihf, aka armhf).
The source .c file in question does have config.h included as you'd expect, which by my understanding of the m4 macro mechanics should be adding an
#include <inttypes.h>
line. Yes, as you may be inclined to ask, if I enter this line myself into the .c file it appears to work and the PRIu64 errors go away.
I'm left wondering how to debug this type of problem - essentially, everything I am aware of tells me I've done the configure step properly, yet I end up with a bogus make process. Aside from trying every ./configure tweak and trick I can find, I've started looking at the auto-generated Makefile.in itself, but nothing so far. I'm also looking into how I can get the C preprocessor to tell me which header files it is actually inserting.
EDIT: I've confirmed that the -DHAVE_CONFIG_H mechanic looks good through configure, config.log, Makefile, etc.
autoconf does not automatically produce #include directives. You need to do that on your own based on the HAVE_* macros. So you'll have to add something like this:
#ifdef HAVE_INTTYPES_H
# include <inttypes.h>
#endif
If these lines show up in confdefs.h, a temporary header file used by configure scripts, that does not excuse your application from performing these #includes. If configure writes them to confdefs.h, this is solely for the benefit of other configure tests, and not for application use.
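A minimal sketch of what a consuming source file typically looks like (file and function names here are made up for illustration; it is meant to be compiled by the autoconf-generated makefile, i.e. with -DHAVE_CONFIG_H and the usual include paths):

/* worker.c - illustrative only */
#ifdef HAVE_CONFIG_H
# include "config.h"     /* defines HAVE_INTTYPES_H when configure found the header */
#endif

#ifdef HAVE_INTTYPES_H
# include <inttypes.h>   /* provides PRIu64 and friends */
#endif

#include <stdio.h>
#include <stdint.h>

void print_version(uint64_t version)
{
    printf("version %" PRIu64 "\n", version);
}

Without the guarded include, PRIu64 never gets defined and you get exactly the expected ')' before 'PRIu64' error shown above.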
First, run make -n for the target that failed. This is probably some .o file; you may need some tweaking to get its path right.
Now you have the command used to compile your file. If you don't find the problem by meditating on this command, try running it with -E added, so that it emits the preprocessed text instead of compiling.
Note that the .o file will then contain text rather than object code, and you must rebuild it without -E later.
You may find some preprocessor flags useful to get more details: -dM or -dD, or others.
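For example (the object path and flags below are only placeholders; take the real ones from your own make -n output):

make -n src/worker.o
# copy the printed gcc command and re-run it with -E -dM appended, e.g.:
gcc -DHAVE_CONFIG_H -I. -g -O2 -E -dM src/worker.c | grep PRIu64

If PRIu64 shows up in the -dM output, the macro is reaching the preprocessor; if not, inttypes.h is not actually being included in that translation unit.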

CLion fails to index C preprocessor macros when -std=gnuXX is set (Linux Kernel Headers)

I am trying to write a Linux kernel module with CLion. This is the cmake file:
cmake_minimum_required(VERSION 3.5)
project(labs)
set(KERNEL_HEADERS
/home/alex/Developer/linux/include
/home/alex/Developer/linux/arch/x86/include
/home/alex/Developer/linux/arch/x86/include/generated
/home/alex/Developer/linux/include/uapi
/home/alex/Developer/linux/include/generated/uapi
/home/alex/Developer/linux/arch/x86/include/uapi
/home/alex/Developer/linux/arch/x86/include/generated/uapi
)
set(MY_MODULE_SOURCES
chapter_03/lab_01/hello.c
)
add_definitions(-imacros /home/alex/Developer/linux/include/linux/kconfig.h)
add_definitions(-D__KERNEL__)
add_definitions(-DMODULE)
add_definitions(-std=gnu89)
include_directories(${KERNEL_HEADERS})
add_custom_target(labs COMMAND $(MAKE) -C ${labs_SOURCE_DIR}
PWD=${labs_SOURCE_DIR})
add_library(dummylib ${MY_MODULE_SOURCES})
The actual building of the kernel module is done with an externally called makefile via "add_custom_target". The "dummylib" is only there so that CLion actually starts to parse the header files and gives me auto-completion. With my supplied definitions it even compiles the "dummylib" successfully (see the screenshot). It is not a kernel module, but that does not matter ;)
My problem is the error you see in the screenshot. Somehow it says that it can't resolve all the macros defined in the kernel headers. Functions, structs and plain defines ("MODULE_SIG_STRING") do work (as you can see). I do not understand why the editor says it cannot resolve the macros but can still build the target. What is stranger is that I can even jump to the declaration of the marked macros using Ctrl+B. Clearly something is going wrong. The macros really are defined within linux/module.h.
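For context, hello.c is essentially the canonical minimal module (a reconstruction for illustration; the point is that it uses exactly the kind of macros CLion complains about, such as module_init and MODULE_LICENSE from linux/module.h):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
    pr_info("Hello, world\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("Goodbye, world\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");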
Update
When I set -std=c89 instead of -std=gnu89, the editor recognizes the macros, but the "dummylib" then of course fails to build, since the kernel needs the GNU extensions. I guess this is a bug in CLion. I posted it on the JetBrains bug tracker: https://youtrack.jetbrains.com/issue/CPP-6875

GDB: What to do when you type "list" to see the code in C, but it prints to you "No source file for address __________"

I'll try to simplify and clarify my other question here. I am basically trying to use gdb to see where myfile.c is segfaulting. However, I cannot directly examine myfile.c under gdb; instead, I am given a driver program (vdriver) that will randomly test the methods I have provided for it in myfile.c.
So, after compiling with "gcc -ggdb -c vdriver.c myfile.c myfile_depends_on_this.c", I run "gdb vdriver" until it segfaults. At that point, typing "list *$eip" just prints "No source file for address 0x804something".
I am also confused about how I should "gcc -ggdb -c etc,etc" header files such as myfile.h and myfile_depends_on_this.h, because I'm not sure whether (or how) they should be included in the command.
But anyway, is there any way of fixing the "No source file for address" problem?
Here is how I understand your question (it's not quite clear to me):
how to debug after a segfault?
how to compile .h files?
As to the first:
After a crash you will no longer be in an execution context, and so no longer able to use the regular debugging commands. Instead, the crashing program will produce a core file. You probably need to allow core files to be written first (e.g. ulimit -c unlimited), then debug against the core file, as described in (eg):
http://www.network-theory.co.uk/docs/gccintro/gccintro_38.html
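In concrete terms, the usual sequence is something like this (the core file name and location depend on your system's core pattern settings):

ulimit -c unlimited    # allow the crashing process to write a core file
./vdriver              # run it until it segfaults and dumps core
gdb vdriver core       # load the core, then use bt, frame N, list and info locals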
.h files are not included in the list of files to be compiled. They are referenced from within your .c files with the usual #include <file.h> (or #include "file.h") semantics.
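In other words, something like this (a sketch reusing the names from your question; the real contents will of course differ):

/* myfile.h - declarations only, never passed to gcc directly */
#ifndef MYFILE_H
#define MYFILE_H
int add(int a, int b);
#endif

/* myfile.c - pulls the header in via #include */
#include "myfile.h"
int add(int a, int b) { return a + b; }

Compiling with gcc -ggdb -c myfile.c is enough; the header is picked up automatically because it is referenced from the .c file.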
If this wasn't your question, kindly elaborate.

Including a library (lsusb) in a C program

I am still fairly new to programming with C and I am working on a program where I want to control the power to various ports on a hub I have. That is, however, not the issue I am having right now.
I found a program online that does what I want, and I am trying to compile it. However, it uses #include <lsusb.h>. lsusb.h is located in a completely different folder than the file I am trying to compile (and not in a subfolder), so when I try to compile it I, logically enough, get the error that the file lsusb.h is not found.
How can I link to this file so that it can be found?
This is more of a GCC toolchain question than a C question (although most C compilers do use the same Unixy flags).
The angle brackets around the include file (<>) indicate that you want the compiler to search its standard include path for the file. So you can get access to that new include file either by putting it into a directory that is already on the standard include search path, or by adding its directory to that search path. With GCC you do the latter by giving gcc the flag -I"directoryname", where "directoryname" is the full path to the directory where you are keeping that new include file.
Once your compiler finds it, your linker may have the exact same problem with the library file itself ("liblsusb.a"?). You fix that the same way. The flag GCC's linker will want is -L instead of -I.
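For example, a compile line might look like this (the paths, program name and library name are only guesses for illustration; substitute the real locations on your machine):

gcc -I/path/to/lsusb/headers -o hubpower hubpower.c -L/path/to/lsusb/libs -llsusb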
See the "-I" parameter in the gcc man page. It allows you specify a directory in which to find a header file. See also -l and -L.
Or try #include "../../path_to_the_file/lsusb.h"

Resources