How can I compile a C source file that #include's header files from each of two independent source trees? Each source tree has its own set of makefiles, and the source trees are completely independent of each other.
I'm writing a Wireshark plugin which interprets packets of a particular network protocol. In order to compile the plugin, the compiler needs to resolve symbols against the Wireshark source tree. However, in order for the plugin to actually interpret the network packet contents when Wireshark gives it a byte array, the plugin must also include definitions of data structures and RPC XDR routines from an entirely separate source tree. So the compiler also needs to resolve symbols against both Wireshark and a completely separate source tree containing these files.
Is there an easy way to do this? Any suggestions at all would be very much appreciated.
Make sure you don't confuse compile with link. Not saying you are, but just pointing out there are two distinct steps.
To compile against tree1 and tree2, pass -I include options to gcc:
gcc -c -I/some/include/for/tree1 -I/some/include/for/tree2 input.c -o output.o
To link against the two trees, create static (.a) or shared (.so) libraries from each tree, e.g. libtree1.a and libtree2.a, and put them in /path/to/tree1/libs and /path/to/tree2/libs.
Then link:
gcc -o prog output.o -L/path/to/tree1/libs -L/path/to/tree2/libs -ltree1 -ltree2
If the trees are sufficiently large, they should end up creating static or dynamic libraries of object code. Then you just point to their headers to compile and point to their libs to link.
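As a concrete sketch of that workflow for the Wireshark plugin case: the paths, the tree1/tree2 library names and plugin.c below are all placeholders, not anything Wireshark actually ships, and this bypasses the Wireshark build system entirely. The -fPIC/-shared pair is only there because a plugin is itself a shared object; for a plain executable you would drop them.
# compile: headers from both independent trees
gcc -c -fPIC -I/path/to/wireshark -I/path/to/xdr/tree/include plugin.c -o plugin.o
# link: libraries built by each tree's own makefiles
gcc -shared plugin.o -L/path/to/wireshark/libs -L/path/to/xdr/tree/libs -ltree1 -ltree2 -o plugin.so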
If you are using gcc / g++
Use -I flags to include the required header files for compiling.
e.g.:
g++ -I<includepath1> -I<includepath2> ... -c somefile.cpp -o somefile.o
Use the -L and -l flags to link against the libraries.
e.g.:
g++ -shared -o pluginname.so somefile.o somefile2.o somefile3.o -L<libpath1> -l<libname1> -L<libpath2> -l<libname2> <full path to .a file for static linking>
On Windows the approach is similar; only the nomenclature differs: .dll files instead of .so and .lib files instead of .a files.
Related
I am attempting to compile this code:
#include <GLFW/glfw3.h>
int main() {
glfwInit();
glfwTerminate();
return 0;
}
Using this command in MSYS2 on Windows 10:
gcc -Wall runVulkan.c -o runVulkan
as well as this:
gcc -Wall -Llibs/glfw runVulkan.c -o runVulkan
libs/glfw is where I downloaded the library to.
For some reason I keep getting this:
runVulkan.c:1:10: fatal error: GLFW/glfw3.h: No such file or directory
1 | #include <GLFW/glfw3.h>
| ^~~~~~~~~~~~~~
compilation terminated.
It seems like I'm getting something very basic wrong.
I'm just getting started with C, I'm trying to import Vulkan libraries.
Run pacman -S mingw-w64-x86_64-glfw to install GLFW.
Then build using gcc -Wall runVulkan.c -o runVulkan `pkg-config --cflags --libs glfw3`.
The pkg-config command prints the flags necessary to use GLFW, and the backticks pass its output to GCC as flags. You can also run it separately and manually pass any printed flags to GCC.
Note that any -l... flags (those are included in pkg-config output) must be specified after .c or .o files, otherwise they'll have no effect.
For me pkg-config prints -I/mingw64/include -L/mingw64/lib -lglfw3.
-I fixes No such file or directory. It specifies a directory where the compiler will look for #included headers. It's unnecessary when installing GLFW via pacman, though, since /mingw64/include is always searched by default.
-l fixes undefined reference errors, which you'd get after fixing the previous error. -lglfw3 needs a file called libglfw3.a or libglfw3.dll.a (or some other variants).
-L specifies a directory where -l should search for the .a files; it's likewise unnecessary when installing GLFW via pacman, since /mingw64/lib is always searched by default.
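If you want to see where each group of flags is used, here is a sketch of the same build split into a compile step and a link step (file names as in the question; the pkg-config calls just substitute the flags quoted above, and the --libs output lands after the .o file as the ordering rule requires):
gcc -Wall -c runVulkan.c -o runVulkan.o `pkg-config --cflags glfw3`
gcc runVulkan.o -o runVulkan `pkg-config --libs glfw3`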
#include lines only pull in headers, i.e. declarations. gcc, like any compiler, needs to know where to search for those .h files.
You can specify that with the -I option (or the C_INCLUDE_PATH environment variable).
You'll also need the -L option, this time to provide the library itself (a .h file does not contain the library, just the declarations the compiler needs in order to compile code that uses the library's functions and types).
The -L option tells the linker where to search for libraries.
But here you haven't specified any library, just headers. It may seem logical that the two go together, but strictly speaking there is no way to guess from #include <GLFW/glfw3.h> which library that header belongs to. That is not just theory: in practice, the well-known libc, for example, is declared across many different headers.
So you will also have to specify a -l option, in your case -lglfw.
This seems overcomplicated because in your case you compile and link in a single command (going from .c to executable directly), but those are two different operations done in one command.
Creating an executable from .c source code is done in two stages.
Compilation itself: creating a .o from each .c (many .c files for big code bases, so many compilation commands), using a command such as
gcc -I /path/where/to/find/headers -c mycode.c -o mycode.o
These commands are not related to the library, so no -l (and therefore no -L) here. What is compiled is your code, so only your code is needed at this stage, plus the header files: your code refers to functions and types it doesn't define, and the compiler needs, not their code, but at least the declarations saying that they really exist and what types the functions expect and return. Those declarations are in the header files.
Then, once all the .o files are compiled, you need to put together all the compiled code, yours (the .o files) and the libraries (which are, roughly, a sort of .zip of .o files), to create an executable. That is called linking, and it is done with commands like
gcc -o myexec mycode1.o mycode2.o -L /path/where/to/search/for/libraries -lrary
(-lbla is a compact way to include /path/where/to/search/for/libraries/libbla.so or /path/where/to/search/for/libraries/libbla.a)
At this stage you no longer need -I or anything related to headers: the code is already compiled, so headers have no role left. But you do need everything required to find the compiled code of the libraries.
So, tl;dr
At the compilation stage (the stage that raises the error you have for now), you need the -I option so that the compiler knows where to find GLFW/glfw3.h.
But that alone won't save you from the next error, which will occur at the linking stage. At that stage you need -lglfw to specify that you want to use that library, and a -L option so that the linker knows where to find a libglfw.so.
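Putting the two stages together for the question's file, a minimal sketch: the /mingw64 paths are the usual MSYS2 locations quoted in the pkg-config answer above, and the pacman-installed GLFW there is named libglfw3, so this uses -lglfw3 rather than the generic -lglfw; adjust both if your GLFW lives elsewhere.
gcc -I/mingw64/include -c runVulkan.c -o runVulkan.o
gcc runVulkan.o -o runVulkan -L/mingw64/lib -lglfw3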
I have a C library which I am using in a project. It consists of .c and .h files in src and include directories. I wrote a CMakeLists.txt file that generates a Makefile which compiles library.so.
The thing is, the library also includes .c files for tests, compatibility headers for other operating systems, and other files which I don't actually use. I would like to determine which src/header files are actually compiled into the .so library. Is there a way to do so automatically, based on CMakeLists.txt or Makefile, without going through and examining each file?
If you are using gcc then you can trick the compiler into telling you which source files it used by generating the dependency information for the next make run. If I am right, the necessary flags are:
-MT $@ -MMD -MP -MF $(AUTODEP_DIR)$(notdir $@).d
This should produce, for foo.cpp, a file foo.o.d which contains target-prerequisite lines like:
foo.o : foo.cpp bar.hpp baz.hpp
foo.cpp :
bar.hpp :
baz.hpp :
where the line with the object file as target (the file before the :) shows what got used as C/C++ source in that compile run (all files after the :). Starting with the list of .o files which are in the library, this should give you the exact list of files that you are looking for. It requires a bit of scripting but doesn't look too complicated.
clang and other compilers of course have their own, maybe differing set of flags but it shouldn't be too hard to pick them.
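For illustration, a sketch of a hand-written Makefile rule using those flags; AUTODEP_DIR, the deps/ directory and the .cpp pattern are placeholders, $@ and $< are Make's automatic variables for the target and first prerequisite, and the recipe lines must be indented with a tab as usual for make.
AUTODEP_DIR = deps/

%.o: %.cpp
    @mkdir -p $(AUTODEP_DIR)
    g++ -MT $@ -MMD -MP -MF $(AUTODEP_DIR)$(notdir $@).d -c $< -o $@
After a full build, something like grep -h '\.o :' deps/*.d collects the object-target lines; the prerequisites on those lines are exactly the sources and headers that went into each object.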
I've written several C text-processing functions that I've placed in the files string_functions.c and string_functions.h.
I was using these functions for one project and that worked out well. Now I want to use these same functions in a completely different project at the same time. I'm using gcc in Debian.
Is there a good way to use the same C source code in multiple projects at the same time? The projects are in different sub-directories with the same parent directory.
How do I structure the make files to do this?
Or do I just place a copy of string_functions.c(h) in both projects? This seems like it would make it harder to maintain the source code.
Best way to do this is to build your C files (.h and .c) into a shared library.
There are many tutorials available on how to do this with gcc; one is at this link
Once the shared library is built, you can then link it into many other projects.
Briefly, these are the steps.
Ensure your string_functions.c includes string_functions.h and builds, of course.
Then compile it as position-independent code (that's what -fPIC is for):
$gcc -Wall -fPIC -c string_functions.c
Finally, build your shared library like this (note the lib prefix, which the -l option below expects):
$gcc -shared -o libmy_stringfunctions.so string_functions.o
To link to your new shared library from some other program, make sure the directory you put it in is on the linker's search path (via -L below) and in your LD_LIBRARY_PATH at run time.
Then you may link using something like
$gcc my_otherprogram.c -L/path/to/my/lib -lmy_stringfunctions
As pointed out, one should put include files (.h) used by a shared library in some directory path, and add the location to the include search path using the -I option:
$gcc my_otherprogram.c -I/path/to/include/files -L/path/to/my/lib -lmy_stringfunctions
If this is how your directory looks:
/parent
/project1
...
string_functions.h
string_functions.c
/project2
...
string_functions.h
string_functions.c
Then all you have to do is store it in a common location, and then point to that location when building your code. This is the standard way of doing it for custom installed libraries in /opt/, for example.
Hence, one suggestion is to do your directory structure like this:
/parent
/include
string_functions.h
string_functions.c
/project1
...
/project2
...
And when building your respective projects, you include that search path when compiling (using the -I flag):
gcc mainfile.c -I/parent/include <other options>
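A sketch of what each project's Makefile could then contain, compiling the shared file straight out of the common directory (mainfile.c, myprog and the choice to keep string_functions.c under /parent/include are just one possible layout following the tree above; indent recipes with tabs):
COMMON = /parent/include
CFLAGS = -I$(COMMON)

myprog: mainfile.o string_functions.o
    gcc mainfile.o string_functions.o -o myprog

mainfile.o: mainfile.c $(COMMON)/string_functions.h
    gcc $(CFLAGS) -c mainfile.c -o mainfile.o

string_functions.o: $(COMMON)/string_functions.c $(COMMON)/string_functions.h
    gcc $(CFLAGS) -c $(COMMON)/string_functions.c -o string_functions.o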
I'm using a pre-built library called 'libdscud-6.02.a', which includes a lot of low level I/O calls for some specific hardware. From this, I created some wrapper functions which I compiled into an object file called 'io.o'.
Now, I have a few programs which I'm compiling with these I/O functions and instead of having to do this:
gcc libdscud-6.02.a io.o -o test test.c
I would like to just have this:
gcc io.o -o test test.c
Is there any way to link the .a file into the .o file so I only need to include the .o file when compiling binaries?
You could do the opposite and add the io.o file to the .a file using ar:
ar q libdscud-6.02.a io.o
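Depending on your version of ar you may also need to refresh the archive's symbol index afterwards (e.g. ranlib libdscud-6.02.a). Once io.o is inside the archive, the link line from the question shrinks to something like:
gcc test.c libdscud-6.02.a -o test
Listing the archive after test.c lets the linker resolve the symbols that test.c references from it.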
One solution would be simply to use a make variable:
IO_STUFF = libdscud-6.02.a io.o
...
$(CC) $(IO_STUFF) ...
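Spelled out a little more, a hypothetical rule using such a variable might look like this (file names as in the question; here the archive is deliberately listed last so the linker, resolving symbols left to right, can pull the low-level routines that test.c and io.o refer to, and recipes are tab-indented):
IO_STUFF = io.o libdscud-6.02.a

test: test.c $(IO_STUFF)
    $(CC) test.c $(IO_STUFF) -o test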
AFAIK it is not possible to link a .a library and a .o file into another intermediate file, i.e. a file which is not yet fully linked, like a .o file.
The solution provided by Burton Samograd looks like a very good option; but if you are not allowed to modify the .a library file, you can follow the suggestion provided by DarkDust if you are building with make.
However, you can create a shared library (.so) from a .a library file and a .o file (I think that is what Michael Burr is trying to convey). You can then use only the shared library, instead of both the .a and the .o file, to generate your executable, as follows:
Generate the shared library: gcc io.o libdscud-6.02.a -shared -o io.so (please note that the order of the files passed for linking is important).
Build your source with gcc io.so -o test test.c. To run the executable, the directory containing io.so must be in the dynamic loader's search path, e.g. LD_LIBRARY_PATH.
The right way to work with a shared object would be to name it libio.so, which follows the naming convention, rather than io.so, and build the code as gcc test.c -o test -L<path_to_libio.so> -lio; the path to libio.so must again be in the loader's search path when running the resulting executable.
I know that creating a shared library just to avoid adding another file to the compile line does not seem to be what you want to do... but it is just for your info in case you didn't already know :)
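For completeness, running the resulting executable when the shared object lives outside the standard library directories looks something like this, where /path/to/libdir is a placeholder for the directory that actually contains libio.so:
LD_LIBRARY_PATH=/path/to/libdir ./test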
I'm working on some embedded code for a class project that currently (per requirements) creates a number of srec files and merges them. I'd like to be able to load this code into QEMU, but it is generally only happy with ELF files. What is the easiest way to merge the original ELF files instead of the srecs?
Also acceptable: a method to convert the srec back into an ELF and have the resulting file be loadable (objcopy seems to produce fairly broken files doing this: no architecture, among other problems).
The tools must be capable of working with m68k binaries, but the host system is plain x86.
Please look at ELFIO library. It contains WriteObj and Writer examples. By using the library, you will be able to create ELF binary files on different host platforms.
Easy...
let's assume two source files: a.c and b.c
gcc a.c -c -o a.o
gcc b.c -c -o b.o
ld -r a.o b.o -o c.o
c.o now contains both a.o and b.o as a relocatable ELF file.
--Ivan
Use your linker perhaps?
An srec file contains only the linked/located loadable binary; an ELF file contains additional metadata that is lost when the binary is generated, so converting back to ELF may not be possible, especially if the ELF needs to be relocatable.
I found the easiest solution to my original problem was actually to add SREC loading to qemu. I am already modifying the source to add board support, so SREC support isn't much additional work. I found some code on github from someone who had already done so and used it as a basis for my work.
https://github.com/MegabytePhreak/qemu-mcf5307/commit/d3bceb911893b37b2524d6e804bac96691d4d33c