I am working with a different compiler, CC, which doesn't behave like GCC.
With GCC, I can run "gcc -o exe_filename source_filename" and the output is an executable.
With CC, I need two steps. First I compile the source files (suppose they involve a .c and a .h file), which creates a .lis file and an .obj file. Then I run a link command, which creates a .exe file.
What is the relationship between LIS, OBJ, and EXE files? I ask because I wonder which files I need if I want to use the exe on another machine, without including unnecessary files. If LIS and OBJ are only used during compilation, I don't need them on the other machine.
The compiler takes C files (and includes H files as referenced) and produces object (OBJ) and listing (LIS) files. The object file contains the code and data, but has unresolved external references. The listing typically includes line numbers, error and warning messages, and optional sections such as a type and variable cross-reference.
The linker combines object files and resolves external references to libraries. The result is an executable (EXE) image. (Or shareable image when creating libraries.)
Only the executable file needs to be copied from one system to another to run the application. The listing may be useful for interpreting error messages as it provides the properly correlated line numbers. The object could be useful if the application needs to be relinked due to changes in libraries, particularly if the target system has older versions than the original system.
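For comparison, the same two-step flow expressed with GCC (file names are examples):

    # step 1: compile only; produces an object file with unresolved externals
    gcc -c main.c -o main.o

    # step 2: link; resolves externals against libraries, producing the executable
    gcc main.o -o main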
The OBJ files are the compiled C files, in a format that lets them be "linked" together by a linker and turned into an EXE.
Compile -> OBJ -> Link -> EXE
The LIS file is just informational output describing the C code the compiler ended up compiling.
All you need once everything is compiled and linked is the EXE.
You don't need the other files; the exe will work fine by itself.
I don't know much about LIS, but the difference between OBJ and EXE is that an OBJ file may contain unresolved symbols, whereas in an EXE file all symbols are linked and resolved.
If the other machine has the same hardware (and OS), you can run the exe directly; otherwise you have to cross-compile.
I have embedded software written entirely in C and assembly, and for its build process I am using SCons and GCC. The source code is organized in different folders, and each folder represents a "standalone" sub-project (i.e., application ELF, static library, etc.).
The main SConstruct file, located in the main folder of the project, builds a default environment, adding to it some compilation flags shared among all the sub-projects; it then invokes the SConscripts placed in every sub-project, exporting the default environment.
Each sub-project builds a new environment by cloning the default one and customizes it with new builders, new compilation flags, etc.
The issue comes up because I have to add, for all sub-projects, some actions after each object file's compilation. In short, I need to generate a preprocessed file for each ".c" and ".S", invoking the preprocessor with the same flags used for the normal compilation. To do this I think the best solution is to add the compilation flag "-save-temps=obj" to the default environment (this flag tells the compiler to keep the temporary files), so all sub-projects will inherit this behavior.
The issue is that SCons doesn't track each generated temporary file. Considering that:
for each .c file, gcc will create two temporary files, .i and .s
for each .S file, gcc will create a single temporary file, .s
I need to add to the default Object builder:
A SideEffect to tell SCons that a .c-to-.o compilation will also create a .i file;
A SideEffect to tell SCons that a .c-to-.o compilation will also create a .s file;
A SideEffect to tell SCons that a .S-to-.o compilation will also create a .s file;
Is there a way to do this using only the file suffixes, without enumerating each target object file?
Furthermore, on each temporary file I need to invoke a custom tool, again with the same compilation flags used for its compilation, in order to create other files with additional debug information. How can I do this? Is there a way to add a post-action to the Object builder?
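To make the question concrete, here is the kind of wrapper I could imagine (the helper name is mine, and it is untested); what I would really like is the same effect attached to the default Object builder itself:

    import os

    def ObjectWithTemps(env, source, **kw):
        # Map source suffix -> temporaries left behind by -save-temps=obj
        temps_by_suffix = {'.c': ['.i', '.s'], '.S': ['.s']}
        objs = env.Object(source, **kw)      # assumes a single source file
        ext = os.path.splitext(str(source))[1]
        for obj in objs:
            base = os.path.splitext(str(obj))[0]
            side = [base + t for t in temps_by_suffix.get(ext, [])]
            if side:
                env.SideEffect(side, obj)    # SCons now tracks the .i/.s files
                env.Clean(obj, side)         # and removes them on 'scons -c'
        return objs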
Thanks,
Ciro
In my homemade build system, I create many .a files. I then want to link these into one final image.
The issue is that ld naturally assumes these are libraries, and therefore links none of the symbols in, producing an empty image as output.
Can I force ld to treat these as groups of object files?
It seems to me the only answer is: you can't put main in a library!
If you keep main as an object file, it will pull in the rest of the program, and even if the rest of it is in .a files it should link just fine.
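In commands, roughly (names made up):

    # keep main.o out of any archive; list it first, then the libraries
    gcc -o image main.o libfoo.a libbar.a

(Alternatively, ld's --whole-archive option forces every member of an archive to be linked in, if you really do want the archives treated as groups of object files.)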
I work for a group in which our test bucket has hundreds of .c source programs. The .c programs are fairly small and they all include the same 10 .h header files. These .h files are fairly large.
Each time we get a new library file to test, we run a script to recompile our test bucket and run it against the new library. The problem is that the compiling takes fairly long, especially if the environment is virtual.
Is there a way to compile the .h header files once, put the result in a separate object file, and have the many .c source files link against that object file? I think this would speed up compile time. I am willing to change/remove all the #includes in the .c source programs.
Any suggestions for speeding up compile time are greatly appreciated.
Also, I should say that a script executes a makefile per .c source test program; the makefile is not told to compile all the programs in the current directory. Each test program is compiled into its own executable.
You could use the precompiled headers feature. See http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html
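For example (assuming a single umbrella header; the flags must match between creating the .gch and using it):

    # compile the header once; gcc writes big.h.gch next to it
    gcc $CFLAGS big.h

    # later compiles that #include "big.h" pick the .gch up automatically
    gcc $CFLAGS -c test001.c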
You've asked for further suggestions to speed up your compilation.
One way can be using ccache. Basically, ccache keeps a cache of the object files compiled so far and returns them (instead of recompiling over and over) when it recognises that the same source file is being compiled again.
Using it should be as simple as
Install ccache
Prefix your gcc/cc/g++ command with ccache
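For example, in a makefile:

    # only the compiler variable needs to change
    CC = ccache gcc

or one-off on the command line:

    ccache gcc -c test001.c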
Rewrite your headers: strip out the implementations and move them into new .c files, leaving only the declarations in the headers. Compile those .c files as a library, link your programs against it, and distribute the library to the runtime system.
If I understand correctly, the way libraries typically work is by using precompiled code in object files (.o, or shared libraries such as .so on Linux systems), while providing header files (.h) for use in projects.
What happens is when you compile, the #include <library.h> directive finds that header and pastes its contents into the source file being compiled. Then, once the source file is compiled, it is linked against the precompiled object file. That way, the library can be included in a huge number of projects without needing to be compiled from source each time. The only part that must be recompiled when linking to a library is the (relatively) small amount of code in the headers, which essentially makes the library's functions and variables accessible to the source code.
All this means that to drastically speed up compilation, your best bet is to take all of the function bodies out of the 10 .h files and leave only the function prototypes in the headers. Once you have all of the functions in separate .c source files, you can compile them into an object file (typically with the -c flag). Then, whenever you need to compile a new program against the 10 headers you typically use, you can instead include your stripped-down versions of the headers and link against the precompiled object. Since only the new code in the .c file has to be compiled, instead of all of the code in the headers, the process should be much faster.
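In commands, the idea looks like this (names are examples):

    # compile the shared implementation once
    gcc -c common.c -o common.o

    # each test then compiles only its own small .c file
    gcc test001.c common.o -o test001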
Hello Stack Overflow Community,
I am working on a C project to interleave multiple C programs into one binary, which can run the interleaved programs as threads or forks for benchmarking purposes.
Therefore I run make in each program folder of the desired programs and prelink all .o files with "ld -r" into one new .o file. After that I add a specifically named function to each of these "big" .o files, which does nothing but run the main() of each program, providing argc and argv. Then I use objcopy to localize every global symbol except the undefined ones and the one of my specific function that runs main(). At last I link these manipulated .o files together with my program, which runs the specifically named functions as threads, as forks, or one after another.
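In commands, the pipeline looks roughly like this (symbol and file names are examples):

    # prelink one program's objects into a single relocatable object
    ld -r -o prog1_all.o prog1/*.o

    # ...add the wrapper function that calls prog1's main()...

    # keep only the wrapper global; -G (--keep-global-symbol) localizes
    # every other defined symbol, leaving undefined ones untouched
    objcopy -G run_prog1_main prog1_all.o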
Now to my question/problem:
I ran into a problem with static libs. I was using ffmpeg for testing, and it builds static libs such as libavcodec, libavutil, and so on. Unfortunately, "ld -r" does not link .a files. So I tried to extract these libs with ar -x and then link the extracted .o files, in the way mentioned above, into the "big" new .o file. But it did not work, because libavcodec and libavutil both include the file ff_inverse.o. That is obviously not a problem when I just build ffmpeg, which links these static libraries. But still, both libraries include it, so there must be a mechanism that chooses which ff_inverse.o to use and link. So my question: how does this work? Where is the difference?
The way ld does it with normal linking is to prioritize the libraries. Libraries listed first on the command line are linked in first, and only if symbols are still unresolved does it move on to the next library. When linking static libraries, it ignores the name of each .o file, because the name is unnecessary; only the exported symbols matter. You may want to emulate that behavior by extracting the libraries in a sorted order, skipping members you already have.
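A sketch of that with your two libraries (untested; ar t lists an archive's members, ar x extracts them):

    mkdir objs && cd objs

    # extract the first library completely; its members win
    ar x ../libavcodec.a

    # from the second, extract only members not already present
    for m in $(ar t ../libavutil.a); do
        [ -e "$m" ] || ar x ../libavutil.a "$m"
    done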
I have a set of C files that I would like to use. Can I just copy them to my include directory, or do I have to compile them? I would think they would be compiled into the final binary.
You need to compile those C files if you want to use them.
To make use of what's in those C files, you'll need a header file that declares what's inside them.
Those header files are what you'd put in your include folder, and you'll compile the C files together with your other C files. (Or you could make a library out of those C files.)
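For the library route, something like this (file names are examples):

    # compile the borrowed sources once
    gcc -c foo.c bar.c

    # bundle them into a static library
    ar rcs libborrowed.a foo.o bar.o

    # link your own code against it
    gcc main.c -L. -lborrowed -o app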
Yes, they need to be compiled so that they are available at the linking step. C is not an interpreted language, so having the sources present in an include directory would do nothing for execution.
You can keep the source files where they are, with the include files in the include directory. Use the compilation option -I./<include-file-directory> to tell the compiler where to find the include files.
The final binary will be the compiled version of all the source files you give to the compiler. You have to explicitly specify every file to be compiled, along with the final executable name.
If you don't, a default executable named a.out (assuming the platform is Linux and the compiler is gcc) is created in the directory where you compile.
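For example:

    # pull headers from ./include and name the output explicitly
    gcc -I./include main.c helper.c -o myprog

    # without -o, the result is ./a.out
    gcc -I./include main.c helper.c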
Check the link for more details on compilation using a makefile.