How does Linux C code find its conf file (usually in /etc)? - c

I know that when I install a Linux app from source, I execute ./configure --sysconfdir=/etc, and then the app's conf file (such as httpd.conf) goes to /etc.
But from the point of view of the source code, how does the code know the conf file is under /etc when it parses it? I mean, code like fopen("/../../app.conf", "r"); is fixed before we install it. Does the configure step change the source code, or does some other mechanism exist?

The configure script generates the necessary Makefile, which uses the C compiler's -DMACRO=content option to inject the equivalent of C preprocessor #define MACRO content statements into the compilation units. So sysconfdir could be used via Make rules:
foo.o: foo.c
	$(CC) -c -DCONFDIR='"$(sysconfdir)"' -o $@ $<
(That says to build the foo.o object file whenever foo.c is updated; to build it, use the $(CC) variable to run the C compiler, compile only (-c), define CONFDIR as a string literal containing $(sysconfdir) (supplied via the ./configure script), put the output into the target file ($@), and give the source file ($<) as the lone input to the compiler.)
Then the C code in foo.c could use it like this:
FILE *conf = fopen(CONFDIR "/foo", "r");
if (conf != NULL) {
    /* read the config file, then fclose(conf) */
} else {
    /* unable to open the config file: report an error and die, or fall back to defaults */
}
Note that the adjacent string literals are concatenated at compile time -- super convenient for exactly this kind of use.
More details here: http://www.gnu.org/software/hello/manual/autoconf/Installation-Directory-Variables.html#Installation-Directory-Variables

When you execute ./configure, it typically generates a makefile that includes the command options for the C compiler. These options will include -D... options that (in effect) "#define" various CPP symbols. One of these will have the "/etc" value that you supplied when you ran ./configure --sysconfdir=/etc.
From there, the "/etc" string gets compiled into the code anywhere that the source code uses the #defined symbol.
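For example, in an automake-based project the usual pattern (a sketch; the macro name SYSCONFDIR is just a common convention, not something configure defines for you) is to pass the expanded directory as a string literal via Makefile.am:
AM_CPPFLAGS = -DSYSCONFDIR='"$(sysconfdir)"'
The C code can then build paths with SYSCONFDIR exactly like CONFDIR in the answer above, e.g. fopen(SYSCONFDIR "/app.conf", "r").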

Related

Compile and Link to .com file with Turbo C

I'm trying to compile and link a simple program to a DOS .com file using the Turbo C compiler and linker. To do that, I'm trying the simplest C program I can think of.
void main()
{}
Are there command line arguments to link to com files in the Turbo C Linker?
The Error Message I get from the Linker is the following:
"Fatal: Cannot generate COM file: invalid entry point address"
I know that com files need entry point to be at 100h. Does Turbo C have an option to set this address?
It has been a long time since I have genuinely tried to use Turbo-C for this kind of thing. If you are compiling and linking on the command line separately with TCC.EXE and TLINK.EXE then this may work for you.
To compile and link to a COM file you can do this for each one of your C source files creating an OBJ file for each:
tcc -IF:\TURBOC3\INCLUDE -c -mt file1.c
tcc -IF:\TURBOC3\INCLUDE -c -mt file2.c
tcc -IF:\TURBOC3\INCLUDE -c -mt file3.c
tlink -t -LF:\TURBOC3\LIB c0t.obj file1.obj file2.obj file3.obj,myprog.com,myprog.map,cs.lib
Each C file is compiled individually using -mt (tiny memory model) to a corresponding OBJ file. The -I option specifies the path of the INCLUDE directory in your environment (change accordingly). The -c option tells TCC to compile to an OBJ file only.
When linking, -t tells the linker to generate a COM program (and not an EXE), and -LF:\TURBOC3\LIB is the path to the library directory in your environment (change accordingly). C0T.OBJ is the C runtime file for the tiny memory model; it includes the main entry point that you are missing. You then list all the other OBJ files separated by spaces. After the first comma is the output file name; if using the -t option, name the program with a .COM extension. After the second comma is the MAP file name (you can leave it blank if you don't want a MAP file). After the third comma is the list of libraries, separated by spaces. With the tiny model you want to use the small-model libraries; the C library for the small memory model is called CS.LIB.
As an example if we have a single source file called TEST.C that looks like:
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}
If we want to compile and link this the commands would be:
tcc -IF:\TURBOC3\INCLUDE -c -mt test.c
tlink -t -LF:\TURBOC3\LIB c0t.obj test.obj,test.com,test.map,cs.lib
You will have to use the paths for your own environment. These commands should produce a program called TEST.COM. When run it should print:
Hello, world!
You can generate a COM file while still using the IDE to generate an EXE. The following worked on TC 2.01: change the memory model to Tiny in the options, compile the program to generate the EXE file, then go to a command prompt and run EXE2BIN PROG.EXE PROG.COM (replace PROG with your program's name).
Your problem is about the entry point.
Some compilers or linkers accept void main() as an entry point that omits a return value, but not all of them do.
You should use int main() instead, for better control of the application and so the compiler recognizes the main function as the entry point.
example:
int main() {
    /* since C99, falling off the end of main is equivalent to return 0,
       but some compilers still expect an explicit return value */
    return 0;
}
From GeeksforGeeks:
A conforming implementation may provide more versions of main(), but they must all have return type int. The int returned by main() is a way for a program to return a value to “the system” that invokes it. On systems that doesn’t provide such a facility the return value is ignored, but that doesn’t make “void main()” legal C++ or legal C. Even if your compiler accepts “void main()” avoid it, or risk being considered ignorant by C and C++ programmers.
In C++, main() need not contain an explicit return statement. In that case, the value returned is 0, meaning successful execution.
source: https://www.geeksforgeeks.org/fine-write-void-main-cc/

Why is gcov generating gcda files with only the executable bit set when open is wrapped?

I have a C project with cmocka tests, built using CMake. Now I am trying to use gcov to determine the test coverage, using this CMake module: https://github.com/bilke/cmake-modules/blob/master/CodeCoverage.cmake
That module provides a make target which runs the test executable (built with gcov instrumentation) and then runs lcov and genhtml to generate a report.
Now, the problem is, when the test target is executed, it creates the .gcda files with only the owner's executable bit set, i.e. the read bit is missing. Consequently, lcov cannot read these files and produces a report with a coverage of 0%. When I chmod u+r the gcda files manually afterwards and run the post-test lcov commands by hand, the report is generated successfully (and shows that something is actually covered). So the gcda files are created and valid, but they have unsuitable permissions.
The problem seems to stem from wrapping (with ld --wrap) the open function in order to capture the returned file descriptor in a test case. Here is a minimal compiling example:
/* wrapped_open.c */
int __real_open(const char *filename, int flags);  /* resolved by the linker via --wrap */

int main(void)
{
    return 0;
}

int __wrap_open(const char *filename, int flags)
{
    return __real_open(filename, flags);
}
# CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(gcov-mvce C)

add_executable(wrapped_open wrapped_open.c)
target_link_libraries(wrapped_open
    -Wl,--wrap=open
)

set(CMAKE_MODULE_PATH "${CMAKE_MODULE_PATH};${CMAKE_SOURCE_DIR}/cmake")
include(CodeCoverage)
set_target_properties(wrapped_open PROPERTIES
    COMPILE_FLAGS "-g -O0 --coverage -fprofile-arcs -ftest-coverage"
    LINK_FLAGS "-lgcov --coverage")
setup_target_for_coverage(wrapped_open_coverage wrapped_open "coverage")
# build like this:
cmake . -DCMAKE_BUILD_TYPE=Debug # in-source build
make
# receive coverage report like this
make wrapped_open_coverage
# simple gcc command line for compiling (no cmake required)
gcc -g -O0 --coverage -fprofile-arcs -ftest-coverage -lgcov -Wl,--wrap=open -o wrapped_open wrapped_open.c
When the wrapping of open and the wrap function definition are removed from the linker flags and the code, respectively, it works. But with the files above, the file wrapped_open.c.gcda is created with the access mask 0100 and the following is reported by lcov:
(build-directory)/CMakeFiles/wrapped_open.dir/wrapped_open.c.gcda:cannot open data file, assuming not executed
...resulting in a coverage of 0/4 lines and 0/2 functions.
Why are the access bits wrong when the open function is wrapped as above, even though every path still calls the original function with unmodified parameters (at least, that is the intent)? An obvious workaround would be to modify the cmake module to do the chmod for me, but I would rather understand what goes wrong when open is wrapped.
Please tell me in the comments if and which additional information might be required to answer this.
As pointed out in the comments, open() is a function with a variable number of arguments. If a file is created, the third argument is the file's mode. In my __wrap_open implementation I omitted that third parameter because I did not consider that code other than the code under test would call open() as well. Of course, gcov eventually does, to create its gcda files, and since I did not pass the third argument on to __real_open, something undefined went in there for the mode.
So, the solution is to always include all possible arguments in wrapper functions.
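For illustration, a minimal sketch of a variadic-aware wrapper along those lines (the O_CREAT check and the exact mode handling are my assumption of a reasonable fix, not code from the original project):
/* wrapped_open.c -- sketch of a wrapper that forwards the optional mode */
#include <fcntl.h>
#include <stdarg.h>
#include <sys/types.h>

int __real_open(const char *filename, int flags, ...);

int __wrap_open(const char *filename, int flags, ...)
{
    /* open() only takes a third argument when the call may create a file
       (O_CREAT; on Linux, O_TMPFILE also takes a mode) */
    if (flags & O_CREAT) {
        va_list args;
        va_start(args, flags);
        mode_t mode = va_arg(args, mode_t);
        va_end(args);
        return __real_open(filename, flags, mode);
    }
    return __real_open(filename, flags);
}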

Is the header file path referenced in a .c file included in the object file (.o)?

I compile an example.c file that has the line:
#include "parse/properties/properties.h"
The compiler creates an example.o file. Is the path to the header file included in the example.o file, or is that information external?
It may or may not be; the object file format is not standardised (the C standard does not even mention "object files"). A compiler might insert the #include path for debugging purposes, or it may skip it completely.
Note also that #include'ing is done by the compiler in what the standard describes as an early phase of translation, using a textual preprocessor; the #include directive tells the preprocessor to copy the contents of another file verbatim and in place. This happens long before actual object files are produced.
It is implementation-defined, but generally when you compile with debugging options (e.g. -g in gcc) the file paths are included to aid you in debugging.
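As a quick check (a sketch; what exactly shows up depends on the compiler, DWARF version, and object format), you can compile with -g and look for the header path in the object file:
gcc -g -c example.c -o example.o
strings -a example.o | grep 'properties/properties.h'
readelf --debug-dump=decodedline example.o
Without -g, the header path will usually not appear at all.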

Purpose of creating a DEPENDENCIES_OUTPUT file while compiling a C program

gcc -MD file.c creates a dependency output file named file.d. But I don't understand the need for creating this dependency file, because when an error occurs during compilation, no dependency file is generated. Can anyone shed some light on when they have used this dependency file, or on the usefulness of this file/feature of gcc?
The file.d file can be understood by make. You typically generate the .d files first, include them in your Makefile, and then the C files are recompiled only if one of the included headers has changed.
Don't bother with it if you don't use make.
GCC documentation says:
Instead of outputting the result of preprocessing, output a rule suitable for make describing the dependencies of the main source file. The preprocessor outputs one make rule containing the object file name for that source file, a colon, and the names of all the included files, including those coming from -include or -imacros command line options.
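A minimal sketch of the usual Makefile pattern (file names are illustrative): the .d files generated by -MD are included so that make rebuilds an object whenever one of the headers it actually included has changed.
CFLAGS += -MD
OBJS := file.o other.o

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# pull in the generated dependency rules; the leading '-' keeps make quiet
# on the first build, when the .d files do not exist yet
-include $(OBJS:.o=.d)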

Creating one C file when compiling multiple sources

I have a set of C files to compile using gcc and make. The build process works fine.
I want to know if I can obtain, during compilation, one C file containing all the source code without any preprocessor macros.
One simple way would be to make a file that includes all the other source files:
$ cat *.c > metafile.c
This constructs such a file, but depending on how your '#pragma once' and #ifndef include guards are set up, the result would probably not compile on its own.
On the other hand, if what you want is a file where all the preprocessor macros have been expanded and evaluated, then the answer is to add the following option to gcc:
-save-temps
Then the .i file (the preprocessed output for a C source; .ii for C++) will contain the expanded and evaluated macros.
If you pass all the files to gcc at once you could use:
gcc -E main.c other.c another.c
This will also expand the standard library headers; you could use -nostdinc if you want to leave them out.
You can't - normally you invoke the compiler to compile just a single source file, resulting in an object file. Later you call the linker on all of the object files to create the executable - it doesn't have the original C source code available.
You can, however, create a separate shell script that calls gcc with the -E option just to preprocess the source files, and then use the cat utility to put all the sources in a single file.
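A minimal sketch of such a script (the file and output names are illustrative):
#!/bin/sh
# preprocess each C file with gcc -E and concatenate the results
for f in *.c; do
    gcc -E "$f"
done > combined_preprocessed.c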
You can use the -save-temps option to get the intermediate outputs. However, it will produce one output file per source file: each source file is compiled separately as its own translation unit, and translation units are not merged.
You can also use the -E option, however that will only run the preprocessor and not continue compilation.
