CMake: how to break a PRE_LINK infinite loop?

I'm trying to automatically label my application's sign-on line with a build number. The application is plain vanilla C without a graphical UI; it is intended for the command line, so it is a "simple" one.
The sign-on id lives in a "template" source file which CMake customizes with a configure_file() command. Recently I decided to include a build number in this sign-on id. Consequently, the customization can no longer be done statically at CMake time, but must happen every time make is invoked.
To achieve that, there are two possibilities in CMake:
add_custom_target(), but it is triggered even when nothing else changes in the source tree, so the build number would no longer reflect the state of the tree;
add_custom_command(), which can be triggered only when the application (target) needs to be linked again.
I opted for the second solution, but did not succeed.
Here is an extract of my CMakeLists.txt, the sign-on id being in file ErrAux.c (template in PROJECT_SOURCE_DIR, configured in PROJECT_BINARY_DIR):
add_executable(anathem ... ${PROJECT_BINARY_DIR}/ErrAux.c ...)
add_custom_command(TARGET anathem PRE_LINK
    COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
            "-DSRC=${PROJECT_SOURCE_DIR}"
            "-DDST=${PROJECT_BINARY_DIR}"
            -P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
    COMMENT "Numbering build"
    VERBATIM
)
This launches script BuildNumber.cmake just before the link step. It computes the next build number and customizes ErrAux.c with configure_file().
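For illustration, a minimal BuildNumber.cmake along these lines might look like the following sketch (the actual script is not shown here; the counter-file name is an assumption):
# BuildNumber.cmake -- illustrative sketch only; invoked in script mode as
#   cmake -DVERS=<version> -DSRC=<source dir> -DDST=<binary dir> -P BuildNumber.cmake
# Assumes a simple counter file in the binary dir and the ErrAux.c template
# in the source dir, as described above.
set(counter_file "${DST}/build.number")

# Read the previous build number (default 0) and increment it
set(BUILD_NUMBER 0)
if(EXISTS "${counter_file}")
  file(READ "${counter_file}" BUILD_NUMBER)
  string(STRIP "${BUILD_NUMBER}" BUILD_NUMBER)
endif()
math(EXPR BUILD_NUMBER "${BUILD_NUMBER} + 1")
file(WRITE "${counter_file}" "${BUILD_NUMBER}")

# Substitute @PROJECT_VERSION@ and @BUILD_NUMBER@ placeholders in the template
set(PROJECT_VERSION "${VERS}")
configure_file("${SRC}/ErrAux.c" "${DST}/ErrAux.c" @ONLY)

message("${PROJECT_VERSION}-${BUILD_NUMBER}")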
It works fine, except ...
It happens late in the make sequence and the update to ErrAux.c goes unnoticed. The sign-on id in the executable contains the previous build number.
Next time I run make, make notices that the generated ErrAux.c is newer than its object module and compiles it again, which in turn causes a link, which triggers another build number update. This happens even if no other file has changed, and the loop can't be broken. This is clearly shown in the compile log:
Scanning dependencies of target anathem
[ 13%] Building C object AnaThem/CMakeFiles/anathem.dir/ErrAux.c.o
[ 14%] Linking C executable anathem
Numbering build
3.0.0-45
[ 36%] Built target anathem
The crux seems to be that add_custom_command(TARGET ...) can't specify an output file like add_custom_command(OUTPUT ...) does. But this latter form can't be triggered in PRE_LINK mode.
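For comparison, the OUTPUT form would declare the generated file explicitly, but it only re-runs when one of its DEPENDS entries is newer than the output, so it cannot bump the number on every link; a sketch:
# OUTPUT form (for reference only): ErrAux.c becomes a known build product,
# but the command runs only when the template changes, not at every link.
add_custom_command(OUTPUT "${PROJECT_BINARY_DIR}/ErrAux.c"
    COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
            "-DSRC=${PROJECT_SOURCE_DIR}"
            "-DDST=${PROJECT_BINARY_DIR}"
            -P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
    DEPENDS "${PROJECT_SOURCE_DIR}/ErrAux.c"   # the template
    COMMENT "Numbering build"
    VERBATIM
)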
As a workaround, I forced a compilation to "refresh" the object module with:
add_custom_command(TARGET anathem PRE_LINK
    COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
            "-DSRC=${PROJECT_SOURCE_DIR}"
            "-DDST=${PROJECT_BINARY_DIR}"
            -P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
    COMMAND echo "Numbering"
    COMMAND echo "${CMAKE_C_COMPILER}" "\$(C_DEFINES)" "\$(C_INCLUDES)" "\$(C_FLAGS)" -c "${PROJECT_BINARY_DIR}/ErrAux.c"
    COMMAND "${CMAKE_C_COMPILER}" "\$(C_DEFINES)" "\$(C_INCLUDES)" "\$(C_FLAGS)" -c "${PROJECT_BINARY_DIR}/ErrAux.c"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
    COMMENT "Numbering build"
    VERBATIM
)
An explicit compilation is forced after the sign-on id customization. It mimics what is found in the various Makefiles and may not be safe for production. It's a cheat on both CMake and make.
UPDATE: Option -c is required so that only compilation is performed, leaving linking to the final application link step.
This addition creates havoc in the link, as shown by the log, where you see a double compilation (the standard make one and the add_custom_command() one):
Scanning dependencies of target anathem
[ 13%] Building C object AnaThem/CMakeFiles/anathem.dir/ErrAux.c.o
[ 14%] Linking C executable anathem
Numbering build
3.0.0-47
Numbering
/usr/bin/cc -DANA_DEBUG=1 -I/home/prog/projects/AnaLLysis/build/AnaThem -I/home/prog/projects/AnaLLysis/AnaThem -g /home/prog/projects/AnaLLysis/build/AnaThem/ErrAux.c
/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
AnaThem/CMakeFiles/anathem.dir/build.make:798: recipe for target 'AnaThem/anathem' failed
make[2]: *** [AnaThem/anathem] Error 1
If I force a full recompilation, to make sure all sources are compiled, *main.c* included, I get the same error on `main`.
The only logical explanation is that my manual compiler invocation is faulty and somehow destroys vital information. I checked with *readelf* that `main` is still in the symbol table of *main.c.o* and that it is still taken into account by the link step (from file *link.txt*).
UPDATE: Even with the link fixed, I'm still experiencing the infinite loop syndrome. The generated application's sign-on id still lags behind the actual build counter.
Can someone give me a clue for the right direction?
FYI I'm quite new to CMake, so I may do things really wrong. Don't hesitate to criticize my mistakes.

The key to the solution is to put the generated object module where make expects to find it. CMake organizes the build tree in a non-trivial way.
The shortcoming in my added compilation in add_custom_command() was believing that, by default, the object would be stored in the "usual" CMake location. Since I forge my compiler command manually, this is not the case.
I found the object module in the source directory, a consequence of the WORKING_DIRECTORY option, and named ErrAux.o instead of ErrAux.c.o.
To obtain the correct behavior, I force an output location with:
-o "${PROJECT_BINARY_DIR}/CMakeFiles/anathem.dir/ErrAux.c.o"
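Putting the pieces together, the custom command now reads roughly as follows (the same command as above, with only the -o option added; the \$(C_DEFINES)/\$(C_INCLUDES)/\$(C_FLAGS) placeholders are expanded by the Makefile generator and are not portable):
add_custom_command(TARGET anathem PRE_LINK
    COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
            "-DSRC=${PROJECT_SOURCE_DIR}"
            "-DDST=${PROJECT_BINARY_DIR}"
            -P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
    # recompile the freshly configured ErrAux.c into the object file
    # that the link step is about to consume
    COMMAND "${CMAKE_C_COMPILER}" "\$(C_DEFINES)" "\$(C_INCLUDES)" "\$(C_FLAGS)"
            -c "${PROJECT_BINARY_DIR}/ErrAux.c"
            -o "${PROJECT_BINARY_DIR}/CMakeFiles/anathem.dir/ErrAux.c.o"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
    COMMENT "Numbering build"
    VERBATIM
)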
Now, when I run make again, nothing happens since nothing changed.
Side question
To make the solution portable (if needed), are there CMake variables for the CMakeFiles and anathem.dir directories? Or, for the latter, a variable holding the current target name ("anathem") inside add_custom_command()?

Related

need "ensure dependency is up to date"

I was watching Neil's discussing shake at ICFP. He mentions in the talk that the need function ensures that the dependency is "up to date". What does this mean exactly? Below is the code used in the talk:
"Foo.o" *> \_ -> do
need ["Foo.c"]
...
...
system' "gcc" ["-c", "Foo.c"]
Does this mean that the Shake framework expects there to be a "rule" on how to build "Foo.c", and will run that rule when figuring out if it needs to re-run the rule for building "Foo.o"? If that is the case, does Shake in essence have a map from File to Rule? What happens when my dependency is a file that simply exists on my system? If Shake is not used to generate it, and I use need ["Somefile.txt"], no rule will exist for how to build "Somefile.txt". Will Shake crash? At the root of it all, we have to start from some files that already exist.
P.S. I am new to build systems and to Shake; any guidance is appreciated.
A dependency is "up to date" if all its dependencies are up to date, and it has been run with those dependencies in their current value. But the important point in this question seems to be that Foo.o in Shake can refer to two things:
There can be a rule "Foo.o" *> which runs some commands, probably depending on source files, and produces an output file Foo.o.
If there are no rules to produce Foo.o, then Shake assumes Foo.o is a source file. At the leaves there must be files that are source files.
You can see this in the error message Shake produces:
$ shake shakeOptions $ action $ need ["hello.txt"]
Error, file does not exist and no rule available:
hello.txt
The fact that rules are named after the file they produce, and that the absence of a rule implies a source file, is shared with build systems like Make. However, this property differs from build systems like Buck/Bazel, where targets and sources have distinct namespaces.

Remove symbolic information from C language executable on z/OS

Having built my application, initially using debug, I now move to make it production ready. I have changed my compile options from
-c -W"c,debug,LP64,sscomm,dll"
to
-c -s -W"c,LP64,sscomm,dll"
which reduces the size of the resultant executable to 60% of the debug version.
I changed my link options from
-W"l,LP64,DYNAM=DLL"
to
-s -W"l,LP64,DYNAM=DLL"
which further reduces the size of the resultant executable to 20% of the original debug version.
So it certainly seems to be doing something. But when I view the executable, I can still see all the function name eye-catchers in the executable, and when I force an abend, the CEEDUMP generated still shows all the function names in that file. I expected -s to remove all symbolic information.
So my question is "how do I remove all symbolic information?"
In addition, once linked with -s I can no longer copy the module to an MVS dataset, from the USS file where it is generated. I use the following command:-
cp -X prog "//'ME.USER.LOAD(PROG)'"
which fails with:-
IEW2523E 3702 MEMBER *NULL* IDENTIFIED BY DDNAME /0000002 IS NOT AN EDITABLE
MODULE AND CANNOT BE INCLUDED.
IEW2510W 3704 ENTRY OFFSET 0 IN MODULE *NULL* IDENTIFIED BY DDNAME /0000002
DOES NOT EXIST IN A VALID SECTION.
cp: FSUMF140 IEWBIND function "INCLUDE" failed with return code 8 and reason code 83000505
This error message seems to say that I need the EDIT linkage option, but if I add that in, it appears to negate the step of using -s on the link, as the size goes back up to 60% of the debug version size.
So my second question is, "how do I copy the file to an MVS dataset and also remove symbolic information?"
Maybe there is a subsequent step that I can take to drive the binder again to remove symbolic information from the USS file and from the MVS dataset after the copy?
You can use the COMPRESS compiler option and, to some extent, COMPACT. The COMPRESS option suppresses emitting function names in control blocks, while the COMPACT option influences the compiler's optimization choices to favor smaller object size.
Even though you are compiling and linking your executable in USS, you do not need to produce the executable in USS and then copy it to a data set. You can put your executable straight into the data set by using -o "//'ME.USER.LOAD(PROG)'" syntax. Just make sure your output data set is a PDSE.
Since you are compiling and linking in USS, you should use the xlc utility with the -q syntax for compiler options, as this syntax avoids parentheses, which have special meaning in the shell.

(CLion/CMake) Why does my c file not belong to any target project when it is saved within the project directory?

Preface: I am very new to c and CLion, so apologies in advance if my phrasing is very wrong.
Essentially, I have an assignment that involves two C files (a "main", and one performing a conversion between imperial and metric units). The main C file simply #include-s the conversion file, calls a function from the conversion file, and prints the resulting value for the user. Simple enough, but I keep getting a message every time I try to run it:
"undefined reference to 'conversion'"
I have tried to suss out the problem, and my only idea relates to the banner at the top of conversion.c which says "This file does not belong to any project target, code insight features may not work properly.". I do not understand why I receive this message, because conversion.c and main.c are both within the main project directory, and this setup worked perfectly fine in my previous assignment.
I have searched for solutions online, and the only one that seemed to make sense was to update my CMakeLists.txt file to include add_executable(project conversion.c). This is what my CMakeLists.txt file looks like before I add this line:
cmake_minimum_required(VERSION 3.12)
project(project C)
set(CMAKE_C_STANDARD 11)
add_executable(project main.c)
However, when I add it, I get the error:
CMake Error at CMakeLists.txt:7 (add_executable):
add_executable cannot create target "directory" because another
target with the same name already exists. The existing target is an
executable created in source directory
"/home/john_s/CLionProjects/project". See documentation for
policy CMP0002 for more details.
Presumably this is because the previous line I have (add_executable(project main.c)) is linking to the same directory, but I have no idea how to resolve this. Any suggestions?
From cmake manual:
add_executable(<name> [WIN32] [MACOSX_BUNDLE]
               [EXCLUDE_FROM_ALL]
               [source1] [source2 ...])
Adds an executable target called <name> to be built from the source files listed in the command invocation. (The source files can be omitted here if they are added later using target_sources().)
So to compile a single executable from two source files, you just use:
add_executable(target_name source1.c source2.c)
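Applied to the CMakeLists.txt from the question, that means listing both source files in the single existing add_executable() call rather than adding a second one:
cmake_minimum_required(VERSION 3.12)
project(project C)

set(CMAKE_C_STANDARD 11)

# one target named "project", built from both translation units
add_executable(project main.c conversion.c)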

Compiling and Running C code in notepad++

I have a problem with compiling and running C codes in notepad++.
I am using the nppexec plugin and wrote the following in the script section after pressing F6:
C:\MinGW\bin\gcc.exe -g "$(FULL_CURRENT_PATH)" -o "$(CURRENT_DIRECTORY)\$(NAME_PART).exe"
$(CURRENT_DIRECTORY)\$(NAME_PART).exe
After pressing OK, I get the following on the console:
C:\MinGW\bin\gcc.exe -g "D:\Silent\Documents\College Stuff\6th sem\NETWORKING lab\substitutioncypher.C" -o "D:\Silent\Documents\College Stuff\6th sem\NETWORKING lab\substitutioncypher.exe"
Process started >>>
<<< Process finished. (Exit code 0)
D:\Silent\Documents\College Stuff\6th sem\NETWORKING lab\substitutioncypher.exe
Process started >>>
Here, substitutioncypher.C is my program to be compiled and run. The problem is that the gcc part works fine, but I am not able to execute the program from here, as there is no response.
As you see, it just says process started and after that nothing. No response to a key being pressed, it just accepts everything like a text editor.
If I go to the working directory and execute the program from there (double clicking the exe file) then it seems to run perfectly fine. The problem seems to be in my script or the plugin.
Please, can anyone find out what is wrong with my compiling and running script?
In addition to @paxdiablo's answer, you may also find useful the following NppExec script for single-file projects:
npp_save
cd "$(CURRENT_DIRECTORY)"
cmd /c del "$(NAME_PART)".o "$(NAME_PART)".exe *.o
C:\MinGW\bin\gcc.exe -g3 -std=c89 -pedantic -Wall -Wextra -Wno-nonnull "$(NAME_PART)".c -o "$(NAME_PART)".exe
npp_run "$(NAME_PART)".exe
The 1st line saves the document that is currently active inside notepad++.
The 2nd line ensures your current directory is the one of the active document. This lets you avoid using the "$(CURRENT_DIRECTORY)" variable in the rest of the lines.
The 3rd line removes any executable and object-file leftovers from previous successful compilations. Removing the last executable is a good idea: if you don't, the last line will run the .exe produced by the previous compilation even when the current compilation fails. A failed compilation produces no .exe, so normally you don't want NppExec to run the old one. Removing the previously produced object file is optional, but it ensures it will not affect fresh compilations (this makes more sense in multi-file projects, as an alternative to the touch command-line tool).
The 4th line compiles the active document. Feel free to modify gcc's options according to your needs. If you add C:\MinGW\bin into the Windows PATH environment variable, and assuming you are using only one gcc installation on your system, then you can skip the absolute path, and write just gcc instead.
The last line executes the produced executable. The npp_run command tells NppExec to launch a new command-prompt window, and run your program in it (unless it is a WIN32 GUI program). I personally find it more convenient compared to the NppExec console embed in notepad++. It looks more natural and it also avoids some I/O redirection problems of the NppExec console.
However, if your program is a console app that does not interact with the user, say via a loop, this approach will cause the launched command-prompt window to close immediately after your program terminates, without giving you a chance to inspect its output. In that case you should have your program wait for a key press just before it terminates. A quick-and-dirty way is to put a system("pause"); right before your main() function's return and/or exit() statements (it is much better, though, to write a simple cross-platform function or macro for this).
You may experiment with the above script by typing it in F6's <temporary script> and save it permanently for general use when you are happy with its behavior.
On a side note, you may also find it useful to have a look at this post, where I'm trying to explain how to setup NppExec so it jumps to the appropriate line in the source code, by double-clicking on any error gcc spits in the NppExec console during compilation.

How do you export a system library using cmake?

How can I export the libraries that a cmake library depends on, such that an executable depending on that library does not have to manually depend on the dependencies of that library?
That's a bit of a mouthful, so here's an example:
dummy (application) ----> depends on liba
liba ----> depends on libpng
Compiling dummy generates errors:
-- Found LIBPNG
-- Found LIBA
-- Configuring done
-- Generating done
-- Build files have been written to: /home/doug/projects/dummy/build
Linking C executable dummy
../deps/liba/build/liba.a(a.c.o): In function `a_dummy':
/home/doug/projects/dummy/deps/liba/src/a.c:6: undefined reference to `png_sig_cmp'
collect2: ld returned 1 exit status
make[2]: *** [dummy] Error 1
make[1]: *** [CMakeFiles/dummy.dir/all] Error 2
make: *** [all] Error 2
I can fix that by adding this into CMakeLists.txt for dummy:
TARGET_LINK_LIBRARIES(dummy png)
However, dummy has no knowledge of how liba implements its api. At some point that may change to being libjpg, or something else, which will break the dummy application.
After getting some help from the cmake mailing list I've been directed to this example for exporting things:
http://www.cmake.org/Wiki/CMake/Tutorials/How_to_create_a_ProjectConfig.cmake_file
However, following that approach leaves me stuck at this line:
export(TARGETS ${LIBPNG_LIBRARY} FILE "${PROJECT_BINARY_DIR}/ALibraryDepends.cmake")
Clearly I'm missing something here; this 'export' command looks like it's designed to export sub-projects to a higher level, i.e. nested projects inside liba.
However, that is not the problem here.
When configuring liba (or any cmake library) I will always generate a list of dependencies which are not part of that project.
How can I export those so they appear as part of LIBA_LIBRARY when I use find_package() to resolve liba?
Using static libraries is not an option (static library for something that links to opengl? no.)
Given your comment on arrowdodger's answer about the fear that installing something would mess up your system, I chose to give a conceptual comment in the form of an answer because of its length.
Chaining CMake projects works via find_package, which looks for *Config.cmake and *-config.cmake files.
Project A's CMakeLists.txt:
#CMakeLists.txt
project(A)
install(FILES
    ${CMAKE_CURRENT_SOURCE_DIR}/AConfig.cmake
    DESTINATION share/A/cmake
)
#AConfig.cmake
message("Yepp, you've found me.")
$ mkdir build
$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX=/tmp/test-install ..
$ make install
Project B's CMakeLists.txt:
project(B)
find_package(A)
Then
$ mkdir build
$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX=/tmp/test-install ..
$ make install
results in
...
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
Yepp, you've found me.
B found A because it installed AConfig.cmake into a location where cmake will find it, 'share/A/cmake', AND was given the same value for CMAKE_INSTALL_PREFIX.
Now this is that. Let's think about what you can do in AConfig.cmake: AFAIK, everything you want to. But the most common task is to pull in information about the targets of A via include(), do some additional find_package invocations for 3rd-party packages (HINT HINT), and create the variables
A_LIBRARIES
A_INCLUDE_DIRS
What you want to include is a file that was created by
install(EXPORT A-targets
    DESTINATION share/A/cmake
)
in A's CMakeLists.txt, where A-targets refers to a global cmake variable that accumulates all target information when used in
install(TARGETS ...
    EXPORT A-targets
    ...
)
statements. What is created at make install is
/tmp/test-install/share/A/cmake/A-targets.cmake
which then resides alongside AConfig.cmake in the same directory. Please take another look at the wiki page on how to use this file within AConfig.cmake.
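Purely as a sketch (not taken from the wiki), an AConfig.cmake along those lines could look like this; it reuses the libpng find module and variable names from your own CMakeLists.txt, and the relative include path is just a hypothetical install layout:
# AConfig.cmake -- illustrative sketch, installed to share/A/cmake
include("${CMAKE_CURRENT_LIST_DIR}/A-targets.cmake")   # imported target "a"

# re-resolve 3rd-party packages that A's link interface needs (HINT HINT)
find_package(libpng REQUIRED)

set(A_INCLUDE_DIRS "${CMAKE_CURRENT_LIST_DIR}/../../../include"   # hypothetical <prefix>/include
                   ${LIBPNG_INCLUDE_DIRS})
set(A_LIBRARIES a ${LIBPNG_LIBRARIES})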
Regarding the export() command: this comes in handy if your projects have gotten HUGE and it takes a considerable amount of time to install them. To speed things up, you want to use what's in A's build/ directory directly. It's an optimization and is also explained in the wiki. It still works via find_package(), see
http://cmake.org/cmake/help/cmake-2-8-docs.html#command:export
But I strongly suggest that you go for the usual make install route for now.
I found my own solution to this problem using the accepted solution above, which I leave here for others:
In liba/CMakeLists.txt:
# Self
set(A_INCLUDE_DIRS ${A_INCLUDE_DIRS} "${PROJECT_SOURCE_DIR}/include")
set(A_LIBRARIES ${A_LIBRARIES} "${PROJECT_BINARY_DIR}/liba.a")
# Libpng
FIND_PACKAGE(libpng REQUIRED)
set(A_INCLUDE_DIRS ${A_INCLUDE_DIRS} ${LIBPNG_INCLUDE_DIRS})
set(A_LIBRARIES ${A_LIBRARIES} ${LIBPNG_LIBRARIES})
ADD_LIBRARY(a ${SOURCES})
# Includes
INCLUDE_DIRECTORIES(${A_INCLUDE_DIRS})
# Allow other projects to use this
configure_file(AConfig.cmake.in "${PROJECT_BINARY_DIR}/AConfig.cmake")
In liba/AConfig.cmake:
set(A_LIBRARIES @A_LIBRARIES@)
set(A_INCLUDE_DIRS @A_INCLUDE_DIRS@)
In dummy/CMakeLists.txt:
FIND_PACKAGE(A REQUIRED)
INCLUDE_DIRECTORIES(${A_INCLUDE_DIRS})
TARGET_LINK_LIBRARIES(dummy ${A_LIBRARIES})
This yields an AConfig.cmake that reads:
set(A_LIBRARIES /home/doug/projects/dummy/deps/liba/build/liba.a;/usr/lib/libpng.so)
set(A_INCLUDE_DIRS /home/doug/projects/dummy/deps/liba/include;/usr/include)
And a verbose compile that reads:
/usr/bin/gcc -std=c99 -g CMakeFiles/dummy.dir/src/main.c.o -o dummy -rdynamic ../deps/liba/build/liba.a -lpng
Which is exactly what I was looking for.
If liba doesn't provide any means to determine its dependencies, you can't do anything.
If liba is a library developed by you and you are using CMake to build it, then you should install a libaConfig.cmake file with liba itself, which would contain the necessary definitions. Then you include libaConfig in dummy's CMakeLists.txt to obtain information about how liba has been built.
You can look at how it's done in the LLVM project; the relevant files have the cmake.in extension:
http://llvm.org/viewvc/llvm-project/llvm/trunk/cmake/modules/
In the end, in dummy project you should use
target_link_libraries( dummy ${LIBA_LIBRARIES} )
include_directories( ${LIBA_INCLUDE_DIR} )
link_directories( ${LIBA_LIBRARY_DIR} )
If that liba is used only by dummy, you can build it from single CMake project. This is more convenient, since you don't need to install liba each time you recompile it and it will be rebuilt and relinked with dummy automatically every time you run make.
If you like this approach, the only thing you need to do is define the variables you need in liba's CMakeLists.txt with the PARENT_SCOPE option (see the set() command manual).
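A minimal sketch of that single-project setup (the directory layout and file names are assumptions):
# dummy/CMakeLists.txt -- liba lives inside the same source tree
cmake_minimum_required(VERSION 2.8)
project(dummy C)
add_subdirectory(deps/liba)               # defines the "a" target

include_directories(${A_INCLUDE_DIRS})    # set by liba via PARENT_SCOPE
add_executable(dummy src/main.c)
target_link_libraries(dummy a)            # libpng is linked transitively

# dummy/deps/liba/CMakeLists.txt
find_package(libpng REQUIRED)
add_library(a src/a.c)
target_link_libraries(a ${LIBPNG_LIBRARIES})
set(A_INCLUDE_DIRS "${CMAKE_CURRENT_SOURCE_DIR}/include" ${LIBPNG_INCLUDE_DIRS}
    PARENT_SCOPE)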
Finally, you could use shared libs; .so files don't have this problem.
