I was watching Neil's talk about Shake at ICFP. He mentions in the talk that the need function ensures that the dependency is "up to date". What does this mean exactly? Below is the code used in the talk:
"Foo.o" *> \_ -> do
    need ["Foo.c"]
    ...
    ...
    system' "gcc" ["-c", "Foo.c"]
Does this mean that the Shake framework expects there to be a "rule" on how to build "Foo.c", and will run that rule when figuring out if it needs to re-run the rule for building "Foo.o"? If that is the case, does Shake in essence have a map from File to Rule? What happens when my dependency is a file that simply exists on my system? If Shake is not used to generate it, and I use need ["Somefile.txt"], no rule will exist for how to build "Somefile.txt". Will Shake crash? At the root of it all, we have to start from some files that already exist.
P.S. I am new to build systems and to Shake; any guidance is appreciated.
A dependency is "up to date" if all of its dependencies are up to date, and it has been run with those dependencies at their current values. But the important point in this question seems to be that Foo.o in Shake can refer to two things:
There can be a rule "Foo.o" *> which runs some commands, probably depending on source files, and produces an output file Foo.o.
If there are no rules to produce Foo.o, then Shake assumes Foo.o is a source file. At the leaves there must be files that are source files.
You can see this in the error message Shake produces:
$ shake shakeOptions $ action $ need ["hello.txt"]
Error, file does not exist and no rule available:
hello.txt
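Putting the two cases together, a minimal build script might look like this (a sketch using the current API spellings %> and cmd_ rather than the *> and system' from the talk):

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["Foo.o"]

    -- A rule describing how to produce Foo.o.
    "Foo.o" %> \out -> do
        -- Foo.c has no rule of its own, so Shake treats it as a source
        -- file: it must already exist, and this rule reruns when it changes.
        need ["Foo.c"]
        cmd_ "gcc -c Foo.c -o" [out]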
The fact that rules are named after the files they produce, and that the absence of a rule implies a file is a source file, is shared with build systems like Make. However, this property is different from build systems like Buck/Bazel, where targets and sources have distinct namespaces.
Related
I want to make a game using SDL2, but I'm unable to compile and/or run my code, please help!
SDL2 is notoriously hard to set up, and it's often the first library aspiring game developers try to use.
This post is intended as a canonical duplicate for common problems with setting up SDL2.
This answer is about MinGW / GCC, and not Visual Studio.
This answer only applies to Windows.
Common errors
The common errors are:
SDL.h: No such file or directory (when compiling)
Various SDL_main problems: "undefined reference to SDL_main", "conflicting types for SDL_main" or "number of arguments doesn't match prototype", etc. (when compiling or linking)
undefined reference to other functions (when linking)
DLL problems: (when running your program)
'??.dll' was not found
procedure entry point ... could not be located in ..., and other mysterious DLL-related errors
The program seemingly doing nothing when launched
This list is sorted from bad to good. If you change something and get a different error, use this list to tell if you made things better or worse.
The preamble
0. Don't follow bad advice.
Some resources will suggest doing #define SDL_MAIN_HANDLED or #undef main. Don't blindly follow that advice; it's not how SDL2 is intended to be used.
If you do everything correctly, it will never be necessary. Learn the intended approach first. Then you can research what exactly those defines do, and make an educated decision.
1. Figure out how to compile directly from the console; you can start using an IDE and/or build system later.
If you're using an IDE, I suggest first making sure you're able to compile your program directly from the console, to rule out any IDE configuration problems. After you figure that out, you can use the same compiler options in your IDE.
The same applies to build systems, such as CMake.
2. Download the right SDL2 files. You need the archive called SDL2-devel-2.0.x-mingw.tar.gz from here.
Extract it to any directory, preferably somewhere near your source code. Extracting into the compiler installation directory is often considered a bad practice (and so is copying them to C:\Windows, which is a horrible idea).
3. Know the difference between compiler flags and linker flags. A "flag" is an option you specify in the command line when building your program. When you use a single command, e.g. g++ foo.cpp -o foo.exe, all your flags are added to the same place (to this single command).
But when you build your program in two steps, e.g.:
g++ foo.cpp -c -o foo.o (compiling)
g++ foo.o -o foo.exe (linking)
you have to know which of the two commands to add a flag to. Those are "compiler flags" and "linker flags" respectively.
Most IDEs will require you to specify compiler and linker flags separately, so even if you use a single command now, it's good to know which flag goes where.
Unless specified otherwise, the order of the flags doesn't matter.
SDL.h: No such file or directory
Or any similar error related to including SDL.h or SDL2/SDL.h.
You need to tell your compiler where to look for SDL.h. It's in the SDL files you've downloaded (see preamble).
Add -Ipath to your compiler flags, where path is the directory where SDL.h is located.
Example: -IC:/Users/HolyBlackCat/Downloads/SDL2-2.0.12/x86_64-w64-mingw32/include/SDL2. Relative paths work too, e.g. -ISDL2-2.0.12/x86_64-w64-mingw32/include/SDL2.
Note that the path will be different depending on how you write the #include:
If you do #include <SDL.h>, then the path should end with .../include/SDL2 (like above). This is the recommended way.
If you do #include <SDL2/SDL.h>, then the path should end with .../include.
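For example, a full compile command might look like this (using the recommended #include <SDL.h> form; the path is illustrative, so adjust it to wherever you actually extracted SDL2):
g++ foo.cpp -c -o foo.o -IC:/SDL2-2.0.12/x86_64-w64-mingw32/include/SDL2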
Various SDL_main problems
You can get several different errors mentioning SDL_main, such as undefined reference to SDL_main, or conflicting types for 'SDL_main', or number of arguments doesn't match prototype, etc.
You need to have a main function. Your main function must look like int main(int, char **). NOT int main() and NOT void main(). This is a quirk of SDL2, related to it doing #define main SDL_main.
Adding parameter names is allowed (and is mandatory in C), e.g. int main(int argc, char **argv). Also the second parameter can be written as char *[] or with a name: char *argv[]. No other changes are allowed.
If your project has multiple source files, make sure to include SDL.h in the file that defines the main function, even if it doesn't otherwise use SDL directly.
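For reference, a minimal skeleton with the expected signature looks like this (what you do inside main is up to you; the SDL_Init call is just an example):

#include <SDL.h>

int main(int argc, char **argv)
{
    // The parameters must be present even if you don't use them.
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    SDL_Quit();
    return 0;
}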
Try to avoid #define SDL_MAIN_HANDLED or #undef main when solving this issue, see preamble for explanation.
undefined reference to various functions
• undefined reference to SDL_...
The error message will mention various SDL_... functions, and/or WinMain. If it mentions SDL_main, consult the section "Various SDL_main problems" above. If the function names don't start with SDL_, consult the section "undefined reference to other functions" below.
You need to add the following linker flags: -lmingw32 -lSDL2main -lSDL2 -Lpath, where path is the directory where libSDL2.dll.a and libSDL2main.a (which you've downloaded) are located. The order of the -l... flags matters. They must appear AFTER any .c/.cpp/.o files.
Example: -LC:/Users/HolyBlackCat/Desktop/SDL2-2.0.12/x86_64-w64-mingw32/lib. Relative paths work too, e.g. -LSDL2-2.0.12/x86_64-w64-mingw32/lib.
When you use -l???, the linker will look for a file called lib???.dll.a or lib???.a (and some other variants), which is why we need to pass the location of those files. libmingw32.a (corresponding to -lmingw32) is shipped with your compiler, so it already knows where to find it.
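A complete link command might therefore look like this (again, the path is illustrative):
g++ foo.o -o foo.exe -LC:/SDL2-2.0.12/x86_64-w64-mingw32/lib -lmingw32 -lSDL2main -lSDL2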
I added all those flags and nothing changed, or I'm getting skipping incompatible X when searching for Y:
You probably use the wrong SDL .a files. The archive you downloaded contains two sets of files: i686-w64-mingw32 (32-bit) and x86_64-w64-mingw32 (64-bit). You must use the files matching your compiler, which can also be either 32-bit or 64-bit.
Print (8*sizeof(void*)) to see if your compiler is 32-bit or 64-bit (see the snippet after this list).
Even if you think you use the right files, try the other ones to be sure.
Some MinGW versions can be switched between 32-bit and 64-bit modes using -m32 and -m64 flags (add them to both compiler and linker flags).
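If you want a quick way to run the check mentioned above, a throwaway program like this works:

#include <stdio.h>

int main(int argc, char **argv)
{
    /* Prints 64 for a 64-bit compiler, 32 for a 32-bit one. */
    printf("%d\n", (int)(8 * sizeof(void *)));
    return 0;
}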
I get undefined reference to a specific function:
• undefined reference to WinMain only
There are several possibilities, all of which were covered in the previous section:
You forgot -lmingw32 and/or -lSDL2main linker flags.
You must use the following linker flags, in this exact order, after any .c/.cpp/.o files: -lmingw32 -lSDL2main -lSDL2
The libSDL2main.a file you use doesn't match your compiler (32-bit file with a 64-bit compiler, or vice versa).
Try to avoid #define SDL_MAIN_HANDLED or #undef main when solving this issue, see preamble for explanation.
• undefined reference to SDL_main only
See the section "Various SDL_main problems" above.
• undefined reference to other functions
Your linker found and used libSDL2.a, but it should be finding and using libSDL2.dll.a. When both are available, it prefers the latter by default, meaning you didn't copy the latter to the directory you passed to -L.
If you intended to perform static linking, see the section "Can I make an EXE that doesn't depend on any DLLs?" below.
Nothing happens when I try to run my app
Let's say you try to run your app, and nothing happens. Even if you try to print something at the beginning of main(), it's not printed.
Windows has a nasty habit of not showing some DLL-related errors when the program is started from the console.
If you were running your app from the console (or from an IDE), instead try double-clicking the EXE in the explorer. Most probably you'll now see some DLL-related error; then consult one of the next sections.
??.dll was not found
Copy the .dll mentioned in the error message, and place it next to your .exe.
If the DLL is called SDL2.dll, then it's in the SDL files you've downloaded (see preamble). Be aware that there are two different SDL2.dlls: a 32-bit one (in the i686-w64-mingw32 directory), and a 64-bit one (in x86_64-w64-mingw32). Get the right one, if necessary try both.
Any other DLLs will be in your compiler's bin directory (the directory where gcc.exe is located).
You might need to repeat this process 3-4 times; this is normal.
For an automatic way of determining the needed DLLs, see the next section.
procedure entry point ... could not be located in ... and other cryptic DLL errors
Your program needs several .dlls to run, and it found a wrong version of one, left over from some other program you have installed.
It looks for DLLs in several different places, but the directory with the .exe has the most priority.
You should copy all DLLs your program uses (except the system ones) into the directory where your .exe is located.
A reliable way to get a list of needed DLLs is to blindly copy a bunch of DLLs, and then remove the ones that turn out to be unnecessary:
Copy SDL2.dll. It's in the SDL files you've downloaded (see preamble). Be aware that there are two different SDL2.dlls: a 32-bit one (in the i686-w64-mingw32 directory), and a 64-bit one (in x86_64-w64-mingw32). Get the right one, if necessary try both.
Copy all DLLs from your compiler's bin directory (the directory where gcc.exe is located).
Now your program should run, but we're not done yet.
Download NTLDD (or some other program that displays a list of used DLLs). Run ntldd -R your_program.exe.
Any DLL not mentioned in its output should be removed from the current directory. Your program uses everything that remains.
I ended up with the following DLLs; expect something similar: SDL2.dll, libgcc_s_seh-1.dll, libstdc++-6.dll (C++ only), libwinpthread-1.dll.
Can I determine the needed DLLs without copying excessive ones?
Yes, but it's less reliable.
Your program searches for DLLs in the following locations, in this order:
The directory where your .exe is located.
C:\Windows, including some of its subdirectories.
The directories listed in PATH.
Assuming you (or some jank installer) didn't put any custom DLLs into C:\Windows, adding your compiler's bin directory to the PATH (preferably as the first entry) and either putting SDL2.dll in the same directory as the .exe or into some directory in the PATH should be enough for your program to work.
If this works, you can then run ntldd without copying any DLLs beforehand, and copy only the necessary ones. The reason why you'd want to copy them at all at this point (since your app already works) is to be able to distribute it to others, without them having to install the compiler for its DLLs. Skip any DLLs located outside of your compiler's bin directory (except for SDL2.dll).
Note that the possibility of having weird DLLs in C:\Windows is real. E.g. Wine tends to put OpenAL32.dll into C:\Windows, so if you try this process with OpenAL on Wine, it will fail. If you're making a script that runs ntldd automatically, prefer copying the DLLs (or at least symlinking them - I've heard MSYS2 can emulate symlinks on Windows).
Can I make an EXE that doesn't depend on any DLLs?
It's possible to make an .exe that doesn't depend on any (non-system) .dlls by using the -static linker flag; this is called "static linking". This is rarely done, and you shouldn't need to do it if you did the above steps correctly. It requires some additional linker flags; they are listed in the file ??-w64-mingw32/lib/pkgconfig/sdl2.pc shipped with SDL, in the Libs.private section. Notice that there are two such files, for the 32-bit and 64-bit versions respectively.
How do I distribute my app to others?
Follow the steps in the previous section, titled procedure entry point ... could not be located in ....
A saner alternative?
There is MSYS2.
It has a package manager that lets you download prebuilt libraries, and, as a bonus, a fresh version of the compiler.
Install SDL2 from its package manager. Use a tool called pkg-config (also from the package manager) to automatically determine all necessary flags (pkg-config --cflags SDL2 for compiler flags, pkg-config --libs SDL2 for linker flags).
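For example, a single-command build in the MSYS2 shell could look like this (assuming the package's pkg-config name is the lowercase sdl2, as in sdl2.pc):
g++ foo.cpp -o foo.exe $(pkg-config --cflags --libs sdl2)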
This is the same experience as you would have on Linux (maybe except for some DLL management hassle).
Bonus - Other problems
Q: My program always opens a console window when I run it, how do I hide it?
A: Add -mwindows to the linker flags.
Q: I get error 'SDL_VideoMode' wasn't declared in this scope.
A: SDL_VideoMode is from SDL1.2, it's not a part of the newer SDL2. Your code was written for the outdated version of SDL. Find a better tutorial that deals specifically with SDL2.
Q: My program has the default file icon, but I want a custom one.
A: Your icon must be in the .ico format. If your graphics editor doesn't support it, make a series of .pngs of common sizes (e.g. 16x16, 32x32, 48x48, 64x64), then convert them to a single .ico using ImageMagick: magick *.png result.ico (or with convert instead of magick).
Create a file with the .rc extension (say, icon.rc), with the following contents: MyIconName ICON "icon.ico" (where MyIconName is an arbitrary name, and "icon.ico" is the path to the icon). Convert the file to an .o using windres -O res -i icon.rc -o icon.o (the windres program is shipped with your compiler). Specify the resulting .o file when linking, e.g. g++ foo.cpp icon.o -o foo.exe.
Recent versions of SDL2 will also use this icon as the window icon, so you don't have to call SDL_SetWindowIcon.
A solution for Visual Studio:
Why not use a package manager? I use vcpkg, and it makes it super easy to consume 3rd party libraries. Grab the vcpkg source and extract it to a safe place, like C:/, then run its bootstrap script bootstrap-vcpkg.bat; this will generate the vcpkg executable. Then run vcpkg integrate install to make libraries installed with vcpkg available in Visual Studio.
Search for the library you need:
vcpkg search sdl
imgui[sdl2-binding] Make available SDL2 binding
libwebp[vwebp-sdl] Build the vwebp viewer tool.
magnum[sdl2application] Sdl2Application library
sdl1 1.2.15#12 Simple DirectMedia Layer is a cross-platform development library designed to p...
sdl1-net 1.2.8-3 Networking library for SDL
sdl2 2.0.12-1 Simple DirectMedia Layer is a cross-platform
...
Install it with: vcpkg install sdl2.
Now you just need to include the SDL2 headers, and everything will work out of the box. The library will be linked automatically.
You can learn more about vcpkg here.
On Mac, this is what I follow for Xcode (you must have g++ installed):
SDL linking:
g++ main.cpp -o main $(sdl2-config --cflags --libs)
Xcode project steps:
open the Terminal app (macOS)
BUILD SETTINGS (select 'All' and 'Combined', then type "search" in the search bar)
click on "Header Search Paths" (click on the far right side to edit)
add: /usr/local/include
BUILD PHASES --> LINK BINARY WITH LIBRARIES (click the plus)
type in SDL --> click "Add Other"
press: Command+Shift+G (to bring up the folder search bar)
type in: /usr/local/Cellar
navigate to: SDL2 --> 2.0.8 --> lib --> libSDL2-2.0.0.dylib (make sure it's the actual library, not a shortcut/symlink)
CMake offers several ways to specify the source files for a target.
One is to use globbing (documentation), for example:
FILE(GLOB MY_SRCS dir/*)
Another method is to specify each file individually.
Which way is preferred? Globbing seems easy, but I heard it has some downsides.
Full disclosure: I originally preferred the globbing approach for its simplicity, but over the years I have come to recognise that explicitly listing the files is less error-prone for large, multi-developer projects.
Original answer:
The advantages to globbing are:
It's easy to add new files as they are only listed in one place: on disk. Not globbing creates duplication.
Your CMakeLists.txt file will be shorter. This is a big plus if you have lots of files. Not globbing causes you to lose the CMake logic amongst huge lists of files.
The advantages of using hardcoded file lists are:
CMake will track the dependencies of a new file on disk correctly - if we use glob then files not globbed first time round when you ran CMake will not get picked up.
You ensure that only files you want are added. Globbing may pick up stray files that you do not want.
In order to work around the first issue, you can simply "touch" the CMakeLists.txt that does the glob, either by using the touch command or by writing the file with no changes. This will force CMake to re-run and pick up the new file.
To fix the second problem you can organize your code carefully into directories, which is what you probably do anyway. In the worst case, you can use the list(REMOVE_ITEM) command to clean up the globbed list of files:
file(GLOB to_remove file_to_remove.cpp)
list(REMOVE_ITEM MY_SRCS ${to_remove})
The only real situation where this can bite you is if you are using something like git-bisect to try older versions of your code in the same build directory. In that case, you may have to clean and compile more than necessary to ensure you get the right files in the list. This is such a corner case, and one where you already are on your toes, that it isn't really an issue.
The best way to specify sourcefiles in CMake is by listing them explicitly.
The creators of CMake themselves advise not to use globbing.
See: https://cmake.org/cmake/help/latest/command/file.html?highlight=glob#glob
(We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.)
Of course, you might want to know what the downsides are - read on!
When Globbing Fails:
The big disadvantage to globbing is that creating/deleting files won't automatically update the build-system.
If you are the person adding the files, this may seem an acceptable trade-off; however, it causes problems for other people building your code: they update the project from version control, run the build, and then contact you, complaining that "the build's broken".
To make matters worse, the failure typically shows up as a linking error that gives no hint about the cause of the problem, and time is lost troubleshooting it.
In a project I worked on, we started off globbing but got so many complaints when new files were added that it was reason enough to list files explicitly instead of globbing.
This also breaks common git workflows (git bisect and switching between feature branches).
So I can't recommend this; the problems it causes far outweigh the convenience. When someone can't build your software because of this, they may lose a lot of time tracking down the issue, or just give up.
One more note: just remembering to touch CMakeLists.txt isn't always enough. With automated builds that use globbing, I had to run cmake before every build, since files might have been added or removed since the last build *.
Exceptions to the rule:
There are times where globbing is preferable:
For setting up a CMakeLists.txt file for an existing project that doesn't use CMake. It's a fast way to get all the sources referenced (once the build system is running, replace globbing with explicit file lists).
When CMake isn't used as the primary build system: if, for example, you're using a project whose developers don't use CMake, and you would like to maintain your own build system for it.
For any situation where the file list changes so often that it becomes impractical to maintain. In this case it could be useful, but then you have to accept running cmake to generate build-files every time to get a reliable/correct build (which goes against the intention of CMake - the ability to split configuration from building).
* Yes, I could have written code to compare the tree of files on disk before and after an update, but this is not such a nice workaround and it's something better left up to the build system.
In CMake 3.12, the file(GLOB ...) and file(GLOB_RECURSE ...) commands gained a CONFIGURE_DEPENDS option which reruns cmake if the glob's value changes.
As that was the primary disadvantage of globbing for source files, it is now okay to do so:
# Whenever this glob's value changes, cmake will rerun and update the build with the
# new/removed files.
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "*.cpp")
add_executable(my_target ${sources})
However, some people still recommend avoiding globbing for sources. Indeed, the documentation states:
We do not recommend using GLOB to collect a list of source files from your source tree. ... The CONFIGURE_DEPENDS flag may not work reliably on all generators, or if a new generator is added in the future that cannot support it, projects using it will be stuck. Even if CONFIGURE_DEPENDS works reliably, there is still a cost to perform the check on every rebuild.
Personally, I consider the benefits of not having to manually manage the source file list to outweigh the possible drawbacks. If you do have to switch back to manually listed files, this can be easily achieved by just printing the globbed source list and pasting it back in.
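If you ever need to do that, printing the list is a one-liner (assuming the sources variable from the snippet above):
# Print the globbed list so it can be pasted back as an explicit source list.
message(STATUS "sources = ${sources}")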
You can safely glob (and probably should) at the cost of an additional file to hold the dependencies.
Add functions like these somewhere:
# Compare the new contents with the existing file, if it exists and is the
# same we don't want to trigger a make by changing its timestamp.
function(update_file path content)
    set(old_content "")
    if(EXISTS "${path}")
        file(READ "${path}" old_content)
    endif()
    if(NOT old_content STREQUAL content)
        file(WRITE "${path}" "${content}")
    endif()
endfunction(update_file)

# Creates a file called CMakeDeps.cmake next to your CMakeLists.txt with
# the list of dependencies in it - this file should be treated as part of
# CMakeLists.txt (source controlled, etc.).
function(update_deps_file deps)
    set(deps_file "CMakeDeps.cmake")
    # Normalize the list so it's the same on every machine
    list(REMOVE_DUPLICATES deps)
    foreach(dep IN LISTS deps)
        file(RELATIVE_PATH rel_dep ${CMAKE_CURRENT_SOURCE_DIR} ${dep})
        list(APPEND rel_deps ${rel_dep})
    endforeach(dep)
    list(SORT rel_deps)
    # Update the deps file
    set(content "# generated by make process\nset(sources ${rel_deps})\n")
    update_file(${deps_file} "${content}")
    # Include the file so it's tracked as a generation dependency (we don't
    # need the content).
    include(${deps_file})
endfunction(update_deps_file)
And then go globbing:
file(GLOB_RECURSE sources LIST_DIRECTORIES false *.h *.cpp)
update_deps_file("${sources}")
add_executable(test ${sources})
You're still carting around the explicit dependencies (and triggering all the automated builds!) like before, only it's in two files instead of one.
The only change in procedure comes after you've created a new file. If you don't glob, the workflow is to modify CMakeLists.txt from inside Visual Studio and rebuild; if you do glob, you run cmake explicitly - or just touch CMakeLists.txt.
Specify each file individually!
I use a conventional CMakeLists.txt and a python script to update it. I run the python script manually after adding files.
See my answer here:
https://stackoverflow.com/a/48318388/3929196
I'm not a fan of globbing and never used it for my libraries. But recently I've watched a presentation by Robert Schumacher (vcpkg developer) where he recommends treating all your library sources as separate components (for example, private sources (.cpp), public headers (.h), tests, and examples are all separate components) and using separate folders for all of them (similarly to how we use C++ namespaces for classes). In that case I think globbing makes sense, because it allows you to clearly express this components approach and encourages other developers to follow it. For example, your library directory structure can be the following:
/include - for public headers
/src - for private headers and sources
/tests - for tests
You obviously want other developers to follow your convention (i.e., place public headers under /include and tests under /tests). file(GLOB) gives a hint to developers that all files from a directory have the same conceptual meaning, and that any file placed into this directory matching the glob expression will also be treated in the same way (for example, installed during 'make install' if we speak about public headers).
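A minimal sketch of that layout in CMake might look like this (target and directory names are illustrative; CONFIGURE_DEPENDS needs CMake 3.12+):

# One glob per component: each directory has a single conceptual meaning.
file(GLOB public_headers CONFIGURE_DEPENDS include/*.h)
file(GLOB private_sources CONFIGURE_DEPENDS src/*.cpp src/*.h)
file(GLOB test_sources CONFIGURE_DEPENDS tests/*.cpp)

add_library(mylib ${private_sources} ${public_headers})
target_include_directories(mylib PUBLIC include)

add_executable(mylib_tests ${test_sources})
target_link_libraries(mylib_tests PRIVATE mylib)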
I have a C project with the following structure, with one target (the final binary product):
main.c
configure.in
configure
Makefile.am
Makefile.in
folder-1
..Makefile.am
..Makefile.in
..<static library files .c files>
..<static library files .h files>
folder-2
<some .c files>
<some .h files>
...
...
I know how to configure and compile my project with Autotools. Regarding the library in folder-1: I am often changing files in that library with different debug levels by defining a flag called DMYDEBUG.
Compilation of the whole project takes a while, and currently I am able to change the flag by:
(1) modifying the top-level configure.in file:
CCONFIGFLAGS="${CCONFIGFLAGS} -DSF_BIGENDIAN -DMYDEBUG=3"
(2) running make clean
(3) regenerating configure from the edited configure.in where I modify DMYDEBUG
(4) running ./configure at the top level
(5) running make
Only this way does the desired effect take place. Is there a better way to modify DMYDEBUG (which is only relevant to the static library in folder-1) without having to recompile the whole project each time?
In the first place, it's terrible that you modify your configure.in to change the flag value. It would be much better to make configure recognize a custom argument that conveys the information, such as --with-debug-level=x. The AC_ARG_WITH() macro serves this purpose.
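A sketch of what recognizing such an argument could look like in configure.in (the option name --with-debug-level and the default of 0 are illustrative, not something your project already defines):

dnl Accept --with-debug-level=N and forward it to the compiler flags.
AC_ARG_WITH([debug-level],
  [AS_HELP_STRING([--with-debug-level=N], [set MYDEBUG to N (default 0)])],
  [],
  [with_debug_level=0])
CCONFIGFLAGS="${CCONFIGFLAGS} -DMYDEBUG=${with_debug_level}"

You would then run ./configure --with-debug-level=3 instead of editing configure.in.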
However, if you have to reconfigure the project (re-run ./configure, with or without rebuilding it first) to change the flag, then changing the flag will always require a full rebuild. For more narrowly-scoped rebuilding, you need to rely on make detecting the flag modification and re-building the affected targets.
make recognizes only file-level dependencies, so that strategy relies on you putting the macro definition in a header file, which the files that use it #include. Since you're using Automake, you can rely on your build system to recognize header dependencies automatically, but you may need to perform one clean build to bootstrap that.
I'm trying to automatically label my application's sign-on line with a build number. The application is a plain vanilla C one without a graphical UI; it is intended for the command line, so it is a "simple" one.
The sign-on id is located in a "template" source file which is customized by CMake with a configure_file() command. Recently, I decided to include a build number in this sign-on id. Consequently, the customization can no longer be done statically at CMake time, but must happen every time make is invoked.
To achieve that, there are two possibilities in CMake:
add_custom_target(), but it is triggered even when nothing else changes in the source tree, so the build number would no longer reflect the state of the tree;
add_custom_command(), which can be triggered only when the application (target) needs to be linked again.
I opted for the second solution and did not succeed.
Here is an extract of my CMakeLists.txt, the sign-on id being in file ErrAux.c (template in PROJECT_SOURCE_DIR, configured in PROJECT_BINARY_DIR):
add_executable(anathem ... ${PROJECT_BINARY_DIR}/ErrAux.c ...)
add_custom_command(TARGET anathem PRE_LINK
COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
"-DSRC=${PROJECT_SOURCE_DIR}"
"-DDST=${PROJECT_BINARY_DIR}"
-P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "Numbering build"
VERBATIM
)
This launches script BuildNumber.cmake just before the link step. It computes the next build number and customizes ErrAux.c with configure_file().
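The question doesn't show BuildNumber.cmake, but a hypothetical sketch of such a script (the counter file name and the ErrAux.c.in template name are my assumptions) could be:

# BuildNumber.cmake - invoked with -DVERS=... -DSRC=... -DDST=...
set(counter_file "${DST}/build_number.txt")
set(BUILD_NUMBER 0)
if(EXISTS "${counter_file}")
    file(READ "${counter_file}" BUILD_NUMBER)
endif()
math(EXPR BUILD_NUMBER "${BUILD_NUMBER} + 1")
file(WRITE "${counter_file}" "${BUILD_NUMBER}")
message("${VERS}-${BUILD_NUMBER}")
# Regenerate ErrAux.c from its template, embedding the version and build number.
configure_file("${SRC}/ErrAux.c.in" "${DST}/ErrAux.c" @ONLY)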
It works fine, except ...
It happens late in the make sequence and the update to ErrAux.c goes unnoticed. The sign-on id in the executable contains the previous build number.
Next time I run make, make notices the generated ErrAux.c is younger than its object module and causes it to be compiled again, which in turn causes a link which triggers a build number update. This happens even if no other file has changed and this loop can't be broken. This is clearly shown in the compiling log:
Scanning dependencies of target anathem
[ 13%] Building C object AnaThem/CMakeFiles/anathem.dir/ErrAux.c.o
[ 14%] Linking C executable anathem
Numbering build
3.0.0-45
[ 36%] Built target anathem
The crux seems to be that add_custom_command(TARGET ...) can't specify an output file like add_custom_command(OUTPUT ...) does. But this latter form can't be triggered in PRE_LINK mode.
As a workaround, I forced a compilation to "refresh" the object module with:
add_custom_command(TARGET anathem PRE_LINK
COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
"-DSRC=${PROJECT_SOURCE_DIR}"
"-DDST=${PROJECT_BINARY_DIR}"
-P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
COMMAND echo "Numbering"
COMMAND echo "${CMAKE_C_COMPILER}" "\$(C_DEFINES)" "\$(C_INCLUDES)" "\$(C_FLAGS)" -c "${PROJECT_BINARY_DIR}/ErrAux.c"
COMMAND "${CMAKE_C_COMPILER}" "\$(C_DEFINES)" "\$(C_INCLUDES)" "\$(C_FLAGS)" -c "${PROJECT_BINARY_DIR}/ErrAux.c"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "Numbering build"
VERBATIM
)
An explicit compilation is forced after the sign-on id customization. It mimics what is found in the various Makefiles and may not be safe for production. It's a trick played on both CMake and make.
UPDATE: Option -c is required to postpone the link step until the final application linking process.
This addition creates havoc in the link, as shown by the log, where you see a double compilation (the standard make one and the add_custom_command() one):
Scanning dependencies of target anathem
[ 13%] Building C object AnaThem/CMakeFiles/anathem.dir/ErrAux.c.o
[ 14%] Linking C executable anathem
Numbering build
3.0.0-47
Numbering
/usr/bin/cc -DANA_DEBUG=1 -I/home/prog/projects/AnaLLysis/build/AnaThem -I/home/prog/projects/AnaLLysis/AnaThem -g /home/prog/projects/AnaLLysis/build/AnaThem/ErrAux.c
/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
AnaThem/CMakeFiles/anathem.dir/build.make:798: recipe for target 'AnaThem/anathem' failed
make[2]: *** [AnaThem/anathem] Error 1
If I force a full recompilation to make sure all sources are compiled, main.c included, I get the same error on main.
The only logical explanation is that my manual C invocation is faulty and somehow destroys vital information. I checked with readelf that main is still in the symbol table of main.c.o and that it is still taken into account by the link step (from the file link.txt).
UPDATE: Even with the correct link, I'm still experiencing the infinite loop syndrome. The generated application still has its sign-on id lagging behind the actual build counter.
Can someone give me a clue for the right direction?
FYI I'm quite new to CMake, so I may do things really wrong. Don't hesitate to criticize my mistakes.
The key to the solution is to put the generated object module where make expects to find it. CMake organizes the build tree in a non-trivial way.
The shortcoming of my added compilation in add_custom_command() was believing that, by default, the binary would be stored in the "usual" CMake location. Since I forge my compiler command manually, this is not the case.
I found the object module in the source directory (a consequence of the WORKING_DIRECTORY option), named ErrAux.o rather than ErrAux.c.o.
To obtain the correct behavior, I force an output location with:
-o "${PROJECT_BINARY_DIR}/CMakeFiles/anathem.dir/ErrAux.c.o"
Now, when I run make again, nothing happens since nothing changed.
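Piecing the fragments above together, the working custom command looks roughly like this (a sketch; the hard-coded CMakeFiles/anathem.dir path is the one found by inspecting the build tree, see the side question below):

add_custom_command(TARGET anathem PRE_LINK
    # Compute the next build number and regenerate ErrAux.c.
    COMMAND "${CMAKE_COMMAND}" "-DVERS=${PROJECT_VERSION}"
            "-DSRC=${PROJECT_SOURCE_DIR}"
            "-DDST=${PROJECT_BINARY_DIR}"
            -P "${CMAKE_HOME_DIRECTORY}/BuildNumber.cmake"
    # Recompile the regenerated file into the object location make expects.
    COMMAND "${CMAKE_C_COMPILER}" "\$(C_DEFINES)" "\$(C_INCLUDES)" "\$(C_FLAGS)"
            -c "${PROJECT_BINARY_DIR}/ErrAux.c"
            -o "${PROJECT_BINARY_DIR}/CMakeFiles/anathem.dir/ErrAux.c.o"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
    COMMENT "Numbering build"
    VERBATIM
)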
Side question
To make the solution portable (if needed), are there CMake variables for the CMakeFiles and anathem.dir directories? Or, for the latter, a variable for the current target, since "anathem" is the target name used in add_custom_command()?
I have a simple makefile project where I just want make install to copy files to a target folder, ie:
all:
#echo "Nothing to build"
install:
cp ./*.wav /usr/share/snd
my_custom_target:
#echo "For testing purposes"
However, whenever I try to build any target (i.e. clean, all, install, my_custom_target, etc.), every single one just prints "Nothing to be done for 'clean'", "Nothing to be done for 'all'", etc. My guess is that a makefile project expects at least something to be built (i.e. a C/C++ file, etc.).
Does anyone have any suggestions on how to proceed with this?
Thank you.
This seems to indicate that make is not able to find, or not able to correctly parse, your Makefile. What is the file named?
Also, ensure that the commands in each rule (like the cp ./*.wav /usr/share/snd) are prefixed by an actual tab character, not spaces. In the sample that you pasted in, they are prefixed simply by three spaces, but for make to parse it properly, they need to be prefixed by an actual tab character.
One more thing to check is whether there are files named all, install, or my_custom_target. Make does not care about whether some C or C++ file is built; the rules can do anything that you want. But it does check to see if there is a file named the same as the rule, and whether it is newer than the dependencies of the rule. If there is a file, and it is newer than all dependencies (or there are no dependencies, like in this example), then it will decide that there is nothing to do. In order to avoid this, add a .PHONY declaration to indicate that these are phony targets and don't correspond to actual files to be built; then make will always run these recipes, whether or not there is an up-to-date file with the same name.
.PHONY: all install my_custom_target
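Putting the two fixes together (the .PHONY declaration plus a real tab character at the start of each command line), a corrected version of the Makefile from the question would look like this sketch:

.PHONY: all install my_custom_target

# Each command line below starts with a tab character, not spaces.
all:
	@echo "Nothing to build"

install:
	cp ./*.wav /usr/share/snd

my_custom_target:
	@echo "For testing purposes"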