Shake build capturing directories - shake-build-system

I have recently converted my work's Make-based build system to Shake. I am now trying to make Shake a little more robust to changes in the directory structure so that I do not have to regenerate the build system.
Each of my projects is C-based and has the following directory structure:
src
    source folder 1
    source folder 2
inc
    inc folder 1
    inc folder 2
I am able to capture all the source files, but what I can't get to work is capturing the include folders. I am trying to capture the root inc folder and its subfolders into a variable in the build system. I have been using the following setup
includes = getDirectoryDirs "inc"
This gives me the included subfolders but not the root folder inc itself, which I thought I could work around, but then inc will not be tracked.
What I would like is to have something like
includes = getDirectoryDirAndRoot "inc"
which would capture each of the subdirectories and the root directory and have them tracked in the build system.
That aside, what I have also tried is to use
gcc -o out includes
but I would need to have every element in includes prepended with "-I", which I can't seem to figure out.
I guess my question is: how would one go about doing this in Shake? In Make I can accomplish all of this by using Make's shell function and a couple of string-manipulation functions.

I think the question can be interpreted both ways, and both ways are useful (you may even want both), so I'll give two answers:
Answer 1: You want the C file to be recompiled if any file in the inc directory changes.
"*.o" *> \out -> do
    headerFiles <- getDirectoryFiles "inc" ["//*.h"]
    need (map ("inc" </>) headerFiles)
    ...
This snippet gets a list of all header files in the inc directory (or its subdirectories) and introduces a dependency on them. If any header file changes, this rule will rerun.
Answer 2: You want to get the list of directories to pass to gcc.
"*.o" *> \out -> do
    includeDirs <- getDirectoryDirs "inc"
    cmd "gcc -c" [out -<.> "c"] "-o" [out] "-Iinc" (map ("-Iinc/" ++) includeDirs)
...
This snippet gets the directory names under inc and then uses map to prepend -Iinc/ to each of them. It also passes -Iinc directly. If the list of directories under inc changes, this rule will rebuild. However, if the header files contained in those directories change, nothing will rebuild. You can add dependencies on the used header files with the gcc -MD family of flags, as described in the Shake user manual, or by using the technique from Answer 1.

Have a look at addOracle and its cousin addOracleCache. This should allow you to depend on information besides the files themselves, such as directories to be included.
But I would need to have every element in includes prepended with "-I" which I can't seem to figure out.
You can use Haskell here. If you have a list of directories directories :: [FilePath], you can turn them into compiler flags with
asIncludes :: [FilePath] -> [String]
asIncludes = map ("-I" ++)

Related

Skip folders to build using scons after full build

I have a large number of source files (~10,000) scattered across several folders.
I want to know if there is a way to skip certain folders that I know haven't changed.
For example, consider the following folder structure:
A (SConstruct is here)
|
->B (unchanged 1000 files)
->C (unchanged 1000 files)
->D (changed 1 file)
Once I do a complete build for the first time, I want it to compile everything (B, C, D), but when I modify a file in D (which I know about), I would like to build folder D only, skip B and C, and finally link them all together to form the final binary (B, C and the new D).
I have been looking for quite some time now but am not able to figure it out. Is it even possible? Can I tell it to look only in a particular folder for changes?
First, I'd investigate using Decider('timestamp-match') or even building a custom Decider function. That should speed up your dependency-checking time.
But to answer your specific question, yes it is possible to not build the targets in B and C. If you don't invoke a builder for the targets in those subdirectories, you just won't build them. Just have an if that selectively chooses which env.Object() (or similar) functions to invoke.
When I fleshed out your example, I chose to have each subdirectory create a library that would be linked into the main executable, and to only invoke env.SConscript() for the directories that the user chooses. Here is one way to implement that:
A/SConstruct:
subdirs = ['B','C','D']
AddOption('--exclude', default=[], action='append', choices=subdirs)
env = Environment(EXCLUDES = GetOption('exclude'))
env.SConscript(
    dirs=[subdir for subdir in subdirs
          if subdir not in env['EXCLUDES']],
    exports='env')
env2 = env.Clone()
env2.PrependUnique(LIBPATH=subdirs,
                   LIBS=subdirs)
env2.Program('main.c')
B/SConscript:
Import('env')
env.Library('B', env.Glob('*.c'))
C/SConscript:
Import('env')
env.Library('C', env.Glob('*.c'))
D/SConscript:
Import('env')
env.Library('D', env.Glob('*.c'))
To do a global build: scons
To do a build after modifying a single file in D: scons --exclude=B --exclude=C
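Stripped of the SCons machinery, the selection logic above is plain Python; a sketch where the literal excludes list stands in for what --exclude=B --exclude=C would accumulate:

```python
subdirs = ['B', 'C', 'D']
excludes = ['B', 'C']  # what action='append' collects from repeated --exclude flags

# only these directories get their SConscript (and hence their builders) invoked
build_dirs = [subdir for subdir in subdirs if subdir not in excludes]
print(build_dirs)  # ['D']
```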
EDIT
Similarly, you can add a whitelist option to your SConstruct. The idea is the same: only invoke builders for certain objects.
Here is a SConstruct similar to above, but with a whitelist option:
subdirs = ['B','C','D']
AddOption('--only', default=[], action='append', choices=subdirs)
env = Environment(ONLY = GetOption('only') or subdirs)
env.SConscript(
    dirs=env['ONLY'],
    exports='env')
env2 = env.Clone()
env2.PrependUnique(LIBPATH=subdirs,
                   LIBS=subdirs)
env2.Program('main.c')
To build everything: scons
To rebuild D and relink main program: scons --only=D
If D is independent of B and C, just specify your target in D (program/library), or the whole directory, explicitly on the command line, like scons D/myprog.exe. SCons will expand the required dependencies automatically and thus doesn't traverse the unrelated folders B and C.
Note how you can specify an arbitrary number of targets, so
scons D/myprog.exe B
is allowed too.

emacs open includes in code files, multiple directories

ECB, cscope, xcscope. All working. Is cedet necessary?
MSVS, eclipse, code::blocks, xcode. All of them let you click on an included source file and be taken to it.
Now, with the above setup, emacs does too.
Except emacs doesn't take you to the std:: libraries; it doesn't assume their location in /src/linux or some such. Emacs is a little blind and needs you to set it up manually.
But I can't find anything that explains how to set up ff-find-other-file to search for any other directories, let alone standard major libraries, outside of a project's directory.
So, how do I do it?
Edit: most important is to be able to ask for either a file name (.h, .c, .cpp, .anything) or a library (iostream) and open the file in which the code resides.
Additional directories for ff-find-other-file to look into are listed in the ff-search-directories variable, which by default uses the value of cc-search-directories, so you should be able to customize either of the two to specify additional search paths.
As for the second question about requesting a file name and finding corresponding file, something like that will do:
(defun ff-query-find-file (file-name)
  (interactive "sFilename: ")
  ;; dirs expansion is borrowed from `ff-find-the-other-file'
  (let ((dirs (if (symbolp ff-search-directories)
                  (ff-list-replace-env-vars (symbol-value ff-search-directories))
                (ff-list-replace-env-vars ff-search-directories))))
    (ff-get-file dirs file-name)))
Call it with M-x ff-query-find-file or bind it to a key to your liking.

Building a Shared Library, Updating Header Files to Compiler/System Directories

A friend and I are using Qt Creator with Boost to build a game engine. So far we have this idea that the engine is going to be a shared library, with the idea that we can run it with a test executable which will turn into the game we eventually want to make.
The problem is mainly header files. I'd like to find some way for Qt Creator to recognize the engine's header files as soon as the latest build of the engine has been produced, or even when they're added. At first I was thinking of a Python script, executed as a build step in Qt Creator after the engine had been built, that would simply copy the header files to a system directory (/usr/include, for example, if operating on a *nix system); the IDE would then find the header files when linking the engine with the test executable, and we'd also have full auto-completion support.
Of course, environmental variables would be used, and while I prefer developing in Linux, my friend prefers Windows, so we agreed to take care of development in regards to our respective platform preferences.
While this seems like a workable solution, I think the Python script idea may be overkill. Is there a better way to do this?
Update
With the suggested qmake script, I end up getting this error:
cp -f "/home/amsterdam/Programming/atlas/Engine/AtlasEngine/"AtlasEngine_global.h "/"
cp: cannot create regular file `/AtlasEngine_global.h': Permission denied
make: Leaving directory `/home/amsterdam/Programming/atlas/Engine/AtlasEngine__GCC__Linux__Debug'
make: *** [libAtlasEngine.so.1.0.0] Error 1
15:20:52: The process "/usr/bin/make" exited with code 2.
Error while building project AtlasEngine (target: Desktop)
When executing build step 'Make'
My adjustments look as follows:
# Copy over build artifacts
SRCDIR = $$ATLAS_PROJ_ROOT
DESTDIR = $$ATLAS_INCLUDE
# Look for header files there too
INCLUDEPATH += $$SRCDIR
# Dependencies: mylib. Only specify the libs you depend on.
# Leave out for building a shared library without dependencies.
#win32:LIBS += $$quote($$SRCDIR/mylib.dll)
# unix:LIBS += $$quote(-L$$SRCDIR) -lmylib
DDIR = \"$$SRCDIR/\" #<--DEFAULTS
SDIR = \"$$IN_PWD/\"
# Replace slashes in paths with backslashes for Windows
win32:file ~= s,/,\\,g
win32:DDIR ~= s,/,\\,g
win32:SDIR ~= s,/,\\,g
for(file, HEADERS) {
    QMAKE_POST_LINK += $$QMAKE_COPY $$quote($${SDIR}$${file}) $$quote($$DDIR) $$escape_expand(\\n\\t)
}
I have managed to overcome this using some qmake magic that works cross-platform. It copies the shared libraries (either .dll or .so files) along with the header files to a dlls directory at a level next to your current project.
Put this in the end of your .pro files and change the paths/libs accordingly.
# Copy over build artifacts
MYDLLDIR = $$IN_PWD/../dlls
DESTDIR = \"$$MYDLLDIR\"
# Look for header files there too
INCLUDEPATH += $$MYDLLDIR
# Dependencies: mylib. Only specify the libs you depend on.
# Leave out for building a shared library without dependencies.
win32:LIBS += $$quote($$MYDLLDIR/mylib.dll)
unix:LIBS += $$quote(-L$$MYDLLDIR) -lmylib
DDIR = \"$$MYDLLDIR/\"
SDIR = \"$$IN_PWD/\"
# Replace slashes in paths with backslashes for Windows
win32:file ~= s,/,\\,g
win32:DDIR ~= s,/,\\,g
win32:SDIR ~= s,/,\\,g
for(file, HEADERS) {
    QMAKE_POST_LINK += $$QMAKE_COPY $$quote($${SDIR}$${file}) $$quote($$DDIR) $$escape_expand(\\n\\t)
}
Then adjust the LD_LIBRARY_PATH in the 'Run settings' of your project to point to that same dll directory (relatively).
Yes, it's ugly with the escaping for paths with spaces and backslashes, but I found this to work well cross-platform; Windows (XP, 7) and Linux tested. And yes, it requires changing the environment settings for running your project, but at least you no longer need external (Python) scripts or to install into a system directory requiring root privileges.
Improvements are welcome.
I'm not sure if anyone else has had issues with this, but for whatever reason qmake wasn't able to access my user-specified environment variables properly.
Since that was the case, one solution I came up with was to set the variables as persistent qmake properties.
If you're on a UNIX-based system, the first thing you'll want to do is append the location of qmake (which should lie in your QtSDK folder) to your system $PATH, like so:
export PATH=$PATH:/path/to/QtSDK/...../qmake_root
From there, you can do something along the lines of:
qmake -set "VARIABLE" "VALUE"
In this case, I simply did:
qmake -set "ATLAS_PROJ_ROOT" $ATLAS_PROJ_ROOT.
And then I accessed it in my Qmake project file (.pro) with:
VAR = $$[ATLAS_PROJ_ROOT]
More info can be found here.

How to use autotools for deep projects?

I have a C project with the following structure:
Main/
    Makefile.am
    bin/
    src/
        Makefile.am
        main.c
        SomeLibrarySource/
            SomeFuncs.c
            SomeFuncs.h
main.c contains the main function, which uses functions defined in the SomeFuncs.{h,c} files.
I want to use autotools for this project. I have read a couple of resources on autotools, but I was only able to manage a single-level project where all source, object and other files reside in the same directory.
Then I found some links that talked about using autotools for deep projects like this one, and got confused.
Right now I have two Makefile.am as follows
Makefile.am
SUBDIRS=src
src/Makefile.am
mainprgdir=../
mainprg_PROGRAMS=main
main_SOURCES=main.c
I am pretty sure these files should not be as they are now :P
How do I use autotools for the above project structure? (At least, what should be in those Makefile.am files, and where should I place them?)
EDIT:
One more thing! In the end I would like to have the object files created in the bin directory.
Thanks
mainprgdir=../ does not make a whole lot of sense (you don't know what it is relative to on installation). Probably intended:
# Main/Makefile.am
#  .━━ installation directory for `make install`
#  |
#  ↓    ↓━━ compilation target
bin_PROGRAMS = bin/main
#  ↓━━ derived from the canonicalized compilation target name
bin_main_SOURCES = src/main.c
There are two main approaches. If the functions in SomeLibrarySource are used only by main, then there's no need to build a separate library and you can simply specify the source files in src/Makefile.am
main_SOURCES = main.c SomeLibrarySource/SomeFuncs.c
However, if you actually want to use the functions in other code in your tree, you do not want to compile SomeFuncs.c multiple times but should use a convenience library.
# Assigning main_SOURCES is redundant
main_SOURCES = main.c
main_LDADD = SomeLibrarySource/libSomeFuncs.a
noinst_LIBRARIES = SomeLibrarySource/libSomeFuncs.a
AM_CPPFLAGS = -I$(srcdir)/SomeLibrarySource
(You'll need AC_PROG_RANLIB in configure.ac to use convenience libraries.)
If the source file is named SomeFuncs.c, automake will not need Makefile.am to specify SomeLibrarySource_libSomeFuncs_a_SOURCES; but if the name of the source file does not match the name given in noinst_LIBRARIES, SomeLibrarySource_libSomeFuncs_a_SOURCES should be set to the list of files used to build the library. Note that you do not need to specify main_SOURCES, since main.c is the default value if left unspecified (but it's not a bad idea to be explicit). (In all of this, I am not comfortable using CamelCase names, but the system I'm using has a case-insensitive file system (biggest mistake Apple ever made) and the examples I give here are working for me. YMMV.)
You could of course do a recursive make, or build the library as a separate project and install it. (I like the final option. Libraries with useful features should exist on their own.)
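For completeness, a minimal configure.ac to drive this layout (Main/Makefile.am with SUBDIRS=src plus src/Makefile.am) might look like the following sketch; the package name and version are placeholders:

```
AC_INIT([main], [0.1])
AC_CONFIG_SRCDIR([src/main.c])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_PROG_RANLIB          # needed for the noinst_LIBRARIES convenience library
AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT
```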

Crossprofiling with gcov, but GCOV_PREFIX and GCOV_PREFIX_STRIP is ignored

I want to use gcov for code coverage, but the tests will run on another machine, so the hard-wired path to the .gcda files in the executable won't work.
In order to change this default directory I can use the GCOV_PREFIX and GCOV_PREFIX_STRIP environment variables, as it's said here.
Here are the commands I used:
$ export GCOV_PREFIX="/foo/bar"
$ export GCOV_PREFIX_STRIP="3"
$ gcc main.c -fprofile-arcs -ftest-coverage
$ strings a.out | grep gcda
/home/calmarius/blahblah/main.c.gcda
The path remains the same.
Anyone have experience with this?
The environment variables are taken into account when you run the code.
Set them to the appropriate values on the target machine before you run your tests, and the .gcda files will be generated where you want them.
************ ARRRRGGGGGHHHHH ************
Please, please vote for Mat's answer.
The environment variables are taken into account when you run the
code.
This one sentence is apparently missing from EVERY document I have read on how to relocate the output!
In fact, allow me to expand that answer a bit.
GCOV_PREFIX is a runtime (as opposed to build-time) environment variable and determines the root directory where the gcov output files (*.gcda) are written.
GCOV_PREFIX_STRIP=X is also a runtime variable, and has the effect of stripping X elements from the path found in the object files (strings XXXX.o)
What this means is:
When you build your project, the object files are written with the full path to the location of each source file responsible for each object file embedded within them.
So, imagine you are writing an executable MyApp and a library MyLib in a directory structure like this:
/MyProject
|-MyApp
|--MyLib
Notice MyLib is a subdirectory of MyApp
Let's say MyApp has 2 source files, and MyLib has 3.
After building with the "-coverage" flag, you will have generated 5 .gcno files, one for each object file.
Embedded in the .o files for MyApp will be the absolute path /MyProject/MyApp/a_source_file.cpp. Similarly, embedded in the .o files for MyLib will be the path /MyProject/MyApp/MyLib/another_source_file.cpp.
Now, let's say you're like me, and move those files onto a completely different machine with a different directory structure from where they got built. In my case the target machine is actually a totally different architecture. I deploy to /some/deploy/path not /MyProject on that machine.
If you simply run the app, the gcov runtime will try to write corresponding .gcda files to /MyProject/MyApp and /MyProject/MyApp/MyLib for each object file in your project, because that's the path indicated by the .o files; after all, MyApp and MyLib are simply collections of .o files archived together, with some other magic to fix up function pointers and stuff.
Chances are those directories don't exist, and you probably aren't running as root (are you?), so they won't be created either. Soooo... you won't see any .gcda files in the deploy location /some/deploy/path.
That's totally confusing, right !?!??!?!?!?
Here's where GCOV_PREFIX and GCOV_PREFIX_STRIP come in.
(BAM ! fist hits forehead)
You need to instruct the runtime that the embedded path in the .o files isn't really what you want. You want to "strip" some of the path off and replace it with the deploy directory.
So, you set the deploy directory via GCOV_PREFIX=/some/deploy/path and you want to strip the /MyProject from the generated .gcda paths so you set GCOV_PREFIX_STRIP=1
With these two environment variables set, you run your app and then look in
/some/deploy/path/MyApp and /some/deploy/path/MyApp/MyLib, and lo and behold, the 5 .gcda files miraculously appear, one for each object file.
Note: the problem is compounded if you do out of source builds. The .o points to the source, but the gcda will be written relative to the build directory.
