I recently stumbled upon this in a project I'm working on. In package A, there is a required configuration option --package-B-makefile-location from which A's makefile borrows variable values.
Is this a common design pattern which has merit? It seems to me that B's package source is as important as its binary for compiling A. Might there be reasons I wouldn't want to tamper with it?
Thanks,
Andrew
It is far from unheard of for one package to need other packages pre-installed, and you have to specify those locations.
For example, building GCC (4.5.2), you need to specify the locations of the GMP, MPFR and MPC libraries if they won't be found by default.
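For instance, a GCC-4.5-era configure line pointing at prerequisites installed under non-default prefixes might look like this (the --with-* options are real GCC configure options; the /opt paths are made up):

./configure --with-gmp=/opt/gmp --with-mpfr=/opt/mpfr --with-mpc=/opt/mpc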
Complex systems which are extensible - Perl, Apache, Tcl/Tk, PHP - provide configuration data to their users in various ways (Config.pm for Perl, apxs for Apache, etc), but that configuration data is crucial to dependent modules.
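For a concrete taste of that configuration data, both Perl and Apache let you query it from the shell:

perl -MConfig -e 'print $Config{ccflags}'   # compiler flags Perl itself was built with
apxs -q CFLAGS                              # compile flags for building Apache modules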
My suspicion is that your Package A needs some of the configuration data related to Package B, but there isn't a fully-fledged system for providing it. As a workaround, Package A needs to see the configuration data encapsulated in the makefile.
It is not common to need the makefile; it is not uncommon to need some information about other packages.
It's a common and useful design pattern as far as it goes, but it can be abused like any other.
I'm not sure I understand the second part of your question, but if the makefiles are well designed then any change you make to B's makefiles which doesn't break B won't break A either.
I want to require a C library which was built with the Autotools.
To be honest I have little to no idea how they work :/
(The library which I want to require is "https://github.com/p4lang/PI")
I have executed the ./configure etc. scripts and successfully installed it.
When I search my /usr tree I find the library at /usr/local/lib/libpi.a
and, analogously, the header files under /usr/local/include/PI.
I build my project with cmake and would like to have a cross platform solution with it.
However I would be satisfied to use the pkg-config command.
Does anybody know what is the "correct" / "recommended" way to get cflags,
or at least a variant in which I do not have to hard code the paths?
The involvement of the Autotools ends at the point where the built artifacts are installed on the system. Using those does not go through the Autotools.* This applies just as much when the installed artifacts are libraries and headers as when they are executables. There's nothing special or different about using Autotools-built programs or libraries.
I build my project with cmake and would like to have a cross platform
solution with it. However I would be satisfied to use the pkg-config
command.
Just like projects served by any other build system, Autotools projects can build and install pkg-config configuration files, or CMake macros, or whatever other bits and pieces they might think appropriate to assist users, but this is project-specific. The Autotools do not create such additional pieces of their own accord, but some Autotools-based projects do add them. And some don't, just like some CMake projects don't, and some projects with hand-rolled build systems don't, etc.
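If the PI project happens to install a .pc file (I haven't checked; the module name below is a guess, so look in /usr/local/lib/pkgconfig for the real one), the flags fall out of a single command:

pkg-config --cflags --libs PI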
Does anybody know what is the "correct" / "recommended" way to get cflags, or at least a variant in which I do not have to hard code the paths?
Note that typically, for a library whose name you know, the only flags you might need are one specifying the location of the library headers and/or one specifying the location of the libraries themselves. Even these are unnecessary if the relevant pieces are installed in places the compiler looks by default. Also, these are generally not considered CFLAGS per se; terminology varies a bit, but the former is a preprocessor flag and the latter is a link flag.
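With the install locations from the question, that amounts to something like the following (myprog is a stand-in for your own source file):

cc -I/usr/local/include -c myprog.c           # preprocessor flag: where the PI/ headers live
cc myprog.o -L/usr/local/lib -lpi -o myprog   # link flags: library directory and name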
Since you're using CMake, you could consider writing CMake code to search likely directories for the wanted libraries and headers, and to set the results in suitable variables for other code to use. That's more of an Autotools-style approach, though. Alternatively, you could define a user-set variable by which the wanted location(s) can be specified to CMake. This assumes that the third-party project is not already providing something useful for the purpose. Or, licensing permitting, you could package the third-party library together with your own, so that you are in control of where it gets installed.
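For the first two options, the user-facing side is just a definition passed when configuring. PI_ROOT here is a hypothetical variable your own CMakeLists.txt would have to read; CMAKE_PREFIX_PATH is a standard CMake search-path hint:

mkdir build && cd build
cmake -DPI_ROOT=/usr/local ..            # hypothetical user-set variable
cmake -DCMAKE_PREFIX_PATH=/usr/local ..  # or extend CMake's standard search path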
In the general case, however, this is simply something that people have to deal with themselves when they build software. Make life easier for them by documenting clearly what your project's dependencies are, and by providing convenient, well-documented ways to feed their locations into your build system.
*An exception could be asserted for use of libtool archives, which an Autotools project might install alongside regular libraries -- if one wanted to use those, they would directly or indirectly go through libtool. But in practice, that's only going to happen in another Autotools project.
I'm trying to port my project to another platform and I've found a few differences between this new platform and the one I started on. I've seen the autotools package and configure scripts which are supposed to help with that, but I was wondering how feasible it would be to just have a separate branch for each new platform.
The only problem I see is how to do development on the target platform and then merge in changes to other branches without getting the platform-dependent changes. If there is a way to do that, it seems to me it'd be much cleaner.
Has anyone done this who can recommend/discourage this approach?
I would definitely discourage that approach.
You're just asking for trouble if you keep the same code in branches that can't be merged. It's going to be incredibly confusing to keep track of what changes have been applied to what branches and a nightmare should you forget to apply a change to one of your platform branches.
You didn't mention the language, but whatever it is, use its features to separate platform-specific code while keeping a single branch. In C++, for example, you would first use file-based separation: if you have sound code for the Mac, Linux and Windows platforms, create sound_mac.cpp, sound_windows.cpp and sound_linux.cpp files, each containing the same classes and methods but very different platform-specific implementations. You then only add the appropriate file to the project on each platform, so your Xcode project gets sound_mac.cpp while your Visual Studio project uses sound_windows.cpp. The files which reference those classes and methods use #ifdefs to determine which headers to include.
You'll use a similar approach for things like installer scripts. You may have a different installer on the Mac than on Windows, but the files for both will be in the branch. Your build script on the Mac will simply utilize the Mac-specific installer files and ignore the Windows-specific files.
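A build script can make that selection mechanically. Here's a rough sh sketch using the sound_*.cpp names from above (the uname mapping is simplified; adjust to taste):

case "$(uname -s)" in
  Darwin) PLATFORM_SRC=sound_mac.cpp ;;
  Linux)  PLATFORM_SRC=sound_linux.cpp ;;
  *)      PLATFORM_SRC=sound_windows.cpp ;;   # e.g. MSYS/MinGW environments
esac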
Keeping things in one branch and just ignoring what doesn't apply to the current platform allows you merge back and forth between topic branches and the master, making your life much more sane.
Branching to work out compatibility for a target platform is doable. Just be sure to separate out changes that don't have to do with the target platform specifically into another branch.
We have a project that is going to require linking against libcurl and libxml2, among other libraries. We seem to have essentially two strategies for managing these dependencies:
1. Ask each developer to install those libraries under the "usual" locations, e.g. /usr/lib, or
2. Include the sources to these libraries under a dedicated folder in the project's source tree.
Approach 1 requires everyone to make sure those libraries are installed on their system, but appears to be the approach used by many open source projects. On such projects, the build will detect that those libraries are missing and will fail.
Approach 2 might make the project tree unmanageably large in some instances and make the compilation time much longer. In addition, this approach can obviously be taken too far. I wouldn't put the compiler under the project tree, for instance (right?).
What are the best practices wrt external dependencies? Can/should one require of every developer to have certain libraries installed to build the project? Or is considered better to include all the dependencies in the project tree?
Don't worry about their exact location in your code. If they're common, locating them should be left to the compiler/linker (or to the user, by setting variables). For very uncommon dependencies (or ones with customized/modified files) you might want to include them in your source tree (if licensing etc. permits).
If you'd like it to be more convenient, use some script (e.g. configure or CMake) to set up and generate the build files. CMake, for example, lets you mark packages (libcurl and libxml2 in your example) as optional or required; when building the project it will try to locate them, and if that fails it will ask the user. This is an additional step and might make building a bit more cumbersome, but it also makes the download smaller (less source to ship) and updating easier (all you have to do is rebuild your program).
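In practice that extra step is a short configure pass before building. CMAKE_PREFIX_PATH is the standard way to hint at libraries installed in non-default locations (/opt/deps is made up):

mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=/opt/deps ..   # fails early if a required package is missing
make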
So in general I'd follow approach 1; if there's special/rare/customized stuff being used, approach 2.
The normal way is to have the respective dependencies and have the developer install them. Later, if the project is packaged into a .deb or .rpm, those packages will require the respective libraries to be installed, and the source packages will have the -devel packages as dependencies.
Best practice is not to include the external libraries in your source tree - instead, include a text file called INSTALL in your project root, which gives instructions on building the project and includes a list of the library dependencies, including minimum versions.
The term has several definitions according to Wikipedia, but what I'm really interested in is creating a program that has all its needed dependencies included within the source folder, so the end user doesn't need to install additional libraries for the app to work. For example, the way Mac apps carry all their dependencies inside the application bundle itself...
Or is there a feature of the autotools that does this? I'm programming in the Linux environment...
Are you talking about the source code of your application, or about your application binary?
The answer I'd give for both the cases depends on what libraries you're using.
If you're using libraries that you can find anywhere, that are more or less standard and/or that are quite big, you shouldn't attach them to your application; just require them both to build and to run it.
Anyway, don't be too concerned about your source code: few people will build your application, and they probably know something about programming and how a Linux system works, so it won't be a big deal to require many (even not-so-common) dependencies to build it.
The binary version could be a little more problematic, since it will be used by end users who often don't know anything about libraries and programming: you could choose to statically link the smallest and most uncommon libraries into your binary, in order to have fewer dependencies.
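With the GNU toolchain you can mix the two styles on a single link line. A sketch, where -lrare stands in for some small, uncommon library:

cc -o myapp main.o -Wl,-Bstatic -lrare -Wl,-Bdynamic -lcurl -lm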
You could do it, if you link statically, but it'd be somewhat unusual, and depending on what your program is supposed to do, you might be limiting yourself.
The alternative, if this is not just a one-off project, is to create a Linux Standard Base compatible RPM package and restrict yourself to linking against the libraries and symbols that LSB defines.
Run ldd on your program to discover all dependencies, then copy these to your directory, and add a program-wrapper script that issues
#!/bin/sh
# Prepend the wrapper's own directory (assumes $0 contains a path) to the library
# search path, then run the bundled binary with all arguments passed through.
dir="${0%/*}"
LD_LIBRARY_PATH="$dir:$LD_LIBRARY_PATH" exec "$dir/real-program" "$@"
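To do the copying step mechanically you can parse ldd's output, something like this (review the result by hand and leave system libraries such as libc out of the bundle; ./bundle/ is whatever directory you ship):

ldd real-program | awk '/=> \//{print $3}' | xargs -I{} cp {} ./bundle/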
Duplicating the Mac OS X .app behavior on a plain POSIX system is difficult because it is very hard to guarantee that a process can find its own executable (there are several ways that will almost always work...). Mac OS X provides an OS service for this, but Linux (for instance) does not.
Once you've accomplished that feat, this becomes possible. Though, as others have mentioned, it loses the ability to share resource demands (disk space, RAM space, cache space) with other programs that use the same libraries because you'd be using static copies, or dynamically loading your own copy from the .app-like bundle.
I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on and I want to include stuff from ICU (for internationalization and unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what if I wanted to include a version of the library in my own code? Is this common, or am I thinking about this all wrong?
If your code requires 3rd party libraries, you need to check for them before you build. On Linux, at least with open-source, the canonical way to do this is to use Autotools to write a configure script that looks for both the presence of libraries and how to use them. Thankfully this is pretty automated and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am) which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify.
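From the packager's point of view the whole dance is three commands (autoreconf regenerates configure and Makefile.in from configure.ac and Makefile.am):

autoreconf --install   # generate configure and Makefile.in
./configure            # probe for libraries, headers and flags
make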
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
If it is a .lib and has no runtime-linked libraries, it gets compiled into your code. If you need to link to dynamic libraries, you will have to ensure they are there: provide an installer or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies - you have to provide them correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc, etc, etc, but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
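If you go the shared route, recent ICU releases install pkg-config files (icu-uc, icu-i18n), so picking up the flags is a one-liner (mylang is a placeholder output name):

cc -o mylang main.c $(pkg-config --cflags --libs icu-uc icu-i18n)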
It depends on the OS you're targeting. For Linux and Unix system, you will typically see dynamic linking, so the application will use the library that is already installed on the system. If you do this, that means it's up to the user to obtain the library if they don't already have it. Package managers in Linux will do this for you if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and uses that specific version. Many different applications may use the same library but each include their own version, so you can have many copies of the library floating around on your system.
The problem with shipping a copy of the library with your code is that you don't get the benefit of the library's maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically. Otherwise I'd just add the dependency and ensure that whatever packages you ship indicate it appropriately.