I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on and I want to include stuff from ICU (for internationalization and Unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what about if I wanted to include a version of the library in my own code? Is this common or am I thinking about this all wrong?
If your code requires 3rd party libraries, you need to check for them before you build. On Linux, at least with open-source, the canonical way to do this is to use Autotools to write a configure script that looks for both the presence of libraries and how to use them. Thankfully this is pretty automated and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am) which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify.
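For instance, here's a minimal sketch of what that could look like for ICU, assuming ICU's pkg-config modules icu-uc and icu-i18n are installed on the system (project and file names are illustrative). In configure.ac:

AC_INIT([mylang], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
# Fail at configure time with a clear message if ICU cannot be found
PKG_CHECK_MODULES([ICU], [icu-uc icu-i18n])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

and in Makefile.am, pass the discovered flags to your program:

bin_PROGRAMS = mylang
mylang_SOURCES = main.c
mylang_CFLAGS = $(ICU_CFLAGS)
mylang_LDADD = $(ICU_LIBS)

Running autoreconf --install then generates configure and Makefile.in from these source files.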
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
If it is a .lib and it has no runtime-linked libraries, it gets linked into your executable. If you need to link to dynamic libraries, you will have to ensure they are present: provide an installer or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies - you have to provide them with correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc, etc, etc, but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
It depends on the OS you're targeting. For Linux and Unix systems, you will typically see dynamic linking, so the application will use the library that is already installed on the system. If you do this, it's up to the user to obtain the library if they don't already have it. Package managers on Linux will do this for you if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and will use that specific version. Many different applications may use the same library but include their own version, so you can have many copies of the library floating around on your system.
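To make the difference concrete, here's a rough sketch on Linux with a hypothetical libfoo (names and paths are illustrative):

# Dynamic linking: the executable only records a runtime dependency on libfoo.so
gcc -o myapp main.c -lfoo
ldd ./myapp    # lists libfoo.so among the runtime dependencies
# Static linking: the needed code from the archive libfoo.a is copied into the executable
gcc -o myapp main.c /usr/lib/libfoo.a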
The problem with shipping a copy of the library with your code is that you don't get the benefit of the library's maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically. Otherwise I'd just add the dependency and ensure that whatever packages you ship indicate it appropriately.
Coming from programming environments that support package managers, I experience a lot of discomfort installing and using libraries not included in the default project.
For example, #include <threads.h> triggers an error threads.h file not found. I found that the compiler looks for header files in /Library/Developer/CommandLineTools/usr/include/c++/v1 by issuing gcc -print-prog-name=cpp -v. I am not sure if this is a complete folder list? How do I find the ones that it doesn't find by default? I am on OSX, but a Windows solution is also desired.
The question doesn't really say whether you are building your own project, or someone else's, and whether you use an IDE or some build system. I'll try to give a generic answer suitable for most scenarios.
But first, it's header files, not libraries (which are a different kind of pain, by the way). You need to explicitly make them available to the compiler, unless they reside on a standard search path. Alas, it's a lot of manual work sometimes, especially when you need to build a third-party project with a ton of dependencies.
I am not sure if this is a complete folder list?
Figuring out the standard include paths of your compiler can be tricky. Here's one question that has some hints: What are the GCC default include directories?
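In short, you can ask the compiler itself. With GCC (and clang, which accepts the same flags), something like this works:

# Preprocess an empty input in verbose mode; the default include directories are
# printed between "#include <...> search starts here:" and "End of search list."
echo | gcc -x c -E -v - 2>&1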
How do I find the ones that it doesn't find by default?
They may or may not be present on your machine. If they are, you'll have to find out where they are located. Otherwise you have to figure out what library they belong to, then download and unpack (and probably build) it. Either way, you will have to specify the path to that library's header files in your IDE (or Makefile, or whatever you use). Oh, and you need to make sure that the library version matches the version required by the project. Fun!
On macOS you can use third-party package managers (e.g. brew) to handle library installation for you.
pkg-config is not available on macOS, unless you install it from a third-party source.
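As a sketch of that route, reusing ICU from the earlier question as a stand-in library (Homebrew formula and pkg-config module names may differ on your setup):

brew install pkg-config icu4c
# Homebrew's icu4c is keg-only, so tell pkg-config where its .pc files live
export PKG_CONFIG_PATH="$(brew --prefix icu4c)/lib/pkgconfig:$PKG_CONFIG_PATH"
pkg-config --cflags --libs icu-uc icu-i18n   # prints the -I/-L/-l flags to use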
If you are building your own project, a somewhat better solution is to use CMake and its find_package command. However, only libraries supported by CMake can be discovered this way. Fortunately, their collection of supported libraries is quite extensive, and you can make your own find_package scripts. Moreover, CMake is cross-platform, and it can handle versioning for you.
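A minimal sketch, again with ICU as the example (CMake ships a FindICU module; the minimum version and component names here are assumptions you should adjust):

cmake_minimum_required(VERSION 3.16)
project(myapp C)
# Ask CMake to locate ICU; REQUIRED makes configuration fail if it is missing
find_package(ICU REQUIRED COMPONENTS uc i18n)
add_executable(myapp main.c)
target_link_libraries(myapp PRIVATE ICU::uc ICU::i18n)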
I want to require a C library which was built with the Autotools.
To be honest I have little to no idea how they work :/
(The library which I want to require is "https://github.com/p4lang/PI")
I have executed the ./configure etc. scripts and successfully installed it.
When I search /usr I find the library under /usr/local/lib/libpi.a and analogously the header files under /usr/local/include/PI.
I build my project with cmake and would like to have a cross platform solution with it.
However I would be satisfied to use the pkg-config command.
Does anybody know what is the "correct" / "recommended" way to get cflags, or at least a variant in which I do not have to hard code the paths?
The involvement of the Autotools ends at the point where the built artifacts are installed on the system. Using those does not go through the Autotools.* This applies just as much when the installed artifacts are libraries and headers as when they are executables. There's nothing special or different about using Autotools-built programs or libraries.
I build my project with cmake and would like to have a cross platform solution with it. However I would be satisfied to use the pkg-config command.
Just like projects served by any other build system, Autotools projects can build and install pkg-config configuration files, or CMake macros, or whatever other bits and pieces they might think appropriate to assist users, but this is project-specific. The Autotools do not create such additional pieces of their own accord, but some Autotools-based projects do add them. And some don't, just like some CMake projects don't, and some projects with hand-rolled build systems don't, etc..
Does anybody know what is the "correct" / "recommended" way to get cflags, or at least a variant in which I do not have to hard code the paths?
Note that typically, for a library whose name you know, the only flags you might need are those specifying the location of the library headers and / or one specifying the location of the libraries themselves. Even these are unnecessary if the relevant pieces are installed in places that the compiler looks by default. Also these are generally not considered CFLAGS, per se. Terminology varies a bit, but the former is a preprocessor flag, and the latter is a link flag.
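With the paths from the question, a manual compile/link line would look roughly like this:

# -I...            preprocessor flag: where the PI headers live
# -L... and -lpi   link flags: where libpi.a lives and which library to pull in
gcc -I/usr/local/include -L/usr/local/lib -o myprog myprog.c -lpi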
Since you're using CMake, you could consider writing CMake code to search likely directories for the wanted libraries and headers, and to set the results in suitable variables for other code to use. That's more of an Autotools-style approach, though. Alternatively, you could define a user-set variable by which the wanted location(s) can be specified to CMake. This assumes that the third-party project is not already providing something useful for the purpose. Or, licensing permitting, you could package the third-party library together with your own, so that you are in control of where it gets installed.
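A minimal sketch of that first approach (the header file name below is a placeholder; check what the PI package actually installs under /usr/local/include/PI):

# Let users override the install prefix, e.g.  cmake -DPI_ROOT=/opt/pi ..
set(PI_ROOT "/usr/local" CACHE PATH "Installation prefix of the PI library")
find_path(PI_INCLUDE_DIR NAMES PI/pi.h HINTS "${PI_ROOT}/include")
find_library(PI_LIBRARY NAMES pi HINTS "${PI_ROOT}/lib")
if(NOT PI_INCLUDE_DIR OR NOT PI_LIBRARY)
  message(FATAL_ERROR "PI not found; pass -DPI_ROOT=<prefix> to CMake")
endif()
# myprog is assumed to be defined earlier with add_executable()
target_include_directories(myprog PRIVATE "${PI_INCLUDE_DIR}")
target_link_libraries(myprog PRIVATE "${PI_LIBRARY}")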
In the general case, however, this is simply something that people have to deal with themselves when they build software. Make life easier for them by providing good documentation of what your project's dependencies are, and of how to inform the build system of their locations, and make useful provisions for feeding that information into the build system.
*An exception could be asserted for use of libtool archives, which an Autotools project might install alongside regular libraries -- if one wanted to use those, they would directly or indirectly go through libtool. But in practice, that's only going to happen in another Autotools project.
I developed a C program requiring some dynamic libraries, most notably libmysqlclient.so, which I intend to run on some remote hosts. It seems like I have the following options for distribution:
1) Compile the program statically.
2) Install the required dependencies on the remote host.
3) Distribute the dependencies with the program.
The first option is problematic, as I need a matching glibc version at runtime anyway (since I use glibc and libnss for now).
I'm not sure about the second option: is there a mechanism which checks whether an installed library version is sufficient for a program to run (besides the libxyz.so.VERSION naming)? Can I somehow check ABI compatibility at startup?
Regarding the last option: would I distribute ALL shared libraries with the binary, or just the ones which are presumably not installed (e.g. libmysqlclient, but not libm)?
Apart from this, am I likely to encounter ABI compatibility problems if I use a different compiler for the binary than the one the dependencies were built with (e.g. binary with clang, libraries with gcc)?
Version checking is distribution-specific. Usually, you would package your application in a .deb or .rpm file using the target distribution's packaging tools, and ship that to users. This means that you have to build your application once for each supported distribution, but there really is no way around that anyway because different distributions have slightly different versions of libmysqlclient. These distribution build tools generate some dependency version information automatically, and in other cases, some manual help is needed.
As a starting point, it's a good idea to look at the distribution packaging for something that relies on the MySQL/MariaDB client library and copy that. Maybe inspircd in Debian is a good example.
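On Debian, for instance, the automatic part is the ${shlibs:Depends} substitution: a hypothetical excerpt from debian/control would contain

Package: myapp
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: illustrative packaging stanza for a MySQL client program

and during the package build dpkg-shlibdeps (normally run via dh_shlibdeps) inspects the binary and fills in a versioned dependency on the distribution's libmysqlclient package.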
You can reduce the amount of builds you need to create and test somewhat by building on the oldest distribution versions you want to support. But some caveats apply; distributions vary in the degree of backwards compatibility they provide.
Distributing dependencies with the program is very problematic because popular libraries such as libmysqlclient are also provided by the base operating system, and if you use LD_LIBRARY_PATH to inject your own version, this could unintentionally extend to other programs as well (e.g., those you launch from your own program). The latter risk is still present even if you use DT_RUNPATH (via the -rpath linker option), although it is somewhat reduced.
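If you do bundle libraries, embedding a runpath at link time is usually safer than exporting LD_LIBRARY_PATH; a rough sketch (paths are illustrative):

# $ORIGIN expands at run time to the directory containing the executable
gcc -o myapp main.o -L./bundled-libs -lmysqlclient \
    -Wl,-rpath,'$ORIGIN/../lib' -Wl,--enable-new-dtags
# --enable-new-dtags emits DT_RUNPATH rather than the older DT_RPATH; unlike
# LD_LIBRARY_PATH it is not inherited by programs you launch, and users can
# still override it with LD_LIBRARY_PATH if they need to.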
A different option is to link just application-specific support libraries statically, and link base operating system libraries dynamically. (This is what some software collections do.) This does not seem to be such a great choice for libmysqlclient, though, because there might be an expectation that its feature set is identical to the distribution (regarding the TLS library and available configuration options), and with static linking, this is difficult to achieve.
I took over a fairly large C code base. There are lots of legacy binaries that require old versions of shared libraries. The server has newer versions of those exact libraries. I could recompile, or set up symbolic links that map the older versions to the new ones. Setting up symbolic links will take some time - is there any standard or smart way to do this? I am new to this and would appreciate any tips. This is all C and a FreeBSD environment.
Thanks.
In general when updating legacy code with new libraries, it is best to perform a check by recompiling the source code against the new libraries and their includes. This will allow you to use the compiler to check for inconsistencies between the old and new libraries in areas such as data types, function signatures, etc.
By recompiling you also are able to check that the new libraries provide all of the dependencies that you need.
Finally, doing a recompile will help you check that you are in fact able to recompile and link everything and have all of the necessary components.
I would feel uncomfortable trying to take a shortcut such as using symbolic links.
The shared-library version number is only supposed to be changed when the ABI changes. (Old versions of FreeBSD didn't quite get this right, and it's fixed in more recent versions but only for system libraries!) So the only way to make those applications work properly is to either recompile them, or supply the exact version of the shared library that they were linked against. For programs that only depend on old versions of the FreeBSD system libraries, you can install the compat[45678]x packages, which provide the versions of the libraries supplied with the specified version of the OS -- but there are significant pitfalls:
1) If some of the libraries your application depends on are linked against newer versions of the standard libraries than your application itself is, the dynamic linker will give you two incompatible copies of the standard library, and things are not likely to work.
2) If your application loads external modules or plug-ins using dlopen(), all bets are off, because these modules are not versioned.
FreeBSD 8 and newer use symbol versioning for the C library and some other important system libraries, so those libraries should never change library version again and ABI compatibility will be preserved. Many third-party developers are not so careful, and will both break ABI without changing the library version, and change the library version without breaking the ABI, so you can't win. (Some developers don't read the documentation and think that the shared-library version number should be the same as the product's version number.)
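To see which old versions a particular legacy binary is asking for, and to pull in the OS compatibility libraries, something along these lines (package names vary by release and architecture):

ldd ./legacy-binary          # any "not found" lines are the missing old library versions
pkg install compat6x-amd64   # or pkg_add -r compat6x on older FreeBSD releases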
The term has several definitions according to Wikipedia, however what I'm really interested in is creating a program that has all its needed dependencies included within the source folder, so the end user doesn't need to install additional libraries to run the app. For example, how Mac apps have all their dependencies within the application bundle itself already...
Or is there a feature of the Autotools that does this? I'm programming in the Linux environment...
Are you talking about the source code of your application, or about your application binary?
The answer I'd give for both the cases depends on what libraries you're using.
If you're using libraries that you can find anywhere, that are reasonably standard and/or that are quite big, you shouldn't bundle them with your application; just require them both to build and to run your application.
Anyway, don't be too concerned about your source code: few people will build your application, and they probably know something about programming and how a Linux system works; it won't be a big deal to require many (even not-so-common) dependencies to build your application.
As for the binary version, it could be a little more problematic, since it will be used by end users who often don't know anything about libraries and programming: you could choose to statically link the smallest and most uncommon libraries into your binary, in order to have fewer dependencies.
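With the GNU linker you can mix the two approaches in a single link line; a sketch with illustrative library names:

# Statically link the small, uncommon library (its libsmallrarelib.a must be
# available at build time); keep the common system libraries dynamic.
gcc -o myapp main.o -Wl,-Bstatic -lsmallrarelib -Wl,-Bdynamic -lm -lpthread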
You could do it, if you link statically, but it'd be somewhat unusual, and depending on what your program is supposed to do, you might be limiting yourself.
The alternative, if this is not just a one-off project, is to create a Linux Standard Base compatible RPM package and restrict yourself to linking against the libraries and symbols that LSB defines.
Run ldd on your program to discover all dependencies, then copy these to your directory, and add a program-wrapper script that issues
#!/bin/sh
# Prepend the directory containing this script to the library search path,
# then run the real program that sits next to it
LD_LIBRARY_PATH="${0%/*}:$LD_LIBRARY_PATH" exec "${0%/*}/real-program" "$@"
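For the discover-and-copy step, roughly (library names and paths are illustrative):

ldd ./real-program
#   libfoo.so.2 => /usr/lib/x86_64-linux-gnu/libfoo.so.2 (0x00007f...)   <- example output
cp /usr/lib/x86_64-linux-gnu/libfoo.so.2 ./myapp-bundle/   # place it next to the wrapper script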
Duplicating the Mac OS X .app behavior on a plain POSIX system is difficult because it is very hard to guarantee that a process can find its own executable (there are several ways that will almost always work...). Mac OS X provides an OS service for this, but Linux (for instance) does not.
Once you've accomplished that feat, this becomes possible. Though, as others have mentioned, it loses the ability to share resource demands (disk space, RAM space, cache space) with other programs that use the same libraries because you'd be using static copies, or dynamically loading your own copy from the .app-like bundle.