I am currently looking for ways to expose the location of a shared library on Linux such that it can be picked up easily by any program installed separately. I want to make this location configurable so it can point to different possible installations of the same library. Examples of similar cases I can think of would be Qt5 and Java.
To make a long story short, I am developing FreeRDS, a FreeRDP-based Remote Desktop Services stack. Server-side RDS-aware applications link to libwinpr-wtsapi, a stub library that exposes the Microsoft Windows Terminal Services API interface, but does not implement it. This enables applications to link to libwinpr-wtsapi without having to link directly to a specific RDS implementation. On the first call to any of the WTSAPI functions, the real implementation is loaded dynamically by libwinpr-wtsapi. However, the location of the dynamic library implementing the WTSAPI (here, FreeRDS) needs to be known.
Right now, I am achieving this by setting an environment variable with the full path to the library:
export WTSAPI_LIBRARY=/opt/freerds/lib/x86_64-linux-gnu/libfreerds-fdsapi.so
However, this is not very practical, as this environment variable would need to be set for every program using the WTSAPI. In this case, I have my installation of FreeRDS in /opt/freerds.
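Roughly speaking, the stub's first-call resolution amounts to something like this (a simplified sketch; everything except the WTSAPI_LIBRARY variable name is illustrative):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* handle to the real WTSAPI implementation, loaded on first use */
static void *g_wtsapi = NULL;

static void *wtsapi_get_symbol(const char *name)
{
    if (!g_wtsapi) {
        const char *path = getenv("WTSAPI_LIBRARY");
        if (!path)
            return NULL; /* no implementation configured */
        g_wtsapi = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!g_wtsapi) {
            fprintf(stderr, "WTSAPI: %s\n", dlerror());
            return NULL;
        }
    }
    return dlsym(g_wtsapi, name);
}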
I am thinking I could probably simplify this by using a single environment variable to expose the installation prefix of FreeRDS on the system, with something similar to JAVA_HOME:
export FREERDS_HOME=/opt/freerds
However, I then need to know the proper library subdirectory. It is also worth knowing that, in the future, both a 32-bit and a 64-bit version of the library offering the FreeRDS WTSAPI could be offered. This library basically performs RPC with the FreeRDS session manager, so that would definitely be possible.
Let's say we have FREERDS_HOME properly set, or that FreeRDS is installed in the system's default installation prefix: which files would be "standard" for offering some additional installation configuration information? Here I'm thinking I could have an equivalent of Qt5's qt.conf that would specify installation subdirectories, like the 64-bit installation subdir, the 32-bit installation subdir, etc. However, I don't know where I should be putting that file. Should it be in <prefix>/etc/freerds/freerds.conf?
Ideas, anyone? Thank you!
Some (many? all?) Linux distributions today include environment-modules, whose aim is exactly to make many different versions of the same software available by customizing the environment (and, optionally, shell aliases/functions) with easy front-end commands.
You can find all the needed information in the environment-modules documentation.
Thanks for the multiple answers, here is the solution I finally opted for that satisfies my needs:
As explained earlier, there could be more than one installation of FreeRDS on the same system, but only one of them running at once. We can also assume FreeRDS is supposed to be running before we can attempt to interact with it. Knowing this, I modified FreeRDS to write a simple configuration file in /var/run/freerds.instance with the install prefix and installation subdirectories. This is very similar to having a .pid file, except we're exposing installation paths.
The freerds.instance file uses the .ini format, which is fairly common for configuration files. All that libwinpr-wtsapi has to do is parse /var/run/freerds.instance to find the installation prefix of the current FreeRDS instance, along with the library subdir, so it can find the correct libfreerds-fdsapi.so.
Here is what a sample freerds.instance file looks like:
[FreeRDS]
prefix="/opt/freerds"
bindir="bin"
sbindir="sbin"
libdir="lib/x86_64-linux-gnu"
datarootdir="share"
localstatedir="var"
sysconfdir="etc"
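For illustration, the lookup that libwinpr-wtsapi has to do then boils down to something like this (a naive sketch; real code would use a proper .ini parser):

#include <stdio.h>
#include <string.h>

/* read one quoted value from /var/run/freerds.instance, e.g.
 * get_value("libdir", buf, sizeof buf) yields lib/x86_64-linux-gnu */
static int get_value(const char *key, char *out, size_t size)
{
    char line[512];
    FILE *f = fopen("/var/run/freerds.instance", "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        char *eq = strchr(line, '=');
        if (!eq || (size_t)(eq - line) != strlen(key) ||
            strncmp(line, key, strlen(key)) != 0)
            continue;
        /* value is quoted: copy up to the closing quote */
        snprintf(out, size, "%.*s", (int)strcspn(eq + 2, "\"\n"), eq + 2);
        fclose(f);
        return 0;
    }
    fclose(f);
    return -1;
}

int main(void)
{
    char prefix[256], libdir[256], path[640];
    if (get_value("prefix", prefix, sizeof prefix) == 0 &&
        get_value("libdir", libdir, sizeof libdir) == 0) {
        snprintf(path, sizeof path, "%s/%s/libfreerds-fdsapi.so",
                 prefix, libdir);
        printf("%s\n", path); /* then dlopen(path, ...) */
    }
    return 0;
}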
I prefer this solution because it requires literally no special configuration, setting of environment variables, etc. No matter what, we always find the proper FreeRDS installation wherever it is on the system.
You can add a $ORIGIN rpath to your executable, which makes it load libraries relative to the directory the executable is in (see "ld: Using -rpath,$ORIGIN inside a shared library (recursive)"). This probably applies to dlopen() too.
$ gcc ... -Wl,-rpath,'$ORIGIN/../lib/dir' -lsomething
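You can verify the dlopen() part with a tiny test program: when dlopen() is given a bare file name, the loader also searches the calling object's rpath, so the following (library name made up) should pick up ../lib/dir/libsomething.so relative to wherever the binary sits:

/* test.c - build: gcc test.c -ldl -Wl,-rpath,'$ORIGIN/../lib/dir' */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* a bare name, so the search honours the executable's rpath */
    void *handle = dlopen("libsomething.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}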
I've also found you can run the dynamic linker directly to get some debug facility:
$ /lib/ld-linux.so.2
Usage: ld.so [OPTION]... EXECUTABLE-FILE [ARGS-FOR-PROGRAM...]
...
--list list all dependencies and how they are resolved
You can also make the loader search additional directories at run time; note that LD_LIBRARY_PATH takes the directory containing the library, not the path of the .so itself:
export LD_LIBRARY_PATH=/dir/containing/your/libs
Coming from programming environments that support package managers, I experience a lot of discomfort installing and using libraries not included by default.
For example, #include <threads.h> triggers an error: threads.h file not found. I found that the compiler looks for header files in /Library/Developer/CommandLineTools/usr/include/c++/v1 by issuing gcc -print-prog-name=cpp -v. I am not sure if this is a complete folder list? How do I find the ones that it doesn't find by default? I am on OSX, but a Windows solution is also desired.
The question doesn't really say whether you are building your own project, or someone else's, and whether you use an IDE or some build system. I'll try to give a generic answer suitable for most scenarios.
But first, it's header files, not libraries (which are a different kind of pain, by the way). You need to explicitly make them available to the compiler, unless they reside on a standard search path. Alas, it's a lot of manual work sometimes, especially when you need to build a third-party project with a ton of dependencies.
I am not sure if this is a complete folder list?
Figuring out the standard include paths of your compiler can be tricky. Here's one question that has some hints: What are the GCC default include directories?
How do I find the ones that it doesn't find by default?
They may or may not be present on your machine. If they are, you'll have to find out where they are located. Otherwise you have to figure out what library they belong to, then download and unpack (and probably build) it. Either way, you will have to specify the path to that library's header files in your IDE (or Makefile, or whatever you use). Oh, and you need to make sure that the library version matches the version required by the project. Fun!
On macOS you can use third-party package managers (e.g. brew) to handle library installation for you.
pkg-config is not available on macOS, unless you install it from a third-party source.
If you are building your own project, a somewhat better solution is to use CMake and its find_package command. However, only libraries supported by CMake can be discovered this way. Fortunately, their collection of supported libraries is quite extensive, and you can make your own find_package scripts. Moreover, CMake is cross-platform, and it can handle versioning for you.
I have a simple Common Lisp server program that uses the osicat library to interface with the POSIX filesystem. I need to do this because the system creates symbolic links to files and uses the POSIX stat metadata, and neither of those things is straightforward to do in portable Lisp.
I am managing the dependencies with quicklisp, and I have all of this pinned to a working distribution. The app is portable between CCL and SBCL, and I tend to build it in the former and deploy it using the latter. I declare the dependencies for the app with an asdf defsystem, and I can use quicklisp to load it for easy development from local projects.
For deployment I was just using some ansible playbooks that replicated a developer environment on a remote (i.e. setting up quicklisp, pushing code into local projects, running out of a user home directory), which was hacky, but mostly OK. More recently, as it's become more stable, I have been compiling it using sb-ext:save-lisp-and-die with a simple compile script. This means I get an executable that I can run more like a server, with service management scripts and an anonymous user account.
This has been working very well, and so I recently moved this step to the next level, and I'm building .deb packages with my compile script, so I can bundle up everything into a relocatable binary. This also sort of works, but the resulting binaries are not relocatable away from the original build host. They refuse to start up, and it appears that they try to dynamically load a shared library for osicat:
Unhandled SIMPLE-ERROR in thread #<SB-THREAD:THREAD "main thread" RUNNING
Mar 15 12:47:14 annie [479]: {10005C05B3}>:
Mar 15 12:47:14 annie [479]: Error opening shared object "libosicat.so":
Mar 15 12:47:14 annie [479]: libosicat.so: cannot open shared object file: No such file or directory.
It looks like the image expects to find this in the original build tree's quicklisp archives:
(ERROR "Error opening ~:[runtime~;shared object ~:*~S~]:~% ~A." "/home/builder/buil...quicklisp/dists/quicklisp/software/osicat-20180228-git/posix/libosicat.so
(SB-SYS:DLOPEN-OR-LOSE #S(SB-ALIEN::SHARED-OBJECT :PATHNAME #P"
So, poking around the source, I realise that when quicklisp fetches osicat and exercises its build operation, it compiles this shared object to wrap its interface with the system libraries, rather than just FFI-ing to them directly - possibly because it's using the CFFI groveller; I don't really know much about CFFI (yet). This is fine, but rather than linking to the .so using the system linker, it's trying to dlopen it from a fixed path, which is not very portable and kind of breaks the usefulness of saving an image.
I'm a bit stumped at this point, but before I go diving much further into QL and CFFI builds, I wondered if there's some build or compile configuration I'm missing that would make it bootstrap in a more 'static' fashion, or influence where the wrapped library is produced. Ideally I just want a single blob I can wrap in an installer, linked against system libraries, but if I have to deploy some additional artefacts that's probably alright. I don't know how to make the autogenerated shared objects appear at a more controlled path.
At that point, though, I may as well write a .so for my POSIX calls, distribute it alongside the app, and try to FFI to it more directly. That would be a bit of a pain, so I would prefer not to do this.
You're right: when a dumped image is starting up, it tries to reload the shared libraries. Which, as you are experiencing, does not work if the image is not started on the machine it was dumped on.
This is almost what static-program-op set out to solve. A simple system definition like this should help you compile a static program:
(defsystem "foo"
  :defsystem-depends-on ("cffi-grovel")
  :build-operation "static-program-op"  ; "asdf" package is implied
  :build-pathname "foo"                 ; path of the generated binary
  :entry-point "foo:main"               ; function to use as the entry point
  ;; ... everything else ...
  )
If your system depends on grovel files (defined by :cffi-wrapper-file, :c-file or :o-file), such as the ones provided by osicat, then it will statically link those to your dumped image.
However, it's not perfect.
Essentially, there are still some issues. Some are being fixed upstream by CFFI itself (e.g. not reloading the shared libraries that were statically embedded), while others are a bit harder: for example, SBCL's default compilation options don't let you use static-program-op. This is being fixed in Debian builds of SBCL, but other distributions have been less responsive.
This is obviously a problem that the community at large has met, and there are several libraries that are around to help:
The first one, which has been around for a while, is Deploy. The approach it takes is to embed the dumped image and the libraries into an archive, and rearrange things so the binary loads them from wherever it is extracted to.
The second one, which I am biased towards because I made it, is linux-packaging. It takes the approach of fixing static-program-op by extending it, but requires you to build a custom SBCL. However, it generates distribution packages like .deb and .rpm, in order to be able to specify dependencies for system shared libraries (e.g. if you depend on sqlite, it will figure out which package provides it and add it as a dependency in the .deb). I highly recommend looking at the .gitlab-ci.yml for examples.
I recommend reading the webpages of both of those libraries to make your choice, they both have their advantages and drawbacks. <joke>Obviously, linux-packaging is superior.</joke>
Maybe you can use sb-posix:symlink and sb-posix:fstat on SBCL instead, removing the osicat dependency behind a feature toggle.
On Windows, it's more or less common to create "proxy DLLs" which take place of the original DLL and forward calls to it (after any additional actions as needed). You can read about it here and here for example.
However, the shlib-munging culture under Linux is quite different. It starts with the fact that LD_PRELOAD is a builtin feature of ld.so under Linux, which simply injects a separate shlib into the process and uses any symbols it defines as overrides. That "injection" technique seems to define the whole direction of thought - here's a typical ELF hacking tool, or this question, where a gentleman seems to have the same use case as me but starts by asking how he can patch existing binaries.
No, thanks. I don't want to inject into or modify something which is not mine. All I want is to make a standalone proxy shlib which calls out to the original. Ideally, there would be a tool which could be fed the original .so and generate C source code that simply redirects to the original's functions, while letting me easily override anything I want. So, where's such a tool? ;-) Thanks.
Using LD_PRELOAD doesn't really involve modifying something which isn't yours, and the injection isn't all that different from normal dynamic library loading. The “typical ELF hacking tool” from the ERESI project is unrelated to LD_PRELOAD. You should not be afraid of it. A good introduction to writing LD_PRELOAD-able “proxies” is here.
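To give a feel for it, here is a minimal sketch of such a proxy; it overrides puts() and forwards to the real implementation via dlsym(RTLD_NEXT, ...) (all names illustrative):

/* wrapper.c - build: gcc -shared -fPIC -o libwrapper.so wrapper.c -ldl
 * usage:             LD_PRELOAD=./libwrapper.so some-program */
#define _GNU_SOURCE /* for RTLD_NEXT */
#include <dlfcn.h>
#include <stdio.h>

int puts(const char *s)
{
    /* resolve the next occurrence of puts() in the search order,
     * i.e. the one this proxy is shadowing */
    int (*real_puts)(const char *) =
        (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
    if (!real_puts)
        return EOF;
    fprintf(stderr, "[proxy] puts(%s)\n", s);
    return real_puts(s);
}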
That being said, if you want to create a system-wide proxy for some library, you might argue that globally setting LD_PRELOAD (and thus loading your proxy into every binary that ever runs on your system) is undesirable. It is commonly used to override functions from glibc by tools such as libeatmydata or socksify, but if you're overriding a function in a library that is bigger and/or less widespread than glibc, it makes sense to try to find another approach, to really create a proxy for just that one library.
One such approach is to use patchelf --replace-needed or --add-needed to hardcode the full pathname of the original library and then make sure the proxy library is found first by setting LD_LIBRARY_PATH¹. So, the complete procedure is:
create an LD_PRELOAD-able library that overrides some functions of the original one (test that it works using only LD_PRELOAD before proceeding further!)
compile and link this library with the original library so that ldd libwrapper-foo.so includes something like:
libfoo.so.0 => /usr/lib/x86_64-linux-gnu/libfoo.so.0 (0x0000deadbeef0000)
hardcode the full path using patchelf:
patchelf --replace-needed libfoo.so.0 /usr/lib/x86_64-linux-gnu/libfoo.so.0 libwrapper-foo.so
symlink libwrapper-foo.so to libfoo.so.0
now LD_LIBRARY_PATH=. ldd $(which program-that-uses-libfoo) should include these lines:
libfoo.so.0 => ./libfoo.so.0 (0x0000dead56780000)
/usr/lib/x86_64-linux-gnu/libfoo.so.0 (0x0000dead1234000000)
set LD_LIBRARY_PATH to the directory containing the wrapper library in your .bashrc or somewhere
A real-life example of such a proxy library is my wrapper for libpango that enables subpixel positioning for all applications.
¹) It might also be possible to put this proxy library into /usr/local/lib, but ldconfig (the tool that updates shared libraries cache) refuses to use libraries with hardcoded absolute paths.
apitrace is a tool which covers detailed tracing of graphics library (OpenGL, DirectX) calls on a number of platforms. It's probably too detailed and complex to be a generic solution, but it at least provides a point of reference.
The term has several definitions according to Wikipedia, but what I'm really interested in is creating a program that has all its needed dependencies included within the source folder, so the end user doesn't need to install additional libraries for the app to work. For example, the way Mac apps have all their dependencies within the application bundle itself...
Or is there a feature of autotools that does this? I'm programming in the Linux environment...
Are you talking about the source code of your application, or about your application binary?
The answer I'd give in both cases depends on which libraries you're using.
If you're using libraries that can be found anywhere, that are reasonably standard and/or quite big, you shouldn't bundle them with your application; just require them, both to build and to run your application.
Anyway, don't be too concerned about your source code: few people will build your application, and they probably know something about programming and how a Linux system works; it won't be a big deal to require many (even not-so-common) dependencies to build your application.
As for the binary version, things could be a little more problematic, since it will be used by end users who often don't know anything about libraries and programming: you could choose to statically link the smallest and most uncommon libraries into your binary, in order to have fewer dependencies.
You could do it, if you link statically, but it'd be somewhat unusual, and depending on what your program is supposed to do, you might be limiting yourself.
The alternative, if this is not just a one-off project, is to create a Linux Standard Base compatible RPM package and restrict yourself to linking against the libraries and symbols that LSB defines.
Run ldd on your program to discover all dependencies, then copy these to your directory, and add a program-wrapper script that issues
#!/bin/sh
LD_LIBRARY_PATH="${0##*/}:$LD_LIBRARY_PATH" exec "${0##*/}/real-program" "$#";
Duplicating the Mac OS X .app behavior on a plain POSIX system is difficult because it is very hard to guarantee that a process can find its own executable (there are several ways that will almost always work...). Mac OS X provides an OS service for this, but Linux (for instance) does not.
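For reference, one of those "almost always works" approaches on Linux is to resolve /proc/self/exe (a sketch; it fails where /proc isn't mounted, e.g. in some chroots):

#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char path[PATH_MAX];
    /* /proc/self/exe is a symlink to the running executable */
    ssize_t n = readlink("/proc/self/exe", path, sizeof path - 1);
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    path[n] = '\0';
    printf("running from: %s\n", path);
    return 0;
}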
Once you've accomplished that feat, this becomes possible. Though, as others have mentioned, it loses the ability to share resource demands (disk space, RAM space, cache space) with other programs that use the same libraries because you'd be using static copies, or dynamically loading your own copy from the .app-like bundle.
I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on and I want to include stuff from ICU (for internationalization and unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what if I wanted to include a version of the library in my own code? Is this common, or am I thinking about this all wrong?
If your code requires 3rd-party libraries, you need to check for them before you build. On Linux, at least for open-source software, the canonical way to do this is to use Autotools to write a configure script that checks both for the presence of libraries and for how to use them. Thankfully this is pretty automated and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am), which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify.
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
If it is a .lib with no runtime-linked libraries, it gets compiled into your code. If you need to link to dynamic libraries, you will have to ensure they are there: provide an installer, or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies, you have to provide correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc., but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
It depends on the OS you're targeting. For Linux and Unix system, you will typically see dynamic linking, so the application will use the library that is already installed on the system. If you do this, that means it's up to the user to obtain the library if they don't already have it. Package managers in Linux will do this for you if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and will use that specific version. Many different applications may use the same library but include their own versions, so you can have many copies of the library floating around on your system.
The problem with shipping a copy of the library with your code is that you don't get the benefit of the library's maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically. Otherwise I'd just add the dependency and ensure that whatever packages you ship indicate it appropriately.