I am developing a simple tool that will be used across a variety of platforms (mostly Solaris, Linux and HP-UX). The tool relies on the module Proc::ProcessTable; however, I would like to avoid having to build/install the module across all the systems it will be used on.
Rather, I would like to 'embed' the Proc::ProcessTable code inside my tool. The result I am seeking is to have a single file that will work on all systems, without having to install the module separately.
Is this possible at all? Embedding a Perl-only module would be trivial, but this module compiles some OS-specific C code. Assuming I could compile that code on each of the OSes I need, how would I go about including that pre-compiled C code inside my Perl script in order to make the embedded module work?
I would like to avoid having to build/install the module across all the systems it will be used on
Set up a local build system/farm, and produce packages (e.g. RPM) for the target operating systems. One prerequisite is that you turn your tool into a CPAN-ready distribution, and mark Proc::ProcessTable as a run-time dependency.
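If you take that route, here is a hedged sketch of what the packaging step can look like (the helper tools, the tarball name, and the generated spec file name are illustrative, not part of this answer): declare Proc::ProcessTable as a prerequisite in your Makefile.PL (PREREQ_PM) or cpanfile, then let distribution tooling turn the CPAN tarballs into native packages.
# Debian/Ubuntu: build a .deb for the dependency straight from CPAN
dh-make-perl --build --cpan Proc::ProcessTable
# RPM-based systems: generate a spec from the CPAN tarball, then build it
cpanspec Proc-ProcessTable-X.YY.tar.gz
rpmbuild -ba perl-Proc-ProcessTable.spec
Solaris and HP-UX have their own native formats (SVR4 packages via pkgadd, depots via swinstall), so the build farm would still need one build per platform, but the tool itself stays a normal CPAN-style distribution throughout.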
I have a simple common lisp server program, that uses the osicat library to interface with the posix filesystem. I need to do this because the system creates symbolic links to files, and uses the POSIX stat metadata, and neither of those things are straightforward to do in portable lisp.
I am managing the dependencies with quicklisp, and I have all of this pinned to a working distribution. The app is portable between CCL and SBCL, and I tend to build it in the first and deploy it using the latter. I declare the dependencies for the app with an asdf defsystem, and I can use quicklisp to load it for easy development from local projects.
For deployment I was just using some ansible playbooks that replicated a developer environment on a remote (i.e. setting up quicklisp, pushing code into local projects, running out of a user home directory), which was hacky, but mostly OK. More recently, as it's becoming more stable, I have been compiling it using sb-ext:save-lisp-and-die, using a simple compile script. This means I get an executable that I can run more like a server, with service management scripts and an anonymous user account.
This has been working very well, and so I recently moved this step to the next level, and I'm building .deb packages with my compile script, so I can bundle up everything into a relocatable binary. This also sort of works, but the resultant binaries are not relocatable away from the original build host. They refuse to start up, and it appears that they try to dynamically load a shared library for osicat:
Unhandled SIMPLE-ERROR in thread #<SB-THREAD:THREAD "main thread" RUNNING
Mar 15 12:47:14 annie [479]: {10005C05B3}>:
Mar 15 12:47:14 annie [479]: Error opening shared object "libosicat.so":
Mar 15 12:47:14 annie [479]: libosicat.so: cannot open shared object file: No such file or directory.
It looks like the image expects to find this in the original build tree's quicklisp archives:
(ERROR "Error opening ~:[runtime~;shared object ~:*~S~]:~% ~A." "/home/builder/buil...quicklisp/dists/quicklisp/software/osicat-20180228-git/posix/libosicat.so
(SB-SYS:DLOPEN-OR-LOSE #S(SB-ALIEN::SHARED-OBJECT :PATHNAME #P"
So poking around the source, I realise that when quicklisp fetches osicat and exercises its build operation, it compiles this shared library to wrap its interface with the system libraries, rather than just FFI-ing to them directly - possibly because it's using the CFFI groveller; I don't really know much about CFFI (yet). This is fine, but rather than linking to a .so using the system linker, it's trying to dlopen it from a fixed path, which is not very portable, and kind of breaks the usefulness of the saved image.
I'm a bit stumped at this point, but before I go diving much further into QL and CFFI builds, I wondered if there's some build or compile configuration I'm missing that would make it bootstrap in a more 'static' fashion, or influence the production of the wrapped library. Ideally I just want a single blob I can wrap in an installer and link against system libraries, but if I have to deploy some additional artefacts that's probably alright. I don't know how to make the autogenerated shared objects end up at a more controlled path.
At that point, though, I may as well write a .so for my POSIX calls, distribute it alongside the app, and FFI to it more directly. That would be a bit of a pain, so I would prefer not to do this.
You're right: when a dumped image starts up, it tries to reload the shared libraries, which, as you are experiencing, does not work if the image is not started on the machine it was dumped on.
This is almost what static-program-op set out to solve. A simple system definition like this should help you compile a static program:
(defsystem "foo"
  :defsystem-depends-on ("cffi-grovel")
  :build-operation "static-program-op" ; "asdf" package is implied
  :build-pathname "foo"                ; path of the generated binary
  :entry-point "foo:main"              ; function to use as the entry point
  ;; ... everything else ...
  )
If your system depends on grovel files (defined by :cffi-wrapper-file, :c-file or :o-file), such as the ones provided by osicat, then it will statically link those to your dumped image.
However, it's not perfect.
Essentially, there are still some issues. Some are being fixed upstream in CFFI itself (e.g. not reloading the shared libraries that have been statically embedded), others are a bit harder (e.g. SBCL's default build options don't let you use static-program-op out of the box; this is being fixed in the Debian builds of SBCL, but other distributions have been less responsive).
This is obviously a problem that the community at large has met, and there are several libraries that are around to help:
The first one, which has been around for a while, is Deploy. The approach it takes is to embed the dumped image and the libraries into an archive, and rearrange things so that the binary loads them from wherever it is extracted to.
The second one, which I am biased towards because I made it, is linux-packaging. It takes the approach of fixing static-program-op by extending it, but requires you to build a custom SBCL. However, it generates distribution packages like .deb and .rpm, in order to be able to specify dependencies for system shared libraries (e.g. if you depend on sqlite, it will figure out which package provides it and add it as a dependency in the .deb). I highly recommend looking at the .gitlab-ci.yml for examples.
I recommend reading the webpages of both of those libraries to make your choice; they both have their advantages and drawbacks. <joke>Obviously, linux-packaging is superior.</joke>
Maybe you can use sb-posix:symlink and sb-posix:fstat on SBCL instead, removing the osicat dependency by feature toggle.
I've a project completely coded in C with dependencies on gnuplot, gtk, GNU Scientific Library, etc. It works fine on my machine.
However, how can I package it as a standalone executable which would be platform and OS independent?
Even if it works for any Linux platform, it's fine.
However, how can I package it as a standalone executable which would be platform and OS independent?
You can't. At the very least, the standard C library has to use OS-specific APIs (syscalls, library functions) in many places.
Even if it works for any Linux platform, it's fine.
Linux is kind of a moving target; still, this is often possible by linking all the libraries statically -- see your compiler's documentation for how to do that.
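For instance, here is a hedged sketch of a fully static build with GCC (file and library names are placeholders; it only works if static archives of every dependency are installed, and glibc's NSS makes completely static binaries awkward, which is why musl-based toolchains are often used for this):
# Statically link the program and its libraries (GSL shown as an example dependency)
gcc -static -o mytool main.c -lgsl -lgslcblas -lm
file mytool   # should report "statically linked" if it worked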
You can't have a C program run on different OSes without recompiling.
That being said, if you want to compile your program for a single platform like Linux, static compilation is usually a solution. However, it is not a use case supported by GTK+: you might be able to build the other dependencies statically, but GTK+ and its own dependencies will need to be built dynamically and shipped separately.
Usually you'd then be advised to package your application for your GNU/Linux distro of choice using RPM/DEB packages, but that would work for only that one distro.
So the best choice I see is to use flatpak to bundle your dependencies, and have flatpak installed where you want to install your app.
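A hedged sketch of that workflow (the manifest file name and application ID are placeholders): describe the app and its dependencies in a Flatpak manifest, then build and install the bundle with flatpak-builder.
# Build the app and its bundled dependencies from the manifest, install for the current user
flatpak-builder --user --install build-dir org.example.MyApp.yml
# Run it from the Flatpak runtime
flatpak run org.example.MyApp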
I developed a C program requiring some dynamic libraries, most notably libmysqlclient.so, which I intend to run on some remote hosts. It seems like I have the following options for distribution:
1. Compile the program statically.
2. Install the required dependencies on the remote host.
3. Distribute the dependencies with the program.
The first option is problematic, as I need a matching glibc version at runtime anyway (since I use glibc and libnss for now).
I'm not sure about the second option: is there a mechanism which checks whether an installed library version is sufficient for a program to run (besides the libxyz.so.VERSION naming)? Can I somehow check ABI compatibility at startup?
Regarding the last option: would I distribute ALL shared libraries with the binary, or just the ones which are presumably not installed (e.g. libmysqlclient, but not libm)?
Apart from this, am I likely to encounter ABI compatibility problems if I use a different compiler for the binary than the one the dependencies were built with (e.g. binary with clang, libraries with gcc)?
Version checking is distribution-specific. Usually, you would package your application in a .deb or .rpm file using the target distribution's packaging tools, and ship that to users. This means that you have to build your application once for each supported distribution, but there really is no way around that anyway because different distributions have slightly different versions of libmysqlclient. These distribution build tools generate some dependency version information automatically, and in other cases, some manual help is needed.
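As an illustration of that automatic dependency generation on Debian (the package and binary paths are placeholders, and the output shown is made up for the example): dpkg-shlibdeps inspects the shared libraries your binary actually links against and emits a versioned Depends substitution that debhelper fills into debian/control.
dpkg-shlibdeps -O debian/mytool/usr/bin/mytool
# shlibs:Depends=libc6 (>= 2.17), libmysqlclient21 (>= 8.0), ...   (illustrative output)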
As a starting point, it's a good idea to look at the distribution packaging for something that relies on the MySQL/MariaDB client library and copy that. Maybe inspircd in Debian is a good example.
You can reduce the amount of builds you need to create and test somewhat by building on the oldest distribution versions you want to support. But some caveats apply; distributions vary in the degree of backwards compatibility they provide.
Distributing dependencies with the program is very problematic because popular libraries such as libmysqlclient are also provided by the base operating system, and if you use LD_LIBRARY_PATH to inject your own version, this could unintentionally extend to other programs as well (e.g., those you launch from your own program). The latter risk is still present even if you use DT_RUNPATH (via the -rpath linker option), although it is somewhat reduced.
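If you do ship private copies, a relative DT_RUNPATH at least keeps the override out of the environment and confined to your binary; a hedged sketch (file and directory names are placeholders):
# Install bundled .so files into a lib/ directory next to the binary, then link with:
gcc -o mytool main.o -Lvendor/lib -lmysqlclient -Wl,-rpath,'$ORIGIN/lib' -Wl,--enable-new-dtags
$ORIGIN expands at load time to the directory containing the binary, and --enable-new-dtags makes the linker emit the newer DT_RUNPATH tag rather than the legacy DT_RPATH.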
A different option is to link just application-specific support libraries statically, and link base operating system libraries dynamically. (This is what some software collections do.) This does not seem to be such a great choice for libmysqlclient, though, because there might be an expectation that its feature set is identical to the distribution (regarding the TLS library and available configuration options), and with static linking, this is difficult to achieve.
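A hedged sketch of that mixed linking with GNU ld (library names are placeholders): toggle -Bstatic/-Bdynamic so only your application-specific helper library is pulled in as a .a archive while base system libraries stay dynamic.
# libmyhelpers.a is linked statically, libmysqlclient stays a dynamic dependency
gcc -o mytool main.o -Wl,-Bstatic -lmyhelpers -Wl,-Bdynamic -lmysqlclient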
The term has several definitions according to Wikipedia, but what I'm really interested in is creating a program that has all its needed dependencies included within the source folder, so the end user doesn't need to install additional libraries to run the app. For example, the way Mac apps already have all their dependencies inside the application bundle itself...
Or is there a feature of autotools that does this? I'm programming in the Linux environment...
Are you talking about the source code of your application, or about your application binary?
The answer I'd give in both cases depends on which libraries you're using.
If you're using libraries that you can find anywhere, that are more or less standard and/or that are quite big, you shouldn't bundle them with your application; just require them both to build and to run your application.
Anyway, don't be too concerned about your source code: few people will build your application, and they probably know something about programming and how a Linux system works; it won't be a big deal to require many (even not-so-common) dependencies to build your application.
As for the binary version, it could be a little more problematic, since it will be used by end users who often don't know anything about libraries and programming: you could choose to statically link the smallest and most uncommon libraries into your binary, in order to have fewer dependencies.
You could do it, if you link statically, but it'd be somewhat unusual, and depending on what your program is supposed to do, you might be limiting yourself.
The alternative, if this is not just a one-off project, is to create a Linux Standard Base compatible RPM package and restrict yourself to linking against the libraries and symbols that LSB defines.
Run ldd on your program to discover all of its dependencies, then copy them into the directory that holds your program, and add a wrapper script along these lines:
#!/bin/sh
# Prefer the libraries copied next to this script, then run the real binary with the original arguments.
LD_LIBRARY_PATH="${0%/*}:$LD_LIBRARY_PATH" exec "${0%/*}/real-program" "$@"
Duplicating the Mac OS X .app behaviour on a plain POSIX system is difficult because it is very hard to guarantee that a process can find its own executable (there are several ways that will almost always work...). Mac OS X provides an OS service for this, but Linux (for instance) does not.
Once you've accomplished that feat, this becomes possible. Though, as others have mentioned, it loses the ability to share resource demands (disk space, RAM space, cache space) with other programs that use the same libraries because you'd be using static copies, or dynamically loading your own copy from the .app-like bundle.
I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on, and I want to include stuff from ICU (for internationalization and Unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what about if I wanted to include a version of the library in my own code? Is this common, or am I thinking about this all wrong?
If your code requires 3rd party libraries, you need to check for them before you build. On Linux, at least with open-source, the canonical way to do this is to use Autotools to write a configure script that looks for both the presence of libraries and how to use them. Thankfully this is pretty automated and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am) which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify.
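For orientation, the usual outer loop once configure.ac and Makefile.am exist looks roughly like this (a hedged sketch; ICU itself is typically detected inside configure.ac with the pkg-config macro PKG_CHECK_MODULES or with AC_CHECK_LIB):
autoreconf --install   # generate configure and Makefile.in from configure.ac / Makefile.am
./configure            # probe for compilers, headers, and the declared libraries
make && make install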
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
If it is a .lib and it has no runtime-linked libraries, it gets compiled into your code. If you need to link to dynamic libraries, you will have to ensure they are present: provide an installer or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies, you have to provide them correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc., but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
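A hedged sketch of both options with pkg-config (icu-uc and icu-i18n are the usual pkg-config module names for ICU, but check your platform; the static variant only works if ICU's .a archives are installed):
# Link against the system's shared ICU libraries
gcc -o mylang main.c $(pkg-config --cflags --libs icu-uc icu-i18n)
# Statically link ICU into the binary instead
gcc -static -o mylang main.c $(pkg-config --static --cflags --libs icu-uc icu-i18n)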
It depends on the OS you're targeting. For Linux and Unix system, you will typically see dynamic linking, so the application will use the library that is already installed on the system. If you do this, that means it's up to the user to obtain the library if they don't already have it. Package managers in Linux will do this for you if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and will use that specific version. Many different applications may use the same library but include their own version, so you can have many copies of the library floating around on your system.
The problem with shipping a copy of the library with your code is that you don't get the benefit of the library's maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically. Otherwise I'd just add the dependency and ensure that whatever packages you ship indicate it appropriately.