Elegant way to search for installed DEB packages in C code

My CompSci Prof gave me the task of creating a simplistic C installation program. I have to search for specified DEB packages in Ubuntu and, if a package is not found, inform the user and abort the program. I have implemented two approaches: 1) parsing popen's output for apt-get install, and 2) using grep to parse the output of dpkg -l. Both work perfectly well, but my Prof calls them ugly approaches. Is there any non-ugly way, e.g. some specific non-POSIX Ubuntu API function?

To access the Debian package management functionality there is this library: http://dpkg.alioth.debian.org/doc/index.html
However, be warned about the following from its README.api file (as of version 1.16.15 of the Debian package libdpkg-dev):
What: libdpkg.a (C static library)
Status: volatile
Description:
The API provided by this library is highly volatile, still in the process
of being cleaned up. It's only supposed to be used internally by dpkg for
now. Header files, functions, variables and types might get renamed,
removed or change semantics. If you still have a need to use it, which
you'd be doing anyway, say by locally building dpkg to get the library,
then define the C preprocessor macro LIBDPKG_VOLATILE_API in your build
to acknowledge that fact.
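If linking against such a volatile library is unappealing, a middle ground that usually passes as less ugly is to ask dpkg-query for the machine-readable package status over popen, instead of grepping the full dpkg -l listing. A minimal sketch; the package name and buffer sizes are arbitrary choices for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 if the package is installed, 0 otherwise.
 * dpkg-query -W -f='${Status}' prints e.g. "install ok installed"
 * for installed packages and exits non-zero for unknown ones. */
static int package_installed(const char *pkg)
{
    char cmd[256];
    char status[128] = "";

    snprintf(cmd, sizeof cmd,
             "dpkg-query -W -f='${Status}' %s 2>/dev/null", pkg);
    FILE *fp = popen(cmd, "r");
    if (fp == NULL)
        return 0;
    if (fgets(status, sizeof status, fp) == NULL)
        status[0] = '\0';
    pclose(fp);
    /* "not-installed" also contains "installed", so reject it first */
    if (strstr(status, "not-installed") != NULL)
        return 0;
    return strstr(status, "installed") != NULL;
}

int main(void)
{
    const char *pkg = "coreutils";   /* placeholder package name */
    if (!package_installed(pkg)) {
        fprintf(stderr, "package %s not found, aborting\n", pkg);
        return EXIT_FAILURE;
    }
    printf("package %s is installed\n", pkg);
    return EXIT_SUCCESS;
}

This is still popen under the hood, but it parses a single, stable, machine-oriented status line rather than human-oriented listing output.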

Related

Creating a standalone, relocatable build of postgres

For a small project I'm working on, I would like to create a “relocatable build” of PostgreSQL, similar to the binaries here. The idea is that you have PostgreSQL and all required libraries packaged so that you can just unpack it in any directory on any machine and it will run. I want the resulting build of Postgres to work on virtually any Linux machine it finds itself on.
I've made it as far as determining which libraries I need to build.
My understanding is that I should be getting the source code for these libraries (and their dependencies) and compiling them statically.
As things stand currently, my build script is quite barebones and obviously produces an install that is linked against whatever distribution it was run on:
./configure \
--prefix="${outputDir}" \
--with-uuid="ossp"
I'm wondering if anyone could outline what steps I must take to get the relocatable build that I'm after. My hunch right now is that I'm looking for guidance on what environment variables I would need to set and/or parameters I'd need to provide to my build in order to end up with a fully relocatable build of Postgres.
Please note: I don't normally work with C/C++, although I have several years of ./configure, make, and doing builds for other, much higher-level ecosystems under my belt. I'm well aware that distribution-specific releases of Postgres are widely available, to say nothing of the official Docker container. Please take the approach that I'm pursuing a concept in the spirit of research or exploration. I'm looking for a precise solution, not a fast one.
This answer is for Linux; this will work differently on different operating systems.
You can create a “relocatable build” of PostgreSQL if you build it with the appropriate “run path”. The documentation gives you these hints:
The method to set the shared library search path varies between platforms, but the most widely-used method is to set the environment variable LD_LIBRARY_PATH [...]
On some systems it might be preferable to set the environment variable LD_RUN_PATH before building.
The manual for ld tells you:
If -rpath is not used when linking an ELF executable, the contents of the environment variable LD_RUN_PATH will be used if it is defined.
It also tells you this about the run path:
The tokens $ORIGIN and $LIB can appear in these search directories. They will be replaced by the full path to the directory containing the program or shared object in the case of $ORIGIN, and either lib (for 32-bit binaries) or lib64 (for 64-bit binaries) in the case of $LIB.
See also this useful answer.
So the sequence of steps would be:
./configure --disable-rpath [other options]
export LD_RUN_PATH='$ORIGIN/../lib'
make
make install
Then you package the PostgreSQL binaries in the bin subdirectory and the shared libraries plus all required libraries (you can find them with ldd) in the lib subdirectory. The libraries will then be looked up relative to the binaries.
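Before committing to a full Postgres build, the $ORIGIN mechanism is easy to verify on a toy example. Everything below (file names, the pkg/ layout) is made up purely for illustration:

/* greet.c - toy shared library standing in for Postgres' libs */
#include <stdio.h>

void greet(void)
{
    puts("resolved via $ORIGIN/../lib");
}

/* main.c - toy client standing in for the postgres binary */
void greet(void);

int main(void)
{
    greet();
    return 0;
}

/* Build and lay the files out like the packaged tree (bin/ and lib/):
 *
 *   gcc -shared -fPIC -o libgreet.so greet.c
 *   gcc -o demo main.c -L. -lgreet -Wl,-rpath,'$ORIGIN/../lib'
 *   mkdir -p pkg/bin pkg/lib
 *   mv demo pkg/bin/ && mv libgreet.so pkg/lib/
 *
 * The pkg/ directory can now be moved anywhere; pkg/bin/demo still
 * finds libgreet.so through the run path (inspect it with
 * readelf -d pkg/bin/demo). */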

Cross-build partprobe for ARM / Linux: configure error concerning libuuid

I want to cross-build partprobe (e.g. parted-3.1 from [here]) for an ARMv7 controller but keep getting error messages concerning libuuid and uuid_generate.
Actually I only need partprobe which may not even rely on that uuid_generate function, but I don't see any options in the configure script to disable any features.
I've successfully cross-built software before, using the BSP as well as the ARM toolchain provided by my distribution (Mint 17).
Here's what I've tried so far:
1) Using the manufacturer's BSP
I have a board support package that provides libraries and headers as well as a toolchain:
/path/to/bsp/_rootfs/lib/libuuid.so.1
/path/to/bsp/_rootfs/lib/libuuid.so.1.3.0
/path/to/bsp/board-support/linux-3.2.0-psp04.06.00.11/include/linux/uuid.h
/path/to/bsp/linux-devkit/am3352/bin/
When I invoke
./configure \
--libdir=/path/to/bsp/_rootfs/lib/ \
--includedir=/path/to/bsp/board-support/linux-3.2.0-psp04.06.00.11/include/ \
--bindir=/path/to/bsp/linux-devkit/am3352/bin/ \
--with-sysroot=/path/to/bsp/_rootfs \
--host=arm-linux-gnueabihf
I get the following error
checking for uuid_generate in -luuid... no
configure: error: GNU Parted requires libuuid - a part of the util-linux-ng package (but
usually distributed separately in libuuid-devel, uuid-dev or similar)
This can probably be found on your distribution's CD or FTP site or at:
http://userweb.kernel.org/~kzak/util-linux-ng/
Note: originally, libuuid was part of the e2fsprogs package. Later, it
moved to util-linux-ng-2.16, and that package is now the preferred source.
The uuid.h header and the libraries are available, so I thought the configure script should not complain, but the error seems to be misleading. The header uuid.h is available but does not contain a uuid_generate declaration, while the library does contain such a function (checked with nm -D).
I'm not sure what to do with that... does the BSP contain incompatible versions of the header and the library?
However, the busybox binary contains wget, which seems to use uuid_generate, so at some point it must have worked.
Replacing the original uuid.h with a uuid.h.in from /path/to/bsp/docs/am3352/licenses/e2fsprogs/ (which does contain uuid_generate) still results in the same error.
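One detail worth untangling here: <linux/uuid.h> is a kernel header and does not declare uuid_generate; that declaration lives in libuuid's own <uuid/uuid.h>, which is what the configure test compiles against. A reconstruction of what the "uuid_generate in -luuid" check effectively does (not the literal autoconf test program):

/* uuid_check.c - roughly what configure's -luuid probe exercises;
 * cross-build with something like:
 *   arm-linux-gnueabihf-gcc uuid_check.c -luuid -o uuid_check */
#include <stdio.h>
#include <uuid/uuid.h>   /* libuuid's header, not <linux/uuid.h> */

int main(void)
{
    uuid_t u;
    char text[37];       /* 36 characters plus the terminating NUL */

    uuid_generate(u);
    uuid_unparse(u, text);
    printf("%s\n", text);
    return 0;
}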
2) Using the Linux distro's ARM-environment
I also tried using the ARM-toolchain provided by my Linux distribution (packages gcc-arm-linux-gnueabihf, libuuid1:armhf)
aptitude install uuid-dev:armhf
shows conflicts with the x86 package of uuid-dev, but these files are already available:
/usr/arm-linux-gnueabi/include/linux/uuid.h
/usr/arm-linux-gnueabihf/include/linux/uuid.h
/lib/arm-linux-gnueabihf/libuuid.so.1
/lib/arm-linux-gnueabihf/libuuid.so.1.3.0
BTW: Neither of those two header files contains the string uuid_generate, while the libraries do.
Invoking
./configure host=arm-linux-gnueabihf
runs without errors, but creates a Makefile that lacks any info on the cross-build environment.
BTW: Invoking make anyway exits with an error; trying the same with the current source code from git://git.savannah.gnu.org/parted.git configures and builds successfully, but no magic involved: the results are libraries/binaries for x86, not ARM.
Right now I'm at my wits' end - so my question is:
Can someone see the problem(s) I'm missing?

Haskell: Missing C library on Arch Linux works on Ubuntu

I recently switched my PC at work from Ubuntu to Arch Linux.
And I am now getting the following error (I am using stack to build my project):
setup-Simple-Cabal-1.22.4.0-ghc-7.10.2: Missing dependency on a foreign library:
* Missing C library: HSrts-ghc7.10.2
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
As far as I understand it, the difference in Linux Distribution should not cause any issue.
Things I have tried:
- add the path where the library is with --extra-lib-dirs
- make sure that the versions of stack/ghc are the same across both systems
- tried unsuccessfully to find a relevant difference between the 2 systems (the gcc version was different, but that didn't change anything)
I have a Docker container based on Ubuntu where it builds without an issue.
The only thing I can think of is that this library gets handled differently from some random C library since it contains the Haskell runtime. But I have no idea what this difference would be, or how different handling would cause an issue on my Arch system.
Here my .cabal file (the folder also contains the whole project):
https://github.com/opencog/atomspace/blob/master/tests/haskell/libExecutionOutputTest/opencoglib.cabal
Okay, I figured out a workaround: instead of specifying the library in the .cabal file:
...
extra-libraries: HSrts-ghc7.10.2
...
you add it to your stack.yaml file:
...
ghc-options:
  package-name: -lHSrts-ghc7.10.2
...
If you also have an executable defined in your .cabal file, this will break that executable, since the flag is then applied not only to the library but to the executable as well, and including the runtime library in an executable results in an instant segmentation fault.

GNU configure options for binutils, gcc & glib

I am trying to build an alternative compilation suite on my debian-testing machine (sorry, real question is actually at bottom).
Technically it is a "cross-compilation" because I need to use this toolchain on another machine, but the hardware is compatible (x86_64-unknown-linux-gnu), so I don't need to bother about build/host/target differences.
On the other hand, I do need to worry about prefix/sysroot because I cannot install in any standard location (to be more precise: I could install anywhere, since I have root access there, but I shouldn't). This leaves me with my $HOME, some completely non-standard place (e.g. /usr/local/my/toolchain) or some semi-standard place (e.g. /opt). In any case I will need to do something to enable compilation to find includes and libs in such places, and the runtime linker to find the needed .so files.
My requirements are:
I have a running Linux that shouldn't be messed with.
This system does not have a "C" compiler.
Said Linux is BusyBox-based, so I will need a substantial amount of utilities to do any serious compiling there, including make, sed, awk, ..., besides the compiler proper.
I would be happy to stuff my augmented toolchain in /opt, but that is not a requirement; any place is OK as long as it's accessible by more than a single user. I would like to avoid installing in $HOME.
I am aware of "optware", I installed it and it does work... up to a point. Unfortunately:
It's really old software
it's only 32bit (my system is Linux syno0 3.2.40 #5004 SMP Thu Nov 6 15:26:44 CST 2014 x86_64 GNU/Linux).
Some programs won't compile because provided libs have 32/64 mismatch.
The real motivation for all this exercise is that I need to install some Perl modules needed for an application that will have to run there, and to install them from CPAN I need a native compiler (and other stuff, of course).
Similar arguments apply to a Ruby on Rails application I should port there.
If at all possible I should try to use the "native" libs in /lib:/lib64:/usr/lib:/usr/lib64:/usr/lib32 ("static" .a libs are not available).
I had a limited success preparing a custom tarball from an available toolchain for my processor, relocating it to /opt, stuffing needed apps in its sysroot and compiling with: CPPFLAGS="-I/opt/include" and LDFLAGS="-L/opt/lib -Wl,-rpath -Wl,/opt/lib".
This enables me to build almost everything "LFS-style", but it's rather error-prone and 64-bit-only.
I seem to understand it should be possible to automate all this by a careful mix of --prefix, --with-sysroot, --with-native-system-header-dir, --enable-multilib and their friends.
I tried to understand exactly how they should be used and failed, for one reason or another. I didn't find any exhaustive documentation, and the information in the GCC installation docs is confusing me.
Can someone, please, give me a recipe to build this toolchain?
Any pointer to in-depth documentation welcome, but I suspect some tutoring will be necessary.
I assume recompilation of Binutils and GCC is mandatory; Glib is probably not needed. Anything else can be recompiled "native" on the target.
TiA
ZioByte
After installing your toolchain in nonstandard places you need to set the environment (maybe system-wide) correctly for GCC, using LIBRARY_PATH and C_INCLUDE_PATH or CPLUS_INCLUDE_PATH.
Environment Variables Affecting GCC
I see three ways to automate setting path variables for your relocatable toolchain:
On every relocation, add your GCC path to your PATH environment variable and create an alias in your BusyBox profile (usually /etc/profile).
alias example:
alias gcc='TOOLCHAIN_PREFIX=$(which gcc | rev | cut -d"/" -f3-10 | rev); \
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ gcc'
Create a launcher script for your toolchain that calculates the paths itself. You should launch it via its direct path or set it when you launch the build process; of course, you can also add its location to the PATH environment variable.
script example
#!/bin/sh
# Derive the toolchain prefix from this script's own location,
# then forward all arguments to the real compiler.
TOOLCHAIN_PREFIX=$(echo "$0" | rev | cut -d"/" -f3-10 | rev)
LIBRARY_PATH=$TOOLCHAIN_PREFIX/lib/ \
C_INCLUDE_PATH=$TOOLCHAIN_PREFIX/include/ \
"$TOOLCHAIN_PREFIX"/bin/gcc-4.* "$@"
The most reliable and ergonomic way: create an install/uninstall script that unpacks the toolchain and sets the environment correctly; to relocate the toolchain you uninstall it from one prefix and install it to another. If you have dpkg on your debian-testing system, a .deb package is the best choice.
I can see no way to set the environment fully automatically, but we can reduce the problem to setting just one path: the path of the toolchain.
HINT: For better stability you should isolate your toolchain and also install the Linux kernel headers and Glib in your prefix.

What is better: downloading libraries from repositories or installing from *.tar.gz?

gcc 4.4.4 c89 Fedora 13
I am wondering what is better. To give you a couple of examples: the Apache Portable Runtime and log4c.
The apr version in my fedora repository is 1.3.9. The latest stable version on the apr website is 1.4.2.
Questions
Would it be better to download from the website and install, or install using yum?
When you install from yum, it can sometimes put things in many directories; when installing from a tarball you can put the includes and libraries where you want.
For log4c the versions are the same, as this is an old project.
I downloaded log4c using yum and copied all the includes and libraries to my development project directory, i.e.:
project_name/tools/log4c/inc
project_name/tools/log4c/libs
However, I noticed that I had to look for some headers in the /usr/include directory.
Many thanks for any suggestions,
If the version in your distribution's package repository is recent enough, just use that.
Advantages are automatic updates via your distribution, easy and fast installs (including the automatic fetching and installing of dependencies!) and easy removals of packages.
If you install stuff from .tar.gz by yourself, you have to act as your own distributor and keep track of security issues and bugs.
Using distribution packages, you still have to keep an eye on security problems, but the distributor does a lot of the work for you (developing patches, repackaging, testing and catching serious issues). Of course each distributor has a policy for how to deal with different classes of issues in different package repositories. But with your own .tar.gz installs you have none of this.
It's an age-old question I think. And it's the same on all Linux distributions.
The package is created by someone - that person has an opinion as to where stuff should go. You may not agree - but by using a package you are spared chasing down all the dependencies needed to compile and install the software.
So for full control, roll your own, but be prepared for the extra work; otherwise, use the package.
My view:
Use packages until it's impossible to do so (conflicts, compile parameters needed, ...). I'd much rather spend time getting the software to work for me than spend time compiling.
I usually use the packages provided by my distribution if they are of a new enough version. There are two reasons for that:
1) Someone will make sure that I get new packages if security vulnerabilities in the old ones are uncovered.
2) It saves me time.
When I set up a development project, I never create my own include/lib directories unless the project itself is the authoritative source for the relevant files I put there.
I use pkg-config to provide the location of the necessary libraries and include files to my compiler. pkg-config uses .pc files as a source of information about where things are supposed to be, and these are maintained by the same people who create the packages for your distribution. Some libraries do not provide this file but offer an alternative '-config' script instead. I'll provide two examples:
I'm not running Fedora 13, but an example on Ubuntu 10.04 would be;
*) Install liblog4c-dev
*) The command "log4c-config --libs" returns "-L/usr/lib -llog4c" ...
*) The command "log4c-config --cflags" returns "-I/usr/include"
And for an example using pkg-config (I'll use SDL for the example):
*) Install libsdl1.2-dev
*) The command "pkg-config sdl --libs" returns "-lSDL"
*) The command "pkg-config sdl --cflags" returns "-D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL"
... So even if another distribution decides to put things in different paths, there are scripts that are supposed to give you a reliable answer as to where things are, so things can be built on most distributions. Autotools (automake, autoconf, and the like) and CMake are quite helpful for making sure that you don't have to deal with these problems.
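To make that concrete, the output of those scripts is meant to be spliced directly into the compile command. A minimal log4c sketch (assuming the standard log4c API; the category name is an arbitrary choice):

/* hello_log.c - build with:
 *   gcc $(log4c-config --cflags) hello_log.c $(log4c-config --libs) -o hello_log */
#include <log4c.h>

int main(void)
{
    if (log4c_init() != 0)   /* returns 0 on success */
        return 1;

    log4c_category_t *cat = log4c_category_get("hello");  /* arbitrary category */
    log4c_category_log(cat, LOG4C_PRIORITY_INFO, "hello from log4c");

    return log4c_fini();
}

The same pattern works with pkg-config, e.g. gcc $(pkg-config sdl --cflags) ... $(pkg-config sdl --libs), so the build never hardcodes distribution-specific paths.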
If you want to build something that has to work with the Apache that's included with Fedora, then it's probably best to use the apr version in Fedora. That way you get automatic security updates etc. If you want to develop something new yourself, it might be useful to track upstream instead.
Also, normally the headers that your distro provides should be found by gcc & co. without you needing to copy them, so it doesn't matter where they are stored by yum/rpm.
