On a Linux box, my RPMs used to install my software binaries to a predefined /opt/<my_loc>/bin and /opt/<my_loc>/lib, and from some of those binaries (C executables) I called other binaries in /opt/<my_loc>/bin by hardcoding their full path in a system() call.
for example : system("/opt/<my_loc>/bin/myBin");
Now I would like to install my software to a custom path, so what is the best approach to call the binaries from the new custom location?
Stop hard-coding the path in each system() call, or at least use a #define so the path only has to be updated in one place. You can then generate a suitable config file at build time via a script.
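For example, a minimal sketch of such a build-time script (the script name, header name, and MYAPP_BIN_DIR macro are just illustrative assumptions):

#!/bin/bash
# gen_paths.sh (hypothetical): run at build time to capture the install prefix
PREFIX="$1"                     # e.g. passed in by your build script or spec file
cat > paths.h <<EOF
#define MYAPP_BIN_DIR "${PREFIX}/bin"
EOF
# the C code then includes paths.h and calls: system(MYAPP_BIN_DIR "/myBin");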
The better approach is to configure your PATH environment variable correctly, so you can just tell it which binary you want to run. I order mine like this: ~/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin (if your /bin and /sbin are not merged into /usr, you will need to add those too). system() uses sh, which on my system is dash, so you would set your PATH in ~/.profile and/or /etc/profile.
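A rough sketch of that setup (the exact file depends on your shell and distribution):

# in ~/.profile or /etc/profile, read by sh/dash login shells
export PATH="$HOME/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin"
# the C code can then call the binary by name, e.g. system("myBin");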
I don't like a binary path per package, so I use the program stow to merge a bunch of per-package directories such as /usr/local/stow/$package/bin into /usr/local/bin.
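For instance (the package name is purely hypothetical):

cd /usr/local/stow
sudo stow mypackage    # symlinks /usr/local/stow/mypackage/bin/* into /usr/local/bin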
Related
I have a simple CMake project with CTest and CPack. It uses the Lua C API to load and execute a script file called script.lua.
This script will be in a different location when built versus when installed/packed; its location would be:
[build] : ${CMAKE_CURRENT_SOURCE_DIR}/src/scripts
[install]: ../scripts (relative to app which is in bin directory)
What I'm trying to achieve here is to have the install step regenerate the configured file, rebuild using the new configured file, only then proceed with the normal install step, and of course revert the configured file back to its original state afterwards.
Any help regarding this issue is appreciated.
My understanding is that CMake's configure_file command has its full effect during the execution of the cmake program. It has no representation in generated makefiles, or whatever other build system components cmake generates. Thus, if you want to configure a file differently for installation than for pre-installation testing,
You would need to perform completely separate builds (starting with executing cmake) for the two cases, and
You would need to use some attribute of the cmake command line or execution environment to convey the wanted information, such as a -D option defining a CMake variable on the command line (roughly as sketched below).
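If you did go this way, it could look roughly like the following, where SCRIPT_DIR is a hypothetical cache variable that your CMakeLists.txt would feed into configure_file:

# two separate build trees, each configured with its own script location
cmake -S . -B build-test    -DSCRIPT_DIR="$PWD/src/scripts"
cmake -S . -B build-install -DSCRIPT_DIR=../scripts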
I advise you not to pursue this route. Aside from being overcomplicated, it's also poor form to install a different build of the software than is tested.
You have a variety of alternatives that could serve better. Among those are
Give the program itself the ability to accept a custom location for the Lua script. That is, make it recognize a command-line argument or environment variable that serves this purpose. Make use of that during pre-installation testing (see the sketch after this list).
If indeed the program is using a relative path to locate the script at runtime, then just (have CMake) put a copy of the script at the appropriate location in the build tree, so that the program will find it normally during testing.
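A rough sketch of the first alternative, assuming a hypothetical MYAPP_SCRIPT_DIR environment variable (and program name myapp) that the program checks before falling back to its default relative path:

# during pre-installation testing, point the program at the in-source script
MYAPP_SCRIPT_DIR="$PWD/src/scripts" ./myapp
# after installation, run it without the override so it uses ../scripts
./myapp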
We have one build machine that has some static libraries in unusual paths, and gcc doesn't find them when they are referenced via the -l option; all the other build machines work fine. It seems this one is configured incorrectly or something.
The solutions we have tried:
Check the host name of the build machine in the build script and add the -L command-line option if it matches the name of the problematic build machine (very ugly).
Print the list of search dirs using the -print-search-dirs option and symlink the problematic library into the first one (too hackish).
What I would like to do is simply add an extra path to gcc's system-wide library search paths.
Is there a way to change/configure the default library search paths in GCC? Is there a config file where the list of defaults is stored?
You can pass the full path of the library file directly on the link line, or use
-l:<exact_library_file_name>
which tells GNU ld to look for that exact file name in its library search directories (so combine it with -L as needed).
Or you can add the directory where the shared library is installed to /etc/ld.so.conf. You might have to run ldconfig after that.
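A hedged sketch of both options (directory and library names are placeholders):

# at link time: search an extra directory for the exact file, or give the full path
gcc main.o -L/unusual/path/lib -l:libfoo.a -o app
gcc main.o /unusual/path/lib/libfoo.a -o app

# at run time (shared libraries only): register the directory with the loader
echo "/unusual/path/lib" | sudo tee -a /etc/ld.so.conf
sudo ldconfig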
Original
I am looking for a way to create a non-isolated development environment for a C-library.
I will most likely use cmake to build the library and my IDE is a simple text editor.
The problem now is that I am not only creating the library but also some sample "applications" that use it.
Therefore I need to install the library's headers and the shared object (I'm using GNU/Linux) somewhere, and I do not want to install them to /usr/local/lib or (even worse) /usr/lib.
Is there a way to create a virtual environment similar to Python's pyvenv (and the like), where I can install everything but still have access to the host libraries?
Also, I do not want to rewrite my $PATH/$LD_LIBRARY_PATH, or set up a VM, container, or chroot.
The usage would then look like:
# switch to environment somehow
loadenv library1
# for library
cd library
make && make install
# for application
cd ../application1
make && ./application1
Is this possible?
Edit 1
So basically my directory structure will look like this:
library/
library/src/
library/src/<files>.c
library/include/<files>.h
application/
application/src/
application/src/<files>.c
First I need to compile the library and install the binary and header files.
These should be installed in a fake system-location.
Then I can compile the application and run it.
Edit 2
I thought a bit about it and it seems all I need is a filesystem sandbox.
So basically I want to open up a shell where every write to disk is not committed to the filesystem but rather temporarily saved in e.g. a ramfs/tmpfs just to be dropped when the shell exits.
This way I can test exactly how everything would behave if compiled, deployed, and executed on the real machine, without any danger to existing files or directories and without accidentally creating files or directories that never get cleaned up.
You don't really need to 'install' the library; you can work in the development tree.
(1) for compilation, all you need to do is use the -I flag to specify where the library's header files are, and this can be a relative path; for example, in your case you could do -I../../library/include
(2) for linking, you need to tell the linker where the library is located; you can use the -L flag to append to the library search path (a full command line combining these flags is sketched after this list).
(3) for testing the application, you are correct that the application needs to be able to find the library. You have a couple of options:
(a) make sure the library and the executable are in the same directory
(b) you can temporarily modify your LD_LIBRARY_PATH, in your current shell only, for testing:
export LD_LIBRARY_PATH=abs_path_to_library:$LD_LIBRARY_PATH
note that this will only affect the current shell (command terminal) you are working in. Any other shells you have open, or open later, will have your normal LD_LIBRARY_PATH. I know you specified that you don't want to modify your PATH or LD_LIBRARY_PATH, but since the change is local to the shell in which the command is executed, it is a nice, easy way to do this.
(c) embed the path to the library in the client executable. To do this you need to pass an option to the linker. The option for gcc is:
-Wl,-rpath,$(DEFAULT_LIB_INSTALL_PATH)
see this how-to
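Putting (1), (2) and (c) together, a rough sketch might look like this, run from the application directory; the source file name, library name (libmylib.so), and build directory are assumptions based on the layout above:

# compile against the in-tree headers
gcc -I../library/include -c src/main.c -o main.o
# link against the in-tree library and embed its absolute path as an rpath
gcc main.o -L../library/build -lmylib -Wl,-rpath,"$(cd ../library/build && pwd)" -o application1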
I've just started to learn about linux kernel modules and the book I'm referring to says:
"For this [compilation] to work, the kernel source has to be suitably prepared; in particular it has to have a configuration file (.config in the main kernel source directory) and proper dependencies setup"
However, as far as I know (and have tried), the .config file is generated by make menuconfig (or any of the equivalent make config commands), and that doesn't seem to be enough for my module files to compile. What's the bare minimum I need to do in the kernel source directory?
make modules?
Yes, the .config file is generated using make *config.
Here are some of them:
make defconfig creates the default configuration for your architecture.
make config is the most primitive method, it prompts on every configuration.
make menuconfig is an ncurses-based config menu. That's the one I prefer if I'm not editing the .config file directly.
make gconfig is like menuconfig, but using gtk+.
Don't forget that make oldconfig should be called after modifying the .config file yourself.
Your current config might also be stored somewhere on your disk. For many Linux systems, its location is /boot/config-$(uname -r). If it exists, you can start with it. If not, your best bet is make defconfig, then editing the config file to suit your needs.
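For example, starting from the running kernel's configuration if that file exists:

cp /boot/config-$(uname -r) .config
make oldconfig    # answer prompts for any options that are new to this source tree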
After configuration:
Before building modules, you might want to compile the kernel itself, since your modules will not be used by the current kernel; and even if you make your current kernel load those modules, it will most probably cause a panic, since the symbol tables will not match what your compiled modules assume. make -jN is the most common way to compile, N being twice your CPU core count. This also compiles the modules, but creates .ko files for them instead of embedding them into the vmlinuz file.
After that, you can run sudo make install to install your kernel. This usually wraps the kernel object you've just compiled into a suitable format and puts it under /boot (it doesn't have to be /boot, actually).
Then you run sudo make modules_install to copy the created .ko files into /lib/modules/$(uname -r). Note that this workflow builds and installs all modules.
After doing that, you might prefer building only your own module instead of all of them. From the kernel tree root, you can run make M=your_modules_relative_path to build only your module.
I don't know which book you're reading, but if you're building a module externally, you still have to perform the work above. After that, you may use the LDD examples as a starting point for your makefiles (a typical invocation is sketched below the link).
See https://github.com/duxing2007/ldd3-examples-3.x
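For an external (out-of-tree) module, the build is typically driven against a prepared kernel tree like this (the paths are illustrative):

# build against a kernel source tree you configured and built yourself
make -C /path/to/kernel/source M=$PWD modules
# or against the build directory of the running kernel
make -C /lib/modules/$(uname -r)/build M=$PWD modules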
I have a shared library that my application needs (a .so) and I am wondering what is the best way to distribute it?
It's not something that can be installed with apt-get, and it needs to be on the dynamic loader's search path in order to run the application.
In the past I've needed to include a separate "launcher script" that the user would click on instead of clicking on the Linux executable directly. The launcher script would set up LD_LIBRARY_PATH to include the directory where the shared library was stored, and then launch the executable. Here's the script, for reference (it assumes that the executable and the shared library are hidden away in a sub-folder named "bin", and that the executable's name is the same as the script's name except without the ".sh" suffix):
#!/bin/bash
# Derive the executable's name from the script's name (minus the .sh suffix)
appname=$(basename "$0" .sh)
dirname=$(dirname "$0")
cd "$dirname/bin"
# Make the bundled shared library visible to the dynamic loader
export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
./"$appname" "$@"
Distribute it the same way you distribute the executable that depends upon it; bundle the two together.
If you didn't write the library, make sure you're complying with its license terms for redistribution.
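If you'd rather avoid a launcher script entirely, another option (not covered above, just a suggestion) is to embed a relative rpath when linking the executable, so the loader looks in the executable's own directory for the bundled .so; the library and program names here are hypothetical:

# the single quotes keep the shell from expanding $ORIGIN; the dynamic loader expands it
gcc main.o -L. -lmylib -Wl,-rpath,'$ORIGIN' -o myapp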