I have currently compiled lighttpd from source:
./configure --prefix=/home/lighttpd \
--without-pcre \
--without-zlib \
--without-bzip2
I also tried the --enable-static --disable-shared options, but the modules are still loaded from the lib directory.
I want to compile all the lighttpd modules into a single binary instead of loading them from the lib directory. How do I do that?
Cross-posted to https://redmine.lighttpd.net/boards/3/topics/6615
Lighttpd can be built statically using SCons or using make. Briefly:
SCons:
$ scons -j 4 build_static=1 build_dynamic=0 prefix=/custom/inst/path install
make:
# edit src/Makefile.am and in the section under 'if LIGHTTPD_STATIC', update lighttpd_SOURCES with each module to be included in the static build or just use the entire list that is already there
$ LIGHTTPD_STATIC=yes ./configure -C --enable-static=yes
$ make
More details in https://redmine.lighttpd.net/boards/3/topics/5912
[edit] To build statically using 'make', use lighttpd git master branch or lighttpd 1.4.40+
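For example, tying the 'make' route back to the --prefix from the question (the final check is just my own suggestion, not something the docs prescribe):
$ LIGHTTPD_STATIC=yes ./configure -C --enable-static=yes --prefix=/home/lighttpd
$ make && make install
$ ls /home/lighttpd/lib   # with a static module build, no mod_*.so files should be needed (or installed) here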
Compile it with the flag -DLIGHTTPD_STATIC. If gcc warns you about syntax, force it to interpret the code as C99:
make CFLAGS="-DLIGHTTPD_STATIC -std=c99"
You must also change src/Makefile.in, which is generated by configure, to add the modules you want to include. Specifically, add the sources to am__liblightcomp_la_SOURCE_DIST, am__lighttpd_SOURCES_DIST and common_src:
mod_access.c mod_staticfile.c
And also add the objects, to am__objects_1 and am__objects_2:
mod_access.$(OBJEXT) mod_staticfile.$(OBJEXT)
If the src/plugin-static.h file is not available, change src/plugin.c: find the line #include "plugin-static.h", comment it out and add this below it:
PLUGIN_INIT(mod_access)
PLUGIN_INIT(mod_staticfile)
The lighttpd documentation explains that build_static only causes lighttpd to be statically linked with its own modules (the mod_*.so files). It will still dynamically link against external dependencies (those in /lib*/*).
If you were not mixing these up when you reported:
I also tried the --enable-static --disable-shared options, but the modules are still loaded from the lib directory
and all you wanted was for these modules to be statically included in the lighttpd binary, then the other answers should be correct and valid.
However, if you want a single binary with no external dynamic dependencies at all, then you need to use scons and replace build_static=1 with build_fullstatic=1. The Makefile setup doesn't have this option.
build_static=1
scons -j 4 build_static=1 build_dynamic=0
Using ldd to show which dynamic libraries it needs:
ldd sconsbuild/static/build/lighttpd
linux-vdso.so.1 (0x00007fff0478f000)
libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007f760191e000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f76018e4000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f76016bc000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7601a5a000)
PS: if you built a shared version, the mod_*.so files would still not show up here, as those are lazily loaded from within the execution of lighttpd, based on your config file.
build_fullstatic=1
scons -j 4 build_fullstatic=1 build_dynamic=0
ldd sconsbuild/fullstatic/build/lighttpd
not a dynamic executable
Which is what I assume you wanted to see.
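As one more sanity check (my own habit, not from the lighttpd docs), file should report the fullstatic binary as statically linked:
$ file sconsbuild/fullstatic/build/lighttpd   # expect 'statically linked' in the output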
I'm building the following file:
int main()
{
    return 0;
}
with the following flags:
g++ -o main -Wl,-rpath,'$ORIGIN' main.cpp
but the rpath flag is doing nothing. When I execute the following commands:
objdump -x main | grep -i rpath
readelf -a main | grep -i rpath
I get no output (RPATH is not defined).
What am I doing wrong?
EDIT
I have tried to do the above with a different binary using the following cmake flags:
set(CMAKE_SKIP_BUILD_RPATH FALSE)
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
set(CMAKE_INSTALL_RPATH "\$ORIGIN")
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
I have moved the executable to a different machine, and placed a dynamic library that it needs to 'dlopen' in the same folder. It worked (and I'm 100% sure this is because of rpath, since before applying the above cmake flags the executable didn't work).
Still, when I check the rpath with the above two commands (objdump and readelf), I don't see anything.
Unless I missed something here, you are not linking any libraries in your build command.
Let's say you want to link the libusb.so shared library, which is located in a libusb sub-folder of the current folder containing main.cpp.
I won't go into details here about the soname, the link name of the library, etc., just make the rpath part clear.
rpath provides the runtime linker with a path to the library, not a link-time path, because a shared library also needs to be present (accessible) at compile/link time. So, to give your application's loader the possibility of looking for the needed library at start-up, relative to your application's folder, there is the $ORIGIN variable; you can see it with readelf, but only if you link some library with $ORIGIN in the rpath.
Here is example based on your question:
g++ main.cpp -o main -L./libusb -Wl,-rpath,'$ORIGIN/libusb' -lusb
As you see, you need to provide the -L directory for the compile/link-time search, and the rpath for the runtime linker. Now you will be able to examine all the libraries your application needs, and where they are searched for, using readelf.
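For instance (this check is my addition, not part of the original answer), you can confirm what got recorded; depending on the linker the entry is stored as RPATH or RUNPATH, so grep for both:
$ g++ main.cpp -o main -L./libusb -Wl,-rpath,'$ORIGIN/libusb' -lusb
$ readelf -d main | grep -iE 'rpath|runpath'   # the recorded $ORIGIN/libusb entry
$ readelf -d main | grep NEEDED                # the libraries the loader will look for at start-up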
Currently I am building a foo.h and foo.c with:
$ clang -I . -dynamiclib \
-undefined dynamic_lookup \
-o foo.dylib foo.c
I am able to use this in other C libraries like this:
clang -I . -dynamiclib \
-undefined dynamic_lookup \
-o bar.dylib bar.c foo.dylib
I would like to use this library in an assembly project.
$ nasm -f macho64 test.asm \
&& ld -e start -macosx_version_min 10.13.0 -static -o test test.o foo.dylib
$ ./test
ld: warning: foo.dylib, ignoring unexpected dylib file
I'm wondering how to link the C -> asm pieces together to get the C functions working in asm. Then I would like to go further and use that compiled asm in either a C or asm project, so I'm wondering how to do that too.
When using the assembly from C, I would basically like to import the functions with #include "myassembly.h" or something like that, so it feels like a real library. Then you have a function like myfunc which is defined in assembly, but you can use it in C as myfunc(1, 2, 3); sort of thing.
If I change it from static to dynamic linking with the -lSystem flag (and removing -static), I get this:
dyld: Library not loaded: foo.dylib
Referenced from: ./test
Reason: image not found
make: *** [...] Abort trap: 6
You're specifying -static which means:
-static Produces a mach-o file that does not use the dyld. Only used
building the kernel.
dyld is the dynamic loader. If you're not using the dynamic loader, you can't use dynamic libraries.
Update for edited question:
When a dylib is created, it gets an "install name". When an executable is linked to that dylib, the executable stores the install name of the dylib in its reference to it. (Note, it does not store the link-time path of the dylib file it linked against.) When the executable is loaded, the dynamic loader looks for the dylib using the install name it recorded, by default.
You can specify the install name using the -install_name <name> option to the linker. It could be the absolute path to where you expect the library to be installed (e.g. /usr/local/lib/foo.dylib), if you expect it to be installed in a fixed location. Often, though, that's not useful. You want a more flexible means for the dynamic loader to find the dylib.
The dynamic loader understands certain special path prefixes on install names to support such flexibility. See the dyld(1) man page. For example, if you specify an install name of @executable_path/foo.dylib then, at load time, the loader will look next to the executable for the library.
You can see the install name of a dylib by using otool -D foo.dylib. Your dylib may not have an install name, in which case its effective install name is just its file name with no path.
If the loader doesn't find the library by using its install name, it has a search strategy. By default, it looks in ~/lib:/usr/local/lib:/lib:/usr/lib. You can use some environment variables to alter the search strategy. For example, you can set DYLD_FALLBACK_LIBRARY_PATH to a colon-delimited list of directories to search, instead. These environment variables are also listed in the dyld(1) man page.
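Putting that together, a possible sketch for the dylib from the question (the choice of @executable_path here is just one option, not the only correct one):
$ clang -I . -dynamiclib -undefined dynamic_lookup \
    -install_name '@executable_path/foo.dylib' \
    -o foo.dylib foo.c
$ otool -D foo.dylib   # prints the install name that was just set
$ otool -L test        # after relinking test, shows the install names it will ask dyld for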
My build environment is CentOS 5. I have a third party library called libcunit. I installed it with autotools and it generates both libcunit.a and libcunit.so. I have my own application that links with a bunch of shared libraries. libcunit.a is in the current directory and libcunit.so and other shared libraries are in /usr/local/lib/. When I compile like:
gcc -o test test.c -L. libcunit.a -L/usr/local/lib -labc -lyz
I get a linkage error:
libcunit.a(Util.o): In function `CU_trim_left':
Util.c:(.text+0x346): undefined reference to `__ctype_b'
libcunit.a(Util.o): In function `CU_trim_right':
Util.c:(.text+0x3fd): undefined reference to `__ctype_b'
But when I compile with .so like:
gcc -o test test.c -L/usr/local/lib -lcunit -labc -lyz
it compiles fine and runs fine too.
Why is it giving an error when linked statically with libcunit.a?
Why is it giving an error when linked statically with libcunit.a
The problem is that your libcunit.a was built on an ancient Linux system, and depends on symbols which have been removed from libc (these symbols were used in glibc-2.2, and were removed from glibc-2.3 over 10 years ago). More exactly, these symbols have been hidden. They are made available for dynamic linking to old binaries (such as libcunit.so) but no new code can statically link to them (you can't create a new executable or shared library that references them).
You can observe this like so:
readelf -Ws /lib/x86_64-linux-gnu/libc.so.6 | egrep '\W__ctype_b\W'
769: 00000000003b9130 8 OBJECT GLOBAL DEFAULT 31 __ctype_b@GLIBC_2.2.5
readelf -Ws /usr/lib/x86_64-linux-gnu/libc.a | egrep '\W__ctype_b\W'
# no output
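You can also check the archive itself (just a diagnostic suggestion); nm marks the symbol as undefined ('U'), which is what forces the static link to resolve it against libc.a:
$ nm libcunit.a | grep -w __ctype_b   # lines like 'U __ctype_b' show the unresolved reference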
I didn't notice that libcunit.a is actually found in your case and that the linkage problem is rather in the CUnit library itself. Employed Russian is absolutely right, and he's not talking about a precompiled binary here; we understand that you've built it yourself. However, CUnit itself seems to be relying on a symbol from glibc which is not available for static linking anymore. As a result you have only 2 options:
File a report to the developers of CUnit about this and ask them to fix it;
Use dynamic linking.
Nevertheless, my recommendation about your style of static linkage still applies. -L. is in general bad practice. CUnit is a 3rd-party library and should not be placed in the directory containing the source files of your project. It should rather be installed in the same way as the dynamic version, i.e. just as you have libcunit.so in /usr/local/lib. If you supply a prefix to Autotools at the configure stage of CUnit, Autotools will install everything properly. So if you want to link statically with CUnit, consider doing it in the following form:
gcc -o test test.c -L/usr/local/lib -Wl,-Bstatic -lcunit -Wl,-Bdynamic -labc -lyz
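If you go that route, a quick check (my suggestion) that CUnit really ended up inside the binary is that ldd no longer lists it:
$ ldd test | grep -i cunit   # should print nothing once libcunit is linked in statically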
I don't get it. I usually install third-party software into /usr/local, so libraries end up in /usr/local/lib, and I have never had problems linking to these libraries. But now it suddenly no longer works:
$ gcc -lkaytils -o test test.c
/usr/bin/ld.gold.real: error: cannot find -lkaytils
/usr/bin/ld.gold.real: /tmp/ccXwCkYk.o: in function main:test.c(.text+0x15):
error: undefined reference to 'strCreate'
collect2: ld returned 1 exit status
When I add the parameter -L/usr/local/lib then it works, but I never had to use this before. Header files in /usr/local/include are found without adding -I/usr/local/include.
I'm using Debian GNU/Linux 6 (Squeeze) which has an entry for /usr/local/lib in /etc/ld.so.conf.d/libc.conf by default and the ldconfig cache knows the library I'm trying to use:
k#vincent:~$ ldconfig -p | grep kaytils
libkaytils.so.0 (libc6,x86-64) => /usr/local/lib/libkaytils.so.0
libkaytils.so (libc6,x86-64) => /usr/local/lib/libkaytils.so
So what the heck is going on here? Where can I check which library paths are searched by gcc by default? Maybe something is wrong there.
gcc -print-search-dirs will tell you which paths the compiler checks. /usr/local/lib is simply not among them, so your compile-time linker (in this case the new gold ld from binutils) doesn't find the library, while the dynamic one (ld-linux.so, which reads the cache written by ldconfig) does. Presumably the builds you've done previously added -L/usr/local/lib as necessary in their makefiles (usually done by a ./configure script), or you installed binaries.
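For example, to list just the library directories the compile-time linker will search, you can filter that output (a convenience pipeline of my own; it just strips the "libraries: =" prefix from gcc's output):
$ gcc -print-search-dirs | sed -n 's/^libraries: =//p' | tr ':' '\n'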
This is probably an issue of environment variables - you have something set that's including /usr/local/include but not /usr/local/lib
From the GCC man page on environment variables:
CPATH specifies a list of directories to be searched as if specified with -I, but after any paths given with -I options on the command line. This environment variable is used regardless of which language is being preprocessed.
and
The value of LIBRARY_PATH is a colon-separated list of directories, much like PATH. When configured as a native compiler, GCC tries the directories thus specified when searching for special linker files, if it can't find them using GCC_EXEC_PREFIX. Linking using GCC also uses these directories when searching for ordinary libraries for the -l option (but directories specified with -L come first).
try "printenv" to see what you have set
Is there a tool for modifying the shared library entries in the dynamic section of an ELF binary? I would like to explicitly modify the shared library dependencies in my binary (i.e. replace the path to an existing library with a custom path).
replace path to existing library with a custom path
If this is your own library, then you are probably linking it like this:
$ cc -o prog1 /full/path/to/libABC.so prog1.o
instead of the proper:
$ cc -o prog1 -L/full/path/to/ -lABC prog1.o
The first approach tells the Linux linker that the application needs precisely that library, only that library, and that no override should be possible. The second approach says that the application needs a library which will be installed somewhere on the system, either in the default library path or in a directory pointed to by $LD_LIBRARY_PATH (looked up at run time). -L is used only at link time.
Otherwise, instead of patching the ELF, first check if you can substitute the library using a symlink. This is the usual trick: it is hard to modify the executable afterwards, but it is very easy to change where the symlink points.
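For illustration (all paths here are hypothetical), if the dependency resolves through a symlink you control:
$ ls -l /opt/app/lib/libABC.so.1                          # suppose the loader ends up following this symlink
$ ln -sfn /custom/path/libABC.so.1.2.3 /opt/app/lib/libABC.so.1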
patchelf is what you want
$ patchelf --replace-needed LIB_ORIGIN LIB_NEW ELF_FILE
To see the effect
$ readelf -d ELF_FILE
Installing the tools is easy (readelf is part of binutils, which is usually already installed):
$ sudo apt-get install patchelf
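A concrete, made-up example: repoint a binary from the system libfoo to a custom build and confirm the change:
$ patchelf --replace-needed libfoo.so.1 /opt/custom/lib/libfoo.so.1 ./myprog
$ readelf -d ./myprog | grep NEEDED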
You may want to check the LD_LIBRARY_PATH environment variable.
If you look at the .dynsym section in Linux via readelf, you'll just see something like:
1: 0000000000000000 163 FUNC GLOBAL DEFAULT UND fseek@GLIBC_2.2.5 (2)
which just contains a symbolic reference, not the path of the library it comes from. However, if you look at the dynamic loader information (for example with ldd), you get:
libc.so.6 => /lib/libc.so.6 (0x00002ba11da4a000)
/lib64/ld-linux-x86-64.so.2 (0x00002ba11d82a000)
So as mentioned, the absolute easiest thing to do (assuming you're doing this for debugging, and not forever) would just be to create a new session, export your custom path in front of the existing LD_LIBRARY_PATH, and go from there.
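For example (paths are illustrative), to try a custom library location just for one run:
$ LD_LIBRARY_PATH=/custom/lib:$LD_LIBRARY_PATH ./myprog
or export it for the whole session:
$ export LD_LIBRARY_PATH=/custom/lib:$LD_LIBRARY_PATH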