In the gcc manual it says that "The C standard library itself
is stored in ‘/usr/lib/libc.a’". I have gcc installed, but could not find libc.a at that location. Curious to know where it is located.
I find many .so files in /usr/lib. What are those?
If you are looking for libc.a:
$ gcc --print-file-name=libc.a
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libc.a
A few things:
gcc and glibc are two different things. gcc is the compiler; glibc is the C runtime library. Pretty much everything needs glibc to run.
.a files are static libraries, .so means shared object and is the Linux equivalent of a DLL
Most things DON'T link against libc.a, they link against libc.so
Hope that clears it up for you. As for the location, it's almost certainly going to be in /usr/lib/libc.a and/or /usr/lib/libc.so. Like I said, the .so one is the more common.
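As a quick illustration of that last point (hello.c is just a placeholder source file, and the -static build only works if libc.a is actually installed):
$ gcc hello.c -o hello          # default: links against libc.so
$ ldd ./hello | grep libc       # shows the libc.so.6 loaded at run time
$ gcc -static hello.c -o hello  # links libc.a into the executable instead
$ ldd ./hello                   # reports "not a dynamic executable"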
If you are on an RPM-based Linux (Red Hat/CentOS/Fedora/SUSE) then you can get the location of the installed glibc with
rpm -ql glibc and rpm -ql glibc-devel.
locate libc.a would get you the location. And to see which package it comes from, run:
rpm -qf /usr/lib/libc.a
Here is what rpm -qi has to say about these packages.
glibc-devel:
The glibc-devel package contains the object files necessary
for developing programs which use the standard C libraries (which are
used by nearly all programs). If you are developing programs which
will use the standard C libraries, your system needs to have these
standard object files available in order to create the
executables.
Install glibc-devel if you are going to develop programs which will
use the standard C libraries.
glibc:
The glibc package contains standard libraries which are used by
multiple programs on the system. In order to save disk space and
memory, as well as to make upgrading easier, common system code is
kept in one place and shared between programs. This particular package
contains the most important sets of shared libraries: the standard C
library and the standard math library. Without these two libraries, a
Linux system will not function.
You need to install the package for static libraries separately:
glibc-static.i686
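On a yum-based system that would be something like the following (a sketch; the exact package name and architecture suffix may differ between releases):
yum install glibc-static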
On CentOS 5.8:
$ ls -l /usr/lib/libc.a
-rw-r--r-- 1 root root 2442786 Apr 8 2010 /usr/lib/libc.a
$ rpm -qf /usr/lib/libc.a
glibc-devel-2.3.4-2.43.el4_8.3
You also have to have the glibc-devel package installed under Red Hat distributions.
So I've been reading online but I'm still very confused. I understand that there are different tools in the Linux-on-Windows world: Msys, Msys2, Cygwin, Mingw and Mingw-64.
Here is what I think I know, and please correct me if I'm wrong:
Mingw aims to simply be a port of the GCC programs to Windows. It creates native Windows binaries and that's it.
Mingw-64 is just a more recent and better supported version of Mingw that also supports Windows 64 bit.
Cygwin, while also including Mingw (?) to support GCC on Windows, provides a POSIX compatibility layer through a DLL that all programs are linked to by default.
MSYS is a fork of Cygwin, but it drops some of the POSIX compatibility efforts. Instead it mostly aims to allow creating native Windows programs. But they will still depend on an MSYS DLL being present.
MSYS2 is just a more recent and actively maintained version of the less active MSYS.
Is this all true? If it is, here is what I want to validate:
Essentially, I think all I should need for my development is Mingw in order to use GCC to build native Windows applications. I don't need a POSIX layer, and I don't want my program to depend on any DLL apart from the ones that are present on Windows systems anyway. As far as I understand, this is what Mingw offers.
However, somehow I managed to install MSYS (or MSYS2? I'm not sure anymore) on my system. The tutorial I was following early on suggested doing so.
Since it seems MSYS(2) includes Mingw under C:\msys64\mingw64, I just use the Mingw binaries directly from the Windows CMD without going through the MSYS(2) shell program. For example, I just added C:\msys64\mingw64\bin to the PATH and I run gcc from the Windows CMD directly to compile my project.
Is this a valid way to use Mingw? Or am I expected to run into problems?
Does this approach create pure Windows native binaries which should never depend on any MSYS(2)-related DLL?
Is it true that the MSYS(2)-related functionality and dependencies only come into play if I launch the Mingw programs (such as GCC) through the msys2.exe shell program? And so if I want to avoid any MSYS(2) or Cygwin related stuff, and simply use pure Mingw GCC, is it an okay approach to just launch GCC directly under the Mingw directory as described earlier?
Update: I have now checked using Dependency Walker, and running C:\msys64\mingw64\bin\gcc from the MSYS2 shell still creates an .exe with no special dependencies (which is good). So what is this msys-2.0.dll that the MSYS2 docs speak of? And how is using MSYS2 to compile C different than just using Mingw?
You're mostly right about what these projects are. MSYS2 does provide an environment for POSIX programs like Bash, GNU Make, and other utilities, but it also provides a package manager named pacman that you can use to install lots of other things. In fact, you can use pacman to install a mingw-w64 toolchain.
MSYS2 provides two mingw-w64 toolchains actually: you get a choice of an i686 (32-bit) toolchain which makes native Windows binaries that can run on any Windows computer, or an x86_64 (64-bit) toolchain that makes native Windows binaries that only work on 64-bit Windows. You can install both of these at the same time.
You say "I don't need a POSIX layer", but you might find it useful to be able to write Bash scripts or use POSIX programs provided by MSYS2 like GNU Make when building your native Windows software. This is especially useful if you want to someday build your software on Linux or macOS: it's possible to write a simple Makefile or shell script that works on those platforms and also MSYS2.
Yes, it's valid to use the binaries from C:\msys64\mingw64\bin directly if you want to.
Yes, the mingw-w64 toolchain creates native Windows binaries regardless of which shell you happen to run it from.
No. Whether you start MSYS2 via msys2.exe, mingw32.exe, or mingw64.exe, you get a Bash shell with various Linux utilities available like ls, grep, make, and tar. The shell and those utilities use the POSIX emulation provided by msys-2.0.dll. The main difference between those MSYS2 launchers is what gets added to your PATH, so you might want to run echo $PATH and env in each of those environments and compare the results.
I'd strongly recommend using MSYS2 instead of MSYS and mingw.org. Pretend those latter two don't even exist. Being under active development, the newer project is better in every way.
MSYS2's package manager can deliver toolchains for the following target systems:
Standalone Win32 (i686)
Standalone Win64 (x86_64)
MSYS2 i686
MSYS2 x86_64
The former two cases can be invoked from any shell you like. You may need to set up paths if not using the launch script provided by MSYS2. They produce native Windows executables. Using GCC's default switches there will be some dependencies, such as libgcc_s*.dll. Doing a static build with -static will produce an executable with no dependencies other than Windows DLLs.
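As a rough sketch (hello.c and the output name are just examples), from any prompt with the mingw64 bin directory on the PATH: the first command may leave the .exe depending on libgcc_s*.dll and similar runtime DLLs, while the second depends only on Windows system DLLs.
gcc -o hello.exe hello.c
gcc -static -o hello.exe hello.c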
In the latter two cases, the binary will depend on the MSYS2 DLL, and other things, but this provides support for a range of POSIX functionality.
[~ MSYS]$ ls /usr/include
_ansi.h cursesp.h glob.h net strings.h
_newlib_version.h cursesw.h gnumake.h netdb.h symcat.h
_syslist.h cursslk.h grp.h netinet sys
a.out.h cygwin icmp.h newlib.h sysexits.h
acl devctl.h ieeefp.h nl_types.h syslog.h
aio.h diagnostics.h ifaddrs.h panel.h tar.h
alloca.h dirent.h inttypes.h paths.h term.h
alpm.h dis-asm.h io.h plugin-api.h term_entry.h
alpm_list.h dlfcn.h langinfo.h poll.h termcap.h
ansidecl.h elf.h lastlog.h process.h termio.h
ar.h endian.h libfdt.h pthread.h termios.h
argz.h envlock.h libfdt_env.h pty.h tgmath.h
arpa envz.h libgen.h pwd.h threads.h
asm err.h limits.h reent.h tic.h
assert.h errno.h locale.h regdef.h time.h
attr error.h machine regex.h tzfile.h
bfd.h eti.h magic.h resolv.h ucontext.h
bfd_stdint.h etip.h malloc.h sched.h unctrl.h
bfdlink.h fastmath.h mapi.h search.h unistd.h
bits fcntl.h math.h semaphore.h utime.h
byteswap.h fdt.h memory.h setjmp.h utmp.h
complex.h features.h menu.h signal.h utmpx.h
cpio.h fenv.h mntent.h spawn.h w32api
ctf.h FlexLexer.h monetary.h ssp wait.h
ctf-api.h fnmatch.h mqueue.h stdatomic.h wchar.h
ctype.h form.h nc_tparm.h stdint.h wctype.h
curses.h fts.h ncurses stdio.h winpty
cursesapp.h ftw.h ncurses.h stdio_ext.h wordexp.h
cursesf.h gawkapi.h ncurses_dll.h stdlib.h xlocale.h
cursesm.h getopt.h ncursesw string.h
[~ MSYS]$
[~ MSYS]$
[~ MSYS]$ ls /usr/include/sys
_default_fcntl.h acl.h fcntl.h mman.h quota.h signal.h stdio.h termio.h ttychars.h utsname.h
_intsup.h cdefs.h features.h mount.h random.h signalfd.h strace.h termios.h types.h vfs.h
_pthreadtypes.h config.h file.h msg.h reent.h smallprint.h string.h time.h ucontext.h wait.h
_sigset.h custom_file.h iconvnls.h mtio.h resource.h socket.h sysinfo.h timeb.h uio.h xattr.h
_stdint.h cygwin.h ioctl.h param.h sched.h soundcard.h syslimits.h timerfd.h un.h
_timespec.h dir.h ipc.h poll.h select.h stat.h syslog.h times.h unistd.h
_timeval.h dirent.h kd.h procfs.h sem.h statfs.h sysmacros.h timespec.h utime.h
_types.h errno.h lock.h queue.h shm.h statvfs.h sysproto.h tree.h utmp.h
Cygwin is a competing product that also provides POSIX functions and depends on a Cygwin DLL. The MSYS2 target is a fork of Cygwin.
I am writing some code for a Raspberry Pi ARM target on an x86 Ubuntu machine. I am using the gcc-linaro-armhf toolchain. I am able to cross compile and run some independent programs on the Pi. Now, I want to link my code with an external library such as ncurses. How can I achieve this?
Should I just link my program with the existing ncurses lib on host machine and then run on ARM? (I don't think this will work)
Do I need to get source or prebuilt version of lib for arm, put it in my lib path and then compile?
What is the best practice in this kind of situation?
I also want to know how it works for the C stdlib. In my program I used the stdio functions and it worked after cross compiling without doing anything special. I just provided the path to my ARM gcc in the makefile. So, I want to know how it got the correct std headers and libs?
Regarding your general questions:
Why the C library works:
The C library is part of your cross toolchain. That's why the headers are found and the program correctly links and runs. This is also true for some other very basic system libraries like libm and libstdc++ (not in every case, depends on the toolchain configuration).
In general, when dealing with cross-development you need some way to get your desired libraries cross-compiled. Using prebuilt binaries in this case is very rare. That is especially true with ARM hardware, because there are so many different configurations and everything is often stripped down in different ways. That's why binaries are rarely compatible between different devices and Linux configurations.
If you're running Ubuntu on the Raspberry Pi then there is a chance that you may find a suitable ncurses library on the internet or even in some Ubuntu apt repository. The typical way, however, will be to cross compile the library with the specific toolchain you have got.
In cases when a lot and complex libraries need to be cross-compiled there are solutions that make life a bit easier like buildroot or ptxdist. These programs build complete Linux kernels and root file systems for embedded devices.
In your case, however, as long as you only want ncurses, you can compile the source code yourself. You just need to download the sources and run configure while specifying your toolchain using the --host option. The --prefix option will choose the installation directory. After running make and make install, assuming everything went fine, you will have a set of headers and the ARM-compiled library for your application to link against.
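A minimal sketch of that, assuming the cross compiler is called arm-linux-gnueabihf-gcc as in the other answers here (the ncurses version and the install prefix are just illustrative):
tar xf ncurses-6.1.tar.gz
cd ncurses-6.1
./configure --host=arm-linux-gnueabihf --prefix=$HOME/rpi/ncurses
make
make install
# then point your own build at the cross-compiled library:
arm-linux-gnueabihf-gcc app.c -I$HOME/rpi/ncurses/include -L$HOME/rpi/ncurses/lib -lncurses -o app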
Regarding cross compilation you will surely find loads of information on the internet and maybe ncurses has got some pointers in its shipped documentation, too.
Regarding the query about how the C library works with cross-tools:
When the cross toolchain itself is compiled and built, a sysroot is provided at configuration time,
like --with-sysroot=${CLFS_CROSS_TOOLS}
--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains (a subset of) the root filesystem of the target operating system. Target system headers, libraries and run-time object files will be searched for in there. More specifically, this acts as if --sysroot=dir was added to the default options of the built compiler. The specified directory is not copied into the install tree, unlike the options --with-headers and --with-libs that this option obsoletes. The default value, in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root. If the specified directory is a subdirectory of ${exec_prefix}, then it will be found relative to the GCC binaries if the installation tree is moved.
So instead of looking in /lib and /usr/include, it will look in the toolchain's sysroot for libc and the include files when it is compiling.
You can check this with
arm-linux-gnueabihf-gcc -print-sysroot
which shows where it looks for libc. Also,
arm-linux-gnueabihf-gcc -print-search-dirs
gives you a clearer picture.
Clearly, you will need an ncurses compiled for the ARM that you are targeting - the one on the host will do you absolutely no good at all [unless your host has an ARM processor - but you said x86, so clearly not the case].
There MAY be some prebuilt libraries available, but I suspect it's more work to find one (that works and matches your specific conditions) than to build the library yourself from sources - it shouldn't be that hard, and I expect ncurses doesn't take that many minutes to build.
As to your first question, if you intend to use the ncurses library with your cross-compiler toolchain, you'll need to have its ARM-built binaries prepared.
Your second question is how it works with the std libs: well, it's really NOT the system libc/libm that the toolchain uses to compile and link your program. You can see this from the --print-file-name= option of your compiler:
arm-none-linux-gnuabi-gcc --print-file-name=libm.a
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libm.a
arm-none-linux-gnuabi-gcc --print-file-name=libpthread.so
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libpthread.so
I think your Raspberry toolchain might be the same. You can try this out.
Vinay's answer is pretty solid. Just a correction: when compiling the ncurses library for the Raspberry Pi, the option to set your rootfs is --sysroot=<dir> and not --with-sysroot (see the sketch after the version output below). That's what I found when I was using the following compiler:
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) 4.8.3 20140303 (prerelease)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
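For illustration, passing the rootfs that way looks roughly like this (the rootfs path and file names are hypothetical):
arm-linux-gnueabihf-gcc --sysroot=$HOME/rpi/rootfs app.c -lncurses -o app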
I have been working on some C code on a Windows machine and now I am in the process of transferring it to a Linux computer where I do not have full privileges. In my code I link to several static libraries.
Is it correct that these libraries need to be re-made for a Linux computer?
The library in question is GSL-1.13 scientific library
Side question, does anyone have a pre-compiled version of the above for Linux?
I have tried using automake to compile the source on the Linux machine, but no makefile seems to be created and no error is output.
Thanks
Yes, you do need to compile any library again when you switch from Windows to GNU/Linux.
As for how to do that, you don't need automake to build GSL. You should read the file INSTALL that comes inside the tarball (the file gsl-1.16.tar.gz) very carefully. In a nutshell, you run the commands
$ ./configure
$ make
inside the directory that you unpacked from the tarball.
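Since you don't have full privileges on the machine, one common approach is to install into your home directory instead of system-wide; a sketch (the prefix path is just an example):
$ ./configure --prefix=$HOME/gsl
$ make
$ make install
Afterwards you compile and link against it with something like gcc myprog.c -I$HOME/gsl/include -L$HOME/gsl/lib -lgsl -lgslcblas -lm.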
I want to automate expect for passwd but I don't have permission to install.
But could I copy and paste the expect source code, compile the .c files using cc/gcc,
and generate the expect executable?
or
Can I copy the expect executable from linux and just use it anywhere else like on solaris, aix etc?
This is the expect in /usr/bin/expect in my linux box:
[root#test]# file /usr/bin/expect
/usr/bin/expect: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped
The prebuilt executables for Solaris and Linux at kbskit include Expect (along with lots of other Tcl extensions) in the *bi "Batteries Included" versions. Each is just one big file; no unpacking or installing is needed, apart from e.g. chmod a+x SunOS_kbsvq8.5-bi to make the file executable. You use this executable to run your script, and at the start of the script you need to add package require Expect to set up the Expect commands.
If you have a C compiler, you can build expect (but you have to build Tcl first).
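A rough sketch of such a build into your home directory, with no root needed (the versions, paths, and the --with-tcl flag here are assumptions; check each package's README/INSTALL for the exact options):
# build Tcl first
cd tcl8.6.8/unix
./configure --prefix=$HOME/local
make && make install
# then build Expect against that Tcl
cd ../../expect5.45
./configure --prefix=$HOME/local --with-tcl=$HOME/local/lib
make && make install
# run your script with the private copy
$HOME/local/bin/expect myscript.exp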
The executable from Linux can't be used on almost any other system (Solaris, AIX, etc.); there is a chance it might work on FreeBSD.
If you are on Solaris 11 then installing expect is as easy as:
pkg install //solaris/shell/expect
The package will come from the official Oracle Solaris repository so nothing to worry about. There's no need to get sources, build, etc.
I understand that you may not have permission to do this but since this comes from official source (Oracle) I'm pretty sure your sysadmin will do it in a sec.
I am looking for a program to create a C library (i.e. to compile and link the files into one file) based on the .h files and .c files that have the following structure (it is a FEC library: www.openfec.org ). I am using Ubuntu. I want to do this without manually specifying each file. I tried WAF, but got the error 'ERROR:root: error: No module named cflags'.
Here is (part of the) the structure:
fec
lib_advanced
ldpc_from_file
of_code_profile.h
of_ldpc_ff.h
...
lib_common
linear_binary_codes_utils
binary_matrix
it_decoding
ml_decoding
statistics
of_cb.h
of_debug.h
of_mem.c
of_mem.h
of_openfec_api.c
of_openfec_api.h
of_openfec_profile.h
of_types.h
Thanks!!
You have to use gcc to compile the C files into object files.
Then you have to use ar r and then ranlib to pack the objects into one .a library file.
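A minimal sketch using two of the files from your listing as examples (add the rest of the .c files the same way, plus any -I include paths the project needs):
gcc -c of_mem.c of_openfec_api.c
ar r libopenfec.a of_mem.o of_openfec_api.o
ranlib libopenfec.a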
C libraries on *nix systems (including all Linux distros) are created with standard tools, these tools being a) a C compiler, b) a linker, c) the ar utility, and d) the ranlib utility.
The C compiler is, 99.9% of the time, the GNU C compiler (gcc), while the linker is ld, and the utilities ar and ranlib are part of the binutils package on GNU systems (99.9% of Linux systems).
ar and ranlib are used to create static libraries, putting already compiled object files (*.o files) into an archive file libsomething.a with ar and indexing the archive with ranlib.
The linker can be called through the gcc compiler to create dynamic libraries with position-independent code; again, the already compiled object files are combined into a special file, which has the .so extension for shared object.
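For illustration, the shared-library variant looks roughly like this, using the same example files from the question's listing (illustrative only):
gcc -fPIC -c of_mem.c of_openfec_api.c
gcc -shared -o libopenfec.so of_mem.o of_openfec_api.o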
Static libraries are used for speed and self-containment; they produce big executables which contain all their dependencies inside the final executable. If a single one of many libraries changes, you'll have to recompile everything to pick up the update.
Dynamic libraries are compiled and linked separately from the executables; they can be used simultaneously by multiple executables. If one library is updated, you just need to recompile that single library and not every executable which depends on it.
The use of these tools is universal and standard procedure; they can vary in a few details from *nix system to *nix system, but on Linux you essentially always use the GCC and binutils packages. Extra build utilities in the form of make, cmake, autotools, etc. exist to help with the process, but the basic tools are always used.
Generally, at the most basic level, you write a Makefile which is interpreted by the make utility. Depending on your targets it can make one or both kinds of libraries, install the library and executables, uninstall them, clean up, etc.
For more information :
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
http://www.dwheeler.com/program-library/Program-Library-HOWTO/