Is there a standard way to tell brew (homebrew package manager for Mac OS) to build combined 32/64-bit binaries? - osx-snow-leopard

Is there a standard way to tell brew to compile libraries as "fat" combined 32/64-bit libraries? I'd like to build libxml2 as a combined 32/64-bit library; currently I get:
/usr/local/lib/libxml2.dylib: Mach-O 64-bit dynamically linked shared library x86_64
I can do it if I build the library by hand, but I'm wondering if there's some convention for brew and brew formulae to tell brew to do this.

There's no standard way. Some formulae have a --universal option for doing this (but not all of them).
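As a sketch of how the flag was used where supported (note that --universal was only honored by some formulae and has since been removed from Homebrew core entirely, so this is historical):

```shell
# Historical sketch: pass --universal at install time (only some formulae
# supported it, and modern Homebrew has removed the option).
brew install libxml2 --universal

# Verify that the resulting dylib is a fat (32/64-bit) binary:
file /usr/local/lib/libxml2.dylib
lipo -info /usr/local/lib/libxml2.dylib
```

For formulae without the option, the fallback was editing the formula to add universal CFLAGS, or building by hand as the question mentions.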


How do I install libuuid with Homebrew on MacOS 11.1?

I'm aware there is a MacPorts libuuid package; however, AFAIK it's not safe to use Homebrew and MacPorts on the same machine.
This answer seems outdated. I say that because even after brew install ossp-uuid, make still fails, despite seeming to find uuid.h.
What is the current Homebrew package to install libuuid on macOS?
Alternatively, could one build libuuid from source and if yes, how?
I was made aware that libuuid is already part of macOS.
It was therefore enough to just add an OS specific include.
#if __APPLE__
#include <uuid/uuid.h>
#else
#include <uuid.h>
#endif
I was porting an app from Linux to macOS Monterey (this could apply to 11.1 (Big Sur) as well).
The link was failing on libuuid, even though the code compiled fine, as on Linux.
I finally got my build to link after brew installing ossp-uuid with Homebrew 3.6.4, under Monterey (12.5.1).
I first heard of ossp-uuid in this thread (thanks blkpingu).
I had tried several things before; everything compiled but did not link. brew install ossp-uuid fixed that (Homebrew 3.6.4), and brew set up the requisite links so the library is found under /usr/local/lib.
I left the source code as #include <uuid/uuid.h>, in this particular case.
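Putting the pieces together, building a file that uses the conditional include above would look roughly like this (main.c is a hypothetical file containing the include plus a call such as uuid_generate; the -luuid flag is only needed on Linux, since macOS ships the functions in libSystem):

```shell
# macOS: the uuid functions live in libSystem, no extra linker flag needed
cc main.c -o main

# Linux: link against libuuid (from util-linux / the uuid-dev package)
cc main.c -o main -luuid
```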

Algorand's Verifiable Random Function (VRF) implementation

I have been trying for a while to compile the VRF implementation that Algorand open-sourced more than a year ago (available here). There is little to no documentation, so I haven't been able to do it. I have tried on both macOS and Linux without much luck. It seems like the installation scripts in their fork of libsodium just skip compiling the VRF files. With so much code, it is difficult to pin down the error. Has anybody done this?
The 1.0.16 release doesn't seem to include the VRF files at all. One change mentions crypto_vrf.c, but that is not in the tarball.
Cloning the repository does include crypto_vrf.c, and the code compiled.
Use:
git clone git@github.com:algorand/libsodium.git
cd libsodium
sh autogen.sh
./configure
make
This recipe appeared to work fine on a MacBook Pro running macOS 10.14.6 Mojave, both when using the Xcode clang compiler and when using my home-built GCC 9.3.0 (gcc). I would expect the same recipe to work on Linux too.
The recipe given does assume you have sufficiently modern versions of the 'AutoTools' — autoconf, automake, libtool (and m4). They don't have to be all that modern. I have autoconf v2.69 (copyright date: 2012) and automake v1.15 (copyright date: 2015) — and used /usr/bin/m4 and the libtool included in the package. I compiled autoconf and automake so that they're installed in /opt/gnu/bin (though you could probably get pre-built sets using Brew or one of the other systems for obtaining open source packages for macOS).
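To check that the VRF code actually made it into the build, one could (hypothetically) inspect the built static library for vrf symbols; the path below is the usual libtool output location inside the libsodium checkout and may differ in your build:

```shell
# After running make in the libsodium checkout, look for VRF symbols
# (adjust the path if your build places the library elsewhere):
nm src/libsodium/.libs/libsodium.a | grep -i vrf
```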

Installing LAPACK and BLAS Libraries for C on Mac OS

I want instructions/websites from which I can download LAPACK and BLAS libraries for use in my C programs. I also want to know how I can link these with the gcc compiler from the terminal.
You can use Homebrew to take care of this for you. Just install Homebrew and then:
brew install openblas
brew install lapack
But you don't even need that: macOS already ships with BLAS and LAPACK implementations in its vecLib framework. So if your software is vecLib-aware, or you pass it the right compiler options, you don't even need to install a separate BLAS and LAPACK.
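As a sketch, linking a C program (myprog.c is a placeholder name) against each option might look like this; the Homebrew keg paths are typical Intel-Mac defaults and may differ on your machine (e.g. /opt/homebrew on Apple silicon):

```shell
# Against Homebrew's OpenBLAS (keg-only, so paths must be given explicitly):
gcc myprog.c -I/usr/local/opt/openblas/include \
             -L/usr/local/opt/openblas/lib -lopenblas -o myprog

# Against Apple's built-in implementation (vecLib is part of Accelerate):
gcc myprog.c -framework Accelerate -o myprog
```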

How to work with external libraries when cross compiling?

I am writing some code for a Raspberry Pi ARM target on an x86 Ubuntu machine. I am using the gcc-linaro-armhf toolchain. I am able to cross compile and run some independent programs on the Pi. Now, I want to link my code with an external library such as ncurses. How can I achieve this?
Should I just link my program with the existing ncurses lib on host machine and then run on ARM? (I don't think this will work)
Do I need to get source or prebuilt version of lib for arm, put it in my lib path and then compile?
What is the best practice in this kind of situation?
I also want to know how it works for the C stdlib. In my program I used the stdio functions and they worked after cross compiling without my doing anything special; I just provided the path to my ARM gcc in the makefile. So I want to know how it got the correct std headers and libs.
Regarding your general questions:
Why the C library works:
The C library is part of your cross toolchain. That's why the headers are found and the program correctly links and runs. This is also true for some other very basic system libraries like libm and libstdc++ (not in every case, depends on the toolchain configuration).
In general, when dealing with cross-development you need some way to get your desired libraries cross-compiled. Using prebuilt binaries is very rare in this case, especially with ARM hardware, because there are so many different configurations and everything is often stripped down in different ways. That's why binaries are rarely compatible between different devices and Linux configurations.
If you're running Ubuntu on the Raspberry Pi then there is a chance that you may find a suitable ncurses library on the internet or even in some Ubuntu apt repository. The typical way, however, will be to cross compile the library with the specific toolchain you have got.
In cases when a lot and complex libraries need to be cross-compiled there are solutions that make life a bit easier like buildroot or ptxdist. These programs build complete Linux kernels and root file systems for embedded devices.
In your case, however, as long as you only want ncurses, you can compile the source code yourself. You just need to download the sources and run configure, specifying your toolchain with the --host option; the --prefix option chooses the installation directory. After running make and make install, assuming everything went fine, you will have a set of headers and the ARM-compiled library for your application to link against.
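As a sketch of those steps (the ncurses version and install prefix here are placeholders; adjust the --host triplet to match your toolchain's prefix):

```shell
# Download and unpack the ncurses sources (version is an example)
tar xzf ncurses-6.4.tar.gz
cd ncurses-6.4

# --host selects the cross toolchain, --prefix the install location
./configure --host=arm-linux-gnueabihf --prefix="$HOME/arm/ncurses"
make
make install

# Then point your own build at the result:
# arm-linux-gnueabihf-gcc app.c -I"$HOME/arm/ncurses/include" \
#                               -L"$HOME/arm/ncurses/lib" -lncurses
```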
Regarding cross compilation you will surely find loads of information on the internet and maybe ncurses has got some pointers in its shipped documentation, too.
Regarding the query about how the C library works with cross-tools:
When the cross toolchain is compiled and built, a sysroot is provided at configure time,
like --with-sysroot=${CLFS_CROSS_TOOLS}
--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains (a subset of) the root filesystem of the target operating system. Target system headers, libraries and run-time object files will be searched for in there. More specifically, this acts as if --sysroot=dir was added to the default options of the built compiler. The specified directory is not copied into the install tree, unlike the options --with-headers and --with-libs that this option obsoletes. The default value, in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root. If the specified directory is a subdirectory of ${exec_prefix}, then it will be found relative to the GCC binaries if the installation tree is moved.
So instead of looking in /lib and /usr/include, the compiler will look in the toolchain's sysroot for libc and the include files when it is compiling.
You can check with:
arm-linux-gnueabihf-gcc -print-sysroot
which shows where the toolchain looks for libc. Also,
arm-linux-gnueabihf-gcc -print-search-dirs
gives you a clearer picture.
Clearly, you will need an ncurses compiled for the ARM that you are targeting - the one on the host will do you absolutely no good at all [unless your host has an ARM processor - but you said x86, so clearly not the case].
There MAY be some prebuilt libraries available, but I suspect it's more work to find one (that works and matches your specific conditions) than to build the library yourself from sources - it shouldn't be that hard, and I expect ncurses doesn't take that many minutes to build.
As to your first question: if you intend to use the ncurses library with your cross-compiler toolchain, you'll need its ARM-built binaries prepared.
Your second question is how it works with the std libs: well, it's really NOT the system libc/libm that the toolchain uses to compile/link your program. You can see this with the --print-file-name= option of your compiler:
arm-none-linux-gnuabi-gcc --print-file-name=libm.a
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libm.a
arm-none-linux-gnuabi-gcc --print-file-name=libpthread.so
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libpthread.so
I think your Raspberry Pi toolchain is likely the same; you can try this out.
Vinay's answer is pretty solid. Just one correction: when compiling the ncurses library for the Raspberry Pi, the option to set your rootfs is --sysroot=<dir>, not --with-sysroot (that one is a GCC configure-time option). That's what I found when using the following compiler:
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) 4.8.3 20140303 (prerelease)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Location of C standard library

In the gcc manual it is stated that "The C standard library itself
is stored in '/usr/lib/libc.a'". I have gcc installed, but could not find libc.a at the said location. I'm curious to know where it is located.
I also find many .so files in /usr/lib. What are those?
If you are looking for libc.a:
$ gcc --print-file-name=libc.a
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libc.a
A few things:
gcc and glibc are two different things. gcc is the compiler; glibc is the runtime library. Pretty much everything needs glibc to run.
.a files are static libraries; .so means shared object and is the Linux equivalent of a DLL.
Most things DON'T link against libc.a; they link against libc.so.
Hope that clears it up for you. As for the location, it's almost certainly going to be in /usr/lib/libc.a and/or /usr/lib/libc.so. Like I said, the .so one is the more common.
If you are on an RPM-based Linux (Red Hat/CentOS/Fedora/SUSE), then you can get the location of the installed glibc with
rpm -ql glibc and rpm -ql glibc-devel.
locate libc.a would get you the location. And to see which package it comes from:
rpm -qf /usr/lib/libc.a
Here is what rpm -qi has to tell about these packages
glibc-devel:
The glibc-devel package contains the object files necessary
for developing programs which use the standard C libraries (which are
used by nearly all programs). If you are developing programs which
will use the standard C libraries, your system needs to have these
standard object files available in order to create the
executables.
Install glibc-devel if you are going to develop programs which will
use the standard C libraries.
glibc:
The glibc package contains standard libraries which are used by
multiple programs on the system. In order to save disk space and
memory, as well as to make upgrading easier, common system code is
kept in one place and shared between programs. This particular package
contains the most important sets of shared libraries: the standard C
library and the standard math library. Without these two libraries, a
Linux system will not function.
You need to install the package for static libraries separately:
glibc-static.i686
On CentOS 5.8:
$ ls -l /usr/lib/libc.a
-rw-r--r-- 1 root root 2442786 Apr 8 2010 /usr/lib/libc.a
$ rpm -qf /usr/lib/libc.a
glibc-devel-2.3.4-2.43.el4_8.3
You also have to have the glibc-devel package installed under Red Hat distributions.
