How do I package up Go code as an ARM RPM?

I've got a Go project that I'm building on OS X. I've compiled it for ARM Linux by using Docker to run the Linux compiler, with GOARCH=arm64 to generate ARM code. Now I want to package it up as an RPM.
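For reference, the cross-compile step is something along these lines (the Go image tag and output name are illustrative):
docker run --rm -v "$PWD":/src -w /src -e GOOS=linux -e GOARCH=arm64 golang:1.21 go build -o project .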
The problem is that I can't figure out how to get rpmbuild to give me an ARM package. I've tried setting --target arm, --target arm64, --target arm-test-linux, and --target arm64-test-linux.
All produce the same result:
file project.arm.rpm => project.arm.rpm: RPM v3.0 bin noarch
How do I get rpmbuild to recognize the target architecture instead of producing a noarch RPM?

Use the BuildArch option in your spec file:
BuildArch: aarch64
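For context, a minimal spec for packaging a pre-built binary might look like this (name, version, and license are placeholders):
Name:           project
Version:        1.0
Release:        1%{?dist}
Summary:        Pre-built Go binary packaged for aarch64
License:        MIT
BuildArch:      aarch64
Source0:        project
%description
Pre-built Go binary for aarch64.
%install
install -D -m 0755 %{SOURCE0} %{buildroot}%{_bindir}/project
%files
%{_bindir}/project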

rpmbuild -bs project.spec
This will create project.src.rpm. And then:
sudo dnf install qemu-user-static
mock -r fedora-33-aarch64 project.src.rpm
which will use this feature: https://github.com/rpm-software-management/mock/wiki/Feature-forcearch
With recent versions of Mock you do not even need to specify --forcearch, as it is detected automatically.
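On older versions you can pass it explicitly, e.g.:
mock -r fedora-33-aarch64 --forcearch aarch64 project.src.rpm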
If you omit BuildArch, it will be set automatically from the chroot you pass to Mock.
BTW, here is general guidance on packaging Go as an RPM: https://docs.fedoraproject.org/en-US/packaging-guidelines/Golang/

Related

Installing Kaldi on macOS Catalina -- error with zlib

I am trying to work with the DiscVoice library, which requires the Kaldi library. In order to install Kaldi, I needed to run extras/check_dependencies.sh to check the program's dependencies, and I am currently getting:
extras/check_dependencies.sh: zlib is not installed.
extras/check_dependencies.sh: The following prerequisites are missing; install them first:
zlib1g-dev
I have been trying to install the zlib library without success. I have tried brew install zlib, which runs successfully, but when I rerun extras/check_dependencies.sh the output does not change to reflect that.
I am working on macOS Catalina.
zlib is already there in macOS; you don't need to install it.
You might want to examine extras/check_dependencies.sh to see how it is looking for zlib. macOS does not store its header files or library files in the usual locations.
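For example, a quick way to confirm that zlib is actually usable (assuming the Xcode command-line tools are installed):
xcrun --show-sdk-path    # headers such as zlib.h live under <sdk-path>/usr/include
echo 'int main(void){return 0;}' | cc -x c - -lz -o /dev/null && echo 'zlib links OK'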

How do I install crystal-lang on Raspberry Pi?

When I try to add it to my sources as per the Debian install instructions I get the error below. I'm guessing this means that there are no ARM packages for it.
Failed to fetch https://dist.crystal-lang.org/apt/dists/crystal/InRelease Unable to find expected entry 'main/binary-armhf/Packages' in Release file (Wrong sources.list entry or malformed file)
I'm guessing I probably need to install it from source. How would I go about doing that with an ARM CPU? When I check it out and run make I get the error:
You need to have a crystal executable in your path!
Makefile:113: recipe for target '.build/crystal' failed
make: *** [.build/crystal] Error 1
Any suggestions would be greatly appreciated.
EDIT: There's now a semi-official repository for crystal on raspbian, check it out here: http://public.portalier.com/raspbian
Crystal doesn't build Debian packages for ARM, and you're correct that you'll need to build from source.
However, the Crystal compiler is written in Crystal. This presents the obvious problem of how to get a compiler to build the compiler. The answer is cross-compilation: building an ARM binary on an x86 desktop computer and copying it across.
Here's a quick step-by-step based on my memory of last time I cross-compiled:
Install Crystal on an x86 desktop PC, and check it works.
Install all required libraries on the desktop and the Raspberry Pi. You'll need the same LLVM version on both. This is probably the hardest and longest step. You can install LLVM 3.9 from Debian testing for ARM; see this Stack Overflow post for how to install only LLVM from Debian testing.
Check out the sources from git on both computers, and run make deps.
Cross-compile the compiler by running this command in the root of the git repository:
./bin/crystal build src/compiler/crystal.cr --cross-compile --target arm-unknown-linux-gnueabihf --release -s -D without_openssl -D without_zlib
This command will create a crystal.o file in your current directory, and also output a linker command (cc crystal.o -o crystal ...).
Copy crystal.o to the Raspberry Pi, and run the linker command. Be sure to edit the absolute path to llvm_ext.o so that it points to the Crystal checkout on your Raspberry Pi, not the checkout on your desktop. Also make sure that all references to llvm-config in the command are for the correct LLVM version, for example by changing /usr/local/bin/llvm-config to llvm-config-3.9 on Raspbian.
Run the crystal executable in your current directory (./crystal -v) and make sure it works.
Ensure the CRYSTAL_PATH environment variable is set to lib:/path/to/crystal/source/checkout/src so that the compiler can find the standard library when compiling applications.
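For example (the checkout path is illustrative):
export CRYSTAL_PATH="lib:$HOME/crystal/src"    # point at the src dir of your checkout
./crystal -v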

Cross build third-party library locations on Linux

I've been cross-compiling my unit tests to ensure they pass on all the platforms of interest, e.g. x86-linux, win32, win64, arm-linux.
The unit tests require the CUnit library, so I've had to cross-compile that as well for each platform.
CUnit comes with its own autoconf setup, so you can easily cross-build it by specifying --host for configure.
The question I have is: where is the 'correct' place to install the CUnit libs for the various platforms? i.e. what should I set --prefix to for configure?
My initial guess was:
/usr/local/<platform>/lib/Cunit
i.e. setting --prefix /usr/local/<platform>
e.g. --prefix /usr/local/arm-linux-gnueabihf
which on sudo make install gives you:
/usr/local/arm-linux-gnueabihf/doc/CUnit
/usr/local/arm-linux-gnueabihf/include/CUnit
/usr/local/arm-linux-gnueabihf/lib
/usr/local/arm-linux-gnueabihf/share/CUnit
Obviously, if I don't specify a prefix for configure, each platform build overwrites the previous one, which is no good.
To then successfully link to these platform-specific libs I need to specify the relevant lib dir for each target in its own LDFLAGS in the Makefile.
Is this the right approach? Have I got the dir structure/location right for this sort of cross-build? I assume there must be a de facto approach, but I'm not sure what it is.
Possibly configure is supposed to handle all this for me? Maybe I just have to set --target correctly, and perhaps --enable-multilib? All with --prefix=/usr/local?
Some of the error messages I get suggest /usr/lib/gcc-cross might be involved?
From reading more about cross-compilation and the GNU configure and build system, it seems that I should just be setting the --target option for the configure step.
But how do you know what the target names are? Are they some fragment of the cross-compiler names?
The 3 cross compilers I am using are:
arm-linux-gnueabihf-gcc-4.8
i686-w64-mingw32-gcc
x86_64-w64-mingw32-gcc
allowing me to cross-compile for ARM, win32 and win64
My host is 32-bit Ubuntu, which I think would be --host i386-linux, but it seems that configure should get this right by default.
This is the procedure I finally figured out and got to work:
For each of my three cross-build targets (ARM, win32, win64), my calls to configure looked like:
./configure --host=arm-linux-gnueabihf --build=i686-pc-linux-gnu --prefix=/usr/local/arm-linux-gnueabihf
./configure --host=i686-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/i686-w64-mingw32
./configure --host=x86_64-w64-mingw32 --build=i686-pc-linux-gnu --prefix=/usr/local/x86_64-w64-mingw32
Each of these was followed by make and sudo make install.
Prior to calling configure for the ARM cross-build I had to do:
ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.8 /usr/bin/arm-linux-gnueabihf-gcc
This was because the compiler name had -4.8 tagged on the end, so configure could not correctly 'guess' the name of the compiler.
This issue did not apply to either the win32 or win64 MinGW compilers.
Note an additional gotcha: when subsequently trying to link to these cross-compiled CUnit libs, none of the cross-compilers seemed to look in /usr/local/include by default, so I had to manually add:
-I/usr/local/include
for each object file built, e.g. I added /usr/local/include to INCLUDE_DIRS in my Makefile.
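A sketch of what the per-target settings might look like in a Makefile (variable names are illustrative):
# Hypothetical per-target flags for the ARM build
CROSS   := arm-linux-gnueabihf
PREFIX  := /usr/local/$(CROSS)
CC      := $(CROSS)-gcc
CFLAGS  += -I/usr/local/include -I$(PREFIX)/include
LDFLAGS += -L$(PREFIX)/lib
LDLIBS  += -lcunit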
all this finally seems to have given me correctly cross built CUnit libs and I have successfully linked to them to produce cross built unit test binaries for each of the target platforms.
Not at all easy, and I would venture to call the configure option settings 'counter-intuitive'. As ever, it is worth taking the time to read the relevant docs. This snippet was pertinent:
There are three system names that the build knows about: the machine
you are building on (build), the machine that you are building for
(host), and the machine that GCC will produce code for (target). When
you configure GCC, you specify these with --build=, --host=, and
--target=.
Specifying the host without specifying the build should be avoided, as
configure may (and once did) assume that the host you specify is also
the build, which may not be true.
If build, host, and target are all the same, this is called a native.
If build and host are the same but target is different, this is called
a cross. If build, host, and target are all different this is called a
canadian (for obscure reasons dealing with Canada's political party
and the background of the person working on the build at that time).
If host and target are the same, but build is different, you are using
a cross-compiler to build a native for a different system. Some people
call this a host-x-host, crossed native, or cross-built native.
and also:
When people configure a project with ./configure, one often meets these three confusing options, which are mostly relevant to cross-compilation:
--host: the system on which the generated program will run.
--build: the system on which the program is being built.
--target: only used when building a cross-compiling toolchain; the system for which the toolchain will generate code.
An example with tslib (a touchscreen access library):
'./configure --host=arm-linux --build=i686-pc-linux-gnu': the dynamic library is built on an x86 Linux computer but will be used on an embedded ARM Linux system.

How to install without make?

I am working with a BeagleBoard and I have already compiled the ZMQ library with the arm-linux-gnueabi GCC compiler. The problem is I don't know where to copy all the files, because on the target I don't have the make command, nor am I able to install it.
If I run:
uname -mrs
I get:
Linux 3.2.8-mg01.3 armv7l
Thanks in advance!
Build & install required library on your build machine:
./configure --prefix=/custom/location
make && make install
And then just copy /custom/location from your build machine to the target machine's root /.
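If you'd rather not install into your build machine's own filesystem, a DESTDIR staging sketch (prefix and hostname are illustrative):
./configure --host=arm-linux-gnueabi --prefix=/opt/zmq
make && make install DESTDIR="$PWD/stage"    # files land in ./stage/opt/zmq
rsync -a stage/ root@beagleboard:/           # replicate the tree onto the target's root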
You need to check that nothing is being overwritten (or at least that no dependencies get lost).
Another, more correct way would be to create an installable package (i.e. a deb or rpm), but that is a different question.

How do I cross-compile libsndfile for Arm/Raspberry Pi

I'm trying to get a working cross-compiler running under Linux (Debian squeeze amd64), but I can't seem to link my files with the installed libsndfile. I'm assuming I need to cross-compile the source to target the Raspberry Pi and link to that version, but I can't seem to find straightforward instructions for the ./configure and make stages to compile for the target.
Note: I followed these steps: How do I build a GCC 4.7 toolchain for cross-compiling? to get the cross compiler built and using Eclipse.
OK, what you want to do is something which you probably cannot do within Eclipse.
Instead you need a terminal (e.g. xterm or gnome-terminal). Then you need to run the configure script with something along the lines of:
./configure --prefix=$HOME/Arm --build=i386-linux --host=arm-unknown-linux-gnueabi
Watch the configure output to make sure that the configure script picks up the correct compiler and then do:
make && make install
When you then want to build something else that links against the Arm binaries, make sure they get the headers from $HOME/Arm/include and link against the library in $HOME/Arm/lib.
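Compiling your own program might then look something like this (the compiler name depends on the toolchain you built; the source and output names are illustrative):
arm-unknown-linux-gnueabi-gcc main.c -I$HOME/Arm/include -L$HOME/Arm/lib -lsndfile -o myapp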
You should be able to find lots of documentation on the net about cross-compiling software that uses autoconf-generated configure scripts.
