I'm currently working on an application that I would like to publish to many distributions. So far, I've done all my testing on one distribution at a time (compile and run on the same distro). But when I take the AppImage produced on my main computer (Arch Linux) and try to run it in a VM (Ubuntu 20.04), it gives me the error below:
    gabriel@gabriel-VirtualBox:~/Downloads$ ./Neptune.Installer-x86_64.AppImage
    ./Neptune.Installer-x86_64.AppImage: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./Neptune.Installer-x86_64.AppImage)
What possible solutions are there to this? I've considered statically linking the library, but I'm unsure if that might cause licensing issues, as my program is not open source. Apart from that, I might consider simply compiling my program on a very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros (for example, will my program still work on an old version of Fedora?)
This might be a complicated question but I just want to know what the best way to solve this issue is. Change libraries? Statically link? Compile on old distributions? Let me know what you think.
"I've considered statically linking the library, but I'm unsure if that might cause licensing issues"
Yes.
"very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros"
It doesn't (consider Alpine Linux, which uses musl instead of glibc). If you compile software, you have to run it against the set of libraries you compiled it with. Even if you compile on "very old distributions", there may be incompatible changes.
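To see exactly which glibc versions a binary demands, something like this works on any ELF executable (the binary name is just an example):

    # List every versioned glibc symbol the binary requires, highest last.
    objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu
    # The highest version printed must exist in the target distro's libc.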
"publish to many distributions"
Choose the list of distributions and versions of those distributions you want to support, and tell users that you will support only those versions. (https://partner.steamgames.com/doc/store/application/platforms -> "Steam only officially supports Ubuntu 12.04 LTS or newer.")
Compile against every distribution+version combination separately, and distribute your software separately for each one. For users' convenience, create and share package repositories for each distribution's package manager. On https://www.zabbix.com/download there are only so many combinations to choose from. Look into CI/CD with Docker-based build environments, as sketched below; I like GitLab.
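A rough sketch of what such a per-distro build loop can look like (distro tags and the make targets are placeholders, assuming a plain Makefile project):

    # Build the same source tree inside a container for each supported distro.
    for distro in ubuntu:20.04 ubuntu:22.04 debian:12; do
      docker run --rm -v "$PWD":/src -w /src "$distro" \
        sh -c 'apt-get update && apt-get install -y build-essential && make clean all'
    done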
Or alternatively, distribute your application with all its dependent shared libraries. Bundle it all together with the operating system and distribute it as a Docker image or a QEMU/VirtualBox virtual machine image. Or distribute just the shared library files along with a wrapper that sets LD_PRELOAD (or LD_LIBRARY_PATH), just like Steam does. Install Steam on your system and see what happens in ~/.steam/steam/ubuntu12_64.
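A minimal launcher along those lines (paths and the binary name are made up; LD_LIBRARY_PATH shown here, as it's the gentler variant of the same trick):

    #!/bin/sh
    # Run the real binary against the bundled copies of its shared libraries.
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib:$LD_LIBRARY_PATH"
    exec "$HERE/bin/neptune-installer" "$@"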
And hire a lawyer to sort out the licensing issues.
I have successfully managed to cross-compile the C Azure IoT SDK for a target device running embedded Linux. The instructions are here: https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/SDK_cross_compile_example.md
The next step is to get a basic application using the SDK running on the target device.
How would one go about doing this? Where are the generated libraries and other artifacts to copy to the sysroot of the target device?
There seems to be support only for Raspberry Pi and for generating a new firmware image.
I would recommend using -DCMAKE_INSTALL_PREFIX=[output path] when you generate your makefiles. Once you have run cmake and make, you can run make install, which will copy the generated libraries to the location you chose. You do NOT want to install them into your host's library search path, since (presumably) they are built for an incompatible architecture. Having done that, the lib directory under your install prefix will contain the libraries you need to build your application. These are static libraries (unless you chose otherwise), so they only need to be linked into your application; they do not need to be on the device. Obviously you will also need to cross-compile your application.
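Roughly, the build then looks like this (the toolchain file name and install prefix are examples, not part of the SDK):

    # From a build directory inside the SDK tree:
    cmake -DCMAKE_TOOLCHAIN_FILE=../my-arm-toolchain.cmake \
          -DCMAKE_INSTALL_PREFIX=$HOME/azure-sdk-arm ..
    make
    make install   # copies libs to $HOME/azure-sdk-arm/lib, headers to .../include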
There are a couple of things you need to look out for, though. Your device will need the same versions of OpenSSL and curl that you used when you built the SDK. These are dynamic libraries, so your application would likely fail at run time with a version mismatch if you don't take care of that.
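A quick sanity check on the device might look like this (the application name is just an example):

    # See which libssl/libcurl the dynamic linker actually resolves for the app.
    ldd ./my_iot_app | grep -E 'libssl|libcrypto|libcurl'
    openssl version && curl --version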
There is another example of cross-compiling here: https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Docker_SDK_Cross_Compile.md. This version also builds the prerequisite libraries and has suggestions about how you might also cross compile your application. It uses a Docker container to do this but, even if you don't want to use Docker, it may still help you with your process.
How do I make a built-in application (like top, vi, etc.) so that it is placed inside /system/bin automatically on flashing the kernel and can be accessed from the command prompt?
I tried modifying the Makefile for my application by looking at the top utility as an example, but I could not find my binary under /system/bin.
I am not sure whether I have listed the source files in the Makefile correctly.
You need to start with something that the manufacturer provided. Presumably it's a dev kit or something. Most modern dev kits ship with a manufacturer-provided development environment, kernel, sources, etc. Many are based on Yocto Linux.
You can't just compile a binary locally on your PC with whatever version of GCC you have and expect it to work in an embedded environment. Chances are it's a different architecture (ARM or Freescale or something). There are ways to cross-compile, but there is some setup involved. Read about cross-compiling here: http://en.wikipedia.org/wiki/Cross_compiler
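The basic idea, using a hypothetical ARM toolchain installed on the build PC:

    # Compile on x86, producing an ARM binary (toolchain prefix is an example).
    arm-linux-gnueabihf-gcc -o hello hello.c
    file hello   # should report "ARM" rather than "x86-64"
    # The binary will only run on the target (or under qemu-arm), not on the build PC.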
There are development and packaging environments that have been developed by the community but it's not for the faint of heart. In short, start reading: https://www.yoctoproject.org/
I am developing a project based on OpenCV. Currently I am working on an Ubuntu 10.04 system with an AMD Turion processor, but the project needs to run as an embedded system, so I am using a ready-made ARM11 board with a Samsung S3C6410 processor. It supports Linux 2.6.28 and also supports Ubuntu. How can I port my code from the host system to my embedded system?
Thanks
This is the link for the board: http://www.minidevs.com/
I think the best way to start is to take a look at Angstrom/OpenEmbedded.
It's a framework for building OS images for various embedded platforms. You could take the precompiled images, but I've realized that after a while it's not worth the hassle.
Just build the target image yourself (with OpenCV for the target platform; it definitely builds for the S3C2440, which I tested myself a year or so ago, and for all OMAP3 platforms: BeagleBoard, EVM and the like).
Then use OpenEmbedded to build the cross-compiler (there is a package name for that), install it on your host machine, and you should be ready to go.
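For reference, the typical OpenEmbedded commands look something like this (recipe names vary between releases, so treat these as examples):

    # Build a bootable image for the configured MACHINE (set in local.conf):
    bitbake console-image
    # Build a cross-toolchain to install on your host machine:
    bitbake meta-toolchain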
If there is no support for the S3C6410, just use any other ARM11 platform out there and install the packages. It is likely your vendor-supplied OS was built using OpenEmbedded; it has quickly become the de facto standard.
http://www.angstrom-distribution.org/
http://www.openembedded.org
Check out Linaro if the processor you are targeting is ARM. It looks promising. http://www.linaro.org/
I've been looking into Cygwin/MinGW/lcc, and I'd like to be able to compile Perl native C extensions on my Windows machine (preferably under Cygwin) and then run them on Solaris and HP-UX without any further fuss. Is this possible?
This all stems from my original perl cross-platform question here.
(This is a very old question, but it's missing some useful info; I've personally done this for Solaris (SPARC & x86), AIX, HP-UX and Linux (x86, x64).)
Getting C++ cross-compiled is much harder than straight C.
HP-UX 32-bit PA-RISC is not supported because it uses the SOM format instead of ELF, and binutils doesn't (and likely never will) support SOM. In other words, you can only cross-compile for 64-bit PA-RISC (which requires a PA-RISC 2.0 chip).
I would go with MinGW instead of Cygwin if you can. Cygwin introduces a lot of file-permission headaches and cygwin1.dll dependencies that can be troublesome. If possible, however, build on Linux. Everything will be much faster, because all the tools and scripts you're running are designed for an environment where exec and stat are fast operations. Windows + NTFS is not that environment.
Start with the crosstool build scripts, but be prepared to spend a lot of time on this.
Try the very latest gcc/binutils first, but if you can't overcome problems, try dropping back to older packages. E.g. for POWER3 (AIX), the gcc 4.x series cross-compiler generates bad code; 3.x is fine.
When copying native libs and headers, make sure you are copying from the oldest machine you're likely to run on. Copying a new libc means your code won't run on any machine with an older libc.
When copying native libs and headers, you probably want tar -h to turn symlinks into actual files. Also watch out: on Solaris, some requisite crt object files are buried in a cc directory, not under /usr/lib.
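For example, pulling a sysroot off the oldest target box might look like this (host name and directory list are placeholders):

    mkdir -p sysroot
    # -h dereferences symlinks so you get real files instead of dangling links.
    ssh old-target 'tar -chf - /usr/include /usr/lib /lib' | tar -xf - -C ./sysroot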
Cross-compilers are very hard to set up and get working correctly.
Consider that (the people at) NetBSD have to put in a huge amount of work to get cross-compiling to work, and they're running the same OS, just different architectures.
You'd have to, at least, copy all the headers from the other OSes to Windows, and get a cross-compiler, linker, etc. for the target OS/architecture.
Also, that may well not be possible: Perl and its shared libraries may have been compiled with a native/non-GCC compiler which won't be available on Windows at all.
I agree with Douglas that getting a cross-compiler up and working is very hard to do. This is generally your choice of last resort. If you are bootstrapping, or making a binary for an embedded device, then cross-compiling is often your only option. You should be comfortable compiling your own gcc under Cygwin before considering cross-compiling. To cross-compile, you need to build a gcc that runs under Windows but creates binaries for your execution platform. Sample instructions for doing this can be found here.
Perhaps you want to cross-compile because you don't have root and/or can't compile on your target platform. For example, I had a hosting provider which ran Red Hat Linux. I could run Perl CGI scripts and associated modules, but I could not compile on the target machine, and any libraries I built had to live in my own directory.
To solve this, I could have attempted to cross-compile for my target platform, but instead I decided to set up a similar host inside a VM on Windows. From within Cygwin, you can create a script which ssh's into your VM, copies your source, and does a full configure/build, as sketched below. The last step was to deploy the binary artifact onto my hosted system.
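The script was along these lines (host name, module name and paths invented for the example):

    #!/bin/sh
    # Push the source to the build VM, build and test there, pull the result back.
    scp -r ./MyModule build-vm:/tmp/
    ssh build-vm 'cd /tmp/MyModule && perl Makefile.PL && make && make test'
    scp build-vm:/tmp/MyModule/blib/arch/auto/MyModule/MyModule.so ./deploy/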
I've successfully had both Solaris 10 and OpenSolaris running within a VM on Windows. Unfortunately, you might have a harder time running HP-UX under a VM.
Why don't you read up on the "Grand Unified Builder" (http://lilypond.org/gub/ and http://valentin.villenave.info/The-LilyPond-Report-11, section #4)?
I don't know how it works, but GUB allows the LilyPond developers to compile for about 11 platforms on a Linux box.
Compile on Windows, then use Wine to run them on any *nix. It works well most of the time.
No, this isn't possible at the binary level. There are too many differences at the binary level between the various OSes and CPUs.
But what you can do is make your C extensions source-compatible so that they can be compiled for different platforms. C was designed as a "portable assembly language". As long as you stick with routines that are cross-platform, they will usually work the same. You'll still need to test, because there could be bugs that exist only on a particular platform.
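In practice that means running the standard XS build steps on every platform you support, rather than shipping one binary (sketch):

    # On each target machine (Solaris, HP-UX, ...), from the extension's source dir:
    perl Makefile.PL    # generates a platform-specific Makefile
    make && make test   # compiles with that platform's compiler settings
    make install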
This can't be done ... but is it really that much of a hassle to recompile the code under Solaris or HP-UX?