Usage stats of different Ubuntu versions - ubuntu-18.04

At the time of writing this question, Ubuntu 22.04 is almost out. I am curious about how fast the developers will migrate from older versions (potentially from 18.04 LTS) to the new LTS. This is important for me to decide which Ubuntu version I should target with my application.
The best would be some statistics about the usage of the different Linux distros and versions, ideally with a time scale. There are a few charts and lists out there (e.g., https://distrowatch.com/dwres.php?resource=trending), but I could not find any statistics that go down to the version level of a distro.
What is the best or "official" source to get this information from?

Related

support for vsi7z (direct reading of compressed 7z files) in terra and sf?

The current versions of terra and sf do not support direct reading of 7zip compressed files, because GDAL does not support it yet. But it seems that this feature will be included in GDAL version 3.7.0, scheduled for May 1, 2023 (https://github.com/OSGeo/gdal/milestone/42). Knowing that terra and sf are currently using GDAL version 3.4.1, released on 12/27/2021, should we expect that support for direct reading of 7zip files will be available in terra's or sf's CRAN binary packages a year or so after its inclusion in GDAL? I have no idea how GDAL is integrated with terra or sf, so my question may be naive or poorly phrased. Thanks in advance.
It depends on your operating system, and how you install.
On Windows we depend on what is included in Rtools. Rtools tends to be up to date; I would expect a delay on the order of months. But, as you point out, it is more than 12 months now. Rtools was just updated to GDAL 3.6.0 (and it should be updated again soon, as that version of GDAL was retracted). There can be an additional delay if a new version of Rtools is not yet used at CRAN (or only for R-devel), so it can sometimes help to install from source.
On Linux you can install any version of GDAL you want, if you have permission to do so. The defaults on some Linux systems are terribly old. On Ubuntu it helps a lot to use the ubuntugis-unstable repository (see instructions). If you are on Ubuntu and use r2u, then it depends on what they use.
On OSX you may use the CRAN binary, install GDAL yourself from source, or install it with "brew". You can see what is used for CRAN here. I think the lag in the GDAL version of the CRAN binaries is typically also only on the order of months. "brew" seems to keep very good track of GDAL updates.

GLIBC version not found when executing AppImage on different distro

I'm currently working on an application that I would like to publish to many distributions. So far, I've done all my testing on one distribution at a time (compile and run on the same distro). But when I take the AppImage produced by compiling on my main computer (Arch Linux) and try to run it in a VM (Ubuntu 20.04), it gives me the error below:
gabriel@gabriel-VirtualBox:~/Downloads$ ./Neptune.Installer-x86_64.AppImage
./Neptune.Installer-x86_64.AppImage: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./Neptune.Installer-x86_64.AppImage)
What possible solutions are there to this? I've considered statically linking the library, but I'm unsure if that might cause licensing issues, as my program is not open source. Apart from that, I might consider simply compiling my program on a very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros (for example, will my program still work on an old version of Fedora?)
This might be a complicated question but I just want to know what the best way to solve this issue is. Change libraries? Statically link? Compile on old distributions? Let me know what you think.
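Before shipping, you can check which glibc symbol versions a binary actually requires, so you know the minimum distro it will run on. A rough sketch (the function name is mine; `objdump -T file | grep GLIBC` is the more precise way):

```shell
# List the GLIBC version tags a dynamically linked file references.
# A rough string scan of the ELF's version table; sorted oldest to newest.
glibc_required() { grep -ao 'GLIBC_[0-9]*\.[0-9.]*' "$1" | sort -Vu; }

# usage (illustrative): glibc_required ./Neptune.Installer-x86_64.AppImage
```

The highest version printed must be available on every target distro; Ubuntu 20.04 ships glibc 2.31, hence the `GLIBC_2.34' not found` error above.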
I've considered statically linking the library, but I'm unsure if that might cause licensing issues,
Yes.
very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros
It doesn't (consider Alpine Linux, which uses musl instead of glibc). If you compile software, you have to run it with the set of libraries you compiled it against. Even if you compile on "very old distributions", there may be changes.
publish to many distributions
Choose the list of distributions and the versions of those distributions you want to support, and tell users that you will support those versions. (https://partner.steamgames.com/doc/store/application/platforms -> "Steam only officially supports Ubuntu 12.04 LTS or newer.")
Compile against every distribution+version combination separately, and distribute your software separately for each such combination. For users' convenience, create and share package repositories for each distribution's package manager. On https://www.zabbix.com/download there are only so many combinations to choose from. Look into CI/CD in Docker-virtualized environments. I like GitLab.
Or, alternatively, distribute your application with all dependent shared libraries. Bundle it all together with the operating system and distribute it in the form of a Docker image or a QEMU/VirtualBox virtual machine image. Or distribute just the shared library files with a wrapper around LD_PRELOAD, just like Steam does. Install Steam on your system and see what happens in ~/.steam/steam/ubuntu12_64.
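A minimal sketch of that last option, bundled libraries plus a launcher, with all file and directory names invented for illustration (Steam's real layout differs):

```shell
# Ship the app with its own lib/ directory and a launcher script that
# points the dynamic linker at the bundled copies first.
mkdir -p myapp/bin myapp/lib            # your bundled .so files go in myapp/lib
cat > myapp/run.sh <<'EOF'
#!/bin/sh
# Resolve the install directory so the bundle is relocatable.
APPDIR="$(cd "$(dirname "$0")" && pwd)"
export LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$APPDIR/bin/myapp" "$@"
EOF
chmod +x myapp/run.sh
```

Users launch `myapp/run.sh` instead of the binary itself; the bundled libraries win the linker's search regardless of where the archive was unpacked.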
And hire a lawyer to solve the licensing issues.

How to install Berkeley DB for SICStus on Windows?

This may be a dumb question, but can someone explain how to install Berkeley DB for SICStus on Windows? This is my last resort.
The entire Oracle site for Berkeley DB seems broken, with many broken links. Also, there are no longer any installers, and only the latest version of the source code. Their site has been that way for over a year. I do not know what happened.
In the meantime, I was able to download the Windows installers by first creating a free Oracle account, logging in, and then using the links https://download.oracle.com/otn/berkeley-db/db-6.2.38_64.msi and https://download.oracle.com/otn/berkeley-db/db-6.2.38_86.msi (found via the Wayback Machine). These versions should be compatible with SICStus Prolog 4.6 on Windows.

How to handle distro versions in Yocto

I'm looking for some advice on how to properly handle versioning when managing a distro using Yocto.
I have several embedded systems in production and have to rely on a third party to apply updates. Updates include one or more .ipk packages that get installed via opkg. I have no control over when the third parties apply the update(s) I send them. This introduces the issue I am trying to solve: the systems are likely to be in various states, with some updates applied and others not. Is there a common solution for telling what state a system is in?
One problem I'm not clear on is how to ensure the embedded system has certain updates applied before applying further updates. How do distros such as Ubuntu or Red Hat handle this? Or do they?
Ubuntu and Red Hat have remote repositories. The systems keep an internal list of installed packages. When you update the repository metadata, you get a new list of available packages. You can then compare the list of installed packages against the new package list and install the updates. This is basically what apt-get update && apt-get upgrade (and the yum equivalent) do.
Yocto actually supports the rpm and deb package formats and their remote repositories. I am not familiar enough with opkg to say whether it has the option of a remote repository.
When I implemented a system I narrowed it down to the following options:
have a repository (deb and rpm definitely work here)
have a version package
using images
Version packages have the big disadvantage that you need your own logic to decide which packages to install. But you can require that version-1.deb depends on software-v2.deb and tool-v1_5.deb. That works well with your own software but is a big manual overhead for the entire Yocto package stack.
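A version meta-package can encode those requirements declaratively in its control file, so the package manager enforces the ordering for you. A hypothetical snippet (all package names invented; opkg control files use the same Debian-style fields):

```
Package: fleet-version
Version: 3.0
Depends: software (>= 2.0), tool (>= 1.5)
Description: Meta-package pinning the minimum required software state
```

Installing fleet-version 3.0 then fails, or pulls in the dependencies, until the system has software >= 2.0 and tool >= 1.5, and querying the installed fleet-version tells you what state a unit is in.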
The repository route would be the usual way, e.g. apt-get update && apt-get -y upgrade. It works well and is easy, but it also lacks a risk-free path to a newer Yocto version.
The image way is more complicated and depends on your hardware, the size of the image, and how the image is transferred. If you have a huge image and a slow network, you might not want to do this. Basically, you push your new image to the device (or the device automatically pulls it based on a version). The device dd's it to a secondary partition, and then you flip the bootloader to use the other partition. Depending on the bootloader and/or hardware, you can automatically flip back in case the partition you booted into is broken, which is helpful for remote systems. The advantage is that you can easily upgrade the entire OS without manually picking versions, and you can have (automatic) fail-over.
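The flip itself is only a few steps. A file-based sketch of the A/B scheme described above, with everything illustrative: a real device would dd to a spare partition (e.g. /dev/mmcblk0p3) and flip a bootloader flag with something like fw_setenv rather than writing a marker file:

```shell
# Stand-in for the downloaded image, so this sketch is self-contained.
printf 'rootfs-v2' > new-rootfs.img

ACTIVE=$(cat active-slot 2>/dev/null || echo A)   # which slot we booted from
TARGET=$([ "$ACTIVE" = A ] && echo B || echo A)   # always write the other one

# Write the new image to the inactive slot, then mark it active.
dd if=new-rootfs.img of="slot-$TARGET.img" bs=1M conv=fsync 2>/dev/null
echo "$TARGET" > active-slot    # real systems: flip the bootloader flag here
```

On the next boot the bootloader uses the freshly written slot, and because the old slot is untouched it can fall back automatically if the new image fails to come up.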

Binary compatibility between Linux distributions

Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to incompatibility between the C libraries on the two platforms, the problem being that the code was compiled against a newer version of glibc etc. than the one available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform in such a way that it will run on SuSE 10.2 as well.
I think the trick is to build on a flavour of linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, RedHat 3,4,5, SuSE 10 plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect by building on a nice new version of linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
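Nowadays the same "build on the oldest supported system" idea is usually done in a container rather than on a dedicated old machine. A hypothetical Dockerfile (the distro tag and build command are placeholders, not a recommendation for a specific version):

```
# Build inside the oldest distro you intend to support.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y build-essential
COPY . /src
RUN make -C /src
```

The resulting binary links against that distro's glibc, so it runs on that release and anything newer.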
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g., pre- and post-EGCS gcc versions; this might be your issue).
Packager issues (RPM vs. APT)
In your particular case, I'd have them do a "gcc -v" on their system and report to you the gcc version number. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use Linux Application Checker tool ([1], [2], [3]) in order to solve compatibility problems of an application between Linux distributions. It will check your file formats and all dependent libraries. It supports almost all popular Linux distributions including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting and find the oldest distro you can find and build there. My own opinion is this is sort of a ridiculous idea. See point 3.
Distribute binaries, and statically link wherever you can. Especially for libstdc++, which appears to be your problem here: there are seemingly very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary and use mechanisms like LD_PRELOAD or LD_LIBRARY_PATH to make them link preferentially at runtime. Note that if you take this route you may have to comply with the LGPL etc., since you are now distributing other people's work alongside your project.
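For the libstdc++ case specifically, gcc can statically link just the C++ runtime while glibc stays dynamic. A small self-contained demo (the hello program is only a stand-in for the real library or binary):

```shell
# -static-libstdc++ folds the C++ runtime into the binary, so the
# target's GLIBCXX_* versions no longer matter; glibc stays dynamic.
cat > hello.cpp <<'EOF'
#include <iostream>
int main() { std::cout << "ok\n"; }
EOF
g++ -o hello hello.cpp -static-libstdc++ -static-libgcc
```

After this, `ldd hello` no longer lists libstdc++.so, so a `GLIBCXX_3.4.9' not found` error like the one in the question cannot occur on the target.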
Of course, distributing your project in source form is always preferred on Linux. :-)
If the message is file format not recognized then the problem is most likely one mentioned by elmarco in a comment -- namely, different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this -- they could cause link failures and runtime problems (latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge, so you may very well be right about library versions. Why don't you ask and see if you can get the source code and build it on your SuSE machine?