How to handle distro versions in Yocto

I'm looking for some advice on how to properly handle versioning when managing a distro using Yocto.
I have several embedded systems in production and have to rely on third parties to apply updates. Updates include one or more .ipk packages that get installed via opkg. I have no control over when the third parties apply the update(s) I send them, which introduces the issue I am trying to solve: the systems are likely to be in various states, with some updates applied and others not. Is there a common solution for telling what state a system is in?
One problem I'm not clear on is how to ensure the embedded system has certain updates applied before applying further updates. How do distros such as Ubuntu or RedHat handle this? Or do they?

Ubuntu and RedHat have remote repositories. The systems keep an internal list of installed packages. When you update from the repository you get a new list of available packages. The package manager then compares the installed packages against the new package list and installs the newer versions. This is basically what apt-get update && apt-get upgrade and the yum equivalent do.
Yocto actually supports the rpm and deb package formats and their remote repositories. I am less familiar with opkg and whether it has the option of a remote repository.
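For reference, opkg does support remote package feeds; they are configured with src/gz lines under /etc/opkg/. A minimal sketch, with a hypothetical feed name and URL:

    # /etc/opkg/customfeeds.conf  (feed name and URL are hypothetical)
    src/gz myfeed http://updates.example.com/ipk/armv7a

    # on the device: refresh package lists, then upgrade what's outdated
    opkg update
    opkg list-upgradable | awk '{print $1}' | xargs -r opkg upgrade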
When I implemented a system I narrowed it down to the following options:
- have a repository (deb and rpm definitely work here)
- have a version package
- use images
Version packages have the big disadvantage that you have to implement your own logic for deciding which packages to install. But you can require that version-1.deb depends on software-v2.deb and tool-v1_5.deb. That works well with your own software but is a big manual overhead for the entire Yocto package stack.
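A sketch of what such a version package's control file could declare, with hypothetical package names and ordinary deb versioned dependencies:

    Package: myproduct-version
    Version: 1.0
    Architecture: all
    Depends: software (>= 2.0), tool (>= 1.5)
    Description: meta-package pinning the tested combination of versions

Installing myproduct-version then forces the package manager to pull in at least those versions of software and tool.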
The repository would be the usual way, e.g. apt-get update && apt-get -y upgrade. It works well and is easy, but it also lacks a risk-free path to newer Yocto versions.
The image way is more complicated and depends on your hardware, the size of the image, and how you transfer the image. If you have a huge image and a slow network you might not want to do this. Basically you push your new image to the device (or the device automatically pulls it based on a version). The device dd's it to a secondary partition and then you flip the bootloader to use the other partition. Depending on the bootloader and/or hardware you can automatically flip back in case the partition you booted into is broken, which is helpful for remote systems. The advantages: you can easily upgrade the entire OS without manually picking versions, and you can have (automatic) fail-over.
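A rough sketch of that flow, assuming a U-Boot based board where fw_setenv is available, the inactive root filesystem lives on /dev/mmcblk0p3, and the boot script honours a rootfs_part variable (all of these names are hypothetical):

    # write the new image to the inactive partition
    dd if=/tmp/new-rootfs.img of=/dev/mmcblk0p3 bs=4M conv=fsync

    # point the bootloader at the freshly written partition and reboot
    fw_setenv rootfs_part 3
    reboot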

Related

GLIBC version not found when executing AppImage on different distro

I'm currently working on an application that I would like to publish to many distributions. So far, I've done all my testing on one distribution at a time (compile and run on the same distro). But when I take the AppImage produced by compiling on my main computer (Arch Linux) and try to run it in a VM (Ubuntu 20.04), it gives me the error below:
gabriel@gabriel-VirtualBox:~/Downloads$ ./Neptune.Installer-x86_64.AppImage
./Neptune.Installer-x86_64.AppImage: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./Neptune.Installer-x86_64.AppImage)
What possible solutions are there to this? I've considered statically linking the library, but I'm unsure if that might cause licensing issues, as my program is not open source. Apart from that, I might consider simply compiling my program on a very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros (for example, will my program still work on an old version of Fedora?)
This might be a complicated question but I just want to know what the best way to solve this issue is. Change libraries? Statically link? Compile on old distributions? Let me know what you think.
I've considered statically linking the library, but I'm unsure if that might cause licensing issues,
Yes.
very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros
It doesn't (consider Alpine Linux, which uses musl instead of glibc). If you compile software, you have to run it with the set of libraries you compiled it against. Even if you compile on "very old distributions" there may be changes.
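One way to see exactly which glibc symbol versions a binary pulls in (and therefore the oldest glibc it can run on) is to inspect its dynamic symbols; using the binary name from the question:

    # the highest GLIBC_x.y listed is the minimum glibc the target needs
    objdump -T ./Neptune.Installer-x86_64.AppImage | grep -o 'GLIBC_[0-9.]*' | sort -Vu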
publish to many distributions
Choose the list of distributions and versions of those distributions you want to support, and tell users that you will support those distribution versions (see https://partner.steamgames.com/doc/store/application/platforms: Steam officially supports only Ubuntu 12.04 LTS or newer).
Compile against every combination of distribution+version separately, and distribute your software separately for every such distribution version. For users' convenience, create and share package repositories for the specific distribution's package manager with your software. On https://www.zabbix.com/download there are only so many combinations to choose from. Look into CI/CD in Docker virtualized environments; I like GitLab.
Or, alternatively, distribute your application with all dependent shared libraries. Bundle it all together with the operating system and distribute it in the form of a Docker image or a qemu/VirtualBox virtual machine image. Or distribute just the shared library files with a wrapper around LD_PRELOAD, just like Steam does. Install Steam on your system and see what happens in ~/.steam/steam/ubuntu12_64.
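A sketch of such a wrapper, here using LD_LIBRARY_PATH (a close relative of the LD_PRELOAD approach mentioned above); all paths and the binary name are hypothetical:

    #!/bin/sh
    # launcher script: prefer the shared libraries bundled next to the app
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"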
And hire a lawyer to solve the licensing issues.

Avoiding too specific dependencies

I am using a shared C library on Linux that is distributed in binary form. The problem is that the dependencies are set to require exactly the versions available on the development machine. For example, each release requires the (at the time) latest glibc and only the exact version of libreadline on their system.
I have contacted the developers and they don't know what to do about this. As far as I can tell, they are not consciously using the latest features, so the library should continue to work with older dependencies. I think they are using gcc on Linux, but they are also using a complex make system to control other compilers to build for Windows and Unix.
How, and to what extent, can you manage the build process so that a library requires only a sufficient version of each dependency and also accepts later versions?
This was a related question.
Edit: To be clear, I want to know how to build programs so they will accept dependencies with a specific version number or later numbers. Whether the developers compile it or I do, I want to be able to distribute a binary that does not require exactly the versions of dependencies present in the build environment.
Edit 2: After rephrasing the question, I realized this has been covered many times before. Some of the best Q&A:
Deploying Yesod to Heroku, can't build statically
Compile with older libc
Linking against an old version of libc
How can I link to a specific glibc version?
It's not very confidence-inspiring. They should be building on a stable baseline release; it could just be a virtual machine install. Some Linux distributions copy a clean build environment so packages aren't linked against updated library versions.
The openSUSE Build Service lets developers build binary packages for a wide variety of distributions: http://openbuildservice.org/about/
IIRC readline is a GPL program, and checking http://cnswww.cns.cwru.edu/php/chet/readline/rltop.html#Availability suggests it is GPL v3, so if they are using libreadline functions they may be in violation of the GPL and should provide you with the source to their library. I am not sure whether you mean rpm/apt package dependencies, or that their library is actually calling libreadline.
You can always extract files from rpm or apt packages if necessary, avoiding package manager issues caused by poor packaging.
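For example, both formats can be unpacked with standard tools (package file names hypothetical):

    # unpack a .rpm without installing it
    rpm2cpio package.rpm | cpio -idmv

    # unpack a .deb without installing it
    dpkg-deb -x package.deb extracted/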

Differences between a Linux package repository and a general software artifact repository

I've been investigating software artifact repositories like Artifactory, Nexus, and Archiva. Artifact repositories like these perform well in a continuous integration environment, with dependency management and automated build tools. They are largely influenced by Maven.
I am also aware of Linux package repositories, like what Debian or RedHat use. Downloading and installing software, with all necessary dependencies, is very easy with these systems.
What are the major differences between Maven-like repositories and Linux package repositories? Are there any fundamentally different design goals? Platform implementations aside, could they be interchangeable?
Artifactory has a YUM add-on which enables it to act just like any standard HTTP-exposed YUM server (for both deployment and resolution).
Support for Debian packages is on the roadmap, too.
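A client would then consume it like any other YUM repository, e.g. via a repo file along these lines (server URL and repository name are hypothetical):

    # /etc/yum.repos.d/artifactory.repo
    [artifactory]
    name=Artifactory YUM repository
    baseurl=http://artifactory.example.com/artifactory/yum-local
    enabled=1
    gpgcheck=0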
The various artifact repository applications out there don't do anything that rpm or deb don't do; in fact they are in general far less capable and much more convoluted. I suspect the biggest reason for avoiding deb or rpm is unfamiliarity. The other is that the artifact is OS-agnostic (although the build certainly is not) and distribution-agnostic, so there is that, but I don't see any significant advantage in it.
Linux package repositories
- Primarily serve end-user software, but can also serve developer libraries
- Release builds are prevalent; snapshot builds are sometimes available
- Transitive dependencies are commonly installed or removed automatically
- Used to install packages frequently, and remove them infrequently
- Packages are installed in a single, system-wide location
- Serve only distribution-specific Linux software
- Servers run on Linux
- Command-line or graphical tools for browsing and downloading
Software artifact repositories
- Primarily serve developer libraries, but can also serve end-user software
- Release and snapshot builds are both common
- Transitive dependencies are handled in various ways, depending on the package
- Packages are essentially cached on the user's machine, and may be cleared frequently
- Packages are cached per-project, not system-wide
- Serve software for any operating system
- Servers run on any desktop or server operating system
- Web interface for browsing and downloading
- Support a variety of protocols (Maven, Ivy, NuGet, P2, etc.)

How to define the "depends" for the running kernel

Package A depends on package B-kmod, and B-kmod has several variants, like B-kmod-{generic,pae-generic} etc. In turn, B-kmod depends on the linux-image of the same flavor.
I'd like A to depend on B-kmod-$(uname -r). How do I express this in the control file?
If you mean that you want A to depend on a kernel module being installed that matches the kernel version running at the time A is installed, that is definitely impossible. Your best bet as an alternative is to check for the availability of the features you require in the preinst or postinst scripts and fail the install if they are not present (see the sketch after the list below). You must keep in mind that:
- they might have the functionality provided by B-kmod even if a package by that name isn't installed
- they might have installed it without using a package
- they might be running inside a chroot where they cannot see the packages for the running kernel
- they might reboot into another kernel after installing A, so A should degrade gracefully in that situation
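A minimal preinst sketch of that check, assuming the module shipped by B-kmod is called b_mod (hypothetical) and that modinfo can resolve modules for the running kernel:

    #!/bin/sh
    # preinst: refuse to install A if the required module is missing
    if ! modinfo b_mod >/dev/null 2>&1; then
        echo "required kernel module b_mod not available for $(uname -r)" >&2
        exit 1
    fi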

Collaborative kernel development

I have to develop a patch for the Linux kernel (2.6) for a university course I'm attending. I have to develop this patch with a friend of mine and we need to edit the same files on our computers (2-3 PCs). So we want to use something like a collaborative editor and/or a version control system. The problem is that we have never used anything like that, and we cannot release our software as open source until we take the exam. Obviously, we're using Linux. I'm here to ask for suggestions on how to manage our work in the best way.
Thank you
You'll definitely want to use a version control system.
The Linux kernel itself is developed with the Git version control system, so you'll probably find tons of resources on how to use Git in kernel development.
If you are looking for more traditional version control software, Subversion is a safe bet.
With both systems you can host the repository on your own computers where nobody else can access it. This way you do not need to publish the code before you are ready.
There are also hosting services (for example GitHub) that provide hosting for VCS repositories. However, in the case of GitHub, a private repository requires a paid plan.
Ask your professor or university IT department what version control systems are installed on the computers. Use whatever they have installed; they all pretty much serve the same purpose.
When I was in school the computers had CVS installed, so I set up a repository for myself and my team members in my home directory.
As Juha said, if you want to stay Linux-y, use the distributed version control system Git. You'll be able to use Git without a central "head" repository server by pulling patches from each other.
Git does not require a server; simply having a shared network folder is sufficient.
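For instance, a bare repository on the shared folder can act as the common remote (paths hypothetical):

    # create a bare repository on the shared folder (done once)
    git init --bare /mnt/shared/kernel-patch.git

    # each developer clones it and pushes/pulls as usual
    git clone /mnt/shared/kernel-patch.git
    cd kernel-patch
    # ...edit, git add, git commit...
    git push origin master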
I personally use Subversion; I pay for a host (http://versionshelf.com) which gives me the option of easily creating and managing Subversion or Mercurial repositories through a web interface.
I've not looked into Mercurial much but it seems to be very well supported and has lots of IDE integration and GUI clients (https://www.mercurial-scm.org/wiki/OtherTools).
