msieve installation asks to pick a target - c

I'm trying to install msieve following this guide.
But as soon as I type make all, the script returns this message:
pick a target:
x86 32-bit Intel/AMD systems (required if gcc used)
x86_64 64-bit Intel/AMD systems (required if gcc used)
generic portable code
add 'ECM=1' if GMP-ECM is available (enables ECM)
add 'CUDA=1' for Nvidia graphics card support
What am I supposed to do?
Neither make all ECM=1 NO_ZLIB=1 nor make all ECM=1 works.

So, in this guide (link copied from the original question), the author says:
Download the msieve package:
Code:
svn co https://svn.code.sf.net/p/msieve/code/trunk $HOME/Math/msieve
But I think you didn't do that. You chose to use a 12-year-old version (v1.46) of msieve from some fork on GitHub (link also copied from the original question). In 12 years, a lot of things can change, including build procedures.
So I'd suggest following your guide more closely: either use the exact command shown above (possibly changing the local repository path), which will give you the trunk version, msieve 1.54, or follow it in spirit by downloading the source repository from the msieve SourceForge page, which will give you version 1.53, released in 2016. Whichever of these you choose, the instructions you have should work.
Alternatively, you could follow the instructions in the version you downloaded, and change the all in make all to one of the three options listed (x86, x86_64, or generic), probably x86_64 (assuming you're using a 64-bit Linux, which has been the norm for some years now).
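For example, on a 64-bit Linux the invocation would look roughly like this (a sketch; the ECM=1 flag only makes sense if GMP-ECM is actually installed):
Code:
make x86_64 ECM=1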

Related

GLIBC version not found when executing AppImage on different distro

I'm currently working on an application that I would like to publish to many distributions. So far, I've done all my testing on one distribution at a time (compile and run on the same distro). But when I take the outputted AppImage from compilation on my main computer (Arch Linux), and try to run it in a vm (Ubuntu 20.04), it gives me the error below:
gabriel@gabriel-VirtualBox:~/Downloads$ ./Neptune.Installer-x86_64.AppImage
./Neptune.Installer-x86_64.AppImage: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./Neptune.Installer-x86_64.AppImage)
What possible solutions are there to this? I've considered statically linking the library, but I'm unsure if that might cause licensing issues, as my program is not open source. Apart from that, I might consider simply compiling my program on a very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros (for example, will my program still work on an old version of Fedora?)
This might be a complicated question but I just want to know what the best way to solve this issue is. Change libraries? Statically link? Compile on old distributions? Let me know what you think.
I've considered statically linking the library, but I'm unsure if that might cause licensing issues,
Yes.
very old distribution such as Ubuntu 12 or something, but I won't know how well that carries over to other distros
It doesn't (consider Alpine Linux, for example). If you compile software, you have to run it with the set of libraries you compiled it against. Even if you compile on "very old distributions" there may be changes.
publish to many distributions
Choose the list of distributions and distribution versions you want to support, and tell users that those are the versions you support. (https://partner.steamgames.com/doc/store/application/platforms -> Steam only officially supports Ubuntu 12.04 LTS or newer.)
Compile against every distribution+version combination separately, and distribute your software separately for each such distribution version. For users' convenience, create and share package repositories for each distribution's package manager. On https://www.zabbix.com/download there are only so many combinations to choose from. Look into CI/CD with builds running in Docker containers; I like GitLab.
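As a rough sketch of what a per-distribution build could look like, assuming a make-based project and Ubuntu 20.04 as one of the supported targets (image name and packages are examples, not a prescription):
Code:
docker run --rm -v "$PWD:/src" -w /src ubuntu:20.04 \
    sh -c 'apt-get update && apt-get install -y build-essential && make'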
Or, alternatively, distribute your application with all the shared libraries it depends on. Bundle it together with an operating system and distribute it as a Docker image or a QEMU/VirtualBox virtual image. Or distribute just the shared library files alongside the binary with a wrapper around LD_PRELOAD, just like Steam does. Install Steam on your system and see what lives in ~/.steam/steam/ubuntu12_64.
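A minimal sketch of that wrapper idea, here using LD_LIBRARY_PATH to point the loader at bundled libraries (the binary name and libs/ layout are hypothetical):
Code:
#!/bin/sh
# launcher shipped next to the real binary and its bundled libs/ directory
HERE="$(dirname "$(readlink -f "$0")")"
export LD_LIBRARY_PATH="$HERE/libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/myapp-real" "$@"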
And hire a lawyer to solve the licensing issues.

The 3.x-series Linux kernels won't boot properly when custom-compiled on Ubuntu 16.04.1 with make-kpkg and gcc-4.8.5

Recently I upgraded the build host, where I have been compiling various Linux kernels for my own project, from Ubuntu 14.04 Trusty to Ubuntu 16.04.1 Xenial. Ubuntu 16.04.1 brings an updated environment for building binaries: a new gcc-5.4, a new libc6 (for userspace applications), and so on. The new Ubuntu also supplies (or suggests) a new kernel-package containing a new make-kpkg script, pulling in dependencies such as build-essential and binutils.
My task is to compile Linux kernel v3.10.12 (or v3.19) and run it inside a VirtualBox machine (x86_64, Ubuntu 16.04.1). On Ubuntu 14.04 Trusty, deployed on the build server, I was able to compile kernel v3.10.12 and v3.19 with gcc-4.8 and boot them in that VirtualBox machine, but now something goes wrong when a freshly compiled kernel starts.
For example, let's take v3.10.12.
To build the kernel I use the make-kpkg script provided by Ubuntu's kernel-package package. I build the kernel for x86_64 with gcc-4.8, as I have always done.
Once make-kpkg has compiled the kernel and gathered the linux-headers, it packs them into .deb packages, which lets me run dpkg -i on them and install them in the Debian way.
Suppose I have done that and then go to reboot the system.
When I choose my compiled kernel in the GRUB menu, the screen shows "Loading linux kernel... Loading initial ramdisk", then the message disappears, the screen goes black, and all I see is an underscore cursor "_" in the top-left corner. That's all; nothing further happens. The boot process appears to be stuck.
I tried swapping make-kpkg for the old one (from Trusty), and swapping gcc-4.8.5 for gcc-4.9, gcc-4.7, and even gcc-5.2 (after making a couple of additions under include/linux/ for gcc-5.2 support), but nothing helped; the result is still the same.
I tried the same steps (on the same Ubuntu 16.04.1 and toolchain) with kernels from the 4.x series (for example 4.6): building them, packing them into .deb packages, installing them into the VirtualBox machine, and rebooting. Everything works correctly and I see the usual boot messages. I tried gcc-4.7, gcc-4.8, gcc-4.9, and gcc-5.4, and in every case kernel v4.6 loads properly and completely. But when I build the 3.10.12 (or 3.19) kernels I cannot boot them, and I cannot figure out why.
What I have found is that the problem is in the kernel, not in the initrd: I substituted a working kernel while keeping the initrd that was built together with the "broken" one, and the debug output started appearing again. That kernel booted until the rootfs had to be mounted, failed to mount it, and was left in initramfs mode.
Has anyone faced the same issue? I'm almost exhausted after struggling with this for days.
Does anyone have a recipe or a suggestion for getting rid of the problem?
I even put a call to panic() in the very first line of asmlinkage void __init start_kernel(void), but nothing happened; still the same black screen.
Could the problem be related to the new glibc used by the gcc that compiles my kernel? Personally I'm not inclined to think so, but in the Linux world anything can happen. Or maybe the toolchain (ld, as) somehow has an effect? Any help would be much appreciated.
I'm fairly sure that someone has run into this issue before me; I searched for a similar topic but didn't find anything.
Thank you in advance
Short Answer
It's a glibc/kernel version mismatch. If you need this, you can build the glibc package so that it supports the kernel version you need, by using the --enable-kernel flag at configuration time.
Long Answer
It's highly likely that your glibc was compiled in such a way that it only works down to a certain version of Linux; this is done with the --enable-kernel flag at the configuration stage. Any kernel older than the one specified in --enable-kernel is rejected by glibc, and as a consequence no program will ever be loaded, not even the init program (presumably systemd's init).
This is from the configuration help of glibc
--enable-kernel=version
This option is currently only useful on GNU/Linux systems. The version parameter should have the form X.Y.Z and describes the smallest version of the Linux kernel the generated library is expected to support. The higher the version number is, the less compatibility code is added, and the faster the code gets.
Finally I succeeded in solving this problem.
What I did was compile an old gcc-4.8.5 together with an old glibc-2.19 on the host system where I build the old kernels.
glibc-2.19 was configured with --enable-kernel=3.10.12 and with the headers of the old linux-3.10.12. The resulting compiler effectively behaves like a 'cross-compiler' that uses glibc-2.19. I then built the old 3.10.12 kernel with this 'cross-compiler', and everything started working properly.
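Roughly, the glibc build described above would be configured like this (a sketch; the prefix and header paths are hypothetical):
Code:
# build glibc 2.19 against the 3.10.12 headers, supporting kernels >= 3.10.12
../glibc-2.19/configure --prefix=/opt/glibc-2.19 \
    --enable-kernel=3.10.12 \
    --with-headers=/opt/linux-3.10.12/usr/include
make && make install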
Thanks for the help and for pointing me in the right direction, but I should note that the problem was in the host system's glibc, not in the target system's glibc as I had (apparently) been told, though maybe I misunderstood @iharob.

Avoiding too specific dependencies

I am using a shared C library on Linux that is distributed in binary form. The problem is that the dependencies are set to require exactly the versions available on the development machine. For example, each release requires the (at the time) latest glibc and only the exact version of libreadline on their system.
I have contacted the developers and they don't know what to do about this. As far as I can tell, they are not consciously using the latest features, so the library should continue to work with older dependencies. I think they are using gcc on Linux, but they are also using a complex make system to control other compilers to build for Windows and Unix.
How and to what extent can you manage the build process so that a library requires dependencies just of a sufficient version and will also accept later versions?
This was a related question.
Edit: To be clear, I want to know how to build programs so they will accept dependencies with a specific version number or later numbers. Whether the developers compile it or I do, I want to be able to distribute a binary that does not require exactly the versions of dependencies present in the build environment.
Edit 2: After rephrasing the question, I realized this has been covered many times before. Some of the best Q&A:
Deploying Yesod to Heroku, can't build statically
Compile with older libc
Linking against an old version of libc
How can I link to a specific glibc version?
It's not very confidence-inspiring. They should be building on a stable baseline release; it could just be a virtual install. Some Linux distributions copy a build environment so that packages aren't linked against updated library versions.
The openSUSE Build Service lets developers build binary packages for a wide variety of distributions: http://openbuildservice.org/about/
IIRC readline is GPL, and checking http://cnswww.cns.cwru.edu/php/chet/readline/rltop.html#Availability suggests it is GPLv3, so if they are using libreadline functions they may be in violation of the GPL and should provide you with the source to their library. I am not sure whether you mean rpm/apt package dependencies or that their library is actually calling libreadline.
You can always extract files from rpm or apt packages if necessary, avoiding package manager issues caused by poor packaging.
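For reference, the extraction can be done with standard tools (package names are placeholders):
Code:
# unpack an .rpm into the current directory
rpm2cpio package.rpm | cpio -idmv
# unpack a .deb into ./extracted
dpkg-deb -x package.deb ./extracted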

How can I compile a Linux executable for a different machine?

I've written a Linux program in C, and I'm trying to get it to run on a server system. It looks like everything should work, but when I try it, I get this:
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by <program>)
/lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libdbi.so.1)
(Where <program> is my program's name.)
So far as I can tell, my program only requires that version of GLIBC because libdbi does. I've tried compiling libdbi from source, and it still attempts to link to that version of GLIBC.
I don't own the server system (it's a shared system I run a website on, and have SSH access to), so I can't make any changes to it -- that's why the library file is in the same directory, and I've set LD_LIBRARY_PATH=.. Unfortunately I also don't have access to a compiler on it -- when I try to run GCC, I'm told "permission denied". It's run by a big corporation, and I'm only one customer; the chances of them making any changes at my request are essentially zero.
Is there any way to compile the program on my system so that it will work on the server?
Before I asked, I found these similar questions:
Compile C program in Linux with different glibc library: the link in the answer goes to a 404 page, and from what I've been able to determine, apgcc isn't available on Debian distributions.
Relink a shared library to a different version of libc: seems to say that this problem doesn't exist, because "glibc tend to be backwards compatible" (except they apparently aren't in this case).
How to compile Linux C program to run on another Linux machine?: suggests a chroot or virtual machine, which I've done before elsewhere, but how can I tell it to use a libc without that old GLIBC version?
is binary executable file portable: suggests static-linking, but libdbi dynamically-links to its driver files, so that apparently can't be done -- I get several errors referring to missing functions like ldopen.
There are others, but they seem to be variations on those.
I'd be willing to use a non-free solution (like one that I saw in another answer I can't find now) if I turn this into a commercial product, but for a single use it seems like massive overkill, not to mention the expense.
Is there any way to simply tell libdbi to link to a later GLIBC version, maybe? If not, is there any solution I've overlooked?
Big corporation or not, the least they owe you if you are paying for service in any way or being paid for development to meet a requirement is a careful description of the runtime environment so you can duplicate it on a development machine.
Then you must set out to systematically duplicate this environment. Since you're using libdbi you should be thorough. Database connections can exercise big chunks of the system API, so you want to have exactly the same version of Linux, gcc (even if you can't run it, you need to know the version other parts of the system were compiled with), and other tools and libraries. If you don't, you won't be able to have much confidence that your development machine tests translate to good behavior on the target.
A virtual machine is a good way to create a specialized development environment without messing up your existing one.
You must compile it on a machine that has the same version of glibc as the target machine, or an older version. Shared library compatibility works in that direction only.
Find out what version of Linux the server uses, get a copy of it, and install it in a VM.
VirtualBox is good for this.
You can use this environment for testing code as well as for this particular compilation problem.
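To see how far apart the two systems are, it may help to compare what the server provides with what the binaries demand (a sketch; the library name is taken from the question):
Code:
# on the server: the glibc version it ships
ldd --version
# on the build machine: the GLIBC version tags the binary requires
objdump -T ./libdbi.so.1 | grep -o 'GLIBC_[0-9.]*' | sort -u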
You have the following options:
Compile your code on the server machine (which likely has gcc installed)
Compile your program with statically linked libraries (option -static for gcc)
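A sketch of the second option (with the caveat from the question: libdbi loads its drivers via dlopen at runtime, which does not mix well with fully static linking):
Code:
# fully static build; may fail for libraries that dlopen() plugins at runtime
gcc -static -o myprogram main.c -ldbi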

Binary compatibility between Linux distributions

Sorry if this is an obvious question, but I've found surprisingly few references on the web ...
I'm working with an API written in C by one of our business partners and supplied to us as a .so binary file, built on Fedora 11. We've been testing out the API on a Fedora 11 development machine with no problems. However, when I try to link against the API on our customer's target platform, which happens to be SuSE Enterprise 10.2, I get a "File format not recognized" error.
Commands that are also part of the binutils package, such as objdump or nm, give me the same file format error. The "file" command shows me:
ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
and the "ldd" command shows:
ldd: warning: you do not have execution permission for `./libuscuavactivity.so.1.1'
./libuscuavactivity.so.1.1: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./libuscuavactivity.so.1.1)
[dependent library list]
I'm guessing this is due to incompatibility between the C libraries on the two platforms, the problem being that the code was compiled against a newer version of glibc etc. than the one available on SuSE 10.2. I'm posting this question on the off chance that there is a way to compile the code on our partner's Fedora 11 platform in such a way that it will run on SuSE 10.2 as well.
I think the trick is to build on a flavour of linux with the oldest kernel and C library versions of any of the platforms you wish to support. In my job we build on Debian 4, which allows us to officially support Debian 4 and above, RedHat 3,4,5, SuSE 10 plus various other distros (SELinux etc.) in an unofficial fashion.
I suspect by building on a nice new version of linux, it becomes difficult to support people on older machines.
(edit) I should mention that we use the default compiler that comes with Debian 4, which I think is GCC 4.1.2. Installing newer compiler versions tends to make compatibility much worse.
Windows has its problems with compatibility between different releases, service packs, installed SDKs, and DLLs in general (DLL Hell, anyone?). Linux is not immune to the same kinds of issues.
The compatibility issues I have seen include:
Runtime library changes
Link library changes
Kernel changes
Compiler technology changes (e.g. pre- and post-EGCS gcc versions; this might be your issue).
Packager issues (RPM vs. APT)
In your particular case, I'd have them do a "gcc -v" on their system and report to you the gcc version number. Compare that to what you are using.
You might have to get hold of that version of the compiler to build your half with.
You can use the Linux Application Checker tool ([1], [2], [3]) to solve compatibility problems of an application between Linux distributions. It will check your file formats and all dependent libraries. It supports almost all popular Linux distributions, including all versions of SuSE and Fedora.
This is just a personal opinion, but when distributing something in binary-only form on Linux, you have a few options:
Build the gamut of .debs and .rpms for every distro under the sun, with a nominal ".tar.gz full of binaries" package for anything you've missed. The first part is ideal but cumbersome. The latter part will lead you to points 2 and 3.
Do as some are suggesting and find the oldest distro you can find and build there. My own opinion is this is sort of a ridiculous idea. See point 3.
Distribute binaries, and statically link where ever you can. Especially for libstdc++, which appears to be your problem here. There are seemingly very many incompatible versions of libstdc++ floating around, which makes it a compatibility nightmare. If you can't link statically, you can also put *.so files alongside your binary, and use stuff like LD_PRELOAD or LD_LIBRARY_PATH to make them link preferentially at runtime. Note that if you take this route you may have to comply with LGPL etc. since you are now distributing other people's work alongside your project.
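For the libstdc++ part specifically, a hedged sketch: reasonably recent GCC releases can link the C++ runtime statically while leaving glibc dynamic:
Code:
# statically link libstdc++ and libgcc so the target's versions no longer matter
g++ -static-libstdc++ -static-libgcc -o myapp myapp.cpp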
Of course, distributing your project in source form is always preferred on Linux. :-)
If the message is file format not recognized then the problem is most likely one mentioned by elmarco in a comment -- namely, different architecture. It might (I'm not sure) be a dynamic linker version mismatch, but that would mean the .so file was built with an ancient dynamic linker. I do not believe any incompatibility in libc could cause this -- they could cause link failures and runtime problems (latter very rarely), but not this.
I don't know about SuSE, but I know Fedora likes to stay on the bleeding edge, so you may very well be right about library versions. Why don't you ask and see if you can get the source code and build it on your SuSE machine?
