After running the code in the first cell of this lecture, I am trying to call the function c_sum. However, I keep receiving the error:
error compiling c_sum: could not load library "/tmp/juliaOT2a9V"
/tmp/juliaOT2a9V.so: wrong ELF class: ELFCLASS64
I have tried modifying the code with the gcc flag -m64, but this hasn't helped. I'm new to coding, so I'm fairly confused as to what precisely the problem is, and how to fix it. Any help would be greatly appreciated!
Based on the error, it seems like the issue may be that you're trying to load a 64-bit shared object (.so) file into a 32-bit julia binary. What does your Julia versioninfo show? Here's mine:
julia> versioninfo()
Julia Version 1.6.0-DEV.420
Commit 0d5efa8846 (2020-07-10 14:27 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin19.5.0)
CPU: Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-9.0.1 (ORCJIT, skylake)
Environment:
JULIA_EDITOR = subl
JULIA_SHELL = /bin/bash
JULIA_INPUT_COLOR = bold
JULIA_ANSWER_COLOR = light_magenta
JULIA_PKG_DEVDIR = /Users/stefan/dev
If yours indicates that you're running a 32-bit julia, then you can either try installing a 64-bit julia or try compiling the C code to a 32-bit ELF shared object file using the -m32 flag rather than the -m64 flag. You can also use the file utility to check the format of these files from outside of Julia; for example, here's what I get on my macOS system:
julia> run(`file $(Sys.which("julia"))`);
/Users/stefan/dev/julia/usr/bin/julia: Mach-O 64-bit executable x86_64
julia> run(`file $(Clib * "." * Libdl.dlext)`);
/var/folders/4g/b8p546px3nd550b3k288mhp80000gp/T/jl_ZeTKsr.dylib: Mach-O 64-bit dynamically linked shared library x86_64
Since both my julia executable and the shared library file are Mach-O 64-bit, they're compatible and the example works. On your system, julia may be 32-bit while gcc is generating 64-bit binaries by default because you're on a 64-bit system. That mismatch will keep causing trouble, so even if passing the -m32 flag to gcc fixes the immediate error and lets the example work, I would recommend using a 64-bit Julia binary instead. As a bonus, that will allow you to load larger data sets than a 32-bit Julia can, since the 64-bit binary can address all of your computer's (virtual) memory instead of just 4GB of it.
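For concreteness, here is a minimal sketch of the kind of C source the lecture compiles into a shared library, together with compile commands matching each word size. The file name, exact flags, and the multilib note are assumptions for illustration, not the lecture's exact code:

/* c_sum.c -- a sketch of the kind of C function the lecture turns into a
 * shared library (the lecture's actual source may differ).
 *
 * Compile it to match the word size reported by versioninfo():
 *   64-bit Julia:  gcc -m64 -fPIC -O3 -shared -o c_sum.so c_sum.c
 *   32-bit Julia:  gcc -m32 -fPIC -O3 -shared -o c_sum.so c_sum.c
 * (on a 64-bit distribution, -m32 usually requires the 32-bit support
 *  libraries, e.g. a "multilib" package)
 */
#include <stddef.h>

double c_sum(size_t n, double *X) {
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += X[i];          /* plain serial sum over the array */
    return s;
}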
Historical note: How and why can your 64-bit Linux machine run both 32-bit ELF and 64-bit ELF files on a single system? In 2001, Intel introduced the Itanium IA-64 architecture, which was a pure 64-bit architecture meant for high-end servers. A year later AMD introduced the competing x86_64 architecture, which supported two process modes: 64-bit mode and 32-bit (legacy) mode. So you could have different processes on the same system running with different word sizes. IA-64 never really took off, whereas x86_64 was wildly successful, and eventually Intel started making x86_64 chips as well, which is probably what your machine is running, although it could also be an AMD chip. So now there are two different kinds of ELF binaries, both of which can work on most PCs, but the granularity is process-level: you cannot load a 64-bit shared object into a 32-bit process or vice versa. Although your system can run 32-bit processes, since the system is primarily 64-bit, most of the programs and libraries are going to be 64-bit, which is why I've recommended that you switch to using a 64-bit Julia build.
More information about ELF-type mismatches here:
gcc error: wrong ELF class: ELFCLASS64.
Related
The question says it all. I need to cross-compile for a Cyrix CPU. The system the compiler (it doesn't have to be gcc) needs to run on is a 64-bit Kubuntu with an i5 processor. I couldn't find anything useful by googling, except for a note saying that the "Cx486DX is software-compatible with i486". So I ran
gcc -m32 -march=i486 helloworld.c -o helloworld486.bin
but executing helloworld486.bin on the Cyrix machine gives me a floating point exception. My knowledge of CPUs is rather limited and I'm out of ideas now; any help would be really appreciated.
Unfortunately you need more than just a compiler that generates instructions for the 486. The compiler's own support libraries, as well as any libraries that are linked in statically, need to be suitable for the 486 as well. The GCC version included in most current Linux distributions is able to generate 486-only object files (I think), but its libraries and stub objects (e.g. crtbegin.o) have been pre-generated for 686 CPUs.
There are two main alternatives here:
Use a Linux build system that is compiled for 486 itself, either in a VM or in a chroot jail. Unfortunately getting a modern Linux distribution for the 486 is a bit of an issue - every single major distribution has moved on. Perhaps a (much) older Linux distribution would be of help?
Create a full cross-compiler toolchain for the 486. You can then cross-compile separate versions of all needed libraries and have your build scripts use them. Quite honestly, ensuring that nothing from the (usually 686-based) build host slips through into the build result is not very easy; it often amounts to cross-compiling a whole Linux system from scratch, à la CLFS.
An automated cross-compiler toolchain build script, such as crosstool-ng, might be of help.
Could you add more details about your target system? Is it an embedded system or just an old PC? What OS is it using? Would it be possible to just run your compile in a VM with a version of the target OS?
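In the meantime, a quick way to separate "the compiler emits 686-only instructions" from "the dynamic loader/libc on the target is the problem" is a fully static test build. This is only a sketch under the assumptions above; the statically linked libc and crt objects shipped with your host gcc may themselves still contain 686-only code:

/* hello486.c -- minimal static test case for the i486 target.
 *
 * Suggested (not guaranteed) build on the 64-bit Kubuntu host:
 *   gcc -m32 -march=i486 -mtune=i486 -static -o hello486 hello486.c
 *
 * Linking statically rules out a dynamic-loader mismatch on the Cyrix
 * box, but if this still crashes there, the host's static libc/crt*.o
 * probably contain 686-only instructions, and a real i486 cross
 * toolchain (e.g. built with crosstool-ng) is the way to go.
 */
#include <stdio.h>

int main(void) {
    puts("hello from an i486 build");
    return 0;
}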
Code that is intended to be loaded as a shared library (.so) needs to be compiled as position-independent code, because the base virtual address at which the shared object is loaded may differ from process to process.
Yet I didn't encounter errors when I loaded a .so file that was compiled and linked without the -fpic GCC option on 32-bit x86 machines, while the same thing fails on 64-bit x86 machines.
Random websites I found say that I don't need -fpic on 32-bit because, under the 32-bit x86 ABI, code compiled without -fpic happens to also work when used in a position-independent manner. But I have still found software whose 32-bit versions ship with two variants of their libraries: one PIC and one non-PIC. For example, the Intel compiler ships with libirc.a and libirc_pic.a, the latter compiled for position-independent use (in case you want to link that .a file into a .so file).
So what is the precise difference between using -fpic and not using it for 32-bit code, and why do some packages, like the Intel compiler, still ship separate versions of their libraries?
It's not that non-PIC code works "by coincidence" on x86 (32-bit). It's that the dynamic linker for x86 supports the text relocations ("textrels") needed to make it work. This comes at a very high cost in memory consumption and startup time, since basically the entire code segment must be patched up at load time (and thus becomes non-shareable memory).
The dynamic linker maintainers claim that non-PIC shared libraries can't be supported on x86_64 because of fundamental issues in the architecture (immediate address displacements can't be larger than 32 bits), but this could be worked around by simply always loading such libraries in the first 4 GB of virtual address space. Of course, PIC code is very inexpensive on x86_64 (PIC isn't the performance killer there that it is on 32-bit x86), so they're probably right to keep it unsupported and prevent fools from making non-PIC libraries...
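To make the difference visible, here is a small sketch; the file name is made up, and -fno-pic is used to force the non-PIC case explicitly, since some distributions default to generating PIC code:

/* libdemo.c -- tiny library illustrating PIC vs. non-PIC on 32-bit x86.
 *
 * Non-PIC build (allowed on x86, but the library needs text relocations):
 *   gcc -m32 -fno-pic -shared -o libdemo_nopic.so libdemo.c
 *   readelf -d libdemo_nopic.so | grep TEXTREL    # TEXTREL entry present
 *
 * PIC build (no text relocations; the code segment stays shareable):
 *   gcc -m32 -fpic -shared -o libdemo_pic.so libdemo.c
 *
 * On x86_64 the non-PIC variant typically fails at link time with
 * "relocation ... can not be used when making a shared object;
 *  recompile with -fPIC".
 */
static int counter = 0;   /* accessing this global is what needs relocating */

int bump(void) {
    return ++counter;     /* non-PIC: absolute address patched at load time;
                             PIC: the address is reached via the GOT */
}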
the base virtual address at which the shared object is loaded may differ from process to process
Because shared objects usually load at their preferred address, they may appear to work correctly even without it. But -fPIC is a good idea for all shared code.
I believe the reason there often aren't two versions of a library is that many distributions use -fPIC as the default for all code.
I have a small C program that I wish to port from Linux to Windows. I can do this with the MinGW compiler, and I have noticed that it has two different prefixes, amd64 and i586. I am on an i686 computer, and I was wondering: if I compile my C program for the amd64 architecture, will it run on my i686 machine? And vice versa?
UPDATE:
Is there a compiler that compiles C code to run on ANY architecture?
If you compile your code for i586 (essentially what is commonly called x86), it should work fine on AMD64 (x86-64) processors, since x86-64 processors can execute "legacy" 32-bit code as long as the OS supports this - and mainstream OSes usually do; Windows support for 32-bit applications in particular is really good, since most applications installed on the average Windows system are still 32-bit.
The converse does not hold, since the x86-64 instruction set is (loosely speaking) an expansion of the x86 architecture, so a non-64-bit x86 processor wouldn't know how to interpret the new machine code (and even if it did, it wouldn't have the resources to run it).
As for the edit: you can't generate machine code that runs natively everywhere; the usual solution in such cases is to use pseudo-compiled languages that output intermediate-level code that needs an architecture-specific VM installed to run (the classic examples here are Java and .NET). If instead you use a language compiled to "native code", you have to generate an executable for each target platform.
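A quick way to see which mode a given MinGW build targets is to print the pointer size. The compiler driver names in the comments are illustrative only; the exact prefixes vary by distribution and MinGW flavor:

/* wordsize.c -- reports whether the binary was built as 32-bit or 64-bit.
 *
 * Illustrative cross-compile commands (driver names vary by system):
 *   i586-mingw32msvc-gcc  wordsize.c -o check32.exe   # 32-bit PE, runs on both
 *   amd64-mingw32msvc-gcc wordsize.c -o check64.exe   # 64-bit PE, 64-bit Windows only
 */
#include <stdio.h>

int main(void) {
    printf("pointer size: %u bits\n", (unsigned)(sizeof(void *) * 8));
    return 0;
}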
Recently I've been playing around with cross-compiling using GCC and discovered what seems to be a complicated area: toolchains.
I don't quite understand this, as I was under the impression GCC can create binary machine code for most of the common architectures, and that all else that really matters is what libraries you link with and what type of executable is created.
Can GCC not do all these things itself? With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine?
If not can someone explain how you would achieve this?
With a single build of GCC, all the appropriate libraries and the correct flags sent to GCC, could I produce a PE executable for a Windows x86 machine, then create an ELF executable for an embedded Linux MIPS device and finally an executable for an OSX PowerPC machine? If not can someone explain how you would achieve this?
No. A single build of GCC produces object code for one target architecture. You would need a build targeting Intel x86, a build targeting MIPS, and a build targeting PowerPC. However, the compiler is not the only tool you need, despite the fact that you can build source code into an executable with a single invocation of GCC. Under the hood, it makes use of the assembler (as) and linker (ld) as well, and those need to be built for the target architecture and platform. Usually GCC uses the versions of these tools from the GNU binutils package, so you'd need to build that for the target platform too.
You can read more about building a cross-compiling toolchain here.
I don't quite understand this, as I was under the impression GCC can create binary machine code for most of the common architectures
This is true in the sense that the source code of GCC itself can be built into compilers that target various architectures, but you still require separate builds.
Regarding -march, this does not allow the same build of GCC to switch between platforms. Rather, it selects which instructions may be used within the same family of processors. For example, some of the instructions supported by modern x86 processors weren't supported by the earliest x86 processors because they were introduced later on (such as extension instruction sets like MMX and SSE). When you pass -march, GCC enables all opcodes supported on that processor and its predecessors. To quote the GCC manual:
While picking a specific cpu-type will schedule things appropriately for that particular chip, the compiler will not generate any code that does not run on the i386 without the -march=cpu-type option being used.
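To make that concrete, here is a small sketch you can compile to assembly with different -march values and compare; the exact instructions GCC picks depend on the version and optimization flags:

/* march_demo.c -- shows what -march controls: instruction selection within
 * one architecture, not which architecture is targeted.
 *
 * Compare the generated assembly (output will vary by GCC version):
 *   gcc -O3 -march=x86-64  -S march_demo.c -o baseline.s   # baseline SSE2 only
 *   gcc -O3 -march=haswell -S march_demo.c -o haswell.s    # may use AVX2/FMA
 *
 * Both outputs are still x86-64 assembly; no -march value turns this
 * build of GCC into a MIPS or PowerPC compiler.
 */
void axpy(float *restrict y, const float *restrict x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];   /* a loop the auto-vectorizer is likely to target */
}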
If you want to try cross-compiling, and don't want to build the toolchain yourself, I'd recommend looking at CodeSourcery. They have a GNU-based toolchain, and their free "Lite" version supports quite a few architectures. I've used it for Linux/ARM and Android/ARM.
On a 64-bit Solaris SPARC system, can an Apache server built in 64-bit mode load a 32-bit plug-in?
No, it is not possible. When a 64-bit process tries to load a 32-bit shared object, the runtime linker gives the following error:
ld.so.1: app: fatal: ./lib32.so: wrong ELF class: ELFCLASS32
No sensible system lets a 64-bit binary load a 32-bit shared library. They might pass pointers around, you know.
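The failure is easy to reproduce from user code with dlopen(); here is a sketch for a Linux- or Solaris-style system. The plug-in source name is made up, and the lib32.so name just mirrors the error message above:

/* tryload.c -- shows the "wrong ELF class" failure at dlopen() time.
 *
 * Illustrative build commands:
 *   gcc -m32 -fpic -shared -o lib32.so some_plugin.c   # 32-bit library
 *   gcc -m64 tryload.c -o tryload -ldl                 # 64-bit loader
 *   ./tryload ./lib32.so      # dlopen fails: wrong ELF class: ELFCLASS32
 */
#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <shared-object>\n", argv[0]);
        return 2;
    }
    void *handle = dlopen(argv[1], RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());   /* wrong ELF class, etc. */
        return 1;
    }
    puts("loaded successfully");
    dlclose(handle);
    return 0;
}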