I'm developing a few programs on my PC, which runs 64-bit Ubuntu.
I'd like to run these applications on another PC that runs a 32-bit system. Is it possible to compile them on my machine, or do I need to recompile the applications on the other one?
In general you need to provide the compiler an environment similar to the target execution environment. Depending on how similar or different one environment is to another, this may be simple or complicated.
Assuming the compiler is GCC, you should only need to add -m32 to your compilation flags to produce binaries that work on a 32-bit system, all other things being equal. Ensure you have the necessary 32-bit dependencies installed on your system: the base C library, as well as a 32-bit version of each library your application links against.
Since you are only compiling for x86 on a 64-bit host, the path to this is generally simple. I would, however, recommend setting up a dedicated environment which you can use to compile -- typically some kind of chroot (see pbuilder, schroot, chroot, debootstrap and others).
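For example, a rough sketch on a Debian/Ubuntu host (package and file names here are illustrative placeholders):

# Install multilib support so GCC can produce and link 32-bit code
sudo apt-get install gcc-multilib g++-multilib
# Build a 32-bit binary on the 64-bit host
gcc -m32 -o myprog main.c
# Confirm the result is a 32-bit ELF executable
file myprog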
There are compiler settings/flags that should allow you to do this on your machine; which specific ones you need would depend on the compiler you are using.
I am building an operating system that supports both 32-bit and 64-bit, using the Meson build system. I have just started adding support for 64-bit, but I came across a problem. When I use the 64-bit C compiler, both the kernel and stage2 get compiled with that compiler. That is the problem: the stage2 folder has to be compiled with the 32-bit C compiler, not the 64-bit one. Is there any way I can achieve this in Meson? Should I switch to CMake?
That is the problem: the stage2 folder has to be compiled with the 32-bit C compiler, not the 64-bit one.
Neither build system will let you do this in a single build... both CMake and Meson have a deeply held assumption that there is one compiler per language¹. If you need to use multiple compilers, you will need to split your build into multiple independent projects. How you orchestrate building them is up to you... with CMake, I would suggest using a superbuild with ExternalProject. With Meson, I'm not sure what the standard approach is.
¹ Technically Meson allows you to define a host/native compiler, too (for compiling build-time tools), but it is not suitable for this case since the host might not be either target and you might well end up wanting three or more compilers in use (e.g. for ARM).
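For illustration, a minimal sketch of the split-project approach with Meson (the cross-file contents, toolchain names and directory layout below are assumptions, not taken from the question):

$ cat cross-i686.ini
[binaries]
c = 'i686-elf-gcc'
ar = 'i686-elf-ar'
strip = 'i686-elf-strip'

[host_machine]
system = 'none'
cpu_family = 'x86'
cpu = 'i686'
endian = 'little'

$ meson setup build-stage2 stage2 --cross-file cross-i686.ini
$ meson setup build-kernel kernel --cross-file cross-x86_64.ini
$ ninja -C build-stage2 && ninja -C build-kernel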
Should I switch to CMake?
Probably not, since you already have something working with Meson. But if your build rules get complex enough, you might find CMake's ability to write functions and abstractions to be a greater benefit than a liability.
Is there a way to force GCC and/or Clang compiler to use LP64 data model when targeting Windows (ignoring that Windows use LLP64 data model)?
No, because the requested capability would not work
You are "targeting Windows", presumably meaning you want the compiler to produce code that will run under Windows in the usual way. In order to do that, the program must invoke functions in the Windows API. There are effectively three versions of the Windows API: win16, win32, and win64. Since you want 64-bit pointers (the "P64" in "LP64"), the only possible target is win64.
In order to call a win64 function, you must include windows.h. That header file uses long. If there were a compiler switch to insist that long be treated as a 64-bit integer (LP64) rather than 32-bit (LLP64), then the compiler's understanding of how to call functions and lay out data structures that use long would be wrong; the resulting program would not run correctly.
The same problem applies to the standard C and C++ libraries. If you link to an existing compiled library (as is typical), the calls into it won't work (since it will use LLP64). If you were to build one from source using a hypothetical switch to force LP64, its calls into the Windows API would fail.
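A quick way to see the two data models side by side (a sketch assuming a native Linux GCC and a MinGW-w64 cross compiler are installed; the file name is made up):

$ cat sizes.c
#include <stdio.h>
int main(void) {
    /* LP64: long is 8 bytes; LLP64: long is 4 bytes */
    printf("long=%d long long=%d void*=%d\n",
           (int)sizeof(long), (int)sizeof(long long), (int)sizeof(void *));
    return 0;
}
$ gcc sizes.c -o sizes && ./sizes              # Linux x86_64 (LP64): long=8 long long=8 void*=8
$ x86_64-w64-mingw32-gcc sizes.c -o sizes.exe  # win64 (LLP64): prints long=4 long long=8 void*=8 when run on Windows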
But you can try Cygwin
Cygwin uses LP64 and produces binaries that run on Windows. That is possible, despite what I wrote above, because the Cygwin DLL acts as a bridge between the Cygwin LP64 environment and the native win64 LLP64 environment. Assuming you have code originally written for win32 that you now want to take advantage of a 64-bit address space with no or minimal code changes, I suspect this is the easiest path. But I should acknowledge that I've never used Cygwin in quite this way so there might be problems I am not aware of.
I have written a program in C, compiled and executed with GCC. I want to share the executable file of the program without sharing the actual source code. Is there any way to share my program without revealing the source code, so that the executable can run on other computers with GCC compilers?
Is there any way to share my program without revealing the source code, so that the executable can run on other computers with GCC compilers?
TL;DR: yes, provided the target system is sufficiently similar to the build system (which requires more than just both having GCC). One simply copies the binary file and any needed auxiliary files to a compatible system and runs it.
In more detail
It is quite common to distribute compiled binaries without source code, for execution on machines other than the ones on which those binaries were built. This mode of distribution does present potential compatibility issues (as described below), but so does source distribution. In broad terms, you simply install (copy) the binaries and any needed supporting files to suitable locations on a compatible system and execute them. This is the manner of distribution for most commercial software.
Architecture dependence
Compiled binaries are certainly specific to a particular hardware architecture, or in certain special cases to a small, predetermined set of two or more architectures (e.g. old Mac universal binaries). You will not be able to run a binary on hardware too different from what it was built for, but "architecture" is quite a different thing from CPU model.
For example, there is a very wide range of CPUs that implement the x86_64 architecture. Most programs targeting that architecture will run on any such CPU. Indeed, the x86 architecture is similar enough to x86_64 that most programs built for x86 will also run on x86_64 (but not vice versa). It is possible to introduce finer-grained hardware dependency, but you do not generally get that by default.
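If you are unsure which architecture an existing binary targets, the file utility will tell you (the output below is illustrative, not from any particular program):

$ file ./myprog
./myprog: ELF 64-bit LSB executable, x86-64, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2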
Operating system dependence
Furthermore, most binaries are built to run in the context of a host operating system. You will not be able to run a binary on an operating system too different from the one it was built for.
For example, Linux binaries do not run (directly) on Windows. Windows binaries do not run (directly) on OS X. Etc.
Library dependence
Additionally, a program built against shared libraries requires a compatible version of each required shared library to be available in the runtime environment. That does not necessarily have to be exactly the same version against which it was built; that depends on the library, which of its functions and data are used, and whether and how those changed over time.
You can sidestep this issue by linking every needed library statically, up to and including the C standard library, or by distributing shared libraries along with your binary. It's fairly common to just live with this issue, however, and therefore to support only a subset of all possible environments with your binary distribution(s).
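On Linux, ldd shows which shared libraries a binary expects to find at run time, and -static produces a binary with no such requirements (a sketch; static linking only works if static versions of every needed library are installed, and the names and addresses below are illustrative):

$ ldd ./myprog
        linux-vdso.so.1 (0x00007ffd...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
$ gcc -static -o myprog_static main.c
$ ldd ./myprog_static
        not a dynamic executable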
Other
There is a veritable universe of other potential compatibility issues, but it's unlikely that any of them would catch you by surprise with respect to a program that you wrote yourself and want to distribute. For example, if you use nVidia CUDA in your program then it might require an nVidia GPU, but such a requirement would surely be well known to you.
Executables are often specific to the environment/machine they were created on. Even if the same processor/hardware is involved, there may be dependencies on libraries that prevent executables from just running on other machines.
A program that uses only "standard libraries" and that links all libraries statically does not need any other dependency (in the sense that all the code it needs is in the binary itself, or in OS libraries that, being part of the system itself, are already present on the system).
You have to link the standard library statically. Otherwise it will only work if the version of the standard library for your compiler is installed in your OS by default (which you can't rely on, in general).
When writing software that is CPU-architecture dependent, such as C code running on x86 or on ARM CPUs, there are generally two ways to compile it: either cross-compile for the ARM architecture (if you're developing on an x86 system, for example), or copy your code to a system with the native architecture and compile it there.
I'm wondering if there is a benefit to the native approach vs the cross-compile approach? I noticed that the Fedora ARM team is using a build-server cluster of slow/low-power ARM devices to natively compile their Fedora ARM spin... surely a project backed by Red Hat has access to some powerful build servers running x86 CPUs that could get the job done in half the time... so why their choice? Am I missing something by cross-compiling my software?
The main benefit is that ./configure scripts do not need to be tweaked when running natively. If you are using a shadow rootfs, then you still have configure scripts running uname to detect the CPU type, etc. For instance, see this question. pkg-config and other tools try to ease cross-building, but packages normally get native building on x86 correct first, and then maybe native building on ARM. Cross-building can be painful, as each package may need individual tweaks.
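For comparison, a typical autotools cross build looks something like this (the arm-linux-gnueabihf triplet and the sysroot path are just examples):

$ ./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabihf \
      CC=arm-linux-gnueabihf-gcc \
      PKG_CONFIG_LIBDIR=/path/to/sysroot/usr/lib/pkgconfig
$ make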
Finally, if you are doing profile-guided optimization and running test suites, as per Joachim, it is pretty much impossible to do this in a cross-build environment.
Compile speed on the ARM is significantly faster than the human package builder's read-configure, edit-configure, re-run-configure, compile, link cycle.
This also fits well with a continuous integration strategy. Various packages, especially libraries, can be built/deployed/tested quickly. The testing of libraries may involve hundreds of dependent packages. ARM Linux distributions will typically need to prototype changes when upgrading and patching a base library, which may have hundreds of dependent packages that at least need retesting. A slow cycle done by a computer is always better than a fast compile followed by manual human intervention.
No, technically you're not missing anything by cross-compiling within the context of .c -> .o -> a.out (or whatever); a cross compiler will give you the same binary as a native compiler (versions etc. notwithstanding).
The "advantages" of building natively come from post-compile testing and managing complex systems.
1) If I can run unit tests quickly after compiling, I can get to any bugs/issues quickly; the cycle is presumably shorter than the cross-compiling cycle.
2) if I am compiling some target software that has 3rd-party libraries that it uses, then building, deploying and then using them to build my target would probably be easier on the native platform; I don't want to deal with the cross-compile builds of those because half of them have build processes written by crazy monkeys that make cross-compiling them a pain.
Typically for most things one would try to get to a base build and then compile the rest natively. Unless I have a sick setup where my cross compiler is super wicked fast and the time I save there is worth the setup required to make the rest of the things (such as unit testing and dependency management) easier.
At least those are my thoughts
The only benefit of compiling natively is that you don't have to transfer the program to the target platform as it's already there.
However, that is not such a big benefit when you consider that most target platforms are massively underpowered compared to a modern x86 PC. The larger amount of memory, the faster CPU and especially the much faster disks make compilation many times quicker on a PC. So much so that the advantage of native building isn't really an advantage anymore.
It depends a lot on the compiler and on how the toolchain handles the difference between native and cross compilation. Is it simply a case where the toolchain always thinks it is being built as a cross compiler, and one way to build it is to let the configure script auto-detect the host rather than specifying it manually (and auto-set the prefix, etc.)?
Don't assume that just because it is built to be a native compiler it is really native. There are many instances where distros dumb down their native compiler (and kernel and other binaries) so that the distro runs on a wider range of systems. On an ARMv6 system you might be running a compiler that defaults to ARMv4, for example.
That raises a question similar to your own: if I build the toolchain with one default architecture but then specify another, is that different from building the toolchain for the target architecture?
Ideally you would hope that a mostly debugged compiler/toolchain would give you the same results whether you were native or cross compiling, independent of the default architecture. Now, I have seen on an older llvm that llvm-gcc, when run on a 64-bit host and cross compiling to ARM, would build all ints as 64-bit, adding a lot to the code; the same compiler version and the same source code on a 32-bit host would give different results (32-bit ints). Basically the -m32 switch did not work for llvm-gcc (at the time). I don't know if that is still the case, as I switched to clang when doing llvm work and never looked back at llvm-gcc. llvm/clang, for example, is mostly a cross compiler all the time; the linker is the only thing that appears to be host specific. You can take an off-the-shelf llvm and compile for any of the targets on any host system (provided your build didn't disable any of the supported targets, of course).
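For example (a sketch assuming a clang build with the ARM backend enabled and a suitable sysroot; the paths are placeholders), the same clang binary can target another architecture just by switching the target triple:

$ clang --target=armv7a-linux-gnueabihf --sysroot=/path/to/arm/sysroot -c hello.c -o hello_arm.o
$ clang -c hello.c -o hello_x86.o    # no --target: defaults to the host triple
$ file hello_arm.o hello_x86.o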
Although many people think native compilation is more beneficial, or at least no different from cross compilation, the truth is quite the contrary.
People who work at a lower level, e.g. on the Linux kernel, often suffer from having to copy the build environment around. Taking x86 and ARM as an example, the obvious idea is to set up a native ARM build environment, but it is a bad idea.
The resulting binaries are sometimes not the same. For example:
# diff hello_x86.ko hello_arm.ko
Binary files hello_x86.ko and hello_arm.ko differ
# diff hello_x86_objdump.txt hello_arm_objdump.txt
2c8
< hello_x86.ko: file format elf64-littleaarch64
---
> hello_arm.ko: file format elf64-littleaarch64
26,27c26,27
< 8: 91000000 add x0, x0, #0x0
< c: 910003fd mov x29, sp
---
> 8: 910003fd mov x29, sp
> c: 91000000 add x0, x0, #0x0
Generally, higher-level applications are fine with either approach; for lower-level (hardware-related) work, cross compiling from x86 is suggested, since it has a much better toolchain.
Anyway, compilation is all about GCC, glibc and shared libraries (lib.so), and if you are familiar with these, either way should be easy.
PS: Below is the source code
# cat hello.c
#include <linux/module.h> /* Needed by all modules */
#include <linux/kernel.h> /* Needed for KERN_ALERT */
#include <linux/init.h> /* Needed for the macros */
static int hello3_data __initdata = 3;
static int __init hello_3_init(void)
{
printk(KERN_ALERT "Hello, world %d\n", hello3_data);
return 0;
}
static void __exit hello_3_exit(void)
{
printk(KERN_ALERT "Goodbye, world 3\n");
}
module_init(hello_3_init);
module_exit(hello_3_exit);
MODULE_LICENSE("GPL");
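For completeness, the two modules could have been built with commands along these lines (a sketch; it assumes a one-line Kbuild makefile containing "obj-m := hello.o", and the kernel source path and cross prefix are placeholders). The first command is the native build on the ARM board, the second the cross build on the x86 host:
# make -C /lib/modules/$(uname -r)/build M=$PWD modules
# make -C /path/to/arm64-kernel-src ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- M=$PWD modules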
I've been looking into Cygwin/MinGW/lcc and I'd like to be able to compile Perl native C extensions on my Windows machine (preferably under Cygwin) and then run them on Solaris and HP-UX without any further fuss. Is this possible?
This all stems from my original perl cross-platform question here.
(This is a very old question, but missing some useful info --
I've personally done this for Solaris (SPARC & x86), AIX, HP-UX and Linux (x86, x64).)
Getting C++ cross-compiled is much harder than straight C.
HP-UX 32-bit PA-RISC is not supported because it uses SOM format instead of ELF and binutils doesn't (and likely won't ever) support SOM. In other words, you can only cross-compile 64-bit PA-RISC. (Requires PA-RISC 2.0 chip.)
I would go with mingw instead of cygwin, if you can. Cygwin introduces a lot of file permission headaches and cygwin1.dll dependencies that can be troublesome. If possible, however, build on linux. Everything will be much faster because all the tools and scripts you're running are designed for an environment where exec and stat are fast operations. Windows + NTFS is not that environment.
Start with the crosstools script, but be prepared to spend a lot of time on this.
Try with the very latest gcc/binutils first, but if you can't overcome problems, try dropping back to older packages. E.g. for POWER3 (AIX) the gcc 4.x series cross compiler generates bad code; 3.x is fine.
When copying native libs and headers make sure you are copying from the oldest machine you're likely to run on. Copying a new libc means your code won't run on any machine with an older libc.
When copying native libs and headers you probably want 'tar -h' to turn symlinks into actual files; also watch out that on Solaris some requisite crt object files are buried in a cc directory, not under /usr/lib.
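A concrete sketch of that copy step (the prompts indicate which machine each command runs on; all paths are typical examples, not exact locations on your systems):

target$ tar -chf sysroot.tar /usr/include /usr/lib /lib
target$ scp sysroot.tar build-host:
build$  mkdir -p /opt/cross/sysroot
build$  tar -xf sysroot.tar -C /opt/cross/sysroot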
Cross-compilers are very hard to set up and get working correctly.
Consider that (the people at) NetBSD have to put in a huge amount of work to get cross-compiling to work, and they're running the same OS, just different architectures.
You'd have to, at least, copy all the headers from the other OSs to Windows, and get a cross-compiler, linker etc for the target OS/architecture.
Also that may well not be possible - perl and shared libraries may be compiled with a native/non-gcc compiler which won't be available on Windows at all.
I agree with Douglas that getting a cross compiler up and working is very hard to do. This is generally your choice of last resort. If you are bootstrapping, or making a binary for an embedded device, then often cross-compiling is your only option. You should be comfortable compiling your own gcc under Cygwin before considering cross compiling. To cross compile, you need to build a gcc that runs under Windows but creates binaries for your execution platform. Sample instructions for doing this can be found here.
Perhaps you want to cross compile because you don't have root and/or can't compile on your target platform. For example, I had a hosting provider which ran Red Hat Linux. I could run Perl CGI scripts and associated modules, but I could not compile on the target machine, and any libraries I built had to exist in my own directory.
To solve this, I could have attempted to cross compile for my target platform, but instead, I decided to setup a similar host inside a VM on Windows. From within Cygwin, you can create a script which ssh's into your VM, copies your source, and does a full configure/build. The last step was to deploy the binary artifact onto my hosted system.
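A minimal version of such a script might look like this (the hostname, user name, module name and paths are invented for illustration):

#!/bin/sh
# Sync the extension source to the build VM, build and test it there, then fetch the result back.
rsync -a ./MyExt/ builduser@solaris-vm:build/MyExt/
ssh builduser@solaris-vm 'cd build/MyExt && perl Makefile.PL INSTALL_BASE=$HOME/perl5 && make && make test'
rsync -a builduser@solaris-vm:build/MyExt/blib/ ./artifacts/blib/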
I've successfully had both Solaris 10 and OpenSolaris running within a VM on Windows. Unfortunately, you might have a harder time running HP-UX under a VM.
Why don't you read up on "Grand Unified Builder" (http://lilypond.org/gub/ and http://valentin.villenave.info/The-LilyPond-Report-11, section #4)?
I don't know how it works, but GUB allows the Lilypond developers to compile for about 11 platforms on a linux box.
Compile on Windows then use Wine to run them on any *nix. It works well most of the time.
No, this isn't possible at the binary level. There are so many differences at binary level between the various OSes and CPUs.
But what you can do is make your C extensions source-compatible so that they can be compiled for different platforms. C was designed as a "portable assembly language". As long as you stick with routines that are cross-platform, they will usually work the same. You'll still need to test, because there could be bugs that exist only on a particular platform.
This can't be done ... but is it that much of a hassle to recompile the code under Solaris or HP?