Why is /lib/libc.so.1 (the C standard library) always mounted on Solaris 10? I have checked both the mount and df output; both show a /lib/libc.so.1 entry.
For both the SPARC and x86 architectures, Solaris ships several optimized builds of the C standard library. At boot time, the one best suited to your machine, i.e. the one that takes advantage of CPU-specific instructions and features, is lofs-mounted on top of the standard one (on x86, for example, typically one of the /usr/lib/libc/libc_hwcap*.so.1 variants).
Since Solaris 10, no static libc has been provided, so this dynamic libc, being the interface between the kernel and userland, is a mandatory component of every program running on Solaris.
One might ask why this is done with a lofs mount rather than with something lighter-weight like a symlink.
The reason is that a symlink is persistent, i.e. it survives a reboot. Using a symlink might then render a system unusable should the hardware capabilities change, or should the wrong library end up being linked to for some other reason. Remember that all Solaris commands are dynamically linked against libc.so; there has been no libc.a for a long time.
Using a lofs mount ensures that the first stages of system boot are performed with the safe default libc.so and that the optimized one is selected only at the right time. In particular, it allows a safe boot with all services disabled (-m milestone=none) that is unaffected by a change in hardware capabilities.
libc.so is required to run Unix commands like ssh or awk, which are written in C and use dynamic (runtime) linking. libc.so is a link to libc.so.1, which is the "base" version of the C library for the release of Solaris 10 you are running.
Solaris does not handle libc versions exactly the way Linux does, because there are different versions of the SPARC architecture; the lowest common denominator is plain sparc. I have an UltraSPARC III box as well as other, more modern machines.
Try the file command on libc.so.1: file /lib/libc.so.1. In order for utilities and other code to get the most out of the box, the architecture ("sparc setting") of libc matches the box. Read about and try the isalist and isainfo commands.
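If you want to see the same information programmatically, Solaris exposes it through sysinfo(2) with the SI_ISALIST command. A minimal sketch, assuming a Solaris box with the usual <sys/systeminfo.h> (this mirrors what the isalist command prints):

/* isalist_demo.c -- print the instruction set architectures this machine
 * supports, most capable first (the same data the isalist command shows).
 * Build on Solaris with: cc -o isalist_demo isalist_demo.c */
#include <stdio.h>
#include <sys/systeminfo.h>

int main(void)
{
    char buf[1024];

    /* SI_ISALIST fills buf with a space-separated list such as
     * "sparcv9+vis2 sparcv9 ... sparcv8 sparc" on an UltraSPARC box. */
    if (sysinfo(SI_ISALIST, buf, sizeof buf) == -1) {
        perror("sysinfo");
        return 1;
    }
    printf("%s\n", buf);
    return 0;
}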
Related
Does GCC (or alternatively Clang) define any macro when it is compiled for the Arch Linux OS?
I need to ensure that my software cannot be compiled under anything but Arch Linux (the reason behind this is off-topic). I couldn't find any relevant resources on the Internet.
Does anyone know how to guarantee through GCC preprocessor directives that my binaries are only compilable under Arch Linux?
Of course I can always
#ifdef __linux__
...
#endif
But this is not precise enough.
Edit: This must be done through C source code and not by any build system, so, for example, doing this through CMake is completely ruled out.
Edit 2: Users faking this behaviour are not a problem, since the software is distributed to selected clients; anyone actively trying to "misuse" our source code is doing so as "their decision".
Does GCC (or alternatively Clang) define any macro when it is compiled for the Arch Linux OS?
No, because there's nothing inherently specific to Arch Linux at the binary level. When compiling, the only things you (and the compiler) have to care about are the target architecture (i.e. what kind of CPU the code is going to run on), data type sizes and alignments, and function calling conventions.
Later on, when it's time to link the compiled translation units into the final executable, the runtime libraries on the system also become a concern. Without taking special precautions, you're essentially locking yourself into the specific brand of runtime libraries (glibc vs. e.g. musl; libstdc++ vs. libc++) pulled in by the linker.
One can easily sidestep the latter problem by linking statically, but that limits the range of system and mid-level APIs available to the program. For example, on Linux a naively statically linked program wouldn't be able to use graphics acceleration APIs like OpenGL 3.x or Vulkan, since those rely on loading components of the GPU drivers into the process. You can however still use X11 and indirect GLX OpenGL, since those work over wire protocols going through sockets, which are implemented with direct syscalls to the kernel.
And these kernel syscalls are exactly the same at the binary level for each and every Linux kernel of every distribution out there. Although inside the kernel there's a lot of leeway when it comes to redefining interfaces, for the interfaces toward userland (i.e. regular programs) there's the holy, dogmatic, ironclad rule that YOU NEVER BREAK USERLAND! Kernel developers breaking this rule, intentionally or not, are chewed out publicly by Linus Torvalds in his infamous rants.
The bottom line is that there is no such thing as a "Linux-distribution-specific identifier at the binary level". At the end of the day, a Linux distribution is just that: a distribution of stuff. Someone decided on a set of files that make up a working Linux system, wrapped it up somehow, and slapped a name on it. That's it. There's nothing inherently specific to "Arch" Linux other than that it's called "Arch" and (for the time being) relies on the pacman package manager. Everything else about "Arch", or any other Linux distribution, is a matter of happenstance.
If you really want to sort different Linux distributions into certain bins regarding binary compatibility, then you'd have to pigeonhole the combinations of
Minimum required set of supported syscalls. This translates into minimum required kernel version.
Which libc variant is being used, and potentially which version, although it's perfectly possible to link against a minimal set of functions that has been around almost "forever" (see the sketch after this list).
Which variant of the C++ standard library the distribution decided upon. This also affects programs that might appear to be pure C, because certain system-level libraries (*cough* Mesa *cough*) will internally pull in a lot of C++ infrastructure (even compilers), triggering other "fun" problems¹.
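To make the libc item concrete, here is a minimal sketch, assuming a glibc-based system, of comparing the glibc a binary was compiled against with the one it is actually running on; gnu_get_libc_version() and the __GLIBC__ macros are real glibc interfaces, and on a non-glibc system the #else branch applies:

/* glibc_check.c -- compare compile-time and run-time glibc versions.
 * Build with: gcc -o glibc_check glibc_check.c */
#include <stdio.h>

#ifdef __GLIBC__
#include <gnu/libc-version.h>
#endif

int main(void)
{
#ifdef __GLIBC__
    /* __GLIBC__/__GLIBC_MINOR__ are baked in at compile time... */
    printf("compiled against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    /* ...while gnu_get_libc_version() asks the libc actually loaded. */
    printf("running on glibc %s\n", gnu_get_libc_version());
#else
    printf("not glibc (musl, uclibc, ...)\n");
#endif
    return 0;
}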
I need to check that my software restricts itself from running under anything but Arch Linux (the reason behind this is off-topic). I couldn't find any relevant resources on the internet.
You couldn't find resources on the Internet because there's nothing specific at the binary level that makes "Arch" Arch. For what it's worth, right this instant I could create a fork of Arch, change its default XDG skeleton – so that user directories are by default populated with subdirs called leech, flicks, beats, pics – and call it "l33tz" Linux. For all intents and purposes it would no longer be Arch. It would behave noticeably differently from default Arch, which would also matter to you if you had relied on any specific detail, however minute.
Your employer doesn't seem to understand what Linux is or what distinguishes distributions from each other.
Hint: it's not binary compatibility. As a matter of fact, as long as you stay within the boring old realm of glibc + libstdc++, Linux distributions are shockingly compatible with each other. There might be slight differences in where they put libraries other than libc.so, libdl.so and ld-linux[-${arch}].so, but those can usually be found under /lib. And once ld-linux[-${arch}].so and libdl.so take over (that is, pull in all the libraries loaded at runtime), all the specifics of where shared objects and libraries are to be found are abstracted away by the dynamic linker.
¹ Like a process becoming multithreaded only after global constructors have run, while libstdc++ decided it wanted to be single-threaded because libpthread wasn't linked into a program that never created a thread of its own. That was a really weird bug I unearthed, which yshui finally understood: https://gitlab.freedesktop.org/mesa/mesa/-/issues/3199
You can list the predefined preprocessor macros with
gcc -dM -E - < /dev/null
clang -dM -E - < /dev/null
None of those indicate what operating system the compiler is running under. So not only can you not tell whether the program is being compiled under Arch Linux, you can't even tell whether it is being compiled under Linux. The macros __linux__ and friends indicate that the program is being compiled for Linux: they are defined when cross-compiling from another system to Linux, and not defined when cross-compiling from Linux to another system.
You can artificially make your program more difficult to compile by specifying absolute paths for system headers and relying on non-portable headers (e.g. /usr/include/bits/foo.h). That can make cross-compilation or compilation for anything other than Linux practically impossible without modifying the source code. However, most Linux distributions install headers in the same location, so you're unlikely to pinpoint a specific distribution.
You're very likely asking the wrong question. Instead of asking how to restrict compilation to Arch Linux, start from why you want to restrict compilation to Arch Linux. If the answer is “because the resulting program wouldn't be what I want under another distribution”, then start from there and make sure that the difference results in a compilation error rather than incorrect execution. If the answer to “why” is something else, then you're probably looking for a technical solution to a social problem, and that rarely ends well.
No, it doesn't. And even if it did, it wouldn't stop anyone from compiling the code on an Arch Linux distro and then running it on a different Linux.
If you need to prevent your software "from running under anything but Arch Linux", you'll need to insert a run-time check. Although, to be honest, I have no idea what that check might consist of, since Linux distros are not monolithic products; the actual check would probably have to follow from your reasons for imposing the restriction.
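As a sketch of what such a run-time check might look like: most modern distributions, Arch included, ship an /etc/os-release file whose ID field names the distro, so one (easily fooled) heuristic is to parse that file. This is only an illustration of the idea, not a robust guarantee:

/* archcheck.c -- refuse to run unless /etc/os-release says ID=arch.
 * Note: trivially defeated by editing the file; a heuristic only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[256];
    int is_arch = 0;
    FILE *f = fopen("/etc/os-release", "r");

    if (f != NULL) {
        while (fgets(line, sizeof line, f) != NULL) {
            if (strcmp(line, "ID=arch\n") == 0) {
                is_arch = 1;
                break;
            }
        }
        fclose(f);
    }
    if (!is_arch) {
        fprintf(stderr, "This program only runs on Arch Linux.\n");
        return EXIT_FAILURE;
    }
    puts("Arch Linux detected, continuing.");
    return EXIT_SUCCESS;
}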
I have written a program in C and compiled and executed it with GCC. I want to share the executable file without sharing the actual source code. Is there any way to share my program without revealing the source code, so that the executable can run on other computers with GCC compilers?
Is there any way to share my program without revealing the source code, so that the executable can run on other computers with GCC compilers?
TL;DR: yes, provided the machines share a greater degree of similarity than merely both having GCC. One simply copies the binary file and any needed auxiliary files to a compatible system and runs it.
In more detail
It is quite common to distribute compiled binaries without source code, for execution on machines other than the ones on which those binaries were built. This mode of distribution does present potential compatibility issues (as described below), but so does source distribution. In broad terms, you simply install (copy) the binaries and any needed supporting files to suitable locations on a compatible system and execute them. This is the manner of distribution for most commercial software.
Architecture dependence
Compiled binaries are certainly specific to a particular hardware architecture, or in certain special cases to a small, predetermined set of two or more architectures (e.g. old Mac universal binaries). You will not be able to run a binary on hardware too different from what it was built for, but "architecture" is quite a different thing from CPU model.
For example, there is a very wide range of CPUs that implement the x86_64 architecture. Most programs targeting that architecture will run on any such CPU. Indeed, the x86 architecture is similar enough to x86_64 that most programs built for x86 will also run on x86_64 (but not vice versa). It is possible to introduce finer-grained hardware dependency, but you do not generally get that by default.
Operating system dependence
Furthermore, most binaries are built to run in the context of a host operating system. You will not be able to run a binary on an operating system too different from the one it was built for.
For example, Linux binaries do not run (directly) on Windows. Windows binaries do not run (directly) on OS X. Etc.
Library dependence
Additionally, a program built against shared libraries requires a compatible version of each required shared library to be available in the runtime environment. That does not necessarily have to be exactly the same version against which it was built; whether it does depends on the library, on which of its functions and data are used, and on whether and how those changed over time.
You can sidestep this issue by linking every needed library statically, up to and including the C standard library, or by distributing shared libraries along with your binary. It's fairly common to just live with this issue, however, and therefore to support only a subset of all possible environments with your binary distribution(s).
Other
There is a veritable universe of other potential compatibility issues, but it's unlikely that any of them would catch you by surprise with respect to a program that you wrote yourself and want to distribute. For example, if you use nVidia CUDA in your program then it might require an nVidia GPU, but such a requirement would surely be well known to you.
Executables are often specific to the environment/machine on which they were created. Even when the same processor/hardware is involved, dependencies on libraries may prevent executables from simply running on other machines.
A program that uses only "standard libraries" and links all of them statically does not need any other dependency (in the sense that all the code it needs is either in the binary itself or in OS components that, being part of the system, are already present).
You have to link the standard library statically. Otherwise it will only work if the version of the standard library for your compiler is installed in your OS by default (which you can't rely on, in general).
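For example, with GCC a fully static build is typically requested like this (assuming your distribution ships static variants of the libraries involved):

gcc -static -o myprogram myprogram.c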
I downloaded the glibc source code, modified some portion of the standard library and then used LD_PRELOAD to use that modified standard library (in the form of an .so file) with my program. However, when I copied that .so file to another computer and tried to run the same program using LD_PRELOAD there, I got a segmentation fault.
Notice that both computers have x86-64 processors. Moreover, both have gcc 4.4 installed, although the computer on which the program fails also has gcc 4.1.2 installed alongside gcc 4.4. However, one is running Ubuntu 10.04 (where I compiled), while the other is running CentOS 5. Is that the cause of the segmentation fault? How can I solve this problem? Note that I don't have administrative rights on the CentOS 5 machine.
When you LD_PRELOAD the C library, I believe you're loading it in addition to the default C library. When they're the exact same version, all the symbols match, and yours takes precedence. So it works. When they're different versions, you may well have a mix, on a per-symbol basis.
Also, the NSS (name service switch, e.g., all the stuff from /etc/nsswitch.conf) API is not stable. These modules are separate from the main libc.so, but are dynamically loaded when a program e.g., does a user id to username mapping. Loading the wrong version (because you copied libc.so over) will do all kinds of badness.
Further, Ubuntu may be using eglibc and CentOS glibc. So you could be looking at a different fork of glibc.
If your LD_PRELOAD library includes only the symbols you actually need to override, and overrides them to the minimum extent possible (e.g., where possible, calling the overridden function), then it has a much higher chance of being portable.
For an example of how to do this, see fakeroot.
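As a minimal sketch of that pattern, here is an interposer that overrides a single libc function and delegates to the real one via dlsym(RTLD_NEXT, ...); the choice of getenv here is purely illustrative:

/* preload.c -- build with: gcc -shared -fPIC -o preload.so preload.c -ldl
 * Use with:  LD_PRELOAD=./preload.so some_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

char *getenv(const char *name)
{
    /* Look up the next definition of getenv in link order,
     * i.e. the one in the real libc. */
    char *(*real_getenv)(const char *) =
        (char *(*)(const char *))dlsym(RTLD_NEXT, "getenv");

    fprintf(stderr, "getenv(\"%s\") intercepted\n", name);
    return real_getenv(name);
}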
If you're changing so much of libc that your only choice is to override all of it, then (a) you're doing something very weird; (b) you probably want to use LD_LIBRARY_PATH, not LD_PRELOAD; see the ld.so(8) manpage for details.
It is likely that your libc is not portable between kernel versions.
If you compile a program in, say, C, on a Linux-based platform, then port it to use the MacOS libraries, will it work?
Is the core machine-code that comes from a compiler compatible on both Mac and Linux?
The reason I ask this is because both are "UNIX based" so I would think this is true, but I'm not really sure.
No, Linux and Mac OS X binaries are not cross-compatible.
For one thing, Linux executables use a format called ELF.
Mac OS X executables use Mach-O format.
Thus, even though many of the same libraries can be built separately on each system, the resulting binaries are not portable between them.
Furthermore, Linux is not actually UNIX-based. It does share a number of common features and tools with UNIX, but a lot of that has to do with computing standards like POSIX.
All this said, people can and do create pretty cool ways to deal with the problem of cross-compatibility.
EDIT:
Finally, to address your point on byte-code: when making a binary, compilers usually generate machine code that is specific to the platform you're developing on. (This isn't always the case, but it usually is.)
In general, you can easily port a program across various Unix brands. However, you need (at least) to recompile it on each platform.
Executables (binaries) are not usable on several platforms, because an executable is tightly coupled with the operating system's ABI (Application Binary Interface), i.e. the conventions of how an application communicates with the operating system.
For instance, if your program prints a string to the console using the POSIX write call, the ABI specifies:
How a system call is made (Linux used to use the 0x80 software interrupt on x86; now it uses the dedicated sysenter instruction)
The system call number
How the call's arguments are passed to the system
Any kind of alignment
...
And this varies a lot across operating systems.
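To make this concrete, here is a sketch of a write performed through Linux's own syscall numbering, via the syscall(2) wrapper; SYS_write and the register conventions mentioned in the comment are specific to Linux on x86-64, which is exactly the kind of detail an ABI pins down:

/* rawwrite.c -- write(2) performed as an explicit Linux system call.
 * SYS_write is the Linux syscall number; other kernels number their
 * calls differently, which is part of what an ABI defines. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    static const char msg[] = "hello via raw syscall\n";

    /* On x86-64 Linux this becomes: syscall number 1 in rax,
     * fd in rdi, buffer in rsi, length in rdx, then `syscall`. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}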
Note however that in some cases there may be "ABI adapters" that allow running binaries from one OS on another. For instance, Wine lets you run Windows executables on various Unix flavors, and NDISwrapper lets you use Windows network drivers on Linux.
"bytecode" usually refers to code executed by a virtual machine (e.g. for java or python). C is compiled to machine code, which the CPU can execute directly. Machine language is hardware-specific so it it would be the same under any OS running on an intel chip (even under Windows), but the details of how the machine code is wrapped into an executable file, and how it is integrated with system calls and dynamically linked libraries are different from system to system.
So no, you can't take compiled code and use it in a different OS. (However, there are "cross-compilers" that run on one OS but generate code that will run on another OS).
There is no "core byte-code that comes from a compiler". There is only machine code.
While the same machine instructions may be applicable under several operating systems (as long as they're run on the same hardware), there is much more to a hosted executable than that, and since a compiled and linked native executable for Linux has very different runtime and library requirements from one on BSD or Darwin, you won't be able to run one binary on the other system.
By contrast, Windows binaries can sometimes be executed under Linux, because Linux provides both a binary-format loader for Windows' PE format and an extensive API implementation (Wine). In principle this idea can be used on other platforms as well, but I'm not aware of anyone having done it for Linux<->Darwin. If you already have the source code and it compiles on Linux, then you have a good chance of it also compiling under MacOS (modulo UI components, of course).
Well, maybe... but most probably not.
But if it does, it's not "because both are UNIX"; it's because:
Mac computers happen to use the same processor nowadays (this was very different in the past)
You happen to use a program that has no dependency on any library at all (very unlikely)
You happen to use the same runtime libraries
You happen to use a loader/binary format that is compatible with both.
If I just want to use the gsl_histogram.h header from the GNU Scientific Library (GSL), can I copy it from an existing machine (Mac OS Snow Leopard) that has GSL installed to a different machine (Linux CentOS 5.7) that doesn't have GSL installed, and just use an #include <gsl_histogram.h> statement in my C program? Would this work?
Or, do I have to go through the full install of GSL on the Linux box, even though I only need this one library?
Just copying the header gsl_histogram.h is not enough. The header merely declares the interface exposed by the library; you would also need to copy binaries such as the *.so and *.a files, and it's hard to tell which ones to copy. So I think you'd better just install GSL on your machine; finding and installing the package is pretty easy.
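To see why the header alone is not enough, consider this minimal use of GSL's documented histogram interface; it compiles against the header, but linking fails unless the library binaries are installed, typically with something like gcc histo.c -lgsl -lgslcblas -lm:

/* histo.c -- tiny GSL histogram example; the header only declares
 * gsl_histogram_alloc() and friends, their code lives in libgsl.so/.a. */
#include <stdio.h>
#include <gsl/gsl_histogram.h>

int main(void)
{
    gsl_histogram *h = gsl_histogram_alloc(5);      /* 5 bins */
    gsl_histogram_set_ranges_uniform(h, 0.0, 5.0);  /* bins [0,1) ... [4,5) */

    gsl_histogram_increment(h, 0.5);
    gsl_histogram_increment(h, 2.3);
    gsl_histogram_increment(h, 2.7);

    /* Print "lower upper count" for each bin. */
    gsl_histogram_fprintf(stdout, h, "%g", "%g");
    gsl_histogram_free(h);
    return 0;
}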
There are surely a lot of libraries out there, but one worth mentioning is Gnuplot. With it you do not even need to compile any code, though you do need to read a bit of documentation. Luckily, there is already a question on Stack Overflow about how to draw a histogram with Gnuplot: Histogram using gnuplot? It is worth noting that Gnuplot is a very powerful tool, so time invested in reading its documentation will certainly pay off.
You cannot copy libraries from one OS to another and expect them to work unchanged.
OS X uses the Mach-O object file format while modern Linux systems use the ELF object file format. The usual ld.so(8) linker/loader will not know how to load the Mach-O format object files for your executable to execute. So you would need the Apple-provided ld.so(8) -- or whatever they call their loader. (It's been a while.)
Furthermore, the object files from OS X will be linked against the Apple-supplied libc and require the corresponding symbols from the Apple-supplied library, so you would also need to provide the Apple-supplied libc on the Linux system. That C library would try to make system calls using the OS X system call numbers and calling conventions. I guarantee the system call numbers have changed, and the calling conventions are almost certainly different.
While the Linux kernel's binfmt_misc generic object loader can be used to teach the kernel how to load different object file formats, and the kernel's personality(2) system call can be used to select between different calling conventions, system call numbers, and so on, the amount of work required to make this work is nothing short of immense: the WINE Project has been working on exactly this issue (but with the Windows format COFF and supporting libraries) since 1993.
It would be easier to run:
apt-get install libgsl0-dev
or whatever the equivalent is on your distribution of choice. If your distribution does not make it easily available, it would still be easier to compile and install the library by hand rather than try to make the OS X version work.