Can "libc" be moved to another system by just re-compiling?

Can the C library libc be moved from one system to another by just re-compiling it?

No, you cannot move libc to another system just by recompiling it.
The core part of the operating system (the kernel) manages things like threads, processes, memory, drivers, thermal sensors, and other vital resources. glibc relies on the Linux kernel for the parts of the standard library that require system calls.
You'd need to port those parts to your system before compiling. Or, if your target also runs the Linux kernel, you'd just need to cross-compile the Linux kernel and glibc for your target architecture.

Related

What is the relation between Linux kernel and GNU C library?

We know that Linux kernel is written in C. But does it also call standard C functions like malloc() or extra functions like mmap() which are provided by GNU C library (glibc)? In that case, it's strange, because direct low-level interaction with hardware, e.g. memory management, is supposed to be almost always the task of a kernel. So, which is dependent on the other? Which is more fundamental/low-level?
We know that Linux kernel is written in C. But does it also call standard C functions like malloc()
No. However, the kernel defines similar functions like kmalloc. Note this is not part of a library; it's part of the kernel itself.
or extra functions like mmap()
Not mmap, but there are a lot of memory management functions in the kernel.
which are provided by GNU C library (glibc)?
Definitely not. The kernel does not use glibc ever.
So, which is dependent on the other?
Some parts of glibc depend on the kernel. Other parts (like strcpy) have nothing to do with the kernel and don't depend on it. The kernel never depends on glibc. You can run programs on Linux that use a different libc (such as musl) or that don't use a libc at all.

How to include a user level C program in Linux source to be compiled with the Linux kernel?

I have a C program that is tied to a specific version of Linux kernel source.
I only want to compile the application against the kernel source, and then run the binary in a shell.
I will run the code as a user-space application, and not in the kernel space, after building the Linux kernel.
How and where can I include this program in the Linux source so that it can be compiled with the Linux kernel?
Userspace programs do exist in the Linux kernel source tree in the tools/ subdirectory.
There does not seem to be a clear-cut (or any) definition of what kind of program constitutes a "tool" that requires/deserves inclusion/distribution with the kernel source.
The utilities that do (currently) exist in the kernel source tree range from an admin program for examining status bits of memory pages (tools/vm/page-types.c) to simple programs that exercise/demonstrate the ("new") chardev GPIO interface (tools/gpio/gpio-event-mon.c and others).
The largest category of userspace programs or tools in the kernel source is in the tools/testing/selftests/ kernel subdirectory.
The documentation is in Documentation/dev-tools/kselftest.rst
======================
Linux Kernel Selftests
======================
The kernel contains a set of "self tests" under the tools/testing/selftests/
directory. These are intended to be small tests to exercise individual code
paths in the kernel. Tests are intended to be run after building, installing
and booting a kernel.
...
kselftest runs as a userspace process. Tests that can be written/run in
userspace may wish to use the `Test Harness`_. Tests that need to be
run in kernel space may wish to use a `Test Module`_.
Alternatively, many kernel subsystems and hardware components do not keep their tools in the kernel source; that code is instead available as separate source packages/projects.
Given the stability of the binary API that the Linux kernel provides to userspace, a program is rarely tied to a specific kernel version.
A regression (i.e. a change which causes something to break for existing users) is avoided whenever possible by the kernel maintainers.
One reason for inclusion of programs with the kernel source seems to be convenience for the kernel maintainers.
Kernel builders are encouraged to also build and run the selftest programs.

Is a Linux executable "compatible" with OS X?

If you compile a program in say, C, on a Linux based platform, then port it to use the MacOS libraries, will it work?
Is the core machine-code that comes from a compiler compatible on both Mac and Linux?
The reason I ask this is because both are "UNIX based" so I would think this is true, but I'm not really sure.
No, Linux and Mac OS X binaries are not cross-compatible.
For one thing, Linux executables use a format called ELF.
Mac OS X executables use Mach-O format.
Thus, even if much of the library code compiles on each system, the resulting binaries are not portable between them.
Furthermore, Linux is not actually UNIX-based. It does share a number of common features and tools with UNIX, but a lot of that has to do with computing standards like POSIX.
All this said, people can and do create pretty cool ways to deal with the problem of cross-compatibility.
EDIT:
Finally, to address your point on byte-code: when making a binary, compilers usually generate machine code that is specific to the platform you're developing on. (This isn't always the case, but it usually is.)
In general you can easily port a program across various Unix brands. However you need (at least) to recompile it on each platform.
Executables (binaries) are not usable on several platforms, because an executable is tightly coupled with the operating system's ABI (Application Binary Interface), i.e. the conventions of how an application communicates with the operating system.
For instance if your program prints a string onto the console using the POSIX write call, the ABI specifies:
How a system call is made (Linux on 32-bit x86 used to raise the 0x80 software interrupt; it now uses the dedicated sysenter/syscall instructions)
The system call number
How the call's arguments are passed to the kernel
Any kind of alignment
...
And this varies a lot across operating systems.
Note however that in some cases there may be “ABI adapters” that allow running binaries from one OS on another. For instance, Wine lets you run Windows executables on various Unix flavors, and NDISwrapper lets you use Windows network drivers on Linux.
"bytecode" usually refers to code executed by a virtual machine (e.g. for Java or Python). C is compiled to machine code, which the CPU can execute directly. Machine language is hardware-specific, so it would be the same under any OS running on an Intel chip (even under Windows), but the details of how the machine code is wrapped into an executable file, and how it is integrated with system calls and dynamically linked libraries, differ from system to system.
So no, you can't take compiled code and use it in a different OS. (However, there are "cross-compilers" that run on one OS but generate code that will run on another OS).
There is no "core byte-code that comes from a compiler". There is only machine code.
While the same machine instructions may be applicable under several operating systems (as long as they're run on the same hardware), there is much more to a hosted executable than that, and since a compiled and linked native executable for Linux has very different runtime and library requirements from one on BSD or Darwin, you won't be able to run one binary on the other system.
By contrast, Windows binaries can sometimes be executed under Linux, because Linux provides both a binary format loader for Windows's PE format, as well as an extensive API implementation (Wine). In principle this idea can be used on other platforms as well, but I'm not aware of anyone having written this for Linux<->Darwin. If you already have the source code, and it compiles in Linux, then you have a good chance of it also compiling under MacOS (modulo UI components, of course).
Well, maybe... but most probably not.
But if it does, it's not "because both are UNIX" it's because:
Mac computers happen to use the same processor nowadays (this was very different in the past)
You happen to use a program that has no dependency on any library at all (very unlikely)
You happen to use the same runtime libraries
You happen to use a loader/binary format that is compatible with both.

Is there any libc project that does not require the Linux kernel

I am using a custom user-space environment with almost no OS support: only a single char device, a mass-storage interface, and a single network socket.
To provide C programming on this platform, I need a libc. Is there any libc project configurable enough that I can map its low-level I/O to the small API I have access to?
AFAIK glibc and uClibc expect Linux syscalls, so I can't use them (without trying to emulate Linux syscalls, which I'd prefer to avoid).
There are several different libc's to choose from, but all will need some work to integrate into your system.
uClibc has a list of other C libraries.
The most interesting ones on that list are probably
dietlibc
newlib
FreeDOS has a LIBC
EGLIBC might be simpler to port than the "standard" glibc.
newlib might serve this purpose.

how do drivers become parts of operating systems?

I know that OS kernels include drivers, but how does a driver become part of the OS? Does the kernel decompile itself, add the driver, and then recompile itself? Or are drivers plug-ins for the kernel? Someone told me that for most operating systems the drivers actually become part of the kernel, but whenever I compile a C program it turns into an ordinary executable.
The driver architecture depends entirely on your operating system. For most operating systems running on computers (as opposed to embedded devices), thinking of drivers as 'plug-ins' for the kernel is pretty much accurate. That said, there are plenty of older, smaller, and less sophisticated operating systems which require you to build the driver in as part of the kernel - no dynamic loading possible. These days, several operating systems have support for "user-mode" drivers, which are device drivers that don't ever run in the kernel memory space at all.
It depends on the o/s.
Classically, the kernel was a monolithic executable that contained all the drivers - and was rebuilt when a new driver needed to be added, including the code for the new driver along with all the old ones.
In modern Linux, and probably other o/s too, the drivers are dynamically loaded by the kernel when needed. The driver is created in a form that allows the kernel to do that loading; typically, that means in a shared object or dynamic link library format.
In operating systems like Linux, drivers can actually be compiled into the kernel image. Even when statically linked, they may well exhibit a plug-in style architecture that lets you easily include only the drivers you need.
Alternatively, they are dynamically linked and loaded either at boot time or on demand when required by some system level software.
