Guide for steps to modify a filesystem in Linux

I intend to make some changes to the XFS filesystem.
Is there documentation on the steps involved in doing so? I tried Google but found nothing of much use. I have a rough understanding of the steps involved (see below), but I am looking for a detailed description so that I don't get stuck on simple things.
Steps in my mind:
Ensure my OS is not currently using XFS (no mounted filesystems are XFS)
Download the XFS source for my kernel version
Make changes to the source files
Compile the modified source code (this step requires some conf files, and I am not sure where to get them)
rmmod the xfs module and then insmod the rebuilt xfs module so that the changes take effect
Create a new partition, format it with XFS and test whether things are alright after my changes.
Looking forward to some useful pointers.
It's OK if the pointers are for some other FS like ext3 or ext4, as I believe the details would not vary much from FS to FS.
Thanks

Your steps might work, but if you run into any issues you could wind up with an unbootable system.
The modules for the kernel must be built with the same version of compiler as the kernel itself or you'll have trouble. I've been stymied every time I tried to build a module for the kernel that came with the distro because the distribution maintainers invariably used some customized version of the compiler that I was not able to match.
A safer but longer option is to install the kernel source package for your distro and modify the XFS module source as necessary. Then build the entire kernel, including the customized XFS module, following the instructions for your Linux distribution. Google for your distro and 'custom linux kernel'; you should find dozens of hits.
Once built, you'll want to install your new kernel alongside your old one, and configure the bootloader to make the kernel selectable at boot time. This way even if something goes horribly wrong, you can still boot your system using your existing kernel.
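On the "conf files" worry from the question: an out-of-tree module build only needs the headers and build files of the kernel you are running (usually provided by a kernel-headers or kernel-devel package). A quick way to sanity-check the whole compile/insmod/rmmod cycle before touching XFS is a trivial module. This is only a sketch of the workflow, not anything XFS-specific; the file and module names below are my own.

/* hello.c - minimal out-of-tree module to exercise the build/load/unload cycle.
 *
 * Build against the running kernel with a two-line kbuild Makefile containing
 *   obj-m := hello.o
 * and the command:
 *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
 * Load with 'insmod hello.ko', check dmesg, unload with 'rmmod hello'.
 */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

The same kbuild invocation is what you would use after copying and modifying the XFS sources out of tree, provided the module still builds against your running kernel's headers.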

Related

How to create a bootable application image (EFI)?

I am trying to boot an ELF microkernel in a UEFI environment. So I compiled a minimal boot loader and created an ESP image. This works fine if I boot via an HDD, but I want to boot it directly via the qemu -kernel option (this is a special requirement, as I am working with AMD SEV). This doesn't work.
I can boot my kernel this way with GRUB if I use grub-mkimage with a FAT image included, i.e. like this:
mcopy -i "${basedir}/disk.fat" -- "${basedir}/kernel" ::kernel
mcopy -i "${basedir}/disk.fat" -- "${basedir}/module" ::module
grub-mkimage -O x86_64-efi \
  -c "${basedir}/grub-bootstrap.cfg" \
  -m "${basedir}/disk.fat" \
  -o "${basedir}/grub.efi"
But the goal for my system is minimalism and security (hence the microkernel), so GRUB and its vulnerabilities are out of the question.
So my question is:
How to create a bootable application image similar to grub-mkimage?
I have read about EFI stub boot but couldn't really figure out how to build an EFI stub image.
Normally I am a bare-metal embedded programmer, so the whole UEFI boot thing is a bit weird to me. I am glad for any tips or recommendations. Also, I figured Stack Overflow might not be the best place for such low-level questions; can you maybe recommend other forums?
I want to direct boot it via the qemu -kernel option
Why? It's a qemu-specific hack that doesn't exist on anything else (including any real computer). By using this hack the only thing you're doing is failing to test anything you'd normally use to boot (and therefore failing to test anything that actually matters).
(This is some special requirement as I am working with AMD SEV)
That doesn't make any sense (it's a little bit like saying "I have a banana in my ear because I'm trying to learn how to play piano").
AMD's SEV is a set of extensions intended to enhance the security of virtual machines that has nothing at all to do with how you boot (or whether you boot from BIOS or UEFI or a qemu-specific hack).
I am glad for any tips or recommendations.
My recommendation is to stop using GRUB-specific (Multiboot), QEMU-specific (-kernel) and Linux/Unix-specific (ELF) tools and actually try to use UEFI. This will require you to write your own boot loader using (Microsoft's) PE32+ file format that uses UEFI's services itself. Note that GNU's tools (their "Gnu-EFI" stuff for GCC) are relatively awful (they put a PE32+ wrapper around an ELF file and do run-time patching to make the resulting Franken-monster work); there are much better alternatives now (e.g. the Clang/LLVM/lld toolchain).
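To make "write your own boot loader that uses UEFI's services" a bit more concrete, here is a minimal sketch of a freestanding UEFI application built with Clang/lld. The structure declarations follow the layout in the UEFI specification (only the members used here are declared); the build flags and file names are assumptions on my part, and a real project would use proper headers such as the ones shipped with EDK II.

/*
 * Minimal UEFI application sketch using only the Clang/LLVM/lld toolchain.
 *
 * Assumed build invocation (adjust names/paths as needed):
 *   clang -target x86_64-unknown-windows -ffreestanding -fshort-wchar \
 *         -mno-red-zone -c boot.c -o boot.o
 *   lld-link -subsystem:efi_application -entry:efi_main -nodefaultlib \
 *            boot.o -out:BOOTX64.EFI
 * Then place BOOTX64.EFI at \EFI\BOOT\BOOTX64.EFI on a FAT-formatted ESP.
 */
typedef unsigned short     CHAR16;
typedef unsigned long long UINT64;
typedef unsigned int       UINT32;
typedef UINT64             UINTN;
typedef UINTN              EFI_STATUS;
typedef void              *EFI_HANDLE;

typedef struct {
    UINT64 Signature;
    UINT32 Revision;
    UINT32 HeaderSize;
    UINT32 CRC32;
    UINT32 Reserved;
} EFI_TABLE_HEADER;

struct SIMPLE_TEXT_OUTPUT;
typedef EFI_STATUS (*EFI_TEXT_STRING)(struct SIMPLE_TEXT_OUTPUT *This,
                                      CHAR16 *String);

typedef struct SIMPLE_TEXT_OUTPUT {
    void           *Reset;
    EFI_TEXT_STRING OutputString;
    /* remaining protocol members are not needed here */
} SIMPLE_TEXT_OUTPUT;

typedef struct {
    EFI_TABLE_HEADER    Hdr;
    CHAR16             *FirmwareVendor;
    UINT32              FirmwareRevision;
    EFI_HANDLE          ConsoleInHandle;
    void               *ConIn;
    EFI_HANDLE          ConsoleOutHandle;
    SIMPLE_TEXT_OUTPUT *ConOut;
    /* remaining system table members are not needed here */
} EFI_SYSTEM_TABLE;

/* Firmware calls this with the Microsoft x64 calling convention, which is
 * the default when targeting x86_64-unknown-windows. */
EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    (void)ImageHandle;
    SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                      (CHAR16 *)L"Hello from a bare UEFI application\r\n");
    for (;;) { }   /* spin so the message stays visible */
    return 0;
}

A real loader would go on to use the boot services in the system table (LoadFile/file system protocols, GetMemoryMap, ExitBootServices) to load your kernel and modules; this sketch only shows the entry point and calling conventions.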
If you care about security, then it'll also involve learning about UEFI SecureBoot (and key management, and digital signatures). If you care about secure virtual machines I'd also recommend learning about the SKINIT instruction from AMD's manual (used to create a dynamic root of trust after boot); but don't forget that this is AMD-specific and won't work on any Intel CPU, and is mostly obsolete (the "trusted measurement" stuff from BIOS and TPM was mostly superseded by SecureBoot anyway), and (even on Intel CPUs) if you're only the guest then the hypervisor can emulate it in any way it wants (and it won't guarantee anything is secure).
Finally; note that booting a micro-kernel directly doesn't make much sense either. There are no device drivers in a micro-kernel; so after booting a micro-kernel you end up with a "can't start any device drivers because there are no device drivers" problem. Instead you need to load many files (e.g. maybe an initial RAM disk), then (e.g.) start some kind of "boot log handler" (to display error messages, etc); then find and start the kernel, then start other processes (e.g. a "device manager" to detect devices and drivers; a "VFS layer" to handle file systems and file IO; etc). For the whole thing, starting the kernel is just one relatively insignificant small step (not much more than starting a global shared library that provides multi-tasking) buried among a significantly larger amount of code that does all the work.
Sadly; booting a monolithic kernel directly can make sense because it can contain all the drivers (or at least, has enough built into the kernel's executable file to handle an initial RAM disk if it's "modular monolithic" with dynamically loaded drivers); and this "monolithic with stuff that doesn't belong in any micro-kernel" idea is what most beginner tutorials assume.

How to dump an executable SBCL image that uses osicat

I have a simple common lisp server program, that uses the osicat library to interface with the posix filesystem. I need to do this because the system creates symbolic links to files, and uses the POSIX stat metadata, and neither of those things are straightforward to do in portable lisp.
I am managing the dependencies with quicklisp, and I have all of this pinned to a working distribution. The app is portable between CCL and SBCL, and I tend to build it in the first and deploy it using the latter. I declare the dependencies for the app with an asdf defsystem, and I can use quicklisp to load it for easy development from local projects.
For deployment I was just using some ansible playbooks that replicated a developer environment on a remote host (i.e. setting up quicklisp, pushing code into local projects, running out of a user home directory), which was hacky, but mostly OK. More recently, as it's becoming more stable, I have been compiling it using sb-ext:save-lisp-and-die, using a simple compile script. This means I get an executable that I can run more like a server, with service management scripts and an anonymous user account.
This has been working very well, and so I recently moved this step to the next level, and I'm building .deb packages with my compile script, so I can bundle up everything into a relocatable binary. This also sort of works, but the resultant binaries are not relocatable from the original build host. They refuse to start up, and it appears that they try to dynamically load a shared library for osicat:
Unhandled SIMPLE-ERROR in thread #<SB-THREAD:THREAD "main thread" RUNNING
Mar 15 12:47:14 annie [479]: {10005C05B3}>:
Mar 15 12:47:14 annie [479]: Error opening shared object "libosicat.so":
Mar 15 12:47:14 annie [479]: libosicat.so: cannot open shared object file: No such file or directory.
it looks like the image expects to find this in the original build tree's quicklisp archives
(ERROR "Error opening ~:[runtime~;shared object ~:*~S~]:~% ~A." "/home/builder/buil...quicklisp/dists/quicklisp/software/osicat-20180228-git/posix/libosicat.so
(SB-SYS:DLOPEN-OR-LOSE #S(SB-ALIEN::SHARED-OBJECT :PATHNAME #P"
so poking around the source, I realise that when quicklisp fetches osicat and exercises its build operation, it compiles this shared library to wrap its interface with the system libraries, rather than just FFI-ing to them directly - possibly because it's using the cffi groveller; I don't really know much about cffi (yet). This is fine, but rather than linking to a .so using the system linker, it tries to dlopen it from a fixed path, which is not very portable and kind of breaks the usefulness of the saved image.
I'm a bit stumped at this point, but before I go diving much further into QL and cffi builds, I wondered if there's some build or compile configuration I'm missing that would make it bootstrap in a more 'static' fashion, or influence the production of the wrapped library. Ideally I just want a single blob I can wrap in an installer, linked against system libraries, but if I have to deploy some additional artefacts that's probably alright. I don't know how to make the autogenerated shared objects end up at a more controlled path.
At that point though, I may as well write a .so for my posix calls and distribute this alongside the app and try and FFI to it more directly. That would be a bit of a pain, so I would prefer to not do this.
You're right: when a dumped image is starting up, it tries to reload the shared libraries. Which, as you are experiencing, does not work if the image is not started on the machine it was dumped on.
This is almost what static-program-op set out to solve. A simple system definition like this should help you compile a static program:
(defsystem "foo"
:defsystem-depends-on ("cffi-grovel")
:build-operation "static-program-op" ; "asdf" package is implied
:build-pathname "foo" ; path of the generated binary
:entry-point "foo:main" ; function to use as the entry point
;; ... everything else ...
)
If your system depends on grovel files (defined by :cffi-wrapper-file, :c-file or :o-file), such as the ones provided by osicat, then it will statically link those to your dumped image.
However, it's not perfect.
Essentially, there are still some issues. Some are being fixed upstream by CFFI itself (e.g. not reloading shared libraries of the statically embedded libraries), others are a bit harder. (E.g. SBCL default compilation options don't let you use static-program-op by default. This is being fixed in Debian builds of SBCL, but other distributions are being less responsive.)
This is obviously a problem that the community at large has met, and there are several libraries that are around to help:
The first one, that has been around for a while, is Deploy. The approach it takes is that it embeds the dumped image and the libraries into an archive, and rearranges things for the binary to load them from wherever it is extracted to.
The second one, which I am biased towards because I made it, is linux-packaging. It takes the approach of fixing static-program-op by extending it, but requires you to build a custom SBCL. However, it generates distribution packages like .deb and .rpm, in order to be able to specify dependencies for system shared libraries (e.g. if you depend on sqlite, it will figure out which package provides it and add it as a dependency in the .deb). I highly recommend looking at the .gitlab-ci.yml for examples.
I recommend reading the webpages of both of those libraries to make your choice, they both have their advantages and drawbacks. <joke>Obviously, linux-packaging is superior.</joke>
Maybe you can use sb-posix:symlink and sb-posix:fstat on SBCL instead, removing the osicat dependency via a feature toggle.

How to install xsock?

I downloaded the library from http://sourceforge.net/projects/xsock/.
The INSTALL file lists the steps to build the library.
I changed directory to xsock/libxsock and typed ./configure in the terminal.
Nothing happened... How do I solve this?
`cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system. If you're
using `csh' on an old version of System V, you might need to type
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes a while. While running, it prints some
messages telling which features it is checking for.
Type `make' to compile the package.
...
The library is broken, and cannot be built as distributed. A number of autoconf/automake files are missing from the archive.
Given that the library appears to have been primarily developed on Windows systems, it seems likely to me that the UNIX parts of the build process for this library have not been maintained, or may never have worked at all. My recommendation is that you find another library — this one seems to be largely unmaintained, and the code quality seems rather low.

Standalone application in C, a good idea?

The term has several definitions according to Wikipedia; however, what I'm really interested in is creating a program that has all its needed dependencies included within the source folder, so the end user doesn't need to install additional libraries for the app to run. For example, how Mac apps have all their dependencies within the application bundle already...
Or is there a feature of autotools that does this? I'm programming in a Linux environment...
Are you talking about the source code of your application, or about your application binary?
The answer I'd give in both cases depends on what libraries you're using.
If you're using libraries that you can find anywhere, that are somehow standard and/or that are quite big, you shouldn't bundle them with your application; just require them both to build and to run your application.
Anyway, don't be too concerned about your source code: few people will build your application, and they probably know something about programming and how a Linux system works; it won't be a big deal to require many (even not-so-common) dependencies to build your application.
For what concerns the binary version, it could be a little more problematic, since it will be used by end users who often don't know anything about libraries and programming stuff: you could choose to statically link the smallest and most uncommon libraries into your binary, in order to have fewer dependencies.
You could do it, if you link statically, but it'd be somewhat unusual, and depending on what your program is supposed to do, you might be limiting yourself.
The alternative, if this is not just a one-off project, is to create a Linux Standard Base compatible RPM package and restrict yourself to linking against the libraries and symbols that LSB defines.
Run ldd on your program to discover all dependencies, then copy these to your directory, and add a program-wrapper script that issues
#!/bin/sh
LD_LIBRARY_PATH="${0%/*}:$LD_LIBRARY_PATH" exec "${0%/*}/real-program" "$@"
Duplicating the Mac OS X .app behavior on a plain POSIX system is difficult because it is very hard to guarantee that a process can find its own executable (there are several ways that will almost always work...). Mac OS X provides an OS service for this, but Linux (for instance) does not.
Once you've accomplished that feat, this becomes possible. Though, as others have mentioned, it loses the ability to share resource demands (disk space, RAM space, cache space) with other programs that use the same libraries because you'd be using static copies, or dynamically loading your own copy from the .app-like bundle.
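As a concrete note on the "find its own executable" point above: on Linux specifically, /proc/self/exe is the usual "almost always works" mechanism. A small sketch (not portable beyond Linux):

/* Locate the directory containing the running binary via /proc/self/exe,
 * so bundled libraries and data files can be found relative to it. */
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char exe[PATH_MAX];
    ssize_t n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
    if (n < 0) {
        perror("readlink(/proc/self/exe)");
        return 1;
    }
    exe[n] = '\0';

    /* Strip the file name to get the bundle directory. */
    char *slash = strrchr(exe, '/');
    if (slash)
        *slash = '\0';

    printf("bundle directory: %s\n", exe);
    return 0;
}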

Creation of pseudo device-node under /dev

Question:
How do I (and where can I find additional information, i.e. examples) programmatically create a pseudo device node under /dev from a kernel module?
From your question I'm guessing you're talking about Linux (since you are talking about kernel modules). In that case I'd strongly recommend reading Linux Device Drivers. I'd recommend looking at chapter 14 to understand better how devices work.
It should also be noted that in most current desktop and server distributions of Linux, udev is responsible for creating entries in /dev. You can configure udev with rules that allow you to create the device node with a specific name and location. In the embedded world it might be mdev with busybox that's responsible for populating /dev, or it could even simply be the deprecated devfs.
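To make that concrete: from inside a module, registering a device with the driver core is what makes udev create the node for you. A minimal sketch, assuming a reasonably modern kernel with udev running; the names ("demo", /dev/demo0) are mine and error handling is abbreviated.

/* demo.c - char driver whose /dev node is created by udev. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/device.h>
#include <linux/err.h>

static int major;
static struct class *demo_class;

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    /* .read / .write / .unlocked_ioctl would go here */
};

static int __init demo_init(void)
{
    major = register_chrdev(0, "demo", &demo_fops);
    if (major < 0)
        return major;

    /* On kernels >= 6.4, class_create() takes only the name argument. */
    demo_class = class_create(THIS_MODULE, "demo");
    if (IS_ERR(demo_class)) {
        unregister_chrdev(major, "demo");
        return PTR_ERR(demo_class);
    }

    /* This registration is what udev reacts to by creating /dev/demo0. */
    device_create(demo_class, NULL, MKDEV(major, 0), NULL, "demo0");
    return 0;
}

static void __exit demo_exit(void)
{
    device_destroy(demo_class, MKDEV(major, 0));
    class_destroy(demo_class);
    unregister_chrdev(major, "demo");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");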
Linux Device Drivers is certainly a must-read. However, I would start with chapter 3, since it is a step-by-step example of how to create a char device driver.
The kernel API is a moving target. More than often, you will discover that some example that used to compile against a previous version of the kernel generates a warning or an error with a newer version. In this situation, being able to browse through the sources without getting lost is very useful.
