I have some code that umounts a file system on a device and then immediately removes the device from device-mapper using the DM_DEV_REMOVE ioctl command.
Sometimes, as part of a stress test, I run this code in a tight loop of:
create the device
mount the file system on the device
unmount the file system
remove the device
Often, when running this test over thousands of iterations, I will eventually get the errno EBUSY when trying to remove the device. The umount is always successful.
I have tried searching on this issue, but mostly what I find is people having issues with getting EBUSY when umounting, which is not the problem I am having.
The closest thing to helpful that I could find is the dmsetup man page, which mentions the --retry option as a workaround for udev rules opening devices while you are trying to remove them. Unfortunately for me, though, I have been able to confirm that udev does not have my device open when I am trying to remove it.
I have used the DM_DEV_STATUS command to check the open_count for my device. The open_count is always 1 before the umount; when my test succeeds it is 0 after the umount, and when it fails it is still 1 after the umount.
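For illustration, an equivalent open_count check using libdevmapper's dm_task interface (which, as far as I can tell, issues the same DM_DEV_STATUS request underneath) might look like the sketch below; the device name is a placeholder and error handling is minimal:

    /* Sketch: query open_count for a device-mapper device via libdevmapper.
     * "testdev" is a placeholder device name. Build with -ldevmapper. */
    #include <stdio.h>
    #include <libdevmapper.h>

    static int dm_open_count(const char *name)
    {
        struct dm_task *dmt = dm_task_create(DM_DEVICE_INFO);
        struct dm_info info;
        int count = -1;

        if (!dmt)
            return -1;
        if (dm_task_set_name(dmt, name) && dm_task_run(dmt) &&
            dm_task_get_info(dmt, &info) && info.exists)
            count = info.open_count;   /* same counter the status ioctl reports */
        dm_task_destroy(dmt);
        return count;
    }

    int main(void)
    {
        printf("open_count = %d\n", dm_open_count("testdev"));
        return 0;
    }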
Now, what I am trying to find out in order to root-cause my issue is: could my resource-busy failure be caused by umount releasing my device asynchronously, thus creating a race condition? I know that umount is supposed to be synchronous when it comes to the actual unmounting, but I couldn't find any documentation on whether releasing/closing the underlying device could happen asynchronously.
And, if it isn't umount holding an open handle to my device, are there any other likely candidates?
My test is running on a 3.10 kernel.
Historically, system calls blocked the calling process until the whole task was done (write(2) to a block device being the first major exception, for obvious reasons). The reasoning was that you need one process to do the job, and the process that issued the syscall was there for exactly that purpose (and the CPU time could be charged to that user's account).
Nowadays there are plenty of kernel threads dedicated to work that is not tied to any particular process, and umount(2) could in principle be one of the syscalls that pushes some of its work into the background (I don't think it is, as umount(2) is not issued frequently enough to justify such a change in the code).
But Linux is not a direct UNIX descendant, so umount(2) could conceivably be implemented that way. I don't believe it is, anyway.
The umount(2) syscall normally succeeds, except when inodes on the filesystem are in use, which is not your case. But the kernel can be busy with some heavy-duty work that forces it to allocate non-swappable kernel memory and fail the request, and that could produce the error you are getting (note that this is only a guess; I have not checked the code, and you had better look at the umount(2) implementation yourself).
There is another issue that could block your umount (or make it fail) if you have modified the filesystem in some way. There is reference-dependency code that lets filesystems survive power failures in a consistent state (on Linux this is called ordered data; on BSD systems it is called soft updates, and it means that erased files are not freed immediately after unlink(2)). This could block umount(2), or make it fail, if some data still has to be written to the filesystem before the actual unmount can proceed. But again, this should not be your case, since you say you don't modify the mounted filesystem.
Related
I'm looking for a reliable, unobtrusive runtime check a process can make for whether it's running on Linux without mmu. By unobtrusive, I mean having minimal or no side effects on process state. For example, getting EINVAL from fork would be one indication, but would create a child process if the test failed. Attempting to cause a fault and catch a signal is out of the question since it involves changing global signal dispositions. Anything involving /proc or /sys would be unreliable since they may not be mounted/visible (e.g. in chroot or mount namespace).
Failure of mprotect with ENOSYS seems to be reliable, and can be done without any side effects beyond the need to map a test page to attempt it on. But I'm not sure if it's safe to rely on this.
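A minimal sketch of that probe, assuming the ENOSYS behaviour described above, could look like this; the only side effect is the temporary anonymous mapping, which is unmapped before returning:

    /* Probe for no-MMU Linux: map one anonymous page, try to change its
     * protection, and treat ENOSYS from mprotect as "no MMU". */
    #define _DEFAULT_SOURCE
    #include <errno.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static int probably_no_mmu(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        void *p = mmap(0, pagesz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int result = 0;

        if (p == MAP_FAILED)
            return -1;                  /* could not even run the test */
        if (mprotect(p, pagesz, PROT_READ) != 0 && errno == ENOSYS)
            result = 1;                 /* mprotect unimplemented: likely no-MMU */
        munmap(p, pagesz);
        return result;
    }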
Are there any better/recommended ways to go about this?
Before anyone tries to challenge the premise and answer that this is known statically at compile time, no, it's not. Assuming you build a position-independent executable for an ISA level supported by both mmu-ful and mmu-less variants of the architecture, it can run on either. (I'm the author of the kernel commit that made this work.)
I have been reading about system calls and how they work in Linux. I still have more reading to do but one thing that nothing I have read has answered is, WHY do we need system calls?
I understand that system calls are requests from a user-space program for the kernel to do something, but my question is basically: why can't the user-space program do the thing itself? Why doesn't glibc do the actual operation instead of just being a wrapper for a system call?
For example, if I call fopen() in my program, why does glibc call the open system call? Why doesn't glibc just do the operation itself?
I understand that it would mean glibc developers would have a lot more work and would need intimate knowledge of Linux, but isn't glibc already very closely tied to the Linux kernel?
Also, I understand that the system call functions run in ring 0 on the CPU... but what's really the point of that? If I execute a program, I am giving it express permission to run, so what security is added by separating what code can run in which context, since you are giving it all permission anyway?
Why doesn't glibc just do the operation itself?
Well, that is more or less the way things went in the good old MS/DOS days: no separation between kernel code and user code, and user code could happily access the hardware directly.
This has just two major problems:
It works (rather) fine on a single-user, non-multitasking system, but as soon as multiple programs can run simultaneously, you have to synchronize hardware access and memory usage => those are exactly the parts delegated to the kernel
There is no protection of the system from a poorly coded program. In a modern OS, an erroneous program can crash, but the system itself should survive. In MS/DOS, a program crash usually ended in a system reboot.
For those reasons, all modern OSes (except maybe some lightweight embedded ones) enforce isolation between the different user processes and the kernel. And that just means you need a way for a user-mode process to request a privileged action (reading or writing a physical disk is one) from the kernel: that is exactly what system calls are for.
Why doesn't glibc just do the operation itself?
Short answer: Because it can't.
Long answer:
A program running on Linux runs in one of two modes: user land or kernel land.
Kernel land has every right and can do everything, including talking to the hardware or providing userspace callbacks. For instance, when you call fopen(), the kernel does all the dirty work of talking to your filesystem (ext4, for instance), the caching, everything down to talking to the SATA controller to access the data on the hard drive.
Glibc could do some of that using the devices the kernel exposes in /dev, but that would mean recoding from scratch all the filesystem layers, the sockets, the firewalling...
The kernel just provides an easy-to-use API through which programmers can get privileged work done and communicate with the devices. That's how Linux (and most modern OSes) is made.
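To make that boundary concrete, here is a rough sketch of what a call like fopen ultimately reduces to on Linux: a single request handed to the kernel, issued below with the raw syscall(2) interface (the file name is just a placeholder). Everything past that point, the filesystem, the page cache, the disk driver, runs in kernel land.

    /* Sketch: the kernel request that glibc's fopen("data.txt", "r") roughly
     * boils down to, issued here directly with syscall(2). "data.txt" is a
     * placeholder. The buffered FILE* machinery lives entirely in user space. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        long fd = syscall(SYS_openat, AT_FDCWD, "data.txt", O_RDONLY);
        if (fd < 0) {
            perror("openat");
            return 1;
        }
        /* ... read and buffer in user space, using only this file descriptor ... */
        syscall(SYS_close, fd);
        return 0;
    }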
What security is added by separating what code can be run in different contexts since you are giving it all permission anyway?
Permissions are managed by the kernel. Without syscalls, there is no permission checking; or should each program you run check its own permissions? Once again, that would be reinventing the wheel every time.
If the code generated by a C implementation were the only thing that was going to run on the target system (as is the case for many freestanding implementations, and for a very small number of hosted ones), and if the implementation knew precisely what hardware it would be running on (true of some freestanding implementations, but seldom true of hosted ones), its runtime library might be able to perform operations like "fopen" by communicating directly with the storage hardware. It is rare, however, for either condition to apply, much less both of them.
If multiple programs will be using a storage device, it is generally necessary either that they coordinate their actions somehow, or else that sequences of operations performed by different programs never overlap and that every program "forget" anything it thinks it knows about the state of storage whenever another program might have written to it.
Otherwise, suppose a disk contains a single file and program #1 uses "fopen" to open it for reading. Say each directory sector holds 8 entries; the program would read the first directory sector and observe that slot #0 identifies the file of interest while slots #1-#7 are blank.
Now suppose program #2 uses "fopen" to create a file for writing. It would read the directory sector, observe that slots #1-#7 are blank, and rewrite the directory sector with information about the new file in slot #1.
Finally, suppose program #1 wants to write a file. If it doesn't know about program #2, it might reasonably believe it knows what the directory contains (it had read it earlier, and has no reason to believe it's changed), place information about the new file in slot #1, and replace the directory sector on disk with its new version, obliterating the entry written by program #2.
Having both programs route their operations through an operating system ensures that when program #2 wants to create its file, the OS can exploit the fact that it has just read the directory on behalf of program #1 (and thus doesn't need to reread it). More importantly, when program #1 goes to write a file, the operating system will know that the directory now contains the file written by program #2, and will therefore ensure that the new file gets placed in slot #2.
Contrary to what other answers say, even microcomputer C implementations running on platforms like MS-DOS essentially always relied upon the OS for file I/O. Some would include their own console I/O routines because the ones in MS-DOS were about four times as slow as they should have been, but the need for coordination when using file I/O meant that very few programs would try to do it themselves.
1- You don't want to deal with low-level hardware communication. At least most people don't. Each device has hundreds of commands.
2- Make a simple mistake and your CPU/RAM or I/O device might be useless forever.
3- When you are part of a network, you can share resources. The system calls and the kernel keep your co-worker from damaging your hard disk.
Another consideration is that the OS kernel needs to provide an abstraction for the myriad different types of hardware via a uniform API - without which you'd invariably be making device specific calls in your program.
While the previously-idle disk spins up for two seconds, or the networked disk gets connected for thirty seconds, what is the library going to do?
The full answer to your question is very broad but let me take a simple example based upon your question about fopen.
Let us say that we have a large system with hundreds or thousands of users. One of those users is, say, the HR department, with files containing confidential information about employees.
If that disk could be accessed at will in user mode, then any person on the system could open any file on the system, including those with confidential information.
In other words, operating systems manage SHARED resources. These include disk, CPU, and memory. If these could be controlled from user mode, there would be no way to ensure that they were shared equitably.
I am writing a block device driver for Linux.
It is crucial to support unsafe removal (like a USB unplug). In other words, I want to be able to shut down the block device without creating memory leaks or crashes even while applications hold open files or are performing IO on my device, or while it is mounted with a file system.
Unsafe removal may well corrupt the data stored on the device, but that is something the customers are willing to accept.
Here are the basic steps I have taken:
Upon unsafe removal, the block device spawns a zombie which automatically fails all new IO requests, ioctls, etc. The zombie substitutes the make_request function and swaps other function pointers so that the kernel no longer needs the original block device.
The block device waits for all IO that is currently in flight (and uses my internal resources) to complete.
It calls del_gendisk(); however, this does not really free the kernel resources because they are still in use.
The block device frees itself.
The zombie keeps track of the number of open() and close() calls on the block device, and when the last close() occurs it automatically frees itself.
Result: I am not leaking the block device, request queue, gendisk, etc.
However, this is a very involved mechanism which requires a lot of code and is extremely prone to race conditions. I am still struggling with corner cases, per-CPU counting of IOs, and occasional crashes.
My question: is there a mechanism in the kernel which already does this? I have searched manuals, literature, and countless source code examples of block device drivers, RAM disks, and USB drivers, but could not find a solution. I am sure I am not the first one to encounter this problem.
Edited:
I learned about the hotplug mechanism from the answer below by Dave S, but it does not help me. I need a solution for how to safely shut down the driver, not for how to notify the kernel that the driver was shut down.
Example of one problem:
blk_queue_make_request() registers the function through which my block device serves IO. In that function I increment per-CPU counters so I know how many IOs are in flight on each CPU. However, there is a race where the function has already been called but the counter has not been incremented yet, so my device thinks there are 0 IOs in flight, releases its resources, and then the IO arrives and crashes the system. As far as I understand, hotplug will not help me with this problem. To make the race concrete, a rough sketch of the ordering I am trying to get right is shown below.
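The sketch is simplified to a single atomic_t instead of my per-CPU counters; field and function names are illustrative, and memory barriers and most error handling are omitted. The point is only the ordering: the in-flight count is taken before the dying flag is tested, and teardown sets the flag before waiting for the count to drain.

    /* Illustrative two-phase drain; not my real driver code.
     * inflight starts at 1 (a teardown reference) and drained is an
     * init_completion()'d completion. */
    #include <linux/atomic.h>
    #include <linux/bio.h>
    #include <linux/completion.h>
    #include <linux/types.h>

    struct mydev {
        atomic_t          inflight;  /* starts at 1; in-flight bios + teardown ref */
        bool              dying;     /* set once when shutdown begins */
        struct completion drained;   /* completed when inflight reaches zero */
    };

    static void mydev_make_request(struct mydev *dev, struct bio *bio)
    {
        atomic_inc(&dev->inflight);          /* 1: account for this bio first  */
        if (dev->dying) {                    /* 2: only then look at the flag  */
            bio_io_error(bio);
            goto out;
        }
        /* ... serve the bio (synchronously in this sketch; with asynchronous
         * completion the decrement below moves into the bio end_io path) ... */
    out:
        if (atomic_dec_and_test(&dev->inflight))
            complete(&dev->drained);
    }

    static void mydev_shutdown(struct mydev *dev)
    {
        dev->dying = true;                   /* refuse new IO from now on      */
        if (!atomic_dec_and_test(&dev->inflight))   /* drop the teardown ref   */
            wait_for_completion(&dev->drained);
        /* only now is it safe to del_gendisk(), free the queue, and so on */
    }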
About a decade ago I used hotplugging on a software driver project to safely add/remove an external USB disk drive which interfaced with an embedded, Linux-driven set-top box.
For your project you will also need to write a hotplug handler. A hotplug handler is a program the kernel uses to notify user-mode software when some significant (usually hardware-related) event takes place. An example is when a USB device has just been plugged in or removed.
From Linux 2.6 kernel onwards, hotplugging has been integrated with the driver model core so that any bus or class can report hotplug events when devices are added or removed.
In the kernel tree, /usr/src/linux/Documentation/usb/hotplug.txt has basic information about USB Device Driver API support for hotplugging.
See also the link below, and search the web as well, for further examples and documentation.
http://linux-hotplug.sourceforge.net/
Another very helpful document which discusses hotplugging with block devices can be found here:
https://www.kernel.org/doc/pending/hotplug.txt
This document also gives a good example of handling hotplug events.
Below is a list of the main variables you should be aware of:
Hotplug event variables:
Every hotplug event should provide at least the following variables:
ACTION
The current hotplug action: "add" to add the device, "remove" to remove it.
The 2.6.22 kernel can also generate "change", "online", "offline", and
"move" actions.
DEVPATH
Path under /sys at which this device's sysfs directory can be found.
SUBSYSTEM
If this is "block", it's a block device. Any other subsystem is
either a char device or does not have an associated device node.
The following variables are also provided for some devices:
MAJOR and MINOR
If these are present, a device node can be created in /dev for this device.
Some devices (such as network cards) don't generate a /dev node.
DRIVER
If present, a suggested driver (module) for handling this device. No
relation to whether or not a driver is currently handling the device.
INTERFACE and IFINDEX
When SUBSYSTEM=net, these variables indicate the name of the interface
and a unique integer for the interface. (Note that "INTERFACE=eth0" could
be paired with "IFINDEX=2" because eth0 isn't guaranteed to come before lo
and the count doesn't start at 0.)
FIRMWARE
The system is requesting firmware for the device.
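As a rough illustration of how these variables reach user space, here is a minimal hotplug helper that just logs them; the log path is a placeholder, and the helper would have to be registered (for example by writing its path to /proc/sys/kernel/hotplug) or invoked from a udev rule.

    /* Minimal sketch of a hotplug helper: the kernel (or udev) runs it with
     * the event described in environment variables such as ACTION, DEVPATH
     * and SUBSYSTEM. Older hotplug also passes the subsystem as argv[1]. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const char *action    = getenv("ACTION");
        const char *devpath   = getenv("DEVPATH");
        const char *subsystem = getenv("SUBSYSTEM");
        FILE *log = fopen("/tmp/hotplug.log", "a");   /* placeholder path */

        if (!log)
            return 1;
        fprintf(log, "subsystem=%s action=%s devpath=%s\n",
                subsystem ? subsystem : (argc > 1 ? argv[1] : "?"),
                action ? action : "?",
                devpath ? devpath : "?");
        fclose(log);
        return 0;
    }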
If the driver creates a device, it is possible to delete it abruptly:
echo 1 > /sys/block/device-name/device/delete where device-name may be sde, for example,
or
echo 1 > /sys/class/scsi_device/h:c:t:l/device/delete, where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN.
In my case, it perfectly simulates scenarios where writes are cut off abruptly and data has to be recovered from the journal.
Normally, safely removing a device takes more steps, so deleting the device this way is a pretty drastic event for the data, which makes it useful for testing :)
Please consider these:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/removing_devices
http://www.sysadminshare.com/2012/09/add-remove-single-disk-device-in-linux.html
When writing a non-blocking program (handling multiple sockets) which at a certain point needs to open files using open(2), stat files using stat(2), or open directories using opendir(2), how can I ensure that these system calls do not block?
To me it seems that there's no other alternative than using threads or fork(2).
As Mel Nicholson replied, for everything file-descriptor based you can use select/poll/epoll. For everything else you can have a proxy thread per item (or a thread pool) with a small stack, which converts (by means of the kernel scheduler) any synchronous blocking wait into a select/poll/epoll-able asynchronous event, using an eventfd or a unix pipe (where portability is required).
The proxy thread blocks until the operation completes and then writes to the eventfd or to the pipe to wake up the select/poll/epoll loop, as in the sketch below.
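A minimal sketch of that idea, assuming Linux eventfd and a single pending open; the file path is a placeholder and error handling is trimmed (build with -pthread):

    /* Proxy thread performs the blocking open(2), then signals an eventfd
     * that the main poll() loop is already watching. */
    #include <fcntl.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    struct open_req {
        const char *path;   /* placeholder file to open */
        int         efd;    /* eventfd the main loop polls */
        int         fd;     /* result of open(2), filled in by the worker */
    };

    static void *open_worker(void *arg)
    {
        struct open_req *req = arg;
        uint64_t one = 1;

        req->fd = open(req->path, O_RDONLY);   /* may block; that is the point */
        write(req->efd, &one, sizeof one);     /* wake the event loop          */
        return NULL;
    }

    int main(void)
    {
        struct open_req req = { .path = "/some/slow/file", .efd = eventfd(0, 0) };
        struct pollfd pfd = { .fd = req.efd, .events = POLLIN };
        pthread_t tid;
        uint64_t val;

        pthread_create(&tid, NULL, open_worker, &req);

        /* In a real server this poll() would also include all the sockets. */
        poll(&pfd, 1, -1);
        read(req.efd, &val, sizeof val);       /* drain the eventfd */
        printf("open() returned fd %d\n", req.fd);

        pthread_join(tid, NULL);
        close(req.efd);
        return 0;
    }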
Indeed there is no other method.
Actually there is another kind of blocking that can't be dealt with other than by threads, and that is page faults. These may happen in program code, program data, memory allocation, or data mapped from files. They are almost impossible to avoid (you can lock some pages in memory, but that is a privileged operation and would probably backfire by making the kernel do a poorer job of memory management elsewhere). So:
You can't really weed out every last chance of blocking for a particular client, so don't bother with the likes of open and stat. The network will probably add larger delays than these functions anyway.
For optimal performance you should have enough threads so some can be scheduled if the others are blocked on page fault or similar difficult blocking point.
Also if you need to read and process or process and write data during handling a network request, it's faster to access the file using memory-mapping, but that's blocking and can't be made non-blocking. So modern network servers tend to stick with the blocking calls for most stuff and simply have enough threads to keep the CPU busy while other threads are waiting for I/O.
The fact that most modern servers are multi-core is another reason why you need multiple threads anyway.
You can use the poll( ) call to check any number of sockets for data using a single thread.
See here for Linux details, or man poll for the details on your system.
open( ) and stat( ) will block the thread they are called from on all POSIX-compliant systems, unless invoked via some asynchronous tactic (such as in a forked child).
I am attempting to have a bunch of independent programs intelligently allocate shared resources among themselves. However, I could have only one program running, or could have a whole bunch of them.
My thought was to mmap a virtual file in each program, but the concurrency is killing me. Mutexes are obviously ineffective because each program could have a lock on the file and be completely oblivious of the others. However, my attempts to write a semaphore have all failed, since the semaphore would be internal to the file, and I can't rely on only one thing writing to it at a time, etc.
I've seen quite a bit about named pipes, but they don't seem to be a practical solution for what I'm doing, since I don't know how many other programs there will be, if any, nor do I have any way of identifying which programs are participating in my resource-sharing operation.
You could use a UNIX-domain socket (AF_UNIX) - see man 7 unix.
When a process starts up, it tries to bind() to a well-known path. If the bind() succeeds, then it knows it is the first to start up and becomes the "resource allocator". If the bind() fails with EADDRINUSE, then another process is already running, and it can connect() to it instead.
You could also use a dedicated resource allocator process that always listens on the path, and arbitrates resource requests.
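A minimal sketch of the bind-or-connect idea, with an illustrative socket path and error handling reduced to the EADDRINUSE decision that matters here:

    /* First process to bind() the well-known path becomes the allocator;
     * later processes see EADDRINUSE and connect() to it instead. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    #define SOCK_PATH "/tmp/resource-arbiter.sock"   /* placeholder path */

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);

        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
            /* First one in: become the resource allocator. */
            listen(fd, 16);
            printf("acting as allocator\n");
            /* ... accept() peers and arbitrate their requests ... */
        } else if (errno == EADDRINUSE) {
            /* An allocator already exists: talk to it instead. */
            connect(fd, (struct sockaddr *)&addr, sizeof addr);
            printf("connected to existing allocator\n");
            /* ... send resource requests over fd ... */
        } else {
            perror("bind");
            return 1;
        }
        close(fd);
        return 0;
    }

One practical wrinkle: if a previous allocator died without unlinking the path, bind() still fails with EADDRINUSE even though nobody is listening; the subsequent connect() then fails with ECONNREFUSED, which can be used as the cue to unlink() the stale path and retry the bind().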
It's not entirely clear what you're trying to do, but personally my first thought would be to use D-Bus. It should be easy enough within that framework for your processes/programs to register/announce themselves and to enumerate/signal other registered processes, and/or to create a central resource arbiter and communicate with it. It is readily available on any system with GNOME or KDE installed, too.