Direct access to hard disk with no FS from C program on Linux

I want to access the whole hard disk directly from a C program. There's no FS on it and there never will be one.
I just want to open /dev/sda (for example) and do I/O at the block/sector level of the disk.
I'm planning to write some programs for learning C programming in the Linux environment (I know C language, Python, Perl and Java) but lack confidence with the Linux environment.
For learning purposes I'm thinking about playing with kyoto-cabinet: saving the value corresponding to a computed hash directly into a block/sector of the hard disk, and recording the pair "hash, block/sector reference" in a kyoto-cabinet hash database file.
I don't know whether this is feasible using standard C I/O functions, or whether I'd have to write a "device driver" or something like that.

As mentioned elsewhere, on *NIX systems block devices like /dev/sda can be accessed as plain files. Note that if a file system is mounted from the device, opening it for writing may fail (and is a very bad idea in any case).
If you want to play with block devices, I would advise first using a loop device, which presents a plain file as a block device. For example:
dd if=/dev/zero of=./loop_file_10MB bs=1024 count=10K
losetup /dev/loop0 $PWD/loop_file_10MB
After that, /dev/loop0 will behave as if it were a block device, but everything written to it will be stored in the file.
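Here is a minimal sketch of what that block-level I/O can look like in C, assuming the loop device set up above and 512-byte sectors (run as root; the sector number 42 and the fill pattern are arbitrary choices for illustration):

/* Minimal sketch: raw sector I/O on the loop device set up above.
 * Assumes 512-byte sectors and that /dev/loop0 exists; run as root. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_SIZE 512

int main(void)
{
    int fd = open("/dev/loop0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char buf[SECTOR_SIZE];
    memset(buf, 0xAB, sizeof buf);

    /* Write sector 42, then read it back and compare. */
    off_t offset = (off_t)42 * SECTOR_SIZE;
    if (pwrite(fd, buf, sizeof buf, offset) != SECTOR_SIZE) { perror("pwrite"); return 1; }

    unsigned char check[SECTOR_SIZE];
    if (pread(fd, check, sizeof check, offset) != SECTOR_SIZE) { perror("pread"); return 1; }

    printf("sector 42 %s\n", memcmp(buf, check, SECTOR_SIZE) == 0 ? "matches" : "differs");
    close(fd);
    return 0;
}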

Since device files for drives (e.g. /dev/sda) are block devices, you can open, seek and use the file almost like a normal file.

Yes, as others have noted, you can simply open the block device.
However, it's a really good idea to do I/O (writes, anyway) on block boundaries and in whole blocks. You can use something like pread() and pwrite() to do this I/O, or mmap() some or all of the device.
There are a bunch of ioctls that can be used; see "man sd" for some more info. They don't all seem to be documented in the same place.
BLKROSET and a bunch of other ioctls are defined in linux/fs.h; you have to look around to find out how to use them. You can do useful things like finding out how big the device is and what its block size is.
The source code of the util-linux-ng package is your friend; it contains examples.
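As a hedged example of the kind of thing those ioctls let you do, here is a sketch that queries a device's total size and logical sector size; /dev/sda is just an example path:

/* Sketch: query a block device's size and logical sector size via ioctl.
 * BLKGETSIZE64 and BLKSSZGET come from <linux/fs.h>; run as root. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sda", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t size_bytes = 0;
    int sector_size = 0;
    if (ioctl(fd, BLKGETSIZE64, &size_bytes) < 0) { perror("BLKGETSIZE64"); return 1; }
    if (ioctl(fd, BLKSSZGET, &sector_size) < 0) { perror("BLKSSZGET"); return 1; }

    printf("device: %llu bytes, %d-byte logical sectors\n",
           (unsigned long long)size_bytes, sector_size);
    close(fd);
    return 0;
}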

Related

What is the most efficient way to copy many files programmatically?

Once upon a time long ago, we had a bash script that worked out a list of files that needed to be copied based on some criteria (basically like a filtered version of cp -rf).
This was too slow and was replaced by a C++ program.
What the C++ program does is essentially:
foreach file
    read entire file into buffer
    write entire file
The program uses the POSIX calls open(), read() and write() to avoid the buffering and other overheads of iostream and fopen(), fread() & fwrite().
Is it possible to improve on this?
Notes:
I am assuming these are not sparse files
I am assuming GNU/Linux
I am not assuming a particular filesystem is available
I am not assuming prior knowledge of whether the source and destination are on the same disk.
I am not assuming prior knowledge of the kind of disk: SSD, HDD, maybe even NFS or sshfs.
We can assume the source files are on the same disk as each other.
We can assume the destination files will also be on the same disk as each other.
We cannot assume whether the source and destination are on the same disk or not.
I think the answer is yes but it is quite nuanced.
Copying speed is of course limited by disk IO not CPU.
But how can we be sure to optimise our use of disk IO?
Maybe the disk has the equivalent of multiple read or write heads available? (perhaps an SSD?)
In which case performing multiple copies in parallel will help.
Can we determine and exploit this somehow?
This is surely well-trodden territory, so rather than re-invent the wheel straight away (though that is always fun) it would be nice to hear what others have tried or would recommend.
Otherwise I will try various things and answer my own question sometime in the distant future.
This is what my evolving answer looks like so far...
If the source and destination are different physical disks then
we can at least read and write at the same time with something like:
writer thread
    read from write queue
    write file
reader thread
    foreach file
        read file
        queue write on writer thread
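A rough, self-contained sketch of that split in C with POSIX threads follows; the bounded queue depth, the argv src/dest pair interface and the whole-file buffering are all simplifications for illustration, not a tuned implementation (compile with -pthread):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define QUEUE_DEPTH 8

struct job { char *data; size_t len; char *dest; };

static struct job *queue[QUEUE_DEPTH];
static int head, tail, count, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

/* Writer thread: pop a buffered file and write it out in one go. */
static void *writer_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && done) { pthread_mutex_unlock(&lock); return NULL; }
        struct job *j = queue[head];
        head = (head + 1) % QUEUE_DEPTH;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        int out = open(j->dest, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (out >= 0) { write(out, j->data, j->len); close(out); }
        free(j->data); free(j->dest); free(j);
    }
}

int main(int argc, char **argv)
{
    pthread_t w;
    pthread_create(&w, NULL, writer_thread, NULL);

    /* Reader side: arguments are src/dest pairs. */
    for (int i = 1; i + 1 < argc; i += 2) {
        int in = open(argv[i], O_RDONLY);
        if (in < 0) { perror(argv[i]); continue; }
        struct stat st;
        fstat(in, &st);
        char *data = malloc((size_t)st.st_size);
        /* A single read is a simplification; robust code would loop. */
        ssize_t n = read(in, data, (size_t)st.st_size);
        close(in);
        if (n < 0) { free(data); continue; }

        struct job *j = malloc(sizeof *j);
        j->data = data; j->len = (size_t)n; j->dest = strdup(argv[i + 1]);

        pthread_mutex_lock(&lock);          /* hand the file to the writer */
        while (count == QUEUE_DEPTH)
            pthread_cond_wait(&not_full, &lock);
        queue[tail] = j;
        tail = (tail + 1) % QUEUE_DEPTH;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
    pthread_join(w, NULL);
    return 0;
}

In practice you would probably bound the queue by bytes rather than by entry count, so a handful of huge files cannot exhaust memory.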
If the source and destination are on the same physical disk and we happen to be on a filesystem with copy-on-write semantics (like XFS or Btrfs), we can potentially avoid actually copying the file at all.
This is apparently called "reflinking".
The cp command supports this using --reflink=auto.
See also:
https://www.reddit.com/r/btrfs/comments/721rxp/eli5_how_does_copyonwrite_and_deduplication_work/
https://unix.stackexchange.com/questions/80351/why-is-cp-reflink-auto-not-the-default-behaviour
From this question
and https://github.com/coreutils/coreutils/blob/master/src/copy.c
it looks as if this is done using an ioctl as in:
ioctl (dest_fd, FICLONE, src_fd);
So a quick win is probably:
try FICLONE on the first file.
If it succeeds then:
    foreach file
        srcFD = open(src);
        destFD = open(dest);
        ioctl(destFD, FICLONE, srcFD);
else
    do it the other way - perhaps in parallel
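A sketch of that fast path in C might look like the following; copy_the_slow_way() is a hypothetical placeholder for the conventional read/write fallback:

/* Sketch of the FICLONE fast path: try a reflink clone and fall back
 * to an ordinary copy if the filesystem or kernel can't do it. */
#include <errno.h>
#include <fcntl.h>
#include <linux/fs.h>       /* FICLONE (kernel >= 4.5) */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int clone_or_copy(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    if (in < 0) return -1;
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }

    int rc = ioctl(out, FICLONE, in);       /* destination fd first */
    if (rc < 0 && (errno == EOPNOTSUPP || errno == EXDEV || errno == EINVAL)) {
        /* Not a reflink-capable filesystem, or crossing filesystems:
         * do a conventional copy here instead. */
        fprintf(stderr, "reflink unavailable for %s, falling back\n", src);
        /* rc = copy_the_slow_way(in, out); */
    }

    close(in);
    close(out);
    return rc;
}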
In terms of low-level system APIs we have:
copy_file_range
ioctl FICLONE
sendfile
I am not clear on when to choose one over the other, except that copy_file_range() is not safe to use with some filesystems, notably procfs.
This answer gives some advice and suggests that sendfile() is intended for sockets, but in fact this is only true for kernels before 2.6.33.
https://www.reddit.com/r/kernel/comments/4b5czd/what_is_the_difference_between_splice_sendfile/
copy_file_range() is useful for copying one file to another (within the same filesystem) without actually copying anything until either file is modified (copy-on-write or COW).
splice() only works if one of the file descriptors refers to a pipe. So you can use it for e.g. socket-to-pipe or pipe-to-file without copying the data into userspace. But you can't do file-to-file copies with it.
sendfile() only works if the source file descriptor refers to something that can be mmap()ed (i.e. mostly normal files), and before 2.6.33 the destination must be a socket.
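To make the copy_file_range() option concrete, here is a sketch of a whole-file copy loop (Linux 4.5+, glibc 2.27+); the /tmp paths are examples, and real code would also preserve ownership, permissions and timestamps:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int in = open("/tmp/source.bin", O_RDONLY);
    if (in < 0) { perror("open src"); return 1; }

    struct stat st;
    if (fstat(in, &st) < 0) { perror("fstat"); return 1; }

    int out = open("/tmp/dest.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open dst"); return 1; }

    off_t remaining = st.st_size;
    while (remaining > 0) {
        /* The kernel may copy less than requested; loop until done. */
        ssize_t n = copy_file_range(in, NULL, out, NULL, (size_t)remaining, 0);
        if (n < 0) { perror("copy_file_range"); return 1; }
        if (n == 0)          /* unexpected EOF */
            break;
        remaining -= n;
    }

    close(in);
    close(out);
    return 0;
}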
There is also a suggestion in a comment that reading multiple files then writing multiple files will result in better performance.
This could use some explanation.
My guess is that it tries to exploit the heuristic that the source files and destination files will be close together on the disk.
I think the parallel reader and writer thread version could perhaps do the same.
The problem with such a design is that it cannot exploit any performance gain from the low-level system copy APIs.
The general answer is: Measure before trying another strategy.
For HDD this is probably your answer: https://unix.stackexchange.com/questions/124527/speed-up-copying-1000000-small-files
Ultimately I did not determine the "most efficient" way but I did end up with a solution that was sufficiently fast for my needs.
generate a list of files to copy and store it
copy files in parallel using OpenMP:
#pragma omp parallel for
for (auto iter = filesToCopy.begin(); iter < filesToCopy.end(); ++iter)
{
    copyFile(*iter);
}
copy each file using copy_file_range(), falling back to splice() with a pipe() when compiling for old platforms that don't support copy_file_range() (a sketch of the splice() fallback follows below).
Reflinking, as supported by copy_file_range(), is a massive win: it avoids copying the data at all when the source and destination are on the same filesystem.
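For reference, here is a sketch of the splice()-through-a-pipe fallback mentioned above; splice_copy() is a hypothetical helper name, and the file descriptors are assumed to be already-open regular files:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Copy `len` bytes from in_fd to out_fd through a pipe, without a
 * userspace buffer. Returns 0 on success, -1 on failure. */
int splice_copy(int in_fd, int out_fd, off_t len)
{
    int pipefd[2];
    if (pipe(pipefd) < 0)
        return -1;

    while (len > 0) {
        /* Pull a chunk from the source file into the pipe... */
        ssize_t n = splice(in_fd, NULL, pipefd[1], NULL,
                           (size_t)len, SPLICE_F_MOVE);
        if (n <= 0)
            break;
        /* ...then push the same bytes from the pipe into the destination. */
        ssize_t left = n;
        while (left > 0) {
            ssize_t w = splice(pipefd[0], NULL, out_fd, NULL,
                               (size_t)left, SPLICE_F_MOVE);
            if (w <= 0) { close(pipefd[0]); close(pipefd[1]); return -1; }
            left -= w;
        }
        len -= n;
    }

    close(pipefd[0]);
    close(pipefd[1]);
    return len == 0 ? 0 : -1;
}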

Read from disk after write instead of cache

Here's the task I'm trying to perform on a Linux host, with a C program:
Write random data to the disk, call fsync() to flush the data to disk, then read back what was written from the disk to ensure the disk controller wrote the data correctly. The problem I am running into is that reads appear to be answered by server-side caching rather than from the device itself. Here's what I've already tried:
1. O_DIRECT (a gigantic pain in the butt, abandoned)
2. posix_fadvise(fd,0,0,POSIX_FADV_DONTNEED)
3. posix_fadvise(fd,0,0,POSIX_FADV_NOREUSE)
4. O_SYNC
5. O_ASYNC
In every case, iostat shows 0 rrqm/s and thousands of write requests. I could be a woefully uninformed Linux user, but it is my belief that if no I/Os are shown in rrqm/s then reads are being answered by the OS cache instead of the device itself.
"Why not use iozone or iometer, or any of the billions of other tools that already stress disks?" Well, to be honest, if HP-UX's HAZARD worked on anything except HP-UX, I would, but nothing else comes close to what hazard can do, so I'm making my own.
You need to do the equivalent of the following shell commands:
sync # Instruct all data to get flushed to disk
echo 3 > /proc/sys/vm/drop_caches # Instruct VM system to clear caches
and then try reading the file again.
One way to do it from C would be something approximating:
/* Needs <fcntl.h> and <unistd.h>; writing to drop_caches requires root. */
sync();
int fd = open("/proc/sys/vm/drop_caches", O_WRONLY | O_TRUNC);
if (fd >= 0) {
    write(fd, "3\n", 2);
    close(fd);
}
You should not go through the file system to test a disk. You should read and write the raw partition (e.g. /dev/sdc5).
On most current Linux systems and hardware, disks have a SMART interface. You should use it; see smartmontools and study its source code. (I guess there are some ioctl(2) calls related to that.)

Writing and reading to linux /proc/... filesystem without lseek()

In this source code http://man7.org/tlpi/code/online/dist/sysinfo/procfs_pidmax.c.html the file /proc/sys/kernel/pid_max is first simply read (using the read syscall) and then simply written (using the write syscall).
Why is it not necessary to lseek to the beginning before writing? I thought the file-offset pointer is the same for reads and writes (that's what the author of the associated book says).
This is because /proc is not a real file system, so pid_max writes are handled in a way that doesn't need any seek. I don't even know whether seeks are supported here.
Just to give you a feeling of how different /proc files are, here is a reference to a pretty old but illustrative kernel bug specifically related to pid_max: https://bugzilla.kernel.org/show_bug.cgi?id=13090
This document should explain even more details: "The /proc Filesystem" in the kernel documentation.
And finally, the developerWorks article "Access the Linux kernel using the /proc filesystem" has a step-by-step illustration of kernel module code that exposes a /proc FS API. This looks like exactly what you need.
I've looked at the kernel source: files under /proc/sys/ are under sysctl table control, and the read/write callbacks for each entry support a file offset. The pid_max entry has a single int value to operate on, so the offset in those callbacks is effectively unused.
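Here is a minimal sketch in the spirit of procfs_pidmax.c: read the value, then write a new one on the same fd without seeking back (writing requires root; 65536 is just an example value):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/proc/sys/kernel/pid_max", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    char buf[32];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n < 0) { perror("read"); return 1; }
    buf[n] = '\0';
    printf("current pid_max: %s", buf);

    /* No lseek(): the sysctl handler for this entry does not depend on
     * the file offset the way a regular file would. */
    const char *newval = "65536\n";
    if (write(fd, newval, strlen(newval)) < 0)
        perror("write");

    close(fd);
    return 0;
}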

Working with block special files/devices to implement a filesystem

I've implemented a basic filesystem using FUSE, with all foreseeable POSIX functionality implemented [naturally I haven't even profiled yet ;)]. Currently I'm able to run the filesystem on a regular file (st_mode & S_IFREG), but the next step in development is to host it on an actual block device. Running my code as is immediately fails when reading st_size after calling fstat() on the device. Of course I don't expect the problems to stop there, so:
What changes are required to operate on block devices as opposed to regular files?
What are some special considerations I need to make with regard to performance, limitations, special features and the like?
Are there any tutorials or references for dealing with block special files? Googling has turned up very little of use; I only have background knowledge (ironically from MSDN in my dark past) and some scanty information in the manpages.
Update0
I've pointed out what I mean by "regular file".
I don't want to concentrate on getting the device size, I want general guidelines for differences between regular files and device files with respect to performance and usage.
Currently I'm able to run the filesystem on a regular file, but the next step in development is to host it on an actual block device
I don't completely understand what you mean - I assume you are saying that "you currently save your filesystem data to a plain file on a normally mounted filesystem - but now wish to use a raw block device for your data storage".
If so - having done this a few times - I'd advise the following:
Never use an "actual" block device for your filesystem. Always use a partition. There are several rarely used partition types that you can use to mark a partition as possibly containing your filesystem, so that your filesystem can check it and mount it if it does. Thus, you will never be running on something like /dev/sdb; rather, you will store your data on something like /dev/sdb1 and assign it some partition type. This has obvious advantages, like allowing your filesystem to be co-resident on a single physical disk with another, etc.
If you are implementing any caching in your filesystem (like Linux does with the page cache), do all I/O to the block device with O_DIRECT. This requires you to pass page-aligned memory for all I/O, and requires that the requests be sector/block aligned - but it removes a data copy which would otherwise be required when data is moved from the block device to the page cache, and then from the page cache to your user-space [filesystem] reader.
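A sketch of what an O_DIRECT read with the required alignment looks like; /dev/sdb1 and the 4096-byte alignment are assumptions for illustration, and real code should query the device's logical block size (e.g. via the BLKSSZGET ioctl):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sdb1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    size_t align = 4096;              /* assumed logical block size */
    void *buf = NULL;
    if (posix_memalign(&buf, align, align) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }

    /* With O_DIRECT, offset and length must be multiples of the block size. */
    ssize_t n = pread(fd, buf, align, 0);
    if (n < 0) perror("pread");
    else printf("read %zd bytes directly from the device\n", n);

    free(buf);
    close(fd);
    return 0;
}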
What do you mean that the fstat "fails"? Is this an fstat trying to determine the length of the block device? Do you receive an error? What is it?
Block devices behave very much like files - tools like dd can operate on them without any special handling. fstat, though, returns information about the special-file node, not the blockdev it refers to; you probably want to use the BLKGETSIZE64 ioctl to read the size.
There's no particular reason to use a partition over a raw device, though - a blockdev is a blockdev. O_DIRECT is good as well, assuming your workload won't generate repeated accesses. Don't confuse it with a real protocol for ensuring the permanence and atomicity of your filesystem, though (fsync, barriers, etc.).
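A small sketch of that size check, falling back from st_size to the ioctl when the backing store turns out to be a block device (backing_size() is a hypothetical helper name):

#include <fcntl.h>
#include <linux/fs.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Size of a backing store that may be a regular file or a block device. */
int backing_size(int fd, uint64_t *size)
{
    struct stat st;
    if (fstat(fd, &st) < 0)
        return -1;

    if (S_ISBLK(st.st_mode))
        return ioctl(fd, BLKGETSIZE64, size);   /* block device: ask the kernel */

    *size = (uint64_t)st.st_size;               /* regular file: st_size is fine */
    return 0;
}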

Reading a sector on the boot disk

This is a continuation of my question about reading the superblock.
Let's say I want to target the HFS+ file system in Mac OS X. How could I read sector 2 of the boot disk? As far as I know Unix only provides system calls to read from files, which are never stored at that location.
Does this require either 1) the program to run in kernel mode, or 2) the program to be written in assembly? I would prefer to avoid either of these restrictions, particularly the latter.
I've done this myself on the Mac, see my disk editor tool: http://apps.tempel.org/iBored
You'd open the drive using /dev/diskN or /dev/rdiskN (N is a disk index number starting from 0). Then you can use lseek() (make sure to use the 64-bit version!) and read/write calls on the opened file.
Also, use the shell command "ls /dev/disk*" to see which drives exist currently. And note that the drives also exist with a "sM" extension, where M is the partition number. That way, you can also read partitions directly.
Or, you could just use the shell tool "xxd" or "dd" to read data and then use their output. Might be easier.
You'll not be able to read your root disk and other internal disks unless you run as root, though. You may be able to access other drives as long as they were mounted by the user, or have their permissions disabled. But you may also need to unmount the drive's volumes first. Look for the unmount command in the shell command "diskutil".
Hope this helps.
Update 2017: On OS X 10.11 and later, SIP (System Integrity Protection) may also prevent you from directly accessing the disk sectors.
In Linux, you can read from the special device file /dev/sda, assuming the hard drive you want to read is the first one. You need to be root to read this file. To read sector 2, you just seek to offset 2*SECTOR_SIZE and read in SECTOR_SIZE bytes.
I don't know if this device file is available on OS X. Check for interestingly named files under /dev such as /dev/sda or /dev/hda.
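A sketch of that read in C, using the Linux device path; on Mac OS X you would substitute /dev/rdisk0 or similar, and 512-byte sectors are assumed (run as root):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define SECTOR_SIZE 512

int main(void)
{
    int fd = open("/dev/sda", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char sector[SECTOR_SIZE];
    /* pread takes an off_t offset; compile with -D_FILE_OFFSET_BITS=64 on
     * 32-bit systems so large disks are addressable. */
    if (pread(fd, sector, sizeof sector, (off_t)2 * SECTOR_SIZE) != SECTOR_SIZE) {
        perror("pread");
        return 1;
    }

    printf("first byte of sector 2: 0x%02x\n", sector[0]);
    close(fd);
    return 0;
}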
I was also going to suggest hitting the /dev/ device file for the volume, but you might want to contact Amit Singh who has written an hfsdebug utility and has probably done just what you want to do.
How does this work in terms of permissions? Wouldn't reading from /dev/... be insecure since if you read far enough you would be able to read files for which you do not have read access?
