I am working on a data-deduplication scheme between disks and am wondering how I can dig down into the XFS internals and create/modify extents for files.
Here is an example of what I want to do. Suppose we have the file:
bippity
boppity
boo
And we have a block size of 8 bytes (enough for "bippity" and the newline).
Now I change the file to
bip
boppity
boo
Only the first line changed. I would like to allocate a new extent (or block) for that line, write the new data to it, and then splice that extent into the extents already on disk, so that only one block's change has to be pushed out to disk.
Is this possible on XFS (or, even better, on a general file system)? I don't mind getting down into the nitty-gritty, but I cannot seem to find much information on this particular subject.
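Update: the closest userspace mechanism I've come across is the FICLONERANGE ioctl that reflink-capable filesystems (e.g. XFS created with reflink enabled, or Btrfs) expose; it makes one file share another's extents rather than copying them. A rough sketch, assuming kernel and filesystem support:

    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* FICLONERANGE, struct file_clone_range */

    /* Sketch: make dest_fd share src_fd's on-disk extents over one range.
     * Offsets and length must be filesystem-block aligned. */
    int share_range(int src_fd, int dest_fd,
                    unsigned long long offset, unsigned long long length)
    {
        struct file_clone_range range = {
            .src_fd      = src_fd,
            .src_offset  = offset,
            .src_length  = length,
            .dest_offset = offset,
        };
        return ioctl(dest_fd, FICLONERANGE, &range);   /* 0 on success */
    }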
I'm trying to access a file as a char array, either by memory-mapping it or by copying it into a buffer, but both of these need the size of the file. Easy enough, thought I: just use fseek(file, 0, SEEK_END).
However, according to C++ Reference, "Library implementations [of fseek] are allowed to not meaningfully support SEEK_END," meaning that I can't rely on that method to get the size of a file.
Next I tried fstat, which is less portable, but at least will produce a compile error rather than a runtime problem. However, The Open Group notes that fstat does not need to provide a meaningful value for st_size.
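For reference, here is roughly what I'm doing; a minimal sketch with error handling trimmed:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Approach 1: seek to the end and ask where we are. */
    long size_via_fseek(FILE *f)
    {
        if (fseek(f, 0, SEEK_END) != 0)
            return -1;
        return ftell(f);          /* meaningless if SEEK_END isn't supported */
    }

    /* Approach 2: fstat on the underlying descriptor. */
    long size_via_fstat(FILE *f)
    {
        struct stat st;
        if (fstat(fileno(f), &st) != 0)
            return -1;
        return (long)st.st_size;  /* st_size need not be meaningful */
    }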
So: has anyone actually come across a system where these methods do not work?
The notes about files not having valid sizes reported are there because, in Linux, there are many "files" for which "file size" is not a meaningful concept.
There are two main cases:
The file is not a regular file. In particular, pipes, sockets, and character device files are streams of data where data is consumed on read, and not put on disk, so a size does not make much sense.
The file system that the file resides on does not provide the file size. This is especially common in "virtual" filesystems, where the file contents are generated when read and, again, have no disk backing.
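In code, one way to defend against the first case is to check the file type before trusting st_size; a minimal sketch:

    #include <sys/stat.h>

    /* Only report a size for regular files; for pipes, sockets and
     * device files the st_size field is not meaningful. */
    int regular_file_size(int fd, off_t *size_out)
    {
        struct stat st;
        if (fstat(fd, &st) != 0)
            return -1;
        if (!S_ISREG(st.st_mode))
            return -1;
        *size_out = st.st_size;   /* can still be 0 on /proc and /sys files */
        return 0;
    }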
To expand, filesystems do not necessarily keep file contents on disk. Since the filesystem API is a convenient way of expressing hierarchical data, and there are many tools for operating on files, it sometimes makes sense to expose data as a file hierarchy. For example, /proc/ contains information about processes (such as open files and used memory) and /sys/ contains driver-specific information and options (anything from sensor sampling rates to LED colors). With FUSE (Filesystem in Userspace), you can program a filesystem to do pretty much anything, from SSHing into a remote computer to exposing Twitter as a filesystem.
For a lot of these filesystems, "file size" may not make much sense. For example, an LED driver might expose three files: red, green, and blue. They can be read to get the current color or written to change the color. Now, is it really worth implementing a file size for them, given that they are merely settings in RAM, have no disk backing, and can't be removed? Not really.
In summary, files are not necessarily "things on disk". For many of the more advanced usages of files, "file size" either does not make sense or is not worth providing.
I'm wondering which is better in terms of performance: writing to one big text file (around 10 GB or more) or using a subfolder hierarchy three levels deep, with 256 folders at each level, where the last level holds the text files. Example:
1
    1
    2
    3
        1
        2
        3
        4
    4
2
3
4
The files will be heavily accessed (opened, some text appended, then closed), so I don't know which is better: opening and closing file pointers thousands of times a second, or moving a pointer around inside one big file thousands of times.
I'm on a Core i7 with 6 GB of DDR3 RAM and a disk that writes at about 60 MB/s, under ext4.
You ask a fairly generic question, so the generic answer would be to go with the big file, access it and let the filesystem and its caches worry about optimizing access. Chances are they came up with a more advanced algorithm than you just did (no offence).
To make a decision, you need to know answers to many questions, including:
How are you going to determine which of the many files to access for the information you are after?
When you need to append, will it be to the logical last file, or to the end of whatever file the information should have been found in?
How are you going to know where to look in any given file (large or small) for where the information is?
Your 256³ files (16 million or so if you use all of them) will require a fair amount of directory storage.
You actually don't mention anything about reading the file - which is odd.
If you're really doing write-only access to the file or files, then a single file always opened with O_APPEND (or "a") will probably be best. If you are updating (as well as appending) information, then you get into locking issues (concurrent access; who wins).
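A minimal sketch of that append-only pattern (the file name data.log is made up):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open once; O_APPEND makes every write land at the current end
         * of the file, so there is no seek bookkeeping between appends. */
        int fd = open("data.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;
        const char *line = "some text stuff\n";
        write(fd, line, strlen(line));
        close(fd);
        return 0;
    }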
So, you have not included enough information in the question for anyone to give any definitive answer. If there is enough more information in the comments you've added, then you should have placed those comments into the question (edit the question; add the comment material).
I'm writing a program that needs to read and write lots of data in random order, and since I don't want to use hundreds of small files, I'm trying to develop a sort of virtual file system that writes to one large file, keeping track of where the "files" are within that "disk" file.
Thus, I've been trying to find detailed information about file system implementations, but there's one thing that never seems to be explained in a way I can understand: how does the file system track free/deleted sectors for new file creation? FAT, for example, has an index of all sectors at the beginning that seems to be the only place that holds this information, but searching that index for free space in a linear, O(n) fashion seems rather inefficient, especially if there are no deleted sectors and you have to insert something at the end of the list. Am I missing something, or is this how file systems really detect unused sectors for writing? Thanks!
The answer depends on the overall file system architecture. It can be a linear list of free pages, or the free space can be accounted for in the same way as other files (e.g., as linked lists).
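To make the first approach concrete, here is a toy free list for a container file, with made-up names and sizes. Freed sectors are threaded together through their own first bytes, so allocating and freeing are both O(1) instead of a linear scan:

    #include <stdint.h>

    #define SECTOR_SIZE 512
    #define NO_SECTOR   UINT32_MAX

    /* Toy in-memory stand-in for the container file's sectors. */
    static uint8_t  disk[1024 * SECTOR_SIZE];
    static uint32_t free_head  = NO_SECTOR;  /* head of the free list */
    static uint32_t next_fresh = 0;          /* first never-used sector */

    /* Each free sector stores the index of the next free sector in its
     * first four bytes: a linked list threaded through the disk itself. */
    static uint32_t *next_ptr(uint32_t sector)
    {
        return (uint32_t *)&disk[sector * SECTOR_SIZE];
    }

    uint32_t alloc_sector(void)
    {
        if (free_head != NO_SECTOR) {        /* O(1): pop a freed sector */
            uint32_t s = free_head;
            free_head = *next_ptr(s);
            return s;
        }
        return next_fresh++;                 /* otherwise grow the "disk" */
    }

    void free_sector(uint32_t sector)
    {
        *next_ptr(sector) = free_head;       /* O(1): push onto the list */
        free_head = sector;
    }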
Practically speaking, developing an efficient file system is quite a serious undertaking for what is a side task for you. So it makes sense to use an existing virtual file system, such as the one CodeBase offers or our Solid File System.
I found a helpful PDF explaining how free space is mapped in Linux file systems. This is more along the lines of what I was looking for.
http://www.kernel.org/doc/ols/2010/ols2010-pages-121-132.pdf
It's just like a linked list: each file may be separated into many partitions, and at the end of each partition is a reference to the beginning of the next. The same goes for the free space. Think of free space as one large file that contains every byte which is not inside another file!
I am trying to modify the ext3 file system. Basically, I want to ensure that the inode for a file is saved in the same (or an adjacent) block as the file whose metadata it stores. Hopefully this should help disk access performance.
I grabbed the kernel source, compiled it, read a bunch about inodes, and looked at the inode.c file in the fs subdirectory. However, I am just not sure how I can ensure that any new file being created, and the inode for that file, are saved in the same or adjacent blocks. Any help or pointers to further reading would be appreciated. Thanks!
Interesting idea.
I'm not deeply familiar with ext3, but I can give you some general pointers.
Currently ext3 stores inodes in predetermined places. Each block group has its own inode table, an array of inodes. So when you have an inode number (i.e., as the result of looking up a filename in a directory), you can find the corresponding inode on disk by using the inode number first to select the correct block group and then to index into that block group's inode table.
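The mapping is plain arithmetic; here is a sketch with simplified stand-ins for the real superblock fields:

    #include <stdint.h>

    struct fs_layout {
        uint32_t inodes_per_group;   /* from the superblock */
        uint32_t inode_size;         /* bytes per on-disk inode */
    };

    /* Find which block group holds an inode, and where it sits in
     * that group's inode table. Inode numbers start at 1. */
    void locate_inode(const struct fs_layout *fs, uint32_t ino,
                      uint32_t *group, uint32_t *byte_offset)
    {
        *group       = (ino - 1) / fs->inodes_per_group;
        uint32_t idx = (ino - 1) % fs->inodes_per_group;
        *byte_offset = idx * fs->inode_size;
    }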
If you want to put the inodes next to the corresponding file data, you'll need a new scheme for finding an inode on disk. If you're willing to dedicate a block for each inode, then one possible scheme would be to allocate a new block every time you need an inode and then use the block number as the inode number. This might have the benefit that for small files you could store the data in that same block.
To make something like this happen, creating a new file (i.e., allocating an inode) would have to work very differently than in the current ext3 file system. Instead of using a bitmap to find an unused, pre-allocated and pre-initialized inode, you would have to allocate an empty block and initialize it yourself. So, you'll probably want to look at how the file system allocates blocks when it's writing to a file, then mimic that for allocating an inode.
An alternative scheme would be to store the inode inside the directory. That way you save an I/O not because the inode is next to its data, but because when you look up the filename you also read the inode. This was done back in the 90s as an experiment in BSD's FFS file system, and was written up in an excellent USENIX paper. Those ideas never made it into FFS, or into any other mainstream file system that I'm aware of, so it might be interesting to see how they work in ext3.
Regardless of whether you pursue one of these schemes or come up with something of your own, you'll also have to modify mke2fs to initialize the file system on disk in a way that your new file system variant will understand.
Good luck! It sounds like a fun project.
Kudos for getting into file system design!
First, a bit of engineering advice before you get too deep into hacking: make a copy of the ext3 tree and rename the file system to something else. I've found that when introducing experimental changes into a file system, you really don't want it to be used for your main system. Your system should still boot even if you introduce a bug that randomly loses files (it will eventually happen). You'll also need to branch the ext3 userspace tools to work with your new system.
Second, go get a copy of Understanding the Linux Kernel, 3rd ed., by Bovet and Cesati. It presents an organized view of kernel subsystems, and I've found its explanations to be worthwhile. It's written for an older kernel (2.6.x for some x < 15; I forget exactly), but it's still accurate in many places. Read through its descriptions of file systems. I believe it covers ext3.
Third, about your actual project, you aren't proposing a simple modification to ext3. That file system has a pretty straightforward way of mapping an inode number to a disk block. You'll need to find a new way of doing this mapping. I would not anticipate any changes to the rest of ext3. Solving this challenge may be one of the key design points of your architecture. Note that keeping around a big array of inode -> disk block maps doesn't solve your problem: it's probably no better than existing ext3.
So I want to ask, and forgive me if this is an obvious or newbie question:
If I create a file, say a text file, and save it (I'm using Ubuntu), the file has some extra information associated with it, such as the place on my hard drive where it has been saved. How can I examine this information? Where does this information get stored for my specific file? And how can I examine the file as it is stored on my disk, in terms of, I assume, bytes?
Maybe I need to focus this question,
Thanks,
B
This is the responsibility of your file system. Very briefly, a file system is a data structure which is laid out across your entire disk -- that's what "formatting" a disk does -- and your files are saved into that data structure. There are lots of file systems, and their details vary quite widely. http://www.forensics.nl/filesystems has a whole bunch of papers on file system design and organization. I'd start with McKusick's A Fast File System for UNIX; it's old, but it contains lots of ideas that are still influential today.
You need a filesystem-specific forensics tool if you want to look at the data structures on your disks. Ubuntu's probably using something in the ext2 family, so try debugfs.
I think maybe you do need to focus it a bit :-)
For UNIX file systems, there are many different types.
The one I'm most familiar with (ext2) has a "file" on disk containing directory entries. These entries are simple names and pointers to the file itself (which is why you can have multiple directory entries pointing to the same file: hard links).
The file itself is an inode which contains the properties of the file (owner, size, permissions and so on).
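You can read those inode properties for any file with the stat(2) system call; a minimal sketch:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;
        if (argc < 2 || stat(argv[1], &st) != 0)
            return 1;
        /* A few of the properties stored in the inode: */
        printf("inode: %lu\n", (unsigned long)st.st_ino);
        printf("size:  %lld bytes\n", (long long)st.st_size);
        printf("owner: uid %u\n", (unsigned)st.st_uid);
        printf("mode:  %o\n", (unsigned)(st.st_mode & 07777));
        printf("links: %lu\n", (unsigned long)st.st_nlink);
        return 0;
    }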
The inode also contains direct and indirect pointers to the contents of the file. By direct, I mean a pointer to a data block.
An indirect pointer is a pointer to a block of pointers to contents. I believe you can go another two levels of indirection (double and triple indirect), which gives you truly massive file sizes.
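To see why, here is the back-of-the-envelope arithmetic for a classic ext2-style inode (12 direct pointers, 4-byte block pointers), assuming 1 KiB blocks:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long block = 1024;       /* assumed block size */
        unsigned long long ptrs  = block / 4;  /* pointers per block: 256 */

        unsigned long long blocks = 12                    /* direct */
                                  + ptrs                  /* single indirect */
                                  + ptrs * ptrs           /* double indirect */
                                  + ptrs * ptrs * ptrs;   /* triple indirect */

        /* Roughly 17 GB with 1 KiB blocks; far more with 4 KiB blocks. */
        printf("max file size ~ %llu bytes\n", blocks * block);
        return 0;
    }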
More details on Wikipedia.