I understand that if a file has 1 byte, it will still take up an entire block on disk (e.g. 4 KB). Is the same true for a zero-length file? I am specifically wondering about NTFS, but insight on other file systems is welcome!
No. In the case of NTFS, a file with 1 byte doesn't use any block. In general, if a file is smaller than about 300 bytes (approximately, and assuming the file record in the MFT is 512 bytes; the exact value depends on the file name length, the size of the MFT file record, etc.), the data is stored directly in the MFT (master file table). Only when the data doesn't fit in one file record in the MFT is it externalized to blocks (usually 4 KB).
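If you want to check this on your own system, here is a minimal sketch for recent Windows versions (as far as I know, fsutil file layout is available on Windows 8.1 and later; C:\temp\small.txt is just an example path). It dumps a small file's MFT record so you can look at its $DATA attribute:
> echo tiny > C:\temp\small.txt
> fsutil file layout C:\temp\small.txt
For a file this small, the $DATA attribute should be reported as resident, meaning the data lives inside the MFT record and no clusters are allocated for it.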
I got it now. File slack breaks down into RAM slack and drive slack. Since a file's data rarely ends exactly at a sector boundary when it is saved in sectors or clusters, the OS pads the rest of that last sector with whatever happens to be in RAM (RAM slack), while the remaining sectors of the cluster keep whatever was previously on disk, such as data from deleted files (drive slack).
I'm trying to understand RAM slack and file slack, which came up in the context of computer forensics.
I more or less understand that RAM slack is the leftover space between the end of the logical file and the end of that one sector.
And if the file size is less than one sector, what will the file slack be?
Suppose there are 512 bytes per sector.
The file (I will call this the original file later) is 400 bytes, so I get 112 bytes of slack (is that RAM slack or file slack?).
If I delete the original file, it seems that the OS does not really erase it but instead marks the sector(s) the original file occupied as available for reallocation. The OS may then write a new file to this sector, which would probably (or must?) be smaller than the original file (say, 200 bytes); it will be allocated to the original sector, with the additional slack space of 112 bytes.
Therefore:
512 bytes = 200 bytes of new file + 200 bytes of new slack + 112 bytes of original RAM slack
What is a sparse file and why do we need it?
The only thing I have been able to find is that it is a very large file and that it is efficient (in gigabytes). How is it efficient?
Say you have a file containing many empty bytes (\x00). Runs of these empty bytes are called holes. Storing the empty bytes explicitly is just not efficient: we know there are many of them in the file, so why store them on the storage device? We could instead store metadata describing those zeros. When a process reads the file, the zero-byte blocks are generated dynamically, as opposed to being read from physical storage (see the schematic on the Wikipedia page for sparse files).
This is why a sparse file is efficient: it does not store the zeros on disk; instead it holds just enough metadata describing the zeros that will be generated.
Note: the logical file size of a sparse file is greater than its physical file size, because the zeros are not physically stored on the storage device.
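You can see the difference between the two sizes yourself with a minimal sketch using GNU coreutils (sparse.img is just an example name, and the filesystem must support holes):
$ truncate -s 1G sparse.img          # create a 1 GiB file consisting of a single hole
$ du -h --apparent-size sparse.img   # logical size: 1.0G
$ du -h sparse.img                   # physical size: 0, since no blocks are allocated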
Edit:
When you run:
$ dd if=/dev/zero of=output bs=1G count=4
The command copies four 1 GiB blocks of null bytes to output (4 GiB in total). To see that:
$ stat output
File: output
Size: 4294967296 Blocks: 8388616 IO Block: 4096 regular file
--omitted--
You can see that this file has 8388616 blocks allocated to it. These blocks store nothing but empty bytes copied from /dev/zero, and they do occupy physical disk space: they are would-be holes stored on disk as real zeros. dd did what you asked for: it copied blocks of data from one file to another.
Now, run this command to detect the holes and make the file sparse in-place:
$ fallocate -d output
$ stat output
File: output
Size: 4294967296 Blocks: 0 IO Block: 4096 regular file
--omitted--
Do you notice something? The number of blocks is now 0, because the blocks that stored only empty bytes were deallocated. Remember, output's blocks stored nothing but empty zeros; fallocate -d detected the blocks containing only zeros and deallocated them, and since every block of this file contained zeros, all of them were deallocated.
Also notice that the size remained the same. This is the logical (virtual) size of the file, not its size on disk. It's crucial to understand that output doesn't occupy physical storage space now: it has 0 blocks allocated to it and thus doesn't really use disk space. The size was preserved by fallocate -d, so when you later read from the file, the empty bytes are generated for you by the filesystem at runtime. The physical size of output, however, is zero; it uses no data blocks.
Remember: when you read the output file, the empty bytes are generated dynamically by the filesystem at runtime; they are not physically stored on disk. The size reported by stat is the logical size, while the physical size is zero. The filesystem thus has to generate 4 GiB of empty bytes whenever a process reads the whole file.
To generate a sparse file using dd:
$ dd if=/dev/zero of=output2 bs=1G seek=4 count=0
$ stat output2
File: output2
Size: 4294967296 Blocks: 0 IO Block: 4096 regular file
GNU dd internally uses lseek and ftruncate, so check truncate(2) and lseek(2).
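If you want to check which parts of a file are actually backed by disk blocks, filefrag (from e2fsprogs; it uses the FIEMAP ioctl, which ext4 and XFS support) lists the allocated extents. For a completely sparse file such as output2 it should report 0 extents:
$ filefrag -v output2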
A sparse file is a file that is mostly empty, i.e. it contains large blocks of bytes whose value is 0 (zero).
On the disk, the content of a file is stored in blocks of fixed size (usually 4 KiB or more). When all the bytes contained in such a block are 0, a file system that implements sparse files does not store the block on disk, instead it keeps the information somewhere in the file meta-data.
Advantages of using sparse files:
empty blocks of data do not occupy disk space; they are not stored as regular data blocks; instead their identifiers (which take only a few bytes) are kept in the file meta-data, so 4 KiB of disk space (or more) is saved for each empty block;
reading an empty block of data from a sparse file takes (almost) no time, because no data is read from disk; since the file system knows all the bytes in the block are 0, it just sets all the bytes of the input buffer to 0 and the data is ready, with no need to access the slow storage device;
writing an empty block of data into a sparse file takes (almost) no time; on writing, the file system detects that the block is empty (all its bytes are 0) and puts the block ID into the list of empty blocks in the file meta-data, so no data is written to the disk (the quick demonstration below shows the effect).
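Here is a quick demonstration of the read case (a minimal sketch with GNU coreutils; holes.img is just an example name): reading a file that is one big hole never touches the storage device, because the zeros are synthesized by the file system.
$ truncate -s 2G holes.img           # 2 GiB logical size, no data blocks allocated
$ stat -c '%s bytes, %b blocks' holes.img
$ dd if=holes.img of=/dev/null bs=1M # zeros are generated at runtime, not read from disk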
More information about sparse files can be found on the Wikipedia page.
ll /srv/node/dcodxx/test.sh
-rw-r--r--. 1 root root 7 Nov 5 11:18 /srv/node/dcodxx/test.sh
The size of the file is shown in bytes. This file is stored in an xfs filesystem with block size 4096 bytes.
xfs_info /srv/node/sdaxx/
meta-data=/dev/sda isize=256 agcount=32, agsize=7630958 blks
= sectsz=4096 attr=2, projid32bit=0
data = bsize=4096 blocks=244190646, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=119233, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Does this mean that a block can house more than one file? If not, what happens to the remaining bytes (4096 - 7)?
Also, where are the 256 bytes reserved for the inode stored? If they were stored in the same block as the file, shouldn't the file's size on disk be larger (256 + 7)?
File data is stored in units of the filesystem block size, and no block sharing is currently possible across multiple files on XFS. So the used disk space is always the number of bytes in the file rounded up to the next block size: a 1-byte file will consume 4 KiB of disk space on a filesystem with a 4 KiB block size.
The inode itself contains file metadata such as size, timestamps, extent data, etc - and on xfs it can also contain extended attribute information.
The on-disk inode is separate from the file data blocks and will always consume 256 bytes on a filesystem with 256-byte inodes, regardless of the amount of metadata used. If more than 256 bytes are required to store additional extent information or extended attribute data, additional filesystem-block-sized metadata blocks will be allocated.
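You can observe this rounding with stat (a minimal check; note that %b counts units of %B bytes, usually 512, so one 4 KiB filesystem block shows up as 8 such units):
$ stat -c 'size: %s bytes, allocated: %b blocks of %B bytes' /srv/node/dcodxx/test.sh
For the 7-byte file above, this should report a size of 7 bytes but 8 x 512 = 4096 bytes allocated.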
Does this mean that a block can house more than one file? If not, what happens to the remaining bytes (4096 - 7)?
A block cannot contain more than one file. If a file is bigger than one block, multiple blocks are used.
Modern filesystems like XFS have a functionality called "inline", where files that are small enough (no more than 60 bytes) can be stored in the inode itself, in the space otherwise used to store pointers to the data blocks.
Where are the 256 bytes reserved for the inode stored? If they were stored in the same block as the file, shouldn't the file's size on disk be larger (256 + 7)?
Inode information is stored in the inode table.
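On XFS you can also see where a small file's data lives (a minimal check using xfs_bmap from xfsprogs; block ranges are reported in 512-byte units):
$ xfs_bmap -v /srv/node/dcodxx/test.sh
For the 7-byte file this should show a single extent of 8 basic blocks, i.e. one 4096-byte filesystem block holding just that file.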
I was wondering about the actual on-disk size of each MFT record. Since the number of clusters per MFT record is set in the boot sector, I guess each record has the same size.
However, each record header stores an additional value: its allocated size (at offset 0x1C). As far as I could observe, this value was always equal to the value stored in the boot sector.
Is it possible for these two to differ (and if so, when)?
If not, the allocated-size value in each record is kind of a waste, right?
It's not actually that much of a waste. You should try looking at what happens when the attributes stored in the file record exceed 1 KB (by adding additional file names, streams, etc.). It is not clear (to me, at least) for different versions of NTFS whether the additional attributes are stored in the data section of the volume or in another file record.
In previous versions of NTFS, the size of an MFT file record was equal to the size of a cluster (generally 4 KB), which was a waste of space, since sometimes all the attributes would take less than 1 KB. Since NT 5.0 (I may be wrong), after some research, Microsoft decided that all MFT file records should be 1 KB. So one reason for storing that number may be backwards compatibility: imagine you found an old hard drive that still used 4 KB file records and you wanted to add or copy some files to it.
Another use for storing that number is that you don't need to read the boot sector every time you fetch a file record to see what its size should be. Imagine you were the algorithm that has to handle the transition from 4 KB records to 1 KB records for backwards compatibility: if you didn't know what to expect, you would have to read the boot sector to find out what record size to expect.
What if you didn't have access to the boot sector, or you were trying to recover files from a drive whose boot sector was wiped or has bad clusters? And what would happen if the volume spanned multiple extents and you were reading the MFT from one extent while the boot sector was in another extent you don't have access to?
Usually, filesystems are designed by more than a few people over a long time. If those values were redundant, I should think they would certainly have noticed.
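If you just want to see the record size in use on a live volume (on Windows; the exact field label may differ between versions, so treat this as a sketch), fsutil reports it:
> fsutil fsinfo ntfsinfo C:
--omitted--
Bytes Per FileRecord Segment : 1024
--omitted--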
What is the significance of the file system block size? If my filesystem block size is set to, say, 8K, does that mean that all read/write I/O happens in units of 8K? So if my application wants to read, say, 16 bytes at offset 4097, will a 4K block starting at offset 4096 be read?
How do writes work in this case? Suppose I want to write, say, 64 bytes.
You are right. The block size is the unit of work for the file system: every read and write is done in whole multiples of the block size.
The block size is also the smallest size a file can occupy on disk. If you had a 16-byte block size, then a 16-byte file would occupy exactly one block on disk.
The book "Practical file system design" states:
Block: The smallest unit writable by a disk or file system. Everything a
file system does is composed of operations done on blocks. A file system
block is always the same size as or larger (in integer multiples) than the
disk block size.
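To tie this to a real system, here is a minimal check with GNU coreutils (the 4 KiB figure assumes a typical filesystem):
$ stat -f -c 'filesystem block size: %s' .   # ask the filesystem for its block size
$ printf x > one-byte-file                   # a 1-byte file...
$ du -k one-byte-file                        # ...typically shows 4 (KiB): one full block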
Normally, when you deal with files in programming, you should use a stream abstraction.
I/O operations in code are usually reads and writes on streams; reading from and writing to streams can be buffered, so that chunks of a file are read or written at a time.
The block size of a file system refers to how the disk surface is mapped: the smaller the size of a single block, the larger the number of blocks (and thus the more entries in the table that keeps track of how files are allocated).
So the OS can map files on disk discretely, based on the block size, and keep a smaller "map of files".
As far as I know, this does not affect the stream abstraction in programming language APIs.