Most games come with their resources (models, textures, etc.) packed into special files (such as the .pk3 files in Quake 3). Apparently, those files somehow get "mounted" and are then used as if they were separate file systems.
I'd like to know how this is achieved. The only strategy I came up with so far is placing offset-size information in the file's header, then memory-mapping the file and accessing the resources as if they were independent write-protected chunks of memory.
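For concreteness, here's a minimal sketch of the kind of layout I have in mind (the format, field names, and sizes are made up for illustration, not taken from any real game):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical pack file: a small header, a table of entries, then the
   raw resource data.  Offsets are relative to the start of the file. */
#define PACK_MAGIC 0x4B434150u          /* "PACK", little-endian */

struct pack_header {
    uint32_t magic;                     /* identifies the container */
    uint32_t entry_count;               /* entries in the table below */
};

struct pack_entry {
    char     name[56];                  /* resource path, NUL-terminated */
    uint32_t offset;                    /* byte offset of the data */
    uint32_t size;                      /* length of the data in bytes */
};

/* With the whole file mmap()ed read-only at `base`, a lookup just
   returns a pointer into the mapping -- no copying involved. */
static const void *pack_find(const uint8_t *base, const char *name,
                             uint32_t *size_out)
{
    const struct pack_header *hdr = (const struct pack_header *)base;
    const struct pack_entry *e =
        (const struct pack_entry *)(base + sizeof *hdr);

    for (uint32_t i = 0; i < hdr->entry_count; i++) {
        if (strcmp(e[i].name, name) == 0) {
            *size_out = e[i].size;
            return base + e[i].offset;
        }
    }
    return NULL;
}
```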
I'd like to know if my strategy is viable and if there are better alternatives.
Thanks!
Your strategy is quite reasonable; in fact, it's an exact analogue of what a filesystem driver does with a raw block device. What problems are you having with that implementation?
Your approach sounds reasonable. Basically you'll have an ISAM file with (optional) metadata. You can split the file into sections ("directories") based on criteria (content type, frequency of use, locality of use, etc.) and have a separate index/table of contents for each section. If you allow a section as a content type, then you can nest them and handle them in a consistent fashion, as sketched below.
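A rough sketch of what that self-similar layout might look like (the type tags and field names here are purely illustrative):

```c
#include <stdint.h>

/* Illustrative on-disk entry for an ISAM-style container.  Because
   ENTRY_SECTION is itself a content type, an entry can point at a
   nested table of contents, giving you "directories" for free. */
enum entry_type {
    ENTRY_TEXTURE = 1,
    ENTRY_MODEL   = 2,
    ENTRY_SOUND   = 3,
    ENTRY_SECTION = 4      /* payload is another table of contents */
};

struct toc_entry {
    uint32_t type;         /* one of enum entry_type */
    uint32_t offset;       /* where the payload starts in the file */
    uint32_t size;         /* payload size in bytes */
    char     name[52];     /* entry name, NUL-terminated */
};

/* A section's payload is simply a uint32_t count followed by `count`
   toc_entry records, so sections nest to any depth and every level is
   traversed with the same code. */
```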
If your requirements are fairly basic, you could consider simply using a zip/tar file as a container. Looking at the code for these would probably be a good place to start, regardless.
I can't speak to the exact format of Quake 3's files, but there are several approaches to doing what you need:
archive files (ZIP, tar, etc.)
compound storage of various kinds (on Windows there's Microsoft Structured Storage)
an embedded single-file database
a virtual file system, which implements an actual file system but stores it not on a disk partition but somewhere else (in resources, in memory, etc.)
Each of those approaches has its own strengths and weaknesses.
Our SolFS product is an example of virtual file system mentioned above. It was designed for tasks similar to yours.
Archives and some compound file implementations usually have a linear, sequential structure - the files are written one after another, with a directory at the beginning or at the end of the file.
Some compound file implementations, databases, and virtual file systems have a page-based structure ("page" is similar to a sector or cluster in FAT or NTFS) in which files can be scattered across the storage.
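As a toy illustration of the page-based idea (the structures below are invented for this example): each file is a chain of fixed-size pages, so its pieces can live anywhere in the storage, much like cluster chains in FAT.

```c
#include <stdint.h>

#define PAGE_SIZE 4096
#define PAGE_END  UINT32_MAX         /* chain terminator */

/* Toy page-based container: storage is an array of fixed-size pages,
   and a file is a linked chain of page indices -- its contents need
   not be contiguous. */
struct page {
    uint32_t next;                   /* index of the next page, or PAGE_END */
    uint32_t used;                   /* bytes of data valid in this page */
    uint8_t  data[PAGE_SIZE - 8];
};

struct file_entry {
    char     name[56];
    uint32_t first_page;             /* head of this file's page chain */
    uint32_t size;                   /* total file size in bytes */
};
```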
File systems provide a mechanism for categorizing (and thus navigating) data on a disk. This makes sense to me. If I want to find some "group" of data, I don't want to have to remember byte offsets myself. I would rather have some look up system that I can dynamically navigate.
However, I don't understand why different file systems must exist. For example, why NTFS, FAT16/32, EXT?
Why should different operating systems (Linux, Windows, etc.) rely on different methods for organizing data on disk?
I think a more appropriate question (and the question you'd like answered) is "Why do multiple file systems exist?". The answer depends on the particular file system, but in many cases it comes down to one (or a mix) of three reasons:
addressing some type of issue in existing file systems, or
a split due to difference in opinion, or
corporate interests.
The FAT family
The original FAT file system was introduced in the late 1970s. In many ways, FAT is great: it has a low memory footprint and a simple design. IIRC, it's still used in embedded systems to this day.
The FAT family of file systems comprises the original 8-bit FAT, FAT12, FAT16, and FAT32. (There are several other versions, but they're not relevant to this answer.) There were several feature differences between the versions, some of which demonstrate the motivation for creating a new one. For example, in moving from 8-bit FAT to FAT12:
the maximum filename length increased from 9 characters to 11 or 255 characters by switching from 6.3 filename encoding to 8.3 filename encoding or LFN extensions, respectively.
support for subdirectories was added.
file size granularity decreased from 128 bytes to 1 byte.
None of these features individually were likely the motivation for the creation of FAT12, but together these features are a clear win over 8-bit FAT. Refer to the FAT Wikipedia page for a more complete list of differences.
NTFS
Before discussing NTFS, we should look at its predecessor: HPFS. The simple design of FAT turned out to be a problem: it constrained what features FAT could offer, and how it performed. HPFS was created to address the shortcomings of FAT. For example, HPFS provided several features FAT could not:
Support for mixed case file names, in different code pages
More efficient use of disk space (files are not stored using multiple-sector clusters but on a per-sector basis)
An internal architecture that keeps related items close to each other on the disk volume
Separate datestamps for last modification, last access, and creation (as opposed to the last-modification-only datestamp in FAT implementations of the time)
Root directory located at the midpoint, rather than at the beginning of the disk, for faster average access
That should be compelling enough to demonstrate why HPFS was created, but how does NTFS fit into the picture? HPFS was a joint project by Microsoft and IBM. Due to several differences in opinion, they separated, and Microsoft created NTFS. This is another reason new file systems are created: difference in opinion. There's nothing inherently wrong with this, but it does have the side effect of occasionally fragmenting projects.
The extended family
As with NTFS, we need to examine the predecessor of ext to understand why it was created. The predecessor of ext is the MINIX file system. MINIX was created for teaching purposes, so it was simple and elided several complex features the UNIX file system offered. The first file system supported by Linux was the MINIX filesystem. The simplicity of the MINIX file system soon became an issue:
MINIX restricted filename lengths to 14 characters (30 in later versions) and limited partitions to 64 megabytes, and the file system was designed for teaching purposes, not performance.
And thus, the extended file system (ie. ext) was created to address the shortcomings of the MINIX file system.
In a similar vein, ext2 was created to address the shortcomings of ext, and so on. For example, ext2 added three separate timestamps (atime, ctime, and mtime), ext3 added journaling, and ext4 extended storage limits. These were all breaking changes that required a "new" file system. They weren't the only changes between versions, but these changes demonstrate why creating another file system was necessary.
Why do different operating systems use different file systems?
Several file systems are widely used today: Apple File System (APFS) on Apple devices, NTFS on Windows devices, and several different file systems on Linux. Why do different operating systems use different file systems? For Linux, the reason is obvious: Linux needed an open source file system. That's why it initially used the MINIX file system.
For Windows and Apple devices, the difference is more, shall we say, political. Microsoft created NTFS to address the issues it thought were important, and Apple created APFS to address the issues it thought were important. Commercial OS vendors also create their own file systems for product differentiation.
Why does Linux use several different file systems?
We can kinda see why different OSs use different file systems, but several file systems are actively in use on Linux alone, eg. ext4, Btrfs, ZFS, XFS, and F2FS. What gives?
Linux is a different environment. The Linux kernel source is openly available, and can be modified, booted, and tested by any user. So, if one file system does not support the features you want, or offer the performance you need, you can create a new file system (which is, of course, easier said than done). For example,
Btrfs addressed (among other things) the lack of snapshots on ext3/4.
ZFS was created for the Solaris operating system, but later ported to Linux. (ZFS also has a very rich set of features.)
XFS was created to improve performance by using different underlying data structures (ie. B-trees).
F2FS was created to address performance on solid-state media. SSDs offer lower latency and greater throughput compared to spinning disks, but it turns out that simply using a faster disk does not necessarily equate to better file system performance.
Different OSs use different file systems because each of them has a different philosophy and different goals.
For example, Windows uses NTFS because Microsoft wanted a secure and feature-rich file system (rather than one designed primarily to be fast or small).
Ubuntu (along with most modern distributions) uses ext4 (and also supports others), mostly for its simplicity and speed.
I don't think it's something technical; different companies simply worked on the same thing at the same time. Add to that the closed-source nature of some OSs like Windows and macOS, which makes it hard for other companies to replicate the full functionality and illegal to reverse engineer it. It's like asking why different OSs exist in the first place.
I am writing an embedded system where I am creating a USB mass storage device driver that uses an 8MB chunk of RAM as a FAT filesystem.
Although it made sense at the time to let the OS take my zeroed-out RAM area and format a FAT partition itself, I ran into problems that I am diagnosing on the PORTS forum of Jan Axelson (a popular author of books on USB). The problem is related to erasing the blocks in the drive, and may be circumvented by pre-formatting the memory area to a FAT filesystem before USB enumeration. Then I could see whether the drive operates normally. Ideally, the preformatted partition would include a file on it to test the read operation.
It would be nice if I could somehow create and mount a mock 8MB FAT filesystem on my OS (OS X), write a file to it, and export it to an image file for inclusion in my project. Does someone know how to do this? I could handle the rest. I'm not too concerned whether that would be FAT12/16/32 at the moment; optional MBR inclusion would be nice.
If that option doesn't exist, I'm looking to use a pre-written utility to create a FAT .img file that I could include in my project and upload directly to RAM. This utility would allow me to specify an 8MB filesystem with 512-byte sectors, for instance, and possibly FAT12/FAT16/FAT32.
Is anyone aware of such a utility? I wasn't able to find one.
If not, can someone recommend a first step to take in implementing this in C? I'm hoping a library exists. I'm pretty exhausted after implementing the mass storage driver from scratch, but I understand I might have to 'get crinkled' and manually create the FAT partition. It's not too hard. I imagine some packed structs and some options. I'll get there. I already have resources on FAT filesystem itself.
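For reference, this is the sort of packed struct I have in mind for the start of the boot sector (the field layout follows the published FAT12/16 BIOS Parameter Block; the example values in the comments are just my plan for an 8MB volume with 512-byte sectors):

```c
#include <stdint.h>

#pragma pack(push, 1)
struct fat_boot_sector {
    uint8_t  jmp[3];            /* x86 jump over the BPB to boot code */
    uint8_t  oem_name[8];       /* e.g. "MSDOS5.0" */
    uint16_t bytes_per_sector;  /* 512 here */
    uint8_t  sectors_per_cluster;
    uint16_t reserved_sectors;  /* usually 1 for FAT12/16 */
    uint8_t  num_fats;          /* usually 2 */
    uint16_t root_entries;      /* e.g. 512 */
    uint16_t total_sectors16;   /* 16384 = 8MB of 512-byte sectors */
    uint8_t  media_descriptor;  /* 0xF8 for fixed media */
    uint16_t sectors_per_fat;
    uint16_t sectors_per_track; /* CHS geometry, mostly vestigial */
    uint16_t num_heads;
    uint32_t hidden_sectors;
    uint32_t total_sectors32;   /* used when total_sectors16 == 0 */
    /* ... boot code and the 0x55AA signature follow ... */
};
#pragma pack(pop)
```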
I ended up discovering that FatFS has facilities for formatting and partitioning the "drive" from within the embedded device, which relieved me of having to format it manually or use host-side tools.
I would like to cover in more detail the steps taken, but I am exhausted. I may edit in further details at a later time.
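In the meantime, the core of it boils down to roughly the sketch below. Note that the f_mkfs() signature has changed between FatFS releases (this follows the newer MKFS_PARM style; check the ff.h of your version), and it assumes the diskio layer has already been wired up to the 8MB RAM area:

```c
#include "ff.h"    /* FatFS */

static FATFS fs;
static BYTE work[FF_MAX_SS];       /* scratch buffer for f_mkfs() */

/* Format the RAM-backed drive as FAT and mount it. */
int format_ram_disk(void)
{
    /* FM_SFD = "super floppy": no partition table, volume at sector 0.
       Drop FM_SFD if you want f_mkfs to write an MBR instead. */
    MKFS_PARM opt = { FM_FAT | FM_SFD, 0, 0, 0, 0 };

    if (f_mkfs("", &opt, work, sizeof work) != FR_OK)
        return -1;
    if (f_mount(&fs, "", 1) != FR_OK)  /* 1 = mount immediately */
        return -1;
    return 0;                          /* f_open()/f_write() now work */
}
```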
There are several; they're normally hidden in the OS source.
On BSD (i.e. OS X) you should have a "mkdosfs" tool; if not, the source is available all over the place... here's a random example:
http://www.blancco.com/downloads/source/utils/mkdosfs/mkdosfs.c
Also, there's the 'mtools' package; it's normally used for floppies, but I think it does disk images too.
Neither of these will create partition tables though; you'd need something else if that's required too.
I'm creating several programs in C that will have to communicate through files.
They will be using files because the communication will not be linear, i.e. program #5 could use a file that program #2 created.
The execution of these programs will be linear (serial).
There will be a single control program which manages the execution of these cascading programs. This program will be the one creating the files, and it should only pass file names to the programs.
Since disk I/O is slow (let's assume the OS doesn't cache these operations), I would need to use memory-mapped files.
However, the requirement is that the control program can seamlessly switch between regular and memory-mapped files - which means that the cascading programs will have to be unaware of whether they're writing/reading to/from a memory-mapped file or a regular one.
How can I create a file, which presents itself to the rest of the system as a normal file (has a place in the FS hierarchy, a file name, can be read and written), but is in fact in memory and not on the disk?
The terminology you're using here is a little weird - memory-mapping is a way of accessing a file (any file), not a separate type of file from one that's stored on disk.
That being said, if you want to have some of your files written out to disk and some not, the easiest way to do that would be to store them in an in-memory filesystem, such as tmpfs. There is usually one of these mounted at /dev/shm on most Linux systems.
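To illustrate (the file name below is just hypothetical): a file created under a tmpfs mount is used exactly like any other file, so the control program can switch between RAM-backed and disk-backed files just by changing the directory it hands out.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical name; anything under the tmpfs mount lives in RAM. */
    const char *path = "/dev/shm/stage2_output.dat";

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* The cascading programs use ordinary read()/write() (or mmap the
       file) -- they never need to know the backing store is memory. */
    if (write(fd, "hello", 5) != 5) perror("write");

    close(fd);
    unlink(path);
    return 0;
}
```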
This question has been asked with varying degrees of success in the past...
Are there tools, or C/C++ unix functions to call that would enable me to retrieve the location on disk of a file? Not some virtual address of the file, but the disk/sector/block the file resides in?
The goal here is to enable overwriting of the actual bits that exist on disk. I would probably need a way to bypass the kernel's superimposition of addresses. I am willing to consider an x86 asm based solution...
However, I feel there are tools that do this quite well already.
Thanks for any input on this.
Removing files securely is only possible under very specific circumstances:
There are no uncontrolled layers of indirection between the OS and the actual storage medium.
On modern systems that can no longer be assumed. SSD drives with firmware wear-leveling code do not work like this; they may move or copy data at will with no logging or possibility of outside control. Even magnetic disk drives will routinely leave existing data in sectors that have been remapped after a failure. Hybrid drives do both...
The ATA specification does support a SECURE ERASE command which erases a whole drive, but I do not know how thorough the existing implementations are.
The filesystem driver has a stable and unique mapping of files to physical blocks at all times.
I believe that ext2fs does have this feature. I also think that ext3fs and ext4fs work like this in the default journaling mode, but not when mounted with the data=journal option, which allows file data to be stored in the journal rather than just the metadata.
On the other hand reiserfs definitely works differently, since it stores small amounts of data along with the metadata, unless mounted with the notail option.
If these two conditions are met, then a program such as shred may be able to securely remove the content of a file by overwriting its content multiple times.
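As a minimal illustration of the principle (shred itself does considerably more, e.g. multiple passes with different patterns):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Naive single-pass overwrite of a file's contents in place.  An
   illustration only -- all the caveats about SSDs, remapped sectors,
   journals, and tail-packing still apply. */
int overwrite_file(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;

    /* O_SYNC so each write is pushed toward the device rather than
       lingering in the page cache. */
    int fd = open(path, O_WRONLY | O_SYNC);
    if (fd < 0)
        return -1;

    char block[4096];
    memset(block, 0, sizeof block);

    for (off_t left = st.st_size; left > 0; ) {
        size_t n = left < (off_t)sizeof block ? (size_t)left : sizeof block;
        ssize_t w = write(fd, block, n);
        if (w <= 0) { close(fd); return -1; }
        left -= w;
    }
    fsync(fd);
    close(fd);
    return 0;
}
```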
This method still does not take into account:
Backups
Virtualized storage
Left over data in the swap space
...
Bottom line:
You can no longer assume that secure deletion is possible. Better assume that it is impossible and use encryption; you should probably be using it anyway if you are handling sensitive data.
There is a reason that protocols regarding sensitive data mandate the physical destruction of the storage medium. There are companies that actually demagnetize their hard disk drives and then shred them before incinerating the remains...
It seems like there isn't anything inherent in an operating system that would necessarily require the file system abstraction/metaphor. Are there operating systems that do without it? If so, what are they? I'd be especially interested in knowing about examples that can be run/experimented with on a standard desktop computer.
Examples are Persistent Haskell, Squeak Smalltalk, and KeyKOS and its descendants.
It seems like there isn't anything inherent in an operating system that would necessarily require that sort of abstraction/metaphor.
There isn't any necessity, it's completely bogus. In fact, forcing everything to be accessible via a human readable name is fundamentally flawed, and precludes security due to Zooko's triangle.
Examples of hierarchies similar to this appear as well in DNS, URLs, programming language module systems (Python and Java are two good examples), torrents, and X.509 PKI.
One system that fixes some of the problems caused by DNS/URLs/X.509 PKI is Waterken's YURL.
All these systems exhibit ridiculous problems because the system is designed around some fancy hierarchy instead of for something that actually matters.
I've been planning on writing some blogs explaining why these types of systems are bad, I'll update with links to them when I get around to it.
I found this http://pages.stern.nyu.edu/~marriaga/papers/beyond-the-hfs.pdf but it's from 2003. Is something like that what you are looking for?
About 1995, I started to design an object-oriented operating system (SOOOS) that has no file system. Almost everything is an object that exists in virtual memory, which is mapped/paged directly to the disk, either local or networked (i.e. rudimentary cloud computing).
There is a lot of overhead in programs to read and write data in specific formats. Imagine never reading and writing files. In SOOOS there are no such things as files and directories. Autonomous objects, which essentially replace files, can be organized to suit your needs rather than being forced into a restrictive hierarchical file system. There are no low-level drive format structures (i.e. clusters) adding another level of abstraction and translation overhead. SOOOS data storage overhead is limited to page tables, which can be indexed quickly, as with basic virtual memory paging. Autonomous objects each have their own dynamic virtual memory space, which serves as the persistent data store. When active, they are given a task context and added to the active process task list, and then exist as processes.
A lot of complexity is eliminated in my design: simply instantiate objects in a program and let the memory manager and virtual memory system handle everything consistently, with minimal overhead. Booting the operating system is simply a matter of loading the basic kernel, setting up the virtual memory page tables to point to the key OS objects, and (re)starting the OS object tasks. When the computer is turned off, shutdown is essentially analogous to hibernation, so the OS is nearly in instant-on status. The parts (pages) of data and code are loaded only as needed.
For example, to edit a document, instead of starting a program by loading the entire executable into memory, you simply load the task control structure of the autonomous object and set the instruction pointer to the function to be performed. The code is paged in only as the instruction pointer traverses its virtual memory. Data is always immediately ready to be used, paged in only as it is accessed, with no need to parse files and manage data structures that often have a distinct representation in memory versus secondary storage. You simply use the program's native memory allocation mechanism and abstract data types, without disparate and/or redundant data structures. Object Linking and Embedding style program interaction, memory-mapped I/O, and interprocess communication come practically for free, since memory sharing is implemented using the facilities of the processor's Memory Management Unit.