Any concern over supporting multiple "." in filename for modern filesystems?

We are implementing an API that makes use of internal databases. We don't really have any control over which filesystems this will be implemented on.
My question is, does any modern filesystem not support multiple "." characters in a filename? (i.e. we want to use foo.bar.db.....) We have already blown through that massive 8 characters allowed by DOS, so I am not too worried about the 8.3 rule.
Regards,
PablitoRun

Related

Why must multiple filesystem "types" exist?

File systems provide a mechanism for categorizing (and thus navigating) data on a disk. This makes sense to me. If I want to find some "group" of data, I don't want to have to remember byte offsets myself. I would rather have some look up system that I can dynamically navigate.
However, I don't understand why different file systems must exist. For example, why NTFS, FAT16/32, EXT?
Why should different operating systems (Linux, Windows, etc.) rely on different methods for organizing data on disk?
I think a more appropriate question (and the question you'd like answered) is "Why do multiple file systems exist?". The answer depends on the particular file system, but in many cases it comes down to one of (or a mix of) three reasons:
addressing some type of issue in existing file systems, or
a split due to difference in opinion, or
corporate interests.
The FAT family
The original FAT file system was introduced in the late 1970s. In many ways, FAT is great: it has a low memory footprint and a simple design. IIRC, it's still used in embedded systems to this day.
The FAT family of file systems comprises the original 8-bit FAT, FAT12, FAT16, and FAT32. (There are several other versions, but they're not relevant to this answer.) There were several feature differences between the versions of the FAT file systems, some of which demonstrate the motivation for creating a new version. For example, in moving from 8-bit FAT to FAT12:
the maximum filename length increased from 9 characters to 11 or 255 characters by switching from 6.3 filename encoding to 8.3 filename encoding or LFN extensions, respectively.
support for subdirectories was added.
file size granularity decreased from 128 bytes to 1 byte.
None of these features individually were likely the motivation for the creation of FAT12, but together these features are a clear win over 8-bit FAT. Refer to the FAT Wikipedia page for a more complete list of differences.
NTFS
Before discussing NTFS, we should look at its predecessor: HPFS. The simple design of FAT turned out to be a problem: it constrained what features FAT could offer, and how it performed. HPFS was created to address the shortcomings of FAT. For example, HPFS provided several features FAT could not:
Support for mixed case file names, in different code pages
More efficient use of disk space (files are not stored using multiple-sector clusters but on a per-sector basis)
An internal architecture that keeps related items close to each other on the disk volume
Separate datestamps for last modification, last access, and creation (as opposed to last-modification-only datestamp in then-times implementations of FAT)
Root directory located at the midpoint, rather than at the beginning of the disk, for faster average access
That should be compelling enough to demonstrate why HPFS was created, but how does NTFS fit into the picture? HPFS was a joint project by Microsoft and IBM. Due to several differences in opinion, they separated, and Microsoft created NTFS. This is another reason new file systems are created: difference in opinion. There's nothing inherently wrong with this, but it does have the side effect of occasionally fragmenting projects.
The extended family
As with NTFS, we need to examine the predecessor of ext to understand why it was created. The predecessor of ext is the MINIX file system. MINIX was created for teaching purposes, so it was simple and elided several complex features the UNIX file system offered. The first file system supported by Linux was the MINIX filesystem. The simplicity of the MINIX file system soon became an issue:
MINIX restricted filename lengths to 14 characters (30 in later versions), it limited partitions to 64 megabytes, and the file system was designed for teaching purposes, not performance.
And thus the extended file system (i.e. ext) was created to address the shortcomings of the MINIX file system.
In a similar vein, ext2 was created to address the shortcomings of ext, and so on. For example, ext2 added three separate timestamps (atime, ctime, and mtime), ext3 added journaling, and ext4 extended the storage limits. These were all breaking changes that required a "new" file system. They weren't the only changes between versions, but these changes demonstrate why creating another file system was necessary.
Why do different operating systems use different file systems?
Several file systems are widely used today. Apple File System (APFS) on Apple devices, NTFS on Windows devices, and several different file systems on Linux. Why do different operating systems use different file systems? For Linux, the reason is obvious: Linux needed an open source file system. That's why it initially used the MINIX file system.
For Windows and Apple devices, the difference is more, shall we say, political. Microsoft created NTFS to address the issues it thought were important, and Apple created APFS to address the issues it thought were important. Commercial OS vendors also create their own file systems for product differentiation.
Why does Linux use several different file systems?
We can kind of see why different OSes use different file systems, but several file systems are actively in use on Linux alone, e.g. ext4, Btrfs, ZFS, XFS, and F2FS. What gives?
Linux is a different environment. The Linux kernel source is openly available, and can be modified, booted, and tested by any user. So, if one file system does not support the features you want, or offer the performance you need, you can create a new file system (which is, of course, easier said than done). For example,
Btrfs addressed (among other things) the lack of snapshots on ext3/4.
ZFS was created for the Solaris operating system, but later ported to Linux. (ZFS also has a very rich set of features.)
XFS was created to improve performance by using different underlying data structures (ie. B-trees).
F2FS was created to address performance on solid-state media. SSDs offer lower latency and greater throughput compared to spinning disks. It turns out that simply using a faster disk does not necessarily equate to better file system performance.
Different OSes use different file systems because each has a different philosophy and different goals.
For example, Windows uses NTFS because Microsoft wanted a secure, feature-rich file system (rather than one whose main goal is being fast or small).
Ubuntu (like most modern distributions) uses ext4 (and also supports others), mostly because of its simplicity and speed.
I don't think it's really something technical; different companies simply worked on the same problem at the same time. Add to that the closed-source nature of some OSes like Windows and macOS, which makes it hard for other companies to replicate the full functionality and illegal to reverse engineer it. It's much like asking why different OSes exist in the first place.

How to make NFS support posix_fallocate?

My full-text search engine stores indexed data on an NFS store.
Because of the frequent read/write I/Os, I want to preallocate large, contiguous disk space for each table file, so I turned to posix_fallocate.
On an NFS volume, my little demo failed, with posix_fallocate returning EOPNOTSUPP.
Does the NFS protocol/specification cover the posix_fallocate scenario?
(While the title of the question mentions posix_fallocate, the definition of posix_fallocate means EOPNOTSUPP is not an error code it will return. It seems highly likely the questioner was actually using fallocate on Linux, as that CAN return EOPNOTSUPP.)
NFS can support fallocate on Linux (http://wiki.linux-nfs.org/wiki/index.php/Fallocate) but:
Your client and server have to be using NFS 4.2 or later.
An NFS 4.2+ server doesn't HAVE to implement fallocate - it's optional (see the posting "Re: [PATCH 4/4] Remove broken posix_fallocate, posix_fallocate64 fallback code [BZ#15661]" in a libc-alpha mailing list thread).
Re posix_fallocate: without careful observation or manual checking beforehand (e.g. making a platform-native fallocate call and seeing whether it failed or wasn't supported), you can never know whether a Linux glibc posix_fallocate call was real or emulated; and if you've already made the native call, why did you need the posix_fallocate call? Further, different platforms have a different call for a native fallocate (or lack one entirely), so if portability to non-Linux platforms is a concern you will need to write the appropriate native fallocate wrapper for each platform. If you have to do pre-allocation even when what's below you doesn't have a supported call for it, you will have to fall back to doing it by hand (e.g. via nice chunky writes or hacks like glibc's posix_fallocate...).
Note: When it is possible to do a real pre-allocation (e.g. via fallocate) it can be DRAMATICALLY faster than having to do full writes because the filesystem can fulfill it by just setting appropriate metadata.
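For illustration, here is a minimal Linux-oriented sketch of that pattern: try the native fallocate() first, and fall back to chunked zero-writes if the filesystem (e.g. an NFS mount older than 4.2) reports EOPNOTSUPP. The preallocate() name and the 64 KiB chunk size are arbitrary choices for this sketch, not anything from the original answer.

```c
/* Sketch: native fallocate() with a manual fallback (Linux-specific). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>

static int preallocate(int fd, off_t len)
{
    if (fallocate(fd, 0, 0, len) == 0)
        return 0;                  /* real preallocation succeeded */

    if (errno != EOPNOTSUPP)
        return -1;                 /* genuine error, give up */

    /* Fallback: write zeros in chunks so the blocks actually exist.
     * Much slower than the real thing, as the note above explains. */
    char buf[64 * 1024];
    memset(buf, 0, sizeof buf);
    for (off_t done = 0; done < len; ) {
        size_t chunk = sizeof buf;
        if ((off_t)chunk > len - done)
            chunk = (size_t)(len - done);
        ssize_t n = pwrite(fd, buf, chunk, done);
        if (n <= 0)
            return -1;
        done += n;
    }
    return 0;
}
```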

DB vs. filesystems - for non-image files and endianness

I've read the many discussions about Databases vs. file systems for storing files. Most of these discussions talk about images and media files. My question is:
1) Do the same arguments apply to storing .doc, .pdf, .xls, .txt? Is there anything special about document files I should be aware of?
2) If I store in a database as binary, will there be endian issues if my host swaps machines? E.g., I insert into the database on a big-endian machine, it gets ported to a little-endian machine, then I try to extract (e.g., write to file, send it to my desktop, then try to open).
Thanks for any guidance!
1) Yes, pretty much the same arguments apply to storing PDFs and whatnot... anything that's compressed also comes to mind.
Every file format that's non-text has to deal with the question of endianness if it wants to be portable across hosts of different endianness. Most formats do this by defining what the endianness of every binary field in the file longer than one byte should be. Software that writes and reads the format then has to take special care to byte-swap iff it's running on a platform of the opposite endianness. Images are no different from other binary file formats. The choice is arbitrary, but big endian (network byte order) is a popular one, especially with network software, because of the ubiquity of macros in C that deal with this almost automatically.
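For example, the C macros alluded to above (htonl/ntohl and friends from <arpa/inet.h>) make the fixed-big-endian approach almost mechanical. A hedged sketch, where the 32-bit record_len field is made up for illustration:

```c
/* Sketch: write and read a 32-bit field in a fixed (big-endian / network)
 * byte order, regardless of host endianness. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl / ntohl */

static int write_field(FILE *f, uint32_t record_len)
{
    uint32_t be = htonl(record_len);        /* host order -> big endian */
    return fwrite(&be, sizeof be, 1, f) == 1 ? 0 : -1;
}

static int read_field(FILE *f, uint32_t *record_len)
{
    uint32_t be;
    if (fread(&be, sizeof be, 1, f) != 1)
        return -1;
    *record_len = ntohl(be);                /* big endian -> host order */
    return 0;
}
```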
Another way of defining binary file formats so that they are endian-portable is to support either endianness for binary fields, and include a marker in the header to say which one was used. On opening the file, readers consult the marker. That way the file can be read back slightly more efficiently on the same host where it was written or other hosts with the same endianness (which is the common case) while hosts of the opposite endianness need to expend a little bit more effort.
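A hedged sketch of that marker idea, reader side only; the 0x01020304 magic value and the function name are arbitrary. The writer would simply fwrite the marker in its native byte order at the start of the file.

```c
/* Sketch: detect whether a file was written on a host of the same or
 * opposite endianness by checking a known 32-bit marker. */
#include <stdint.h>
#include <stdio.h>

#define ENDIAN_MARKER 0x01020304u

/* Returns 1 for same endianness, 0 for opposite, -1 on error. */
static int check_marker(FILE *f)
{
    uint32_t m;
    if (fread(&m, sizeof m, 1, f) != 1)
        return -1;
    if (m == ENDIAN_MARKER)
        return 1;     /* same endianness: read fields directly */
    if (m == 0x04030201u)
        return 0;     /* opposite: byte-swap every multi-byte field */
    return -1;        /* not a valid file */
}
```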
As for the database, assuming you are using a field type like a blob, you'll get back the very same bytestream when you read as whatever you wrote, so you don't have to worry about the endianness of the database client or server.
2) That depends on the database. The database might use an underlying on-disk format that is compatible with any endianness, by defining its on-disk format as described above.
Databases aren't often aiming for portability of their underlying file formats though, considering (correctly) that moving the underlying data files to a database host of different endianness is rare. According to this answer, for example, MySQL's MyISAM is not endian-portable.
I don't think you need to worry about this too much though. If the database server is ever switched to a host of different endianness, ensuring that the data remains readable is an important step of the process and the DBA handling the task (perhaps yourself?) won't forget to do it, because if they do forget, then nothing will work (that is, the breakage won't be limited to binary BLOBs!)

How to implement a very simple filesystem?

I am wondering how the OS is reading/writing to the hard drive.
I would like as an exercise to implement a simple filesystem with no directories that can read and write files.
Where do I start?
Will C/C++ do the trick or do I have to go with a more low level approach?
Is it too much for one person to handle?
Take a look at FUSE: http://fuse.sourceforge.net/
This will allow you to write a filesystem without having to actually write a device driver. From there, I'd start with a single file. Basically create a file that's (for example) 100MB in length, then write your routines to read and write from that file.
Once you're happy with the results, then you can look into writing a device driver, and making your driver run against a physical disk.
The nice thing is you can use almost any language with FUSE, not just C/C++.
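To make that concrete, here is a hedged C sketch of the usual first step: a tiny read-only FUSE filesystem (FUSE 2.x API, modelled on the classic libfuse "hello" example) that serves one hard-coded file. The toy_* names and the /hello path are made up for illustration; swapping the string for reads and writes against a 100MB backing file, as suggested above, is the natural next step.

```c
/* Build (assuming libfuse 2.x): gcc -Wall toyfs.c $(pkg-config fuse --cflags --libs) -o toyfs
 * Run: ./toyfs /mnt/point   Unmount: fusermount -u /mnt/point */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static const char *hello_str  = "Hello from a toy filesystem\n";
static const char *hello_path = "/hello";

static int toy_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode  = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode  = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size  = (off_t)strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int toy_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                       off_t offset, struct fuse_file_info *fi)
{
    (void)offset; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0);   /* name without leading '/' */
    return 0;
}

static int toy_read(const char *path, char *buf, size_t size, off_t offset,
                    struct fuse_file_info *fi)
{
    (void)fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    size_t len = strlen(hello_str);
    if ((size_t)offset >= len)
        return 0;
    if ((size_t)offset + size > len)
        size = len - (size_t)offset;
    memcpy(buf, hello_str + offset, size);
    return (int)size;
}

static struct fuse_operations toy_ops = {
    .getattr = toy_getattr,
    .readdir = toy_readdir,
    .read    = toy_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &toy_ops, NULL);
}
```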
I found it quite easy to understand a simple filesystem by working with the FAT filesystem on an AVR microcontroller.
http://elm-chan.org/fsw/ff/00index_e.html
Take a look at the code and you will figure out how FAT works.
For learning the ideas behind a file system, it's not really necessary to use a disk, I think. Just create an array of 512-byte byte arrays, imagine it's your hard disk, and start experimenting a bit.
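A hedged sketch of that idea in C; the 512-byte block size matches a traditional disk sector, and the helper names and 2048-block capacity are arbitrary choices:

```c
/* Sketch: a tiny in-memory "disk" made of fixed-size blocks. */
#include <string.h>
#include <stdint.h>

#define BLOCK_SIZE 512
#define NUM_BLOCKS 2048                 /* a 1 MiB pretend disk */

static uint8_t disk[NUM_BLOCKS][BLOCK_SIZE];

static int read_block(unsigned blockno, void *buf)
{
    if (blockno >= NUM_BLOCKS)
        return -1;
    memcpy(buf, disk[blockno], BLOCK_SIZE);
    return 0;
}

static int write_block(unsigned blockno, const void *buf)
{
    if (blockno >= NUM_BLOCKS)
        return -1;
    memcpy(disk[blockno], buf, BLOCK_SIZE);
    return 0;
}
```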
Also, you may want to have a look at some of the standard OS textbooks, like http://codex.cs.yale.edu/avi/os-book/OS8/os8c/index.html
The answer to your first question is that besides FUSE, as someone else told you, you can also use Dokan, which does the same for Windows; from there it's just a question of doing reads and writes to a physical partition (http://msdn.microsoft.com/en-us/library/aa363858%28v=vs.85%29.aspx (read particularly the section on Physical Disks and Volumes)).
Of course, on Linux or Unix, besides using something like FUSE, you only have to issue a read or write call to the desired device in /dev/xxx (if you are root); in this respect the Unices are more friendly, or more insecure, depending on your point of view.
From there, try to implement a simple filesystem like FAT, something more exotic like a tar filesystem, or even a simple filesystem based on Unix concepts like UFS or MINIX, or just something that only logs the calls that are made and their arguments to a log file (this will help you understand the calls made to the filesystem driver during regular use of your computer).
As for your second question (which is much simpler to answer): yes, C/C++ will do the trick, since they are the lingua franca of systems development. A lot of the example code you find will also be in C/C++, so you will at least be reading C/C++ during your development.
As for your third question: yes, this is doable by one person. For example, the ext filesystem (widely known in the Linux world through its successors ext2 and ext3) was initially made by a single developer, Rémy Card, so don't think these things aren't doable by a single person.
Now some final notes. Remember that a real filesystem interacts with a lot of other subsystems in a regular kernel. For example, if you have a laptop and hibernate it, the filesystem has to flush all changes made to open files. If you have a pagefile on the partition, or the pagefile has its own filesystem, that will affect your filesystem too, particularly the block sizes, since they will tend to be equal to (or powers of) the page size: it's easy to place a filesystem block in memory when it happens to equal the page size, because that's just one transfer.
And also security: you will want to control users and which files they can read/write, which usually means that before opening a file you have to know which user is logged on and what permissions they have for that file. And obviously, without a filesystem, users can't run any program or interact with the machine. Modern filesystem layers also interact with the network subsystem, because there are network and distributed filesystems.
So if you want to go and learn about writing kernel filesystems, those are some of the things you will have to worry about (besides knowing a VFS interface).
P.S.: If you want to make Unix permissions work on Windows, you can use something like what MS uses for NFS on the server versions of Windows (http://support.microsoft.com/kb/262965)

Where can I get started with Unicode-friendly programming in C?

So, I’m working on a plain-C (ANSI 9899:1999) project, and am trying to figure out where to get started re: Unicode, UTF-8, and all that jazz.
Specifically, it’s a language interpreter project, and I have two primary places where I’ll need to handle Unicode: reading in source files (the language ostensibly supports Unicode identifiers and such), and in ‘string’ objects.
I’m familiar with all the obvious basics about Unicode, UTF-7/8/16/32 & UCS-2/4, so on and so forth… I’m mostly looking for useful, C-specific (that is, please no C++ or C#, which is all that’s been documented here on SO previously) resources as to my ‘next steps’ to implement Unicode-friendly stuff… in C.
Any links, manpages, Wikipedia articles, example code, is all extremely welcome. I’ll also try to maintain a list of such resources here in the original question, for anybody who happens across it later.
A must read before considering anything else, if you’re unfamiliar with Unicode, and what an encoding actually is: http://www.joelonsoftware.com/articles/Unicode.html
The UTF-8 home-page: http://www.utf-8.com/
man 3 iconv (as well as iconv_open and iconvctl)
International Components for Unicode (via Geoff Reedy)
libbasekit, which seems to include light Unicode-handling tools
Glib has some Unicode functions
A basic UTF-8 detector function, by Christoph
International Components for Unicode provides a portable C library for handling Unicode. Here's their elevator pitch for ICU4C:
The C and C++ languages and many operating system environments do not provide full support for Unicode and standards-compliant text handling services. Even though some platforms do provide good Unicode text handling services, portable application code can not make use of them. The ICU4C libraries fills in this gap. ICU4C provides an open, flexible, portable foundation for applications to use for their software globalization requirements. ICU4C closely tracks industry standards, including Unicode and CLDR (Common Locale Data Repository).
GLib has some Unicode functions and is a pretty lightweight library. It's not near the same level of functionality that ICU provides, but it might be good enough for some applications. The other features of GLib are good to have for portable C programs too.
GTK+ is built on top of GLib. GLib provides the fundamental algorithmic language constructs commonly duplicated in applications. This library has features such as (this is not a comprehensive list):
Object and type system
Main loop
Dynamic loading of modules (i.e. plug-ins)
Thread support
Timer support
Memory allocator
Threaded Queues (synchronous and asynchronous)
Lists (singly linked, doubly linked, double ended)
Hash tables
Arrays
Trees (N-ary and binary balanced)
String utilities and charset handling
Lexical scanner and XML parser
Base64 (encoding & decoding)
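As a small taste of the "string utilities and charset handling" item above, here is a hedged sketch using two of GLib's UTF-8 helpers, g_utf8_validate and g_utf8_strlen; it assumes glib-2.0 is installed and built via pkg-config, and the sample string is just an example.

```c
/* Sketch: validate a UTF-8 string and count characters vs. bytes.
 * Build: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const gchar *s = "na\xc3\xafve caf\xc3\xa9";   /* "naïve café" in UTF-8 */

    if (!g_utf8_validate(s, -1, NULL)) {
        fprintf(stderr, "not valid UTF-8\n");
        return 1;
    }
    /* g_utf8_strlen counts characters; strlen counts bytes. */
    printf("%ld characters, %zu bytes\n", g_utf8_strlen(s, -1), strlen(s));
    return 0;
}
```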
I think one of the interesting questions is - what should your canonical internal format for strings be? The 2 obvious choices (to me at least) are
a) utf8 in vanilla c-strings
b) utf16 in unsigned short arrays
In previous projects I have always chosen UTF-8. Why? Because it's the path of least resistance in the C world. Everything you are interfacing with (stdio, string.h, etc.) will work fine.
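A hedged illustration of why that is: string.h and stdio treat UTF-8 bytes opaquely, and only operations that care about characters (as opposed to bytes) need UTF-8 awareness. The hand-rolled counter below is for illustration only.

```c
/* Sketch: UTF-8 in plain char* plays nicely with string.h. */
#include <stdio.h>
#include <string.h>

/* Count codepoints by skipping UTF-8 continuation bytes (10xxxxxx). */
static size_t utf8_codepoints(const char *s)
{
    size_t n = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            n++;
    return n;
}

int main(void)
{
    char buf[64] = "na\xc3\xaf";          /* "naï" in UTF-8 */
    strcat(buf, "ve");                    /* string.h just works on the bytes */
    printf("%s: %zu bytes, %zu codepoints\n",
           buf, strlen(buf), utf8_codepoints(buf));
    return 0;
}
```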
Next comes the file format. The problem here is that it's visible to your users (unless you provide the only editor for your language). Here I guess you have to take what they give you and try to guess by peeking at the first few bytes (byte order marks help).
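A hedged sketch of that "peek at the byte order mark" idea: inspect the first few bytes of a source file to guess UTF-8 / UTF-16 / UTF-32, falling back to assuming UTF-8 when no BOM is present. The enum names and the fallback policy are illustrative choices.

```c
/* Sketch: guess a text file's encoding from its BOM, if any. */
#include <stdio.h>

enum encoding { ENC_UTF8, ENC_UTF16_LE, ENC_UTF16_BE, ENC_UTF32_LE, ENC_UTF32_BE };

static enum encoding guess_encoding(FILE *f)
{
    unsigned char b[4] = {0};
    size_t n = fread(b, 1, 4, f);
    rewind(f);                            /* caller still sees the BOM bytes */

    /* Check the 4-byte BOMs first, since FF FE also prefixes UTF-32 LE. */
    if (n >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00)
        return ENC_UTF32_LE;
    if (n >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF)
        return ENC_UTF32_BE;
    if (n >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF)
        return ENC_UTF8;                  /* UTF-8 BOM (optional) */
    if (n >= 2 && b[0] == 0xFF && b[1] == 0xFE)
        return ENC_UTF16_LE;
    if (n >= 2 && b[0] == 0xFE && b[1] == 0xFF)
        return ENC_UTF16_BE;
    return ENC_UTF8;                      /* no BOM: assume UTF-8 */
}
```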
