What kinds of things are stored in 1-byte files?

Page 301 of Tanenbaum's Modern Operating Systems contains the table below. It gives the file sizes on a 2005 commercial Web server. The chapter is on file systems, so these data points are meant to be similar to what you would see on a typical storage device.
File length (bytes)    Percentage of files less than length
1                      6.67
2                      7.67
4                      8.33
8                      11.30
16                     11.46
32                     12.33
64                     26.10
128                    28.49
...                    ...
1 KB                   47.82
...                    ...
1 MB                   98.99
...                    ...
128 MB                 100
In the table, you will see that 6.67% of the files on this server are 1 byte in length. What kinds of processes are creating 1-byte files? What kind of data would be stored in these files?

I wasn't familiar with that table, but it piqued my interest. I'm not sure what the 1-byte files were at the time, but perhaps the 1-byte files of today can shed some light?
I searched for files of size 1 byte with
sudo find / -size 1c 2>/dev/null | while read -r line; do ls -lah "$line"; done
Looking at the contents of these files on my system, they contain a single character: a newline. This can be verified by running the file through hexdump. A file with a single newline can exist for multiple reasons, but it probably has to do with the convention of terminating a line with a newline.
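A quick way to reproduce this (the file name here is just an example; the hexdump output is shown approximately):
printf '\n' > newline-only        # a file holding a single newline
hexdump -C newline-only
# 00000000  0a                                                |.|
# 00000001
The 0a is the newline, and the file's size is exactly 1 byte.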
There is a second type of file with size 1 byte: symbolic links where the target is a single character. ext4 appears to report the length of the target as the size of the symbolic link (at least for short-length targets).
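A hedged illustration of the symlink case (link and target names are made up):
ln -s X one-char-link                   # symlink whose target is the single character "X"
stat -c '%F, %s bytes' one-char-link    # e.g.: symbolic link, 1 bytes
ls -ld one-char-link                    # size column shows 1; "one-char-link -> X"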

Related

Different number of blocks allocated using stat() and ls -s

I was trying to get the number of blocks allocated to a file using C. I used the stat struct and its st_blocks field. However, this returns a different number of blocks than ls -s does. Can anybody explain the reason for this, and whether there is a way to reconcile the two?
There is no discrepancy; just a misunderstanding. There are two separate "block sizes" here. Use ls -s --block-size=512 to use 512 byte block size for ls, too.
The ls -s command lists the size allocated to the file in user-specified units ("blocks"), the size of which you can specify using the --block-size option.
The st_blocks field in struct stat is in units of 512 bytes.
You see a discrepancy, because the two "block sizes" are not the same. They just happen to be called the same name.
Here is an example with which you can examine this effect. It works on all POSIXy/Unixy file systems that support sparse files, but not on FAT/VFAT etc.
First, let's create a file that is one megabyte long but has a hole at the beginning (the hole reads as zeros without actually being stored on disk) and a single byte at the end (I'll use 'X').
We do this by using dd to skip the first 1048575 bytes of the file, creating a "hole" and thus a sparse file on filesystems that support it:
printf 'X' | dd bs=1 seek=1048575 of=sparse-file count=1
We can use the stat utility to examine the file. Format specifier %s provides the logical size of the file (1048576), %b the number of blocks (st_blocks):
stat -c 'st_size=%s st_blocks=%b' sparse-file
On my system, I get st_size=1048576 st_blocks=8, because the actual filesystem block size is 4096 bytes (= 8×512), and this sparse file needs only one filesystem block.
However, using ls -s sparse-file I get 4 sparse-file, because the default ls block size is 1024 bytes. If I run
ls --block-size=512 -s sparse-file
then I see 8 sparse-file, as I'd expect.
"Blocks" here are not real filesystem blocks. They're convenient chunks for display.
st_blocks probably uses 512-byte blocks; see the POSIX spec.
st_blksize is the preferred block size for this file, but not necessarily the actual block size.
BSD ls -s always uses 512 byte "blocks". OS X, for example, uses BSD ls by default.
$ /bin/ls -s index.html
560 index.html
GNU ls appears to use 1K blocks unless overridden with --block-size.
$ /opt/local/bin/gls -s index.html
280 index.html
printf("%lld / %d\n", buf.st_blocks, buf.st_blksize); produces 560 / 4096. The 560 "blocks" are in 512 byte chunks, but the real filesystem blocks are 4k.
The file contains 284938 bytes of data...
$ ls -l index.html
-rw-r--r-- 1 schwern staff 284938 Aug 11 2016 index.html
...but we can see it uses 280K on disk, or 70 4K blocks.
Note that OS X further confuses the issue by using 1000 bytes for a "kilobyte" instead of the correct 1024 bytes; that's why it says 287 KB for 70 4096-byte blocks (i.e. 286,720 bytes) instead of 280 KB. This was done because hard drive manufacturers started using 1000-byte "kilobytes" in order to inflate their sizes, and Apple got tired of customers complaining about "lost" disk space.
The 4K block size can be seen by making a tiny file.
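For instance, a minimal check along the same lines as the sparse-file example above (the exact numbers assume a filesystem with 4096-byte blocks; the file name is arbitrary):
printf 'x' > tiny-file
stat -c 'st_size=%s st_blocks=%b' tiny-file   # expect st_size=1 st_blocks=8
ls -s --block-size=512 tiny-file              # expect: 8 tiny-file
One byte of data still occupies a whole 4 KiB filesystem block, i.e. eight of stat's 512-byte units.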

FAT32 number of files per directory limit

I'm currently trying to code a FAT system in C on a Xillinx Kintex 7 card. It is equipped with a MicroBlaze and I've already managed to create most of the required functions.
The problem I'm facing concerns the total capacity of a folder. I've read on the web that in FAT32 a folder should be able to contain more than 65,000 files, but with the system I've put in place I'm limited to 509 files per folder. I think it comes down to my understanding of how FAT32 works, but here's what I've made so far:
I've created a format function that writes the correct data in the MBR (sector 0) and the Volume ID (sector 2048 on my disk).
I've created a function that writes the content of the root directory (first cluster, which starts on sector 124148).
I've created a function that writes a new folder containing N files of size X. The name of the folder is written in the root directory (sector 124148) and the filenames are written on the next cluster (sector 124212, since I've set the cluster size to 64 sectors). Finally, the content of the files (a simple counter) is written on the next cluster, which starts on sector 124276.
The thing is that a folder has a size of 1 cluster, which means it has a capacity of 64 sectors = 32 KB, so I can create only 512 (minus 2) files in a directory! My question then is: is it possible to change the size of a folder in number of clusters? Currently I use only 1 cluster and I don't understand how to change that. Is it related to the FAT of the drive?
Thanks in advance for your help!
NOTE: My drive is recognized by Windows when I plug it in, I can access and read every file (except those beyond the 510 limit), and I can create new files through Windows Explorer. So the problem obviously comes from the way I understand file and folder creation!
A directory in the FAT filesystem is only a special type of file. So use more clusters for this "file" just as you would with any other file.
The cluster number of the root directory is stored at offset 0x2c of the FAT32 header and is usually cluster 2. The entry in the cluster map for cluster 2 contains the value 0x0FFFFFFF (end-of-clusters) if this is the only cluster for the root directory. You can use two clusters (for example cluster 2 and 3) for the root directory if you set cluster 3 in the cluster map as the next cluster for cluster 2 (set 0x00000003 as value for the entry of cluster 2 in the cluster map). Now, cluster 3 can either be the last cluster (by setting its entry to 0x0FFFFFFF) or can point in turn to another cluster, to make the space for the root directory even bigger.
The clusters do not need to be consecutive, but keeping them consecutive usually gives a performance gain on sequential reading (that's why defragmenting a volume can largely increase performance).
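To make the chaining concrete, here is a hedged sketch that appends cluster 3 to the chain of cluster 2 by patching the FAT directly. The image name and FAT location are placeholders for your own layout, and FAT32 normally keeps two copies of the FAT, so mirror the change into the second copy as well:
IMG=disk.img                      # hypothetical disk image
FAT_START_SECTOR=$((2048 + 32))   # hypothetical: volume start + reserved sectors
FAT_BYTES=$((FAT_START_SECTOR * 512))
# FAT32 entries are 4 bytes, little-endian; the entry for cluster N sits at FAT start + N*4.
# Point cluster 2 at cluster 3:
printf '\x03\x00\x00\x00' | dd of="$IMG" bs=1 seek=$((FAT_BYTES + 2*4)) conv=notrunc
# Mark cluster 3 as end-of-chain (0x0FFFFFFF):
printf '\xff\xff\xff\x0f' | dd of="$IMG" bs=1 seek=$((FAT_BYTES + 3*4)) conv=notrunc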
The maximum number of files within a directory of a FAT file system is 65,536 if all files have short filenames (8.3 format). Short filenames are stored in a single 32-byte entry.
That means the maximum size of a directory (file) is 65,536 × 32 bytes, i.e. 2,097,152 bytes.
Short filenames in 8.3 format consist of 8 characters, optionally followed by a "." and at most 3 more characters. The character set is limited. Short filenames that contain lower-case letters are additionally stored in a Long File Name entry.
If the filename is longer (a Long File Name), it is spread over multiple 32-byte entries. Each entry holds 13 characters of the filename; if the length of the filename is not a multiple of 13, the last entry is padded.
Additionally, there is one short-file-name entry for each file that has a Long File Name.
2 of the 32-byte entries are already taken by the "." and ".." entries in each directory (except the root).
1 32-byte entry is presumably taken as an end marker.
So the actual maximum number of files in a directory depends on the length of the filenames.
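As a rough rule of thumb from the above, a file with a long name of length L needs ceil(L / 13) LFN entries plus one short-name entry, 32 bytes each. A quick sanity check in the shell (the name is arbitrary):
name="my_tagged_document_2016.json"
len=${#name}
entries=$(( (len + 12) / 13 + 1 ))            # ceil(len/13) LFN entries + 1 short-name entry
echo "$name needs $entries directory entries ($((entries * 32)) bytes)"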

Header and structure of a tar format

I have a project for school which implies making a c program that works like tar in unix system. I have some questions that I would like someone to explain to me:
The size of the archive. I understood (from browsing the internet) that an archive has a defined number of blocks of 512 bytes each. So the header takes 512 bytes, then it's followed by the content of the file (if there's only one file to archive) organized in blocks of 512 bytes, then 2 more 512-byte blocks as the end-of-archive marker.
For example: let's say I have a txt file of 0 bytes to archive. That should mean 512 × 3 bytes in total. Why, when I create the archive with the tar utility on Unix and check its properties, is it 10,240 bytes? I think it adds some 0 (NULL) bytes, but I don't know where, why, and how many...
The header checksum. As far as I know, this should be the size of the archive. When I check it with hexdump -C, it appears as a number near the real size (from the file properties) of the archive, for example 11200 or 11205 or something similar if I archive a 0-byte txt file. Is this value in octal or decimal? My bet is that it's octal, because all the information you put in the header needs to be in octal. My second question at this point: what is added on top of the original 10,240 bytes?
Header mode. Let's say I have a file with mode 664 and file type 0; then I should put 0664 in the header. Why, in an authentic archive, are 3 more 0s printed at the start (0000664)?
There have been various versions of the tar format, and not all of the extensions to previous formats were always compatible with each other, so there's always a bit of guessing involved. For example, in very old Unix systems, file names were not allowed to have more than 14 bytes, so the space for the file name (including path) was plenty; later, with longer file names, it had to be extended, but there wasn't space, so the file name got split into 2 parts; even later, GNU tar introduced the @LongLink pseudo-symbolic links that would make older tars at least restore the file to its original name.
1) Tar was originally a Tape ARchiver. To achieve constant throughput to tapes and avoid starting/stopping the tape too much, several blocks needed to be written at once. 20 blocks of 512 bytes were the default, and the -b option is there to set the blocking factor. Very often this size was pre-defined by the hardware, and using the wrong blocking factor made the resulting tape unusable. This is why tar appends \0-filled blocks until the archive size is a multiple of the record size (blocking factor × 512 bytes), as the quick check below shows.
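A quick way to see the blocking at work, assuming GNU tar and its default blocking factor of 20 (file names are examples):
: > empty.txt                     # a 0-byte file
tar -cf empty.tar empty.txt
ls -l empty.tar                   # typically 10240 bytes = 20 blocks of 512
tar -b 1 -cf small.tar empty.txt  # blocking factor 1
ls -l small.tar                   # typically 1536 bytes: 1 header block + 2 end-of-archive blocks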
2) The file size is in octal, and contains the true size of the original file that was put into the tar. It has nothing to do with the size of the tar file.
The checksum is calculated from the sum of the header bytes, but it is then stored in the header as well, so the act of storing the checksum would change the header and thus invalidate the checksum. That's why you fill in all the other header fields first, set the checksum field to spaces, calculate the checksum, and then replace the spaces with the calculated value.
Note that the header of a tarred file is pure ASCII. That way, in the old days, when a tar file (whose components were plain ASCII) got corrupted, an admin could just open the tar file with an editor and restore the components manually. That's also why the designers of the tar format were wary of \0 bytes and used spaces instead.
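You can peek at those octal ASCII fields directly. In the ustar header layout the 12-byte size field starts at offset 124 and the 8-byte checksum field at offset 148 (this uses the empty.tar from the sketch above):
dd if=empty.tar bs=1 skip=124 count=12 2>/dev/null; echo   # size field, octal ASCII
dd if=empty.tar bs=1 skip=148 count=8  2>/dev/null; echo   # checksum field, octal ASCII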
3) Tar files can store block devices, character devices, directories and such stuff. Unix stores these file modes in the same place as the permission flags, and the header file mode contains the whole file mode, including file type bits. That's why the number is longer than the pure permission.
There's a lot of information at http://en.wikipedia.org/wiki/Tar_%28computing%29 as well.

How to traverse a FAT directory

I am trying to understand how a FAT file system works. From the attached first sector of a FAT16 partition I could work out the following:
Bytes per sector = 512.
Sectors per cluster = 4.
FAT 16 file system.
reserved sectors = 4.
FAT table count = 2.
Number of entries in root directory = 512.
Total sectors = 204800.
Root dir sector = 32.
Size of FAT table = 200.
First data sector = 436 (4 + 2 * 200 + 32).
Cluster count = 51091.
Root directory is at 404th sector (0x32800th byte)
The root directory at address 0x32800 is attached. It contains two folders named a and b and one file named file.txt. In the dump above, how do I distinguish between a file and a folder?
Doubts listed below:
1. A folder entry should start with 0x2E, but there is no such value. So how do I find out whether a given entry is a file or a folder?
2. Each entry in the root directory occupies 64 bytes (instead of 32 bytes); there seem to be two 32-byte entries for each file and folder. For example, folder 'a' has entries at 0x32800 and 0x32820 (64 bytes in total).
3. What does the value 0x41 denote in this context? It appears at 0x32800, 0x32820, 0x32840 and 0x32880, while the values at 0x32860 and 0x328A0 are different.
4. Offset 0x1A from address 0x32800 (0x32800 + 0x1A = 0x3281A) holds the value 0, while offset 0x1A from address 0x32820 (0x32820 + 0x1A = 0x3283A) holds the value 3. Which is the correct cluster number for folder 'a'?
No, folder entries do NOT start with "." (0x2E) unless they are for the . and .. entries of subdirectories (these aren't in the root). The dirent's attributes byte has the 0x10 bit set if the dirent is a directory.
You are also looking at a directory that has long file names. The original FAT file system specification only allowed 11 character names that were all upper case and were in the OEM codepage. Windows 95 extended this. It's pretty complicated to explain on stackoverflow how this works. I suggest looking at the MSDN documentation for LFN or Long File Names.
http://technet.microsoft.com/en-us/library/cc938438.aspx
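A hedged sketch of the attribute check using the numbers from the question (the image name is a placeholder): each 32-byte entry has an attribute byte at offset 0x0B; bit 0x10 marks a directory, and the special value 0x0F marks a long-file-name entry. The 0x41 at the start of some entries is an LFN sequence byte (0x40 | 1), not an attribute, and the 16-bit word at offset 0x1A of the short-name entry is the low half of the first cluster number.
IMG=fat16.img                          # hypothetical image of the FAT16 volume
ENTRY=$((0x32800))                     # offset of one 32-byte directory entry
attr=$(dd if="$IMG" bs=1 skip=$((ENTRY + 0x0B)) count=1 2>/dev/null | od -An -tu1 | tr -d ' ')
if [ "$attr" -eq 15 ]; then
    echo "long-file-name entry"
elif [ $(( attr & 0x10 )) -ne 0 ]; then
    echo "directory"
else
    echo "regular file"
fi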
A FAT file system allocates every file at least one unit of a fixed basic size (a cluster); if the file is larger than that, it chains additional clusters to hold the entire file.
The point here is that a FAT file system is mainly fine if you have a lot of disk space; otherwise I would recommend using an NTFS file system if possible. Also, the image you're showing looks like boot-record data for a floppy drive.

How many files can I put in a directory?

Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.)
Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata is stored in a database. Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user.
I've thought about reducing the number of images by making 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on what the first hex digit of the filename was. But I'm not sure that there's any reason to do so except for the occasional listing of the directory through FTP/SSH.
FAT32:
  Maximum number of files: 268,173,300
  Maximum number of files per directory: 2^16 - 1 (65,535)
  Maximum file size: 2 GiB - 1 without LFS, 4 GiB - 1 with
NTFS:
  Maximum number of files: 2^32 - 1 (4,294,967,295)
  Maximum file size
    Implementation: 2^44 - 2^6 bytes (16 TiB - 64 KiB)
    Theoretical: 2^64 - 2^6 bytes (16 EiB - 64 KiB)
  Maximum volume size
    Implementation: 2^32 - 1 clusters (256 TiB - 64 KiB)
    Theoretical: 2^64 - 1 clusters (1 YiB - 64 KiB)
ext2:
  Maximum number of files: 10^18
  Maximum number of files per directory: ~1.3 × 10^20 (performance issues past 10,000)
  Maximum file size
    16 GiB (block size of 1 KiB)
    256 GiB (block size of 2 KiB)
    2 TiB (block size of 4 KiB)
    2 TiB (block size of 8 KiB)
  Maximum volume size
    4 TiB (block size of 1 KiB)
    8 TiB (block size of 2 KiB)
    16 TiB (block size of 4 KiB)
    32 TiB (block size of 8 KiB)
ext3:
  Maximum number of files: min(volumeSize / 2^13, numberOfBlocks)
  Maximum file size: same as ext2
  Maximum volume size: same as ext2
ext4:
  Maximum number of files: 2^32 - 1 (4,294,967,295)
  Maximum number of files per directory: unlimited
  Maximum file size: 2^44 - 1 bytes (16 TiB - 1)
  Maximum volume size: 2^48 - 1 bytes (256 TiB - 1)
I have had over 8 million files in a single ext3 directory, listing them with libc readdir(), which is what find, ls and most of the other methods discussed in this thread use.
The reason ls and find are slow in this case is that readdir() only reads 32K of directory entries at a time, so on slow disks it will require many many reads to list a directory. There is a solution to this speed problem. I wrote a pretty detailed article about it at: http://www.olark.com/spw/2011/08/you-can-list-a-directory-with-8-million-files-but-not-with-ls/
The key takeaway: use getdents() directly -- http://www.kernel.org/doc/man-pages/online/pages/man2/getdents.2.html -- rather than anything based on libc readdir(), so you can specify the buffer size when reading directory entries from disk.
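Not the getdents() approach from the article, but a quick shell-level mitigation when you only need names or a count: tell ls to skip sorting (and avoid options like -l or --color that stat every entry). The path is a placeholder.
ls -1 -f /path/to/hugedir | head       # unsorted, starts streaming immediately
ls -1 -f /path/to/hugedir | wc -l      # rough count (includes . and ..)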
I have a directory with 88,914 files in it. Like yourself this is used for storing thumbnails and on a Linux server.
Listing the files via FTP or a PHP function is slow, yes, but there is also a performance hit on serving each file: e.g. www.website.com/thumbdir/gh3hg4h2b4h234b3h2.jpg has a wait time of 200-400 ms. By comparison, on another site of mine with around 100 files in a directory, the image is displayed after just ~40 ms of waiting.
I've given this answer because most people have just described how directory-search functions will perform, which you won't be using on a thumb folder - you'll just be statically serving files - but you will be interested in the performance of actually using those files.
It depends a bit on the specific filesystem in use on the Linux server. Nowadays the default is ext3 with dir_index, which makes searching large directories very fast.
So speed shouldn't be an issue, other than the one you already noted, which is that listings will take longer.
There is a limit to the total number of files in one directory. I seem to remember it definitely working up to 32000 files.
Keep in mind that on Linux if you have a directory with too many files, the shell may not be able to expand wildcards. I have this issue with a photo album hosted on Linux. It stores all the resized images in a single directory. While the file system can handle many files, the shell can't. Example:
-shell-3.00$ ls A*
-shell: /bin/ls: Argument list too long
or
-shell-3.00$ chmod 644 *jpg
-shell: /bin/chmod: Argument list too long
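Hedged workarounds for exactly this situation: have find build the argument lists, or feed the names through xargs, instead of passing one giant expansion to a single exec call.
find . -maxdepth 1 -name 'A*' -exec ls -l {} +
find . -maxdepth 1 -name '*.jpg' -exec chmod 644 {} +
# printf is a shell builtin, so it is not subject to the exec argument limit:
printf '%s\0' *.jpg | xargs -0 chmod 644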
I'm working on a similar problem right now. We have a hierarchical directory structure and use image ids as filenames. For example, an image with id=1234567 is placed in
..../45/67/1234567_<...>.jpg
using last 4 digits to determine where the file goes.
With a few thousand images, you could use a one-level hierarchy. Our sysadmin suggested no more than a couple of thousand files in any given directory (ext3) for efficiency / backup / whatever other reasons he had in mind.
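A minimal sketch of that scheme (the base directory and file suffix are placeholders; only the id-to-path mapping comes from the description above):
id=1234567
last4=$(printf '%04d' $(( id % 10000 )))   # e.g. 4567
dir="images/${last4:0:2}/${last4:2:2}"     # e.g. images/45/67
mkdir -p "$dir"
# mv "uploads/${id}_some-name.jpg" "$dir/"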
For what it's worth, I just created a directory on an ext4 file system with 1,000,000 files in it, then randomly accessed those files through a web server. I didn't notice any premium on accessing those over (say) only having 10 files there.
This is radically different from my experience doing this on ntfs a few years back.
I've been having the same issue, trying to store millions of files on an Ubuntu server in ext4. I ended up running my own benchmarks and found that a flat directory performs way better while being way simpler to use:
Wrote an article.
The biggest issue I've run into is on a 32-bit system. Once you pass a certain number, tools like 'ls' stop working.
Trying to do anything with that directory once you pass that barrier becomes a huge problem.
It really depends on the filesystem used, and also some flags.
For example, ext3 can have many thousands of files, but after a couple of thousand it used to be very slow - mostly when listing a directory, but also when opening a single file. A few years ago it gained the 'htree' option, which dramatically shortened the time needed to get an inode given a filename.
Personally, I use subdirectories to keep most levels under a thousand or so items. In your case, I'd create 256 directories named after the last two hex digits of the ID. Use the last digits rather than the first, so the load is balanced (see the sketch below).
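For the 8-hex-digit names from the question, that could look something like this (the images/ base directory is a placeholder):
f=a58f375c.jpg
base=${f%.jpg}
sub=${base: -2}            # last two hex digits, e.g. "5c"
mkdir -p "images/$sub"
mv "$f" "images/$sub/"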
If the time involved in implementing a directory partitioning scheme is minimal, I am in favor of it. The first time you have to debug a problem that involves manipulating a 10000-file directory via the console you will understand.
As an example, F-Spot stores photo files as YYYY\MM\DD\filename.ext, which means the largest directory I have had to deal with while manually manipulating my ~20000-photo collection is about 800 files. This also makes the files more easily browsable from a third party application. Never assume that your software is the only thing that will be accessing your software's files.
It absolutely depends on the filesystem. Many modern filesystems use decent data structures to store the contents of directories, but older filesystems often just added the entries to a list, so retrieving a file was an O(n) operation.
Even if the filesystem does it right, it's still absolutely possible for programs that list directory contents to mess up and do an O(n^2) sort, so to be on the safe side, I'd always limit the number of files per directory to no more than 500.
ext3 does in fact have directory size limits, and they depend on the block size of the filesystem. There isn't a per-directory "max number" of files, but a per-directory "max number of blocks used to store file entries". Specifically, the size of the directory itself can't grow beyond a b-tree of height 3, and the fanout of the tree depends on the block size. See this link for some details.
https://www.mail-archive.com/cwelug@googlegroups.com/msg01944.html
I was bitten by this recently on a filesystem formatted with 2K blocks, which was inexplicably logging "ext3_dx_add_entry: Directory index full!" kernel warnings when I was copying from another ext3 filesystem. In my case, a directory with a mere 480,000 files could not be copied to the destination.
"Depends on filesystem"
Some users mentioned that the performance impact depends on the filesystem used. Of course. Filesystems like ext3 can be very slow. But even if you use ext4 or XFS, you cannot prevent listing a folder through ls or find, or through an external connection like FTP, from getting slower and slower.
Solution
I prefer the same approach as #armandino. For that I use this little PHP function to convert IDs into a file path that results in 1000 files per directory:
function dynamic_path($int) {
    // 1000 = 1000 files per dir
    // 10000 = 10000 files per dir
    // 2 = 100 dirs per dir
    // 3 = 1000 dirs per dir
    return implode('/', str_split((string) ceil($int / 1000), 2)) . '/';
}
or you could use the second version if you want to use alpha-numeric characters:
function dynamic_path2($str) {
    // 26 alpha + 10 num + 3 special chars (._-) = 39 combinations
    // -1 = 39^2 = 1521 files per dir
    // -2 = 39^3 = 59319 files per dir (if every combination exists)
    $left = substr($str, 0, -1);
    return implode('/', str_split($left ? $left : $str[0], 2)) . '/';
}
results:
<?php
$files = explode(',', '1.jpg,12.jpg,123.jpg,999.jpg,1000.jpg,1234.jpg,1999.jpg,2000.jpg,12345.jpg,123456.jpg,1234567.jpg,12345678.jpg,123456789.jpg');
foreach ($files as $file) {
    echo dynamic_path(basename($file, '.jpg')) . $file . PHP_EOL;
}
?>
1/1.jpg
1/12.jpg
1/123.jpg
1/999.jpg
1/1000.jpg
2/1234.jpg
2/1999.jpg
2/2000.jpg
13/12345.jpg
12/4/123456.jpg
12/35/1234567.jpg
12/34/6/12345678.jpg
12/34/57/123456789.jpg
<?php
$files = array_merge($files, explode(',', 'a.jpg,b.jpg,ab.jpg,abc.jpg,ddd.jpg,af_ff.jpg,abcd.jpg,akkk.jpg,bf.ff.jpg,abc-de.jpg,abcdef.jpg,abcdefg.jpg,abcdefgh.jpg,abcdefghi.jpg'));
foreach ($files as $file) {
    echo dynamic_path2(basename($file, '.jpg')) . $file . PHP_EOL;
}
?>
1/1.jpg
1/12.jpg
12/123.jpg
99/999.jpg
10/0/1000.jpg
12/3/1234.jpg
19/9/1999.jpg
20/0/2000.jpg
12/34/12345.jpg
12/34/5/123456.jpg
12/34/56/1234567.jpg
12/34/56/7/12345678.jpg
12/34/56/78/123456789.jpg
a/a.jpg
b/b.jpg
a/ab.jpg
ab/abc.jpg
dd/ddd.jpg
af/_f/af_ff.jpg
ab/c/abcd.jpg
ak/k/akkk.jpg
bf/.f/bf.ff.jpg
ab/c-/d/abc-de.jpg
ab/cd/e/abcdef.jpg
ab/cd/ef/abcdefg.jpg
ab/cd/ef/g/abcdefgh.jpg
ab/cd/ef/gh/abcdefghi.jpg
As you can see, in the $int version every folder contains up to 1000 files plus up to 99 directories, each of which again holds up to 1000 files and 99 directories, and so on.
But do not forget that too many directories cause the same performance problems!
Finally you should think about how to reduce the total number of files. Depending on your use case you can use CSS sprites to combine multiple tiny images like avatars, icons, smilies, etc., or, if you have many small non-media files, consider combining them, e.g. in JSON format. In my case I had thousands of mini-caches and finally decided to combine them in packs of 10.
The question comes down to what you're going to do with the files.
Under Windows, any directory with more than 2k files tends to open slowly for me in Explorer. If they're all image files, more than 1k tend to open very slowly in thumbnail view.
At one time, the system-imposed limit was 32,767. It's higher now, but even that is way too many files to handle at one time under most circumstances.
What most of the answers above fail to show is that there is no "One Size Fits All" answer to the original question.
In today's environment we have a large conglomerate of different hardware and software -- some is 32 bit, some is 64 bit, some is cutting edge and some is tried and true - reliable and never changing.
Added to that is a variety of older and newer hardware, older and newer OSes, different vendors (Windows, Unixes, Apple, etc.) and a myriad of utilities and servers that go along.
As hardware has improved and software is converted to 64 bit compatibility, there has necessarily been considerable delay in getting all the pieces of this very large and complex world to play nicely with the rapid pace of changes.
IMHO there is no one way to fix a problem. The solution is to research the possibilities and then by trial and error find what works best for your particular needs. Each user must determine what works for their system rather than using a cookie cutter approach.
I, for example, have a media server with a few very large files. The result is only about 400 files filling a 3 TB drive. Only 1% of the inodes are used, but 95% of the total space is used. Someone else, with a lot of smaller files, may run out of inodes before coming near to filling the space. (On ext4 filesystems, as a rule of thumb, one inode is used for each file or directory.)
While theoretically the total number of files that may be contained within a directory is nearly infinite, in practice the overall usage determines realistic limits, not just filesystem capabilities.
I hope that all the different answers above have promoted thought and problem solving rather than presenting an insurmountable barrier to progress.
I ran into a similar issue. I was trying to access a directory with over 10,000 files in it. It was taking too long to build the file list and run any type of command on any of the files.
I came up with a little PHP script to do this for myself and tried to figure out a way to prevent it from timing out in the browser.
The following is the PHP script I wrote to resolve the issue:
Listing Files in a Directory with too many files for FTP
Hope it helps someone.
I recall running a program that was creating a huge number of files at its output. The files were sorted at 30,000 per directory. I do not recall having any read problems when I had to reuse the produced output. It was on a 32-bit Ubuntu Linux laptop, and even Nautilus displayed the directory contents, albeit after a few seconds.
ext3 filesystem: similar code on a 64-bit system dealt well with 64,000 files per directory.
I respect this doesn't totally answer your question as to how many is too many, but an idea for solving the long term problem is that in addition to storing the original file metadata, also store which folder on disk it is stored in - normalize out that piece of metadata. Once a folder grows beyond some limit you are comfortable with for performance, aesthetic or whatever reason, you just create a second folder and start dropping files there...
Not an answer, but just some suggestions.
Select a more suitable FS (file system). From a historical point of view, all of your issues were once significant enough to be central to filesystems evolving over decades, so more modern filesystems handle them better. Start by making a comparison table from a list of filesystems, based on your ultimate purpose.
I think it's time to shift paradigms, so I personally suggest using a distributed-system-aware FS, which means no limits at all regarding size, number of files, and so on. Otherwise you will sooner or later be challenged by new, unanticipated problems.
I'm not sure this will work, but if you don't mind some experimentation, give AUFS a try over your current file system. I guess it has facilities to mimic multiple folders as a single virtual folder.
To overcome hardware limits you can use RAID-0.
There is no single figure that is "too many", as long as it doesn't exceed the limits of the OS. However, the more files in a directory, regardless of the OS, the longer it takes to access any individual file, and on most OSes the performance is non-linear, so finding one file out of 10,000 takes more than 10 times longer than finding a file among 1,000.
Secondary problems associated with having a lot of files in a directory include wild card expansion failures. To reduce the risks, you might consider ordering your directories by date of upload, or some other useful piece of metadata.
≈ 135,000 FILES
NTFS | WINDOWS 2012 SERVER | 64-BIT | 4TB HDD | VBS
Problem: Catastrophic hardware issues appear when a [single] specific folder amasses roughly 135,000 files.
"Catastrophic" = CPU Overheats, Computer Shuts Down, Replacement Hardware needed
"Specific Folder" = has a VBS file that moves files into subfolders
Access = the folder is automatically accessed/executed by several client computers
Basically, I have a custom-built script that sits on a file server. When something goes wrong with the automated process (ie, file spill + dam) then the specific folder gets flooded [with unmoved files]. The catastrophe takes shape when the client computers keep executing the script. The file server ends up reading through 135,000+ files; and doing so hundreds of times each day. This work-overload ends up overheating my CPU (92°C, etc.); which ends up crashing my machine.
Solution: Make sure your file-organizing scripts never have to deal with a folder that has 135,000+ files.
flawless,
flawless,
absolutely flawless :
( G. M. - RIP )
# ff DELIM FILE: strip DELIM and everything after it from FILE, split what is
# left into one directory level per character, and print the resulting path.
ff () {
    d=$1; f=$2
    p=$( echo "$f" | sed "s/$d.*//; s,\(.\),&/,g; s,/$,," )
    echo "$p/$f"
}
ff _D_ 09748abcGHJ_D_my_tagged_doc.json
0/9/7/4/8/a/b/c/G/H/J/09748abcGHJ_D_my_tagged_doc.json
ff - gadsf12-my_car.json
g/a/d/s/f/1/2/gadsf12-my_car.json
and also this
ff _D_ 0123456_D_my_tagged_doc.json
0/1/2/3/4/5/6/0123456_D_my_tagged_doc.json
ff .._D_ 0123456_D_my_tagged_doc.json
0/1/2/3/4/0123456_D_my_tagged_doc.json
enjoy !
