Does tmpfs automatically resize when the amount of RAM changes? - filesystems

If I have a tmpfs set to 50%, and later on I add or remove RAM, does tmpfs automatically adjust its size?
Also, what if I have multiple tmpfs mounts, each set to 50%? Do they compete against each other for the same 50%? How does the OS manage this?
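For reference, a minimal sketch of the setup being asked about (the mount points are hypothetical). Each tmpfs mount carries its own independent size limit; the limit is only a cap, not a reservation, so memory is consumed only as files are actually written, and the cap can be changed later with a remount:

# two independent tmpfs mounts, each capped at 50% of RAM (illustrative paths)
mount -t tmpfs -o size=50% tmpfs /mnt/a
mount -t tmpfs -o size=50% tmpfs /mnt/b

# the cap can be adjusted on a live mount without unmounting
mount -o remount,size=8G /mnt/a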

Related

AOSP: How to change size of file systems?

When I do df -h, I can see that /dev/block/dm-2 is mounted on /vendor, /dev/block/dm-0 on / (system, I guess?), etc., as shown below.
Filesystem             Size  Used Avail Use% Mounted on
tmpfs                  978M  816K  978M   1% /dev
tmpfs                  978M     0  978M   0% /mnt
/dev/block/mmcblk2p11   11M  144K   11M   2% /metadata
/dev/block/dm-0        934M  931M  2.8M 100% /
/dev/block/dm-2        228M  227M  708K 100% /vendor
As can be seen, both the vendor and system partitions are almost full. How can I increase the size of both file systems?
The device may have dynamic partitions enabled. Have a look at: https://source.android.com/devices/tech/ota/dynamic_partitions/implement?hl=en.
With dynamic partitions, vendors no longer have to worry about the individual sizes of partitions such as system, vendor, and product. Instead, the device allocates a super partition, and sub-partitions can be sized dynamically within it. Individual partition images no longer have to leave empty space for future OTAs. Instead, the remaining free space in super is available for all dynamic partitions.
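For illustration, the board-level configuration that page describes looks roughly like the sketch below; the group name and sizes are purely illustrative and not taken from the device in the question:

# BoardConfig.mk sketch for dynamic partitions (values are illustrative)
BOARD_SUPER_PARTITION_SIZE := 4294967296            # total size of the super partition
BOARD_SUPER_PARTITION_GROUPS := example_dynamic_partitions
BOARD_EXAMPLE_DYNAMIC_PARTITIONS_SIZE := 4290772992 # budget for the group, must fit inside super
BOARD_EXAMPLE_DYNAMIC_PARTITIONS_PARTITION_LIST := system vendor product

# and in the product makefile:
PRODUCT_USE_DYNAMIC_PARTITIONS := true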

Do files in /dev/shm take up memory when grown with ftruncate but not written?

I'm using mmap to create shared memory segments, and I'm wondering if I can pre-create all the segments I'm going to possibly use in /dev/shm without triggering any memory use. The reason I suspect this may be possible is that most filesystems have a concept of all-zero 'hole' pages: if you grow a file before writing to it, the file doesn't really take up space because of these holes. But is this true for tmpfs (the filesystem behind /dev/shm)? Can I go wild making large files in /dev/shm without triggering memory use as long as I don't write to them?
On Linux, the tmpfs file system supports sparse files. Just resizing the file does not allocate memory (beyond the internal tmpfs data structures). Just like with regular file systems which support sparse files (files with holes), you either have to actually write data or use fallocate to allocate backing storage. As far as I can see, this has been this way since the Linux 2.6 days.
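As a minimal illustration of that answer (not code from the question; the file name is made up), growing a tmpfs file with ftruncate leaves it sparse, and pages are only allocated when they are written or explicitly reserved with fallocate:

/* Sketch: grow a file on /dev/shm without committing memory.
 * Assumes a Linux tmpfs mounted at /dev/shm; the file name is made up. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const long size = 1L << 30;                     /* 1 GiB */
    int fd = open("/dev/shm/demo-segment", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Grow the file: on tmpfs this only creates a hole; no pages are
     * allocated until they are written (or reserved with fallocate). */
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return EXIT_FAILURE; }

    /* To actually reserve backing pages up front you would call
     * fallocate(fd, 0, 0, size) instead of relying on the hole. */

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    p[0] = 1;            /* first write: now exactly one page is allocated */

    munmap(p, size);
    close(fd);
    return 0;
}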

RAMDISK data transfers in C (fread)

Right, so I'm trying to optimize a piece of software that needs to read a huge image file (1.3 GB) in C/OpenCL in order to transfer it to the device in 40 MB blocks.
I created a RAMDISK with tmpfs to store the file, but when I measure transfer rates I find that reading the image file from the RAMDISK is actually a bit slower than reading it from my SSD.
So I'm wondering: does the read path (fopen/fread) do a RAM-to-RAM transfer to fill the buffer? Or is it the filesystem's overhead that causes this performance issue?
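As background, here is a sketch of the kind of read loop being described (the path is hypothetical; this is not the asker's code). fread() copies each block from the kernel page cache, which for a tmpfs file is the file's backing RAM itself, into the user buffer, so even a RAM disk pays a RAM-to-RAM copy per block on top of stdio and filesystem overhead:

#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE (40UL * 1024 * 1024)   /* 40 MB transfer unit */

int main(void)
{
    FILE *f = fopen("/mnt/ramdisk/image.raw", "rb");
    if (!f) { perror("fopen"); return EXIT_FAILURE; }

    char *buf = malloc(BLOCK_SIZE);
    if (!buf) { fclose(f); return EXIT_FAILURE; }

    size_t n;
    while ((n = fread(buf, 1, BLOCK_SIZE, f)) > 0) {
        /* ... hand the n bytes to OpenCL here, e.g. via clEnqueueWriteBuffer ... */
    }

    free(buf);
    fclose(f);
    return 0;
}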

NAND jffs2 file system - binary & text files can exceed the size of NAND

I am writing an embedded application based on an ARM9 v5 processor, and am using 64 MB of NAND. My problem is that when I copy text or binary files of 3-4 MB, the free physical memory is reduced by only a few KB, whereas ls -l shows the file size in MB.
By repeating the same process I reached a point where df shows 10 MB free, while du shows the total size as 239 MB.
I have only 64 MB of NAND, so how am I able to add files totalling 239 MB?
JFFS2 is a compressed filesystem, so it keeps the files compressed on the flash, which explains the apparent conflict: du lists the disk usage of the files, while df reports the capacity used and available as seen by the filesystem.
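A quick way to see the two views side by side (the mount point below is hypothetical):

du -sh /mnt/jffs2    # sums per-file usage, so it can exceed the physical flash size
df -h  /mnt/jffs2    # reports what the filesystem itself has used and has free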

How much do modern filesystems reserve for each block group?

While reading about the Unix FFS, I've seen that 10% of the disk space is reserved so that files' data blocks can be kept in the same cylinder group. Is this still true of filesystems like ext2/ext3: is space reserved so that a file's data blocks can all be in the same block group? Is it also 10%, or does it vary? And is the same true for journaling filesystems as well? Thank you.
First of all, ext filesystems implement the same notion as a cylinder group; they just call it a block group. To find out about it, run dumpe2fs -h (or tune2fs -l) on the partition to get the actual block count and the blocks-per-group figure; the number of block groups is then block count / blocks per group. They are used in exactly the same way as FFS cylinder groups (to speed up access times).
Now journaling, IMO, has nothing to do with this, except that it actually wastes some more space on your disk :). As far as I understand, soft updates (the BSD solution to the problem that a journal solves in typical ext filesystems) don't require extra space, but they are tremendously complex to implement and to add new features to (like resizing).
An interesting read:
ext3 overhead disclosed part 1
Cheers!
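To make that computation concrete, here is a sketch using dumpe2fs; the device name and the numbers in the sample output are purely illustrative:

dumpe2fs -h /dev/sda1 | grep -E 'Block count|Blocks per group'
# Block count:        262144
# Blocks per group:   32768
# number of block groups = 262144 / 32768 = 8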
My data for fresh ext2 images are:
Size   Block size  Bl/Gr  Total bytes  Free bytes   Overhead ratio
1MB    1024        8192   1048576      1009664      0.03710
10MB   1024        8192   10485760     10054656     0.04111
100MB  1024        8192   104857600    99942400     0.04688
512M   4096        32768  536870912    528019456    0.01649
1G     4096        32768  1073741824   1055543296   0.01695
10G    4096        32768  10737418240  10545336320  0.01789
So it is quite predictable that the space overhead of an ext2 filesystem depends on the block size, due to the layout described in the answer above: the filesystem is a set of block groups, and each group's size is the number of blocks that can be described by a one-block bitmap, i.e. 8 * 4096 = 32768 blocks for a 4096-byte block.
Conclusion: for the ext2/ext3 family of filesystems, the average default space consumption depends on the block size:
~1.6-1.8% for 4096-byte blocks, ~4% for 1024-byte blocks
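For anyone who wants to reproduce one of the rows above, a sketch using a loopback image (the file and mount-point names are made up):

dd if=/dev/zero of=ext2.img bs=1M count=1024        # 1 GiB image
mkfs.ext2 -F -b 4096 ext2.img                       # 4096-byte blocks
mkdir -p /mnt/ext2test && mount -o loop ext2.img /mnt/ext2test
df -B1 /mnt/ext2test                                # inspect total/used/free in bytes
umount /mnt/ext2test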
