Background: as part of trying to ensure complete coverage in my unit tests, I'm stress-testing some overflow checks.
So, I tried to create a 16 EiB sparse file.
However, on Linux/Btrfs this seems to fail at 8 EiB (2^63 bytes) rather than 16 EiB (2^64 bytes), likely because off_t is signed.
Is there any OS/filesystem combo which actually, rather than theoretically, allows creating such files?
(Strictly, each of the sizes above is one byte less, i.e. 2^63 − 1 and 2^64 − 1.)
This works:
dd if=/dev/zero of=sparse_file bs=1 count=0 seek=9223372036854775807
This fails:
dd if=/dev/zero of=sparse_file bs=1 count=0 seek=9223372036854775808
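For reference, the same ceiling can be probed from C. A minimal sketch, assuming Linux and a 64-bit off_t (ftruncate and lseek both take a signed off_t, so 2^63 − 1 is the largest size one can even request):

#define _FILE_OFFSET_BITS 64
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("sparse_file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ftruncate(fd, INT64_MAX) != 0)  /* 2^63 - 1 bytes */
        perror("ftruncate");
    close(fd);
    return 0;
}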
Related
I have a Hex file for the STM32F427 that was built with GCC (gcc-arm-none-eabi) version 4.6 and has contiguous memory addresses. I wrote a boot loader for loading that hex file and also added a checksum capability to make sure the Hex file is correct before starting the application.
Snippet of Hex file:
:1005C80018460AF02FFE07F5A64202F1D00207F5F9
:1005D8008E4303F1A803104640F6C821C2F2000179
:1005E8001A460BF053F907F5A64303F1D003184652
:1005F8000BF068F907F5A64303F1E80340F6FC1091
:10060800C2F2000019463BF087FF07F5A64303F145
:10061800E80318464FF47A710EF092FC07F5A643EA
:1006280003F1E80318460EF03DFC034607F5A64221
:1006380002F1E0021046194601F0F2FC07F56A5390
As you can see, all the addresses are sequential. Then we changed the compiler to version 4.8 and I got the same type of Hex file.
But now we have moved to compiler version 6.2, and the generated Hex file is not contiguous. It looks somewhat like this:
:10016000B9BC0C08B9BC0C08B9BC0C08B9BC0C086B
:10017000B9BC0C08B9BC0C08B9BC0C08B9BC0C085B
:08018000B9BC0C08B9BC0C0865
:1001900081F0004102E000BF83F0004330B54FEA38
:1001A00041044FEA430594EA050F08BF90EA020FA5
As you can see, the record before the gap ends at 0x0187 and the next one starts at 0x0190, which means the 8 bytes in between (0x0188 to 0x018F) read as 0xFF, since they are never flashed.
Now, the boot loader is fairly dumb: we just pass it the starting address and the number of bytes, and it computes the checksum over that whole range (a sketch of what such a routine might look like follows below).
Is there a way to make the hex file contiguous, as compilers 4.6 and 4.8 did? The code is the same in all three cases.
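Roughly, the boot loader's checksum routine behaves like this (a hypothetical sketch; flash_checksum and the plain byte sum are my illustration, not the real code):

#include <stdint.h>

/* Sum every byte in [start, start + nbytes): unprogrammed gaps read
 * back as 0xFF and therefore contribute 0xFF each, so the expected
 * checksum must be computed from a hex file padded the same way. */
uint32_t flash_checksum(const volatile uint8_t *start, uint32_t nbytes)
{
    uint32_t sum = 0;
    while (nbytes--)
        sum += *start++;
    return sum;
}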
If post-processing the hex file is an option, consider using the IntelHex Python library. It lets you manipulate the hex file as data (ignoring the 'markup': record type, address, checksum, etc.) rather than as lines, and will, for instance, emit output with correct line checksums.
A fast way to get this up and running could be to use the bundled convenience scripts hex2bin.py and bin2hex.py:
python hex2bin.py --pad=FF noncontiguous.hex tmp.bin
python bin2hex.py tmp.bin contiguous.hex
The first line converts the input file noncontiguous.hex to a binary file, padding with FF wherever there is no data. The second line converts the binary file back to a hex file.
The result would be
:08018000B9BC0C08B9BC0C0865
becomes
:10018000B9BC0C08B9BC0C08FFFFFFFFFFFFFFFF65
As you can see, padding bytes are added where the input doesn't have any data, equivalent to writing the input file to the device and reading it back out. Bytes that are in the input file are kept the same - and at the same address.
The checksum also stays correct: changing the length byte from 0x08 to 0x10 adds 0x08 to the byte sum, and the eight 0xFF pad bytes add 0x7F8, for a total of 0x800, which is 0 modulo 0x100, so the checksum byte is unchanged. If you padded with something else, IntelHex would still output the correct checksum.
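For reference, the record checksum is just the two's complement of the byte sum, so any change whose bytes sum to a multiple of 0x100 leaves it untouched. A minimal C sketch of that rule (the layout comes from the Intel HEX format itself, not from the IntelHex library):

#include <stddef.h>
#include <stdint.h>

/* Intel HEX record checksum: two's complement of the sum of all
 * record bytes (length, address, type, data), truncated to 8 bits. */
uint8_t ihex_checksum(const uint8_t *record, size_t n)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += record[i];
    return (uint8_t)-sum;
}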
You can skip the creation of a temporary file by piping the two commands: omit tmp.bin from the first command and replace it with - in the second:
python hex2bin.py --pad=FF noncontiguous.hex | python bin2hex.py - contiguous.hex
An alternative could be to have a base file of all FF bytes and use the hexmerge.py convenience script to merge gcc's output onto it with --overlap=replace.
The longer, more flexible way would be to implement your own tool using the IntelHex API. I've used this to good effect in situations similar to yours: tweaking hex files to satisfy tools that are costly to change but only handle hex files the way they looked when the tool was written.
One of many possible ways:
Make your hex file with v6.2, e.g., foo.hex.
Postprocess it with this Perl one-liner:
perl -pe 'if (m/^:(..)(....00.*)(..)\r?$/ && hex($1) < 16) { my $pad = 16 - hex($1); my $body = "10" . $2 . ("FF" x $pad); my $sum = 0; $sum += hex $1 while $body =~ /(..)/g; $_ = ":" . $body . sprintf("%02X", (-$sum) & 0xFF) . "\n"; }' foo.hex > foo2.hex
Now every data record in foo2.hex is 16 bytes long.
Note: all this does is FF-pad each data (type 00) record to 0x10 bytes and recompute its checksum. It doesn't check addresses, and it leaves the EOF and extended-address records alone.
Explanation
perl -pe '<some script>' <input file> runs <some script> for each line of <input file>, and prints the result. The script is:
if (m/^:(..)(....00.*)(..)\r?$/ && hex($1) < 16) { # a data (type 00) record shorter than 16 bytes
    my $pad  = 16 - hex($1);                       # how many bytes of 0xFF we need
    my $body = "10" . $2 . ("FF" x $pad);          # new count 0x10, old address/type/data, pad bytes
    my $sum  = 0;
    $sum += hex $1 while $body =~ /(..)/g;         # sum all record bytes
    $_ = ":" . $body . sprintf("%02X", (-$sum) & 0xFF) . "\n"; # append the recomputed checksum
}
Another solution is to change the linker script so that the preceding .isr_vector section ends on a 16-byte boundary, since the map file shows that the following .text section is 16-byte aligned.
This will ensure there are no unprogrammed flash bytes between the two sections.
You can use bincopy to fill all empty space with 0xff.
$ pip install bincopy
$ bincopy fill foo.hex
Use the --gap-fill option of objcopy, e.g.:
arm-none-eabi-objcopy --gap-fill 0xFF -O ihex firmware.elf firmware.hex
I have successfully created the emulated device described at http://pmem.io/2016/02/22/pm-emulation.html.
It shows the device correctly:
:~/Prakash/nvml/src/examples/libpmem$ mount | grep pmem
/dev/pmem0 on /mnt/pmemd type ext4 (rw,relatime,dax,errors=continue,data=ordered)
However, when I execute the simple_copy sample given with pmem nvml, it gives this error:
amd#amd:~/Prakash/nvml/src/examples/libpmem$ ./simple_copy logs
/dev/pmem0 pmem_map_file: File exists
amd#amd:~/Prakash/nvml/src/examples/libpmem$ ./simple_copy logs
/dev/pmem0/logs pmem_map_file: Not a directory
Am I not using the program correctly?
Also, I have mounted the device as dax and I clearly see the performance advantage with
:~/Prakash/nvml/src/examples/libpmem$ sudo dd if=/dev/zero of=/dev/pmem0 bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 0.910729 s, 2.4 GB/s
:~/Prakash/nvml/src/examples/libpmem$ sudo dd if=/dev/zero of=/mnt/pmem0/test bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 6.39032 s, 336 MB/s
From the errors posted, it seems reasonable to believe:
without the appropriate option, it will not create a directory
without the appropriate option, it will not replace a file
If you open the example you are referring to, you will see the following:
if ((pmemaddr = pmem_map_file(argv[2], BUF_LEN,
        PMEM_FILE_CREATE|PMEM_FILE_EXCL,
        0666, &mapped_len, &is_pmem)) == NULL) {
    perror("pmem_map_file");
    exit(1);
}
This is the part that is giving you trouble. To understand why, let's look at man 7 libpmem. You can find the relevant part here.
This is the paragraph we are interested in:
The pmem_map_file() function creates a new read/write mapping for a
file. If PMEM_FILE_CREATE is not specified in flags, the entire
existing file path is mapped, len must be zero, and mode is ignored.
Otherwise, path is opened or created as specified by flags and mode,
and len must be non-zero. pmem_map_file() maps the file using mmap(2),
but it also takes extra steps to make large page mappings more likely.
So, the pmem_map_file function effectively calls open(2) and then mmap(2). In the simple_copy.c example we can observe that the flags which were used are: PMEM_FILE_CREATE and PMEM_FILE_EXCL, and as we can learn from the manpage, they roughly translate to O_CREAT and O_EXCL respectively.
This means the error messages are correct: on your first attempt you passed a path to an existing file (which PMEM_FILE_EXCL forbids), while on the second attempt your path treated /dev/pmem0 as a directory, which it is not.
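For example, following the manpage, mapping a file that already exists means passing no creation flags and a zero length. A minimal sketch (untested; /mnt/pmemd/logs is just a placeholder path):

#include <stdio.h>
#include <stdlib.h>
#include <libpmem.h>

int main(void)
{
    char *pmemaddr;
    size_t mapped_len;
    int is_pmem;

    /* No PMEM_FILE_CREATE: the whole existing file is mapped, len must be 0. */
    if ((pmemaddr = pmem_map_file("/mnt/pmemd/logs", 0, 0, 0,
            &mapped_len, &is_pmem)) == NULL) {
        perror("pmem_map_file");
        exit(1);
    }
    printf("mapped %zu bytes, is_pmem=%d\n", mapped_len, is_pmem);
    pmem_unmap(pmemaddr, mapped_len);
    return 0;
}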
There's an in-depth explanation of libpmem here.
If I use
/usr/bin/time -f"%e,%P,%M,%I,%O"
I get (for the last three placeholders) the maximum resident set size of the process and the number of file system inputs and outputs it performed.
Obviously, it's easy to get %e or something like it using sys/time.h, but is there a way to get %M, %I and %O programmatically?
You could read and parse the files in the /proc filesystem. /proc/self refers to the process accessing the /proc filesystem.
/proc/self/statm contains information about memory usage, measured in pages. Sample output:
% cat /proc/self/statm
1115 82 63 12 0 79 0
Fields are size resident share text lib data dt; see the proc manual page for some additional details.
/proc/self/io contains the I/O for the current process. Sample output:
% cat /proc/self/io
rchar: 2012
wchar: 0
syscr: 6
syscw: 0
read_bytes: 0
write_bytes: 0
cancelled_write_bytes: 0
Unfortunately, io isn't documented in the proc manual page (at least on my Debian system). I had to check the iotop source code to see how it obtains the per-process I/O information.
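As a rough sketch of reading both files programmatically (field meanings as in the proc manual page; note that /usr/bin/time's %M is a peak figure, whose closest /proc equivalent is the VmHWM line in /proc/self/status, while statm reports instantaneous values):

#include <stdio.h>

int main(void)
{
    long size, resident;  /* measured in pages */
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        if (fscanf(f, "%ld %ld", &size, &resident) == 2)
            printf("size=%ld pages, resident=%ld pages\n", size, resident);
        fclose(f);
    }

    char key[32];
    long long value;
    f = fopen("/proc/self/io", "r");  /* rchar, wchar, read_bytes, ... */
    if (f) {
        while (fscanf(f, " %31[^:]: %lld", key, &value) == 2)
            printf("%s = %lld\n", key, value);
        fclose(f);
    }
    return 0;
}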
I am working with a compressed PE (a Windows console EXE) that has a file alignment and section alignment of 4 bytes. I notice that if the virtual size and raw size of each section match, the program loads; but if the virtual size of the data section (the last section) does not match its raw size, Windows refuses to load it, even though by the specification the virtual size may be larger than the raw size.
Is this some kind of hidden constraint on compressed PEs?
I have pasted a dumpbin /headers of the exe below:
Microsoft (R) COFF/PE Dumper Version 10.00.30319.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file ba42x.exe
PE signature found
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (x86)
2 number of sections
50AABC14 time date stamp Mon Nov 19 18:09:08 2012
0 file pointer to symbol table
0 number of symbols
60 size of optional header
10F characteristics
Relocations stripped
Executable
Line numbers stripped
Symbols stripped
32 bit word machine
OPTIONAL HEADER VALUES
10B magic # (PE32)
2.03 linker version
BD0 size of code
5000 size of initialized data
0 size of uninitialized data
CC entry point (004000CC)
CC base of code
C9C base of data
400000 image base (00400000 to 00403FFF)
4 section alignment
4 file alignment
4.00 operating system version
0.00 image version
4.00 subsystem version
0 Win32 version
4000 size of image
CC size of headers
0 checksum
3 subsystem (Windows CUI)
0 DLL characteristics
10000 size of stack reserve
1000 size of stack commit
0 size of heap reserve
0 size of heap commit
0 loader flags
0 number of directories
SECTION HEADER #1
.text name
BD0 virtual size
CC virtual address (004000CC to 00400C9B)
BD0 size of raw data
CC file pointer to raw data (000000CC to 00000C9B)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
E0000020 flags
Code
Execute Read Write
SECTION HEADER #2
.data name
3102 virtual size
C9C virtual address (00400C9C to 00403D9D)
3102 size of raw data
C9C file pointer to raw data (00000C9C to 00003D9D)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0000040 flags
Initialized Data
Read Write
Summary
3104 .data
BD0 .text
For example, if you change the virtual size of the above .data section to 3106, the program will not load, even though the size of initialized data (0x5000) is more than enough to accommodate the additional memory.
No, there are no special constraints related to compressed images: as long as your image is PE compliant, the loader does not care about the compression. Compression is handled by the stub, not by the loader.
Can you provide your image for further analysis?
Just by looking at the output of dumpbin, the image looks unusual. There are no data directories at all, which is pretty strange. It looks like the loader issue is not directly related to the alignment but to a malformation of the image file. Did you try looking at your image with other PE tools (e.g. PeStudio, CFF Explorer)?
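Along those lines, a minimal sketch (assuming a Windows build environment and the winnt.h structures) that dumps each section's virtual size against its raw size; note it locates the section headers via SizeOfOptionalHeader, since this image uses a nonstandard 0x60-byte optional header:

#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    IMAGE_DOS_HEADER dos;
    IMAGE_NT_HEADERS32 nt;
    IMAGE_SECTION_HEADER sec;
    FILE *f;
    WORD i;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;
    fread(&dos, sizeof dos, 1, f);
    fseek(f, dos.e_lfanew, SEEK_SET);
    fread(&nt, sizeof nt, 1, f);
    /* Section headers follow the optional header, whatever its size. */
    fseek(f, dos.e_lfanew + sizeof(DWORD) + sizeof(IMAGE_FILE_HEADER)
            + nt.FileHeader.SizeOfOptionalHeader, SEEK_SET);
    for (i = 0; i < nt.FileHeader.NumberOfSections; i++) {
        fread(&sec, sizeof sec, 1, f);
        printf("%-8.8s vsize=%05lX rawsize=%05lX%s\n",
               (const char *)sec.Name,
               sec.Misc.VirtualSize, sec.SizeOfRawData,
               sec.Misc.VirtualSize > sec.SizeOfRawData
                   ? "  <- vsize exceeds raw size" : "");
    }
    fclose(f);
    return 0;
}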
I'd like to create a sparse file such that all-zero blocks don't take up actual disk space until I write data to them. Is it possible?
There seems to be some confusion as to whether the default Mac OS X filesystem (HFS+) supports holes in files. The following program demonstrates that this is not the case.
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
void create_file_with_hole(void)
{
int fd = open("file.hole", O_WRONLY|O_TRUNC|O_CREAT, 0600);
write(fd, "Hello", 5);
lseek(fd, 99988, SEEK_CUR); // Make a hole
write(fd, "Goodbye", 7);
close(fd);
}
void create_file_without_hole(void)
{
int fd = open("file.nohole", O_WRONLY|O_TRUNC|O_CREAT, 0600);
write(fd, "Hello", 5);
char buf[99988];
memset(buf, 'a', 99988);
write(fd, buf, 99988); // Write lots of bytes
write(fd, "Goodbye", 7);
close(fd);
}
int main()
{
create_file_with_hole();
create_file_without_hole();
return 0;
}
The program creates two files, each 100,000 bytes in length, one of which has a hole of 99,988 bytes.
On Mac OS X 10.5 on an HFS+ partition, both files take up the same number of disk blocks (200):
$ ls -ls
total 400
200 -rw------- 1 user staff 100000 Oct 10 13:48 file.hole
200 -rw------- 1 user staff 100000 Oct 10 13:48 file.nohole
Whereas on CentOS 5, the file without holes consumes 88 more disk blocks than the other:
$ ls -ls
total 136
24 -rw------- 1 user nobody 100000 Oct 10 13:46 file.hole
112 -rw------- 1 user nobody 100000 Oct 10 13:46 file.nohole
As in other Unixes, it's a feature of the filesystem. Either the filesystem supports it for ALL files or it doesn't. Unlike Win32, you don't have to do anything special to make it happen. Also unlike Win32, there is no performance penalty for using a sparse file.
On MacOS, the default filesystem is HFS+ which does not support sparse files.
Update: MacOS used to support UFS volumes with sparse file support, but that has been removed. None of the currently supported filesystems feature sparse file support.
This thread has become a comprehensive source of info about sparse files. Here is the missing part for Win32:
Decent article with examples
Tool that estimates if it makes sense to make file as sparse
hdiutil can handle sparse images and files but unfortunately the framework it links against is private.
You could try declaring the external symbols exported by the DiskImages framework (below), but this is most likely not acceptable for production code; besides, since the framework is private, you would have to reverse engineer its use cases.
cristi:~ diciu$ otool -L /usr/bin/hdiutil
/usr/bin/hdiutil:
/System/Library/PrivateFrameworks/DiskImages.framework/Versions/A/DiskImages (compatibility version 1.0.8, current version 194.0.0)
[..]
cristi:~ diciu$ nm /System/Library/PrivateFrameworks/DiskImages.framework/Versions/A/DiskImages | awk -F' ' '{print $3}' | c++filt | grep -i sparse
[..]
CSparseFile::sector2Band(long long)
CSparseFile::addIndexNode()
CSparseFile::readIndexNode(long long, SparseFileIndexNode*)
CSparseFile::readHeaderNode(CBackingStore*, SparseFileHeaderNode*, unsigned long)
[... cut for brevity]
Later Edit
You could use hdiutil as an external process and have it create a sparse disk image for you. From the C process, you would then create a file inside the (mounted) sparse disk image.
If you seek (fseek, ftruncate, ...) past the end, the file size will be increased without allocating blocks until you write into the holes. But there's no way to create a magic file that automatically converts blocks of zeroes to holes; you have to do it yourself.
This patch may be helpful to look at (it makes the OpenBSD cp command insert holes instead of writing zeroes).
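To make "do it yourself" concrete, here is a minimal sketch of such a copy loop (error handling mostly omitted; BLK and copy_sparse are made up for illustration):

#include <string.h>
#include <unistd.h>

#define BLK 4096

/* Copy fd 'in' to fd 'out', seeking over all-zero blocks instead of
 * writing them; on filesystems with sparse-file support each skipped
 * block becomes a hole. */
int copy_sparse(int in, int out)
{
    char buf[BLK];
    static const char zero[BLK];
    ssize_t n;

    while ((n = read(in, buf, BLK)) > 0) {
        if (n == BLK && memcmp(buf, zero, BLK) == 0)
            lseek(out, BLK, SEEK_CUR);   /* skip: leaves a hole */
        else
            write(out, buf, n);
    }
    /* Extend the file so trailing holes are reflected in its size. */
    return ftruncate(out, lseek(out, 0, SEEK_CUR));
}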
If you want portability, the last resort is to write your own access function so that you manage an index and a set of blocks.
In essence, you manage a single file the way the OS manages a disk: keeping the chain of blocks that belong to the file, the bitmap of allocated/free blocks, and so on.
Of course this will lead to non-optimized and slower access; I would recommend this approach only if the requirement to save space is absolutely critical and you have enough time to write a robust set of access functions.
And even in that case, I would first investigate whether your problem really needs this solution. Perhaps you should store your data differently?
It looks like OS X supports sparse files on UDF volumes. I tried titaniumdecoy's test program on OS X 10.9 and it did generate a sparse file on a UDF disk image. Also, note that UFS is no longer supported in OS X, so if you need sparse files, UDF is the only natively supported file system that offers them.
I also tried the program on SMB shares. When the server is Ubuntu (ext4 filesystem) the program creates a sparse file, but 'ls -ls' through SMB doesn't show that. If you do 'ls -ls' on the Ubuntu host itself it does show the file is sparse. When the server is Windows XP (NTFS filesystem) the program does not generate a sparse file.