IDAT Chunk of PNG File Format - c

I am currently developing a proprietary file format based on the PNG file format. I am done so far, except it doesn't work :-p The deflate decompressor I implemented works like a charm, but the PNG decoder doesn't want to behave, so I took a look at the original PNG file.
The standard says that after an IDAT header, the compressed data follows immediately. So, as the data is a deflate stream, the first char after IDAT is 0x78 == 01111000, which would mean a mode one (uncompressed) block, and that it's not the final one.
Strange, though - it's hard for me to imagine that a PNG encoder doesn't use dynamic Huffman coding for compressing the filtered raw image data. The deflate standard says that the rest of the current byte is skipped in mode one.
So the next four bytes should indicate the size of the uncompressed block and its one's complement.
But 0x59FD is not the one's complement of 0xECDA. Even if I screwed up the byte ordering: 0xFD59 is not the one's complement of 0xDAEC either.
Well, the knockout byte just follows. 0x97 would then be the first byte of the uncompressed but still filtered raw PNG image data, and as such must be the filter type. But 0x97 == 10010111 is not a valid filter type. Even if I screwed up the bit packing order, 11101001 == 0xE9 is still not a valid filter type.
I haven't focused much on RFC 1951 anymore, as I am able to inflate all kinds of files so far using my implementation of the deflate decompressor, so I suspect some misunderstanding on my part concerning the PNG standard.
I have read RFC 2083 over and over again, but the data I see here doesn't match the RFC; it doesn't make sense to me, there must be a missing piece!
When I look at the following bytes, I actually cannot see a valid filter type byte anywhere near, which makes me think that the filtered PNG data stream is compressed after all.
It would make sense if 0x78 (the first byte after IDAT) were read from MSB to LSB, but RFC 1951 says otherwise. Another idea (more likely to me) is that there is some data between the IDAT string and the start of the compressed deflate stream, but RFC 2083 says otherwise. The layout is clear:
4 bytes: size
4 bytes: chunk name (IDAT)
[size] bytes: compressed deflate stream
4 bytes: CRC checksum
So the first byte after IDAT must be the first byte of the compressed deflate stream - which indicates a mode 1 (uncompressed) data block. Which means that 0x97 must be the first byte of the uncompressed but filtered PNG image data - which means 0x97 is the filter type for the first row - which is invalid...
I just don't get it, am I stupid or what??
Summary:
Possibility 1:
There is some other data between IDAT and the effective start of the compressed deflate stream which, if it turns out to be true, is mentioned neither in RFC 2083 nor in any book I have read about image compression.
Possibility 2:
The number 0x78 is interpreted MSB -> LSB, which would indicate a mode 3 block (dynamic Huffman coding), but this contradicts RFC 1951, which is very clear about bit packing (LSB -> MSB).
I know already, the missing piece must be something very stupid, and I will feel the urgent need to sell my soul if only there were a delete button on Stack Overflow :-p

Two corrections which may help get you on your way:
The zlib header at the start of the stream is 2 bytes, not 1 -- see RFC 1950. The first is CMF, the next FLG.
In your data:
   78        DA
---CMF--- ---FLG---
0111.1000 1101.1010
CINF -CM- |||  |
          |||  +----- FCHECK
          ||+-------- FDICT
          ++--------- FLEVEL
CINF is 7, indicating the standard 32 KB compression window.
CM is 8, indicating the compression algorithm is, indeed, DEFLATE.
FCHECK is just a checksum; I didn't check if it's correct (but I'd bet it is).
FDICT is clear, meaning there is no preset dictionary stored.
FLEVEL is 3, indicating Maximum Compression.
See also Trying to understand zlib/deflate in PNG files, esp. dr. Adler's answer.
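For illustration, here's a minimal sketch of pulling those fields out of the two header bytes - just the bit arithmetic from RFC 1950, not tied to any particular decoder:
#include <stdio.h>

/* Decode the two zlib header bytes (CMF, FLG) per RFC 1950 - sketch only. */
int main(void) {
    unsigned char cmf = 0x78, flg = 0xDA;    /* the two bytes right after "IDAT" */

    unsigned cm     = cmf & 0x0F;            /* compression method: 8 = deflate     */
    unsigned cinf   = cmf >> 4;              /* log2(window size) - 8: 7 = 32K      */
    unsigned fcheck = flg & 0x1F;            /* makes (cmf*256 + flg) % 31 == 0     */
    unsigned fdict  = (flg >> 5) & 0x01;     /* preset dictionary flag              */
    unsigned flevel = flg >> 6;              /* compression level hint: 3 = maximum */

    printf("CM=%u CINF=%u FCHECK=%u FDICT=%u FLEVEL=%u, header %s\n",
           cm, cinf, fcheck, fdict, flevel,
           ((cmf * 256u + flg) % 31 == 0) ? "valid" : "invalid");
    return 0;
}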
LEN and NLEN are only set for uncompressed blocks; that's why you didn't find them. (Also, partially, because you were looking at the wrong byte(s).)
The next byte in the stream is EC; bitwise, this is 1110 1100, but remember to read bits from low to high. So the first bit read is 0, meaning this is not the FINAL block; the next two bits read are 0 and then 1, and since fields are packed least significant bit first, that gives BTYPE = 10 binary = 2, indicating a regular dynamic Huffman encoded data block.
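And to make that LSB-first bit order concrete, a minimal sketch of extracting BFINAL and BTYPE from that 0xEC byte - again, plain RFC 1951 arithmetic rather than your decoder:
#include <stdio.h>

int main(void) {
    unsigned char b = 0xEC;              /* first byte of the deflate data proper  */

    unsigned bfinal = b & 0x01;          /* bit 0: 0 = more blocks follow          */
    unsigned btype  = (b >> 1) & 0x03;   /* bits 1-2: 0=stored, 1=fixed, 2=dynamic */

    printf("BFINAL=%u BTYPE=%u\n", bfinal, btype);   /* prints BFINAL=0 BTYPE=2 */
    return 0;
}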

Related

Why most files like jpeg or pdf don't use just ASCII characters for encoding? [migrated]

Whenever we try to open a jpeg or pdf file with any text editor, we find strange symbols other than ASCII. Isn't ASCII most efficient, because of the smaller space consumption from the limited number of possible characters available?
I was working with a database file on Linux with plocate and I found something similar.
Isn't ASCII most efficient, because of the smaller space consumption from the limited number of possible characters available?
Not at all. Where did you get that idea from?
ASCII chars are 7 bits long, but hardware doesn't support storing 7-bit items, so ASCII is stored with 8 bits, the first bit always being 0. Furthermore, ASCII includes a number of control characters that can cause issues in some situations. Therefore, the most prominent ASCII encoding (base64) uses only 6 bits. This means that in order to encode 3 bytes (3 × 8 = 24 bits) of data you need 4 ASCII characters (4 × 6 = 24). Those 4 ASCII characters are then stored using 4 bytes on disk. Hence, converting a file to ASCII increases disk usage by 33%.
You can test this with the base64 command:
base64 pic.jpg > b64_jpeg.txt
ls -lh pic.jpg b64_jpeg.txt
Of course, you could try to use another ASCII encoding than standard base64 and use all 7 bits available in ASCII. You would still get only 7 bits of data per byte on disk, and thus roughly a +14% disk usage increase for the same data.
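If you want to check those percentages yourself rather than take them on faith, here is a minimal sketch of the size arithmetic (the input size is an arbitrary example; real base64 output may also add newlines):
#include <stdio.h>

/* Rough size estimates: base64 (6 data bits per stored byte) vs. a
 * hypothetical 7-bit-per-byte ASCII packing. */
int main(void) {
    unsigned long n      = 1000000UL;           /* raw size in bytes (example) */
    unsigned long b64    = 4 * ((n + 2) / 3);   /* 3 bytes -> 4 chars, padded  */
    unsigned long ascii7 = (n * 8 + 6) / 7;     /* 7 data bits per stored byte */

    printf("raw %lu, base64 %lu (+%.1f%%), 7-bit %lu (+%.1f%%)\n",
           n, b64, 100.0 * b64 / n - 100.0, ascii7, 100.0 * ascii7 / n - 100.0);
    return 0;
}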
All modern storage uses 8-bit bytes. ASCII is an obsolete 7-bit standard, so it would take 8/7ths as much storage (+14%).
It has nothing to do with the number of bits as such; all binary files are built from the same two bit values (true or false). What makes an image or PDF look different from ASCII text is that the bytes are compressed in groups for optimal efficiency. Those symbolic strings were perhaps ASCII originally, but have been compressed to about 10% of their size.
Take a PDF of a graph, as follows:
ASCII = 394,132 bytes
ZIP   =  88,367 bytes
PDF   =  75,753 bytes
DocX  =  32,940 bytes (it's text and lines; there are no images)
Take an image
PNG = 265,490 bytes
ZIP = 265,028 bytes
PDF = 220,152 bytes
PDF as ASCII = 3,250,970 bytes
3 0 obj
<</Length 3120001/Type/XObject/Subtype/Image/Width 640/Height 800/BitsPerComponent 8/SMask 4 0 R/ColorSpace/DeviceRGB/Filter/ASCIIHexDecode>>
stream
9cb6c79cb6c79cb6c79cb6c79db7c89db7c89db7c89fb7c9a0b8caa1b8caa1b8
caa1b8caa2b9cba2b9cba2b9cba2b9cba3bacba3bacaa4bbcba4bbcba6bccca7
...to infinity and beyond
So why is the ASCII image bigger than all the rest? Because those 9cb6c7 runs can be tokenised as 4 x 9cb6c7, 3 x 9db7c8, etc.; that's roughly how run-length encoding would work, but zip is better than that.
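As a toy illustration of that tokenising idea - this is only the run-counting step over 3-byte pixel values, not what zip actually does:
#include <stdio.h>
#include <string.h>

/* Toy run-length pass over 3-byte pixel tokens such as 9cb6c7. */
int main(void) {
    const unsigned char px[][3] = {
        {0x9c,0xb6,0xc7}, {0x9c,0xb6,0xc7}, {0x9c,0xb6,0xc7}, {0x9c,0xb6,0xc7},
        {0x9d,0xb7,0xc8}, {0x9d,0xb7,0xc8}, {0x9d,0xb7,0xc8},
    };
    size_t n = sizeof px / sizeof px[0];

    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && memcmp(px[i], px[i + run], 3) == 0)
            run++;                                    /* count identical pixels */
        printf("%zu x %02x%02x%02x\n", run, px[i][0], px[i][1], px[i][2]);
        i += run;
    }
    return 0;                                         /* 4 x 9cb6c7, 3 x 9db7c8 */
}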
So parts of a PDF may be compressed (needing slower decompression to view) in a zip style of coding (used for lossless fonts and bitmaps), whilst other parts may keep their optimal native photographic lossy compression (like JPEG). Overall, for PDF parsing, a higher percentage needs to be 8-bit ANSI (compatible with Unicode, or varying per platform) or 7-bit ASCII for simple parsing.
Short answer: compression is the means to reduce transmission time or the amount of storage required. However, decompression adds an overhead, so it is slower than raw ASCII to display as graphics. Avoid exotic wavelets in a PDF where most objects need fast decompression.

MS-XCA decompression metadata points outside of the compressed byte array

I need to decompress a data model file embedded in an xlsx file. The file is supposed to use the MS-XLDM file format and should consist of 3 sections (Spreadsheet Data Model Header, Files, and Virtual Directory), of which only the middle one is compressed. The first and last sections are presumably XML with Unicode/UTF-16 encoding (every other byte is 0x00 and the content is preceded by 0xFF and 0xFE). The middle file is preceded by a small chunk of XML. More detail about the file structure.
Now, according to the documentation, the file should be compressed using the Xpress compression specified here, which uses LZ77 compression and DIRECT2 encoding.
Now to get to the point. From my understanding, there should always be a 4-byte bitmask which indicates whether the byte in the corresponding position is 1:1 data or metadata.
For example, given a hypothetical 8-bit bitmask, the string "ABCABCDEF" is compressed as (0,0)A(0,0)B(0,0)C(3,3)D(0,0)E(0,0)F. Its bitmask would be b'00010001' (0x11).
If a given position is supposed to be metadata, at least 2 bytes should be read. Out of the 16 bits, the first 13 are the offset and the last 3 are the length (unless the last bit is 1, in which case another byte must be read).
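For reference, here is a minimal sketch (names are my own) of how I'm currently interpreting a single 16-bit metadata token; the 13/3 split and the +3 length constant are my reading of the spec, so they may well be part of the problem:
#include <stdio.h>
#include <stdint.h>

/* My reading: high 13 bits = backward offset, low 3 bits = length - 3,
 * with an extension byte when the length field saturates. The byte order
 * of the token itself is one of the things I'm unsure about. */
int main(void) {
    uint16_t token = (3u << 3) | (3u - 3u);   /* the (3,3) match from "ABCABCDEF" */

    unsigned offset = token >> 3;             /* 13-bit backward distance  */
    unsigned length = (token & 0x07) + 3;     /* 3-bit length + constant 3 */

    printf("offset=%u length=%u\n", offset, length);   /* offset=3 length=3 */
    return 0;
}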
So now onto the concrete example that I struggle with. The first 2 chunks are easy.
First one is:
....<Load xmlns="http://schemas.micr
The first 4 bytes (the dots) are 0x00 thus the 32 bytes that follow are uncompressed. Next chunk is similar:
....osoft.com/analysisservices/2003/
Now the 3rd chunk is where I get lost
w±engine":ddl27/"/2G_W?%g100gO8eðg_‡_)§è.Õ®]›‡o
I'm not sure where exactly the chunk ends, because when I started counting every 36 bytes after those first ones, I would after a while reach a portion of the byte stream that should be uncompressed, and it didn't line up.
So back to the 3rd chunk. The bitmask for this one is 0x77 0xB1 0x04 0x01.
Or in binary: 01110111 10110001 00000100 00000001. I tried to line it up with the bytes and it didn't make any sense. Clearly the word engine" is uncompressed, and it fits with the previous chunks, because a quick Google search turned up a result with the namespace "http://schemas.microsoft.com/analysisservices/2003/engine".
01110111 10110001 00000100 00000001
engine" :ddl27 /"/2G_W ?%g100gO8eðg_‡_)
This made me think that maybe the bytes of the bitmask are in reverse order. That made more sense to me.
00000001
engine"
If this was true, then the metadata should be 0x0B 0x02.
Or in binary: 00001011 00000010. So if I split it up, the first 13 bits make up the offset of the metadata, and the length is 010 plus the constant offset 3, i.e. 2 + 3 = 5.
Before 0000101100000
Invert 1111010011111
Decimal -353
But looking 353 bytes back it lands in the uncompressed partition xml section and should return the characters in parentheses (a.m.e). This doesn't make sense to me and is probably wrong.
Here is the file I tried to decompress.

Can I use the zlib header as a delimiter?

I have multiple blocks of data compressed with zlib. I want to concatenate these blocks of data and store that in one file.
Obviously, I could use something like JSON or XML to separate the zlib data blocks, but I'm wondering if, to save space, I can just search for the next 78 01, 78 9C or 78 DA?
Basically my question is: can these byte combinations theoretically exist in a zlib data stream, or can I be sure that when I find one of these byte combinations, a new zlib data block starts there, and the previous one ends at the found position minus one?
I know the uncompressed data blocks are always 1024 bytes or less in length, so the compressed stream will never be > 1024 bytes.
No, you can't. Any byte sequence can appear in the compressed data. At any byte position, there is a probability of 1/1024 of finding a valid zlib header. So you will find a lot of valid zlib headers in a long compressed stream that are not actually zlib headers.
You could create your own byte stuffing scheme that wraps around arbitrary data, including zlib streams or anything else, that assures that certain sequences cannot occur unless they really are delimiters. Such schemes can incur an arbitrarily small expansion of the data. For example if you find three 0xff's in a row in the data, then insert a 0x00 byte. Then 0xff 0xff 0xff 0xff can be a delimiter, since it will never appear in the data. This will only expand the stream, on average, by about 0.000006%.
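For illustration, a minimal sketch of the stuffing side of such a scheme, following the rule described above (insert a 0x00 after every run of three 0xff bytes, so four 0xff in a row can only ever be a delimiter); the function name and buffer handling are just for the example:
#include <stdio.h>
#include <stddef.h>

/* Byte stuffing: after every run of three 0xFF bytes in the payload, insert
 * a 0x00, so the sequence FF FF FF FF can never occur inside stuffed data.
 * Returns the stuffed length. */
size_t stuff(const unsigned char *in, size_t n, unsigned char *out) {
    size_t o = 0, run = 0;
    for (size_t i = 0; i < n; i++) {
        out[o++] = in[i];
        run = (in[i] == 0xFF) ? run + 1 : 0;
        if (run == 3) {            /* break the run before it reaches four */
            out[o++] = 0x00;
            run = 0;
        }
    }
    return o;
}

int main(void) {
    unsigned char in[]  = { 0x78, 0xDA, 0xFF, 0xFF, 0xFF, 0xFF, 0x01 };
    unsigned char out[2 * sizeof in];            /* worst case is only n + n/3 */
    size_t n = stuff(in, sizeof in, out);

    for (size_t i = 0; i < n; i++)
        printf("%02X ", out[i]);                 /* 78 DA FF FF FF 00 FF 01 */
    printf("\n");
    return 0;
}
The matching decoder simply drops any 0x00 that directly follows three 0xff's; everything else passes through unchanged.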

WAV file data recovery

I have a situation where there is a corrupt WAV file from which I'm trying to recover data.
My colleagues have sliced up the large WAV file into smaller WAV files with proper headers. This has produced some interesting results.
Sliced into 1MB segments we get these results:
The first wave file segment is all noise.
The second wave file segment is distorted.
The third wave file segment is clear.
This pattern is repeated for the entire length of the file (after it's been broken into smaller files).
For 20MB slices:
The first wave file segment is all noise.
The second wave file segment is clear.
The third wave file segment is distorted.
Again, this pattern is repeated for the entire length of the file (after it's been broken into smaller files).
Would anyone know why this is occurring?
Assuming the WAV contains uncompressed (raw) samples, recovery should be easy. You need to know the sample format. For example: 16 bits, two channels, 44100 Hz (which is CD quality). Because one of the segments is okay, you can look at it to figure out what the right values are.
Then just open the WAV using these values in, e.g., Adobe Audition (formerly Cool Edit), or any other wave editor that supports import of raw data.
Edit: Okay, now to answer your question. Some segments are clear, because then the alignment is right. Take the cd quality again, as I described before. The bytes of one sample look like this:
left_channel_high | left_channel_low | right_channel_high | right_channel_low
(I'm not sure about the ordering here! But it's just an example.) So the first data byte had better be the most significant byte of the left channel, or else you'll end up with fragments of two samples being interpreted as one whole sample:
left_channel_low | right_channel_high | right_channel_low || left_channel_high
-------------------part of first sample------------------ || --second sample--
You can see that everything "shifted" here, which happens because the size of your file slices is not a multiple of the sample size in bytes.
If you're lucky, this just causes the channels to be swapped. If you're unlucky, high and low bytes get swapped. Interestingly, this does lead to kind-of recognizable, but severely distorted audio.
What puzzles me is that the pattern you report repeats in blocks of three. From the above, I'd expect either two or four. Perhaps you are using an unusual sample format, such as 24 bits (3 bytes) per sample?
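For what it's worth, a quick way to sanity-check that 24-bit guess: assuming 3-byte sample frames, both slice sizes you mention drift through a frame with period three, and in opposite orders, which would fit the two patterns you describe. (The exact slice sizes of 1 MiB and 20 MiB are my assumption; decimal megabytes give the same remainders.)
#include <stdio.h>

/* How far does the start of each successive slice drift inside a 3-byte frame? */
int main(void) {
    long frame   = 3;                          /* bytes per sample frame (guess) */
    long sizes[] = { 1048576L, 20971520L };    /* 1 MiB and 20 MiB slice sizes   */

    for (int s = 0; s < 2; s++) {
        printf("slice size %ld:", sizes[s]);
        for (int k = 0; k < 6; k++)            /* byte offset of slice k in a frame */
            printf(" %ld", (k * sizes[s]) % frame);
        printf("\n");     /* 1 MiB: 0 1 2 0 1 2 ...   20 MiB: 0 2 1 0 2 1 ... */
    }
    return 0;
}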

reading 16-bit greyscale TIFF

I'm trying to read a 16-bit greyscale TIFF file (BitsPerSample=16) using a small C program, to convert it into an array of floating-point numbers for further analysis. The pixel data are, according to the header information, in a single strip of 2048x2048 pixels. Encoding is little-endian.
With that header information, I was expecting to be able to read a single block of 2048x2048x2 bytes and interpret it as 2048x2048 2-byte integers. What I in fact get is a picture split into four quadrants of 1024x1024 pixels each, the lower two of which contain only zeros. Each of the top two quadrants looks like I expected the whole picture to look: http://users.aber.ac.uk/ruw/unlinked/15_inRT_0p457.png
If I read the same file into GIMP or ImageMagick, both tell me that they have to reduce to 8-bit (which doesn't help me - I need the full range), but the pixels turn up in the right places: http://users.aber.ac.uk/ruw/unlinked/15_inRT_0p457_gimp.png
This would suggest that my idea about how the data are arranged within the one strip is wrong. On the other hand, the file must be correctly formatted in terms of the header information as otherwise Gimp wouldn't get it right. Where am I going wrong?
Output from tiffdump:
15_inRT_0p457.tiff:
Magic: 0x4949 Version: 0x2a
Directory 0: offset 8 (0x8) next 0 (0)
ImageWidth (256) LONG (4) 1<2048>
ImageLength (257) LONG (4) 1<2048>
BitsPerSample (258) SHORT (3) 1<16>
Compression (259) SHORT (3) 1<1>
Photometric (262) SHORT (3) 1<1>
StripOffsets (273) LONG (4) 1<4096>
Orientation (274) SHORT (3) 1<1>
RowsPerStrip (278) LONG (4) 1<2048>
StripByteCounts (279) LONG (4) 1<8388608>
XResolution (282) RATIONAL (5) 1<126.582>
YResolution (283) RATIONAL (5) 1<126.582>
ResolutionUnit (296) SHORT (3) 1<3>
34710 (0x8796) LONG (4) 1<0>
(Tag 34710 is camera information; to make sure this doesn't somehow make any difference, I've zeroed the whole range from the end of the image file directory to the start of data at 0x1000, and that in fact doesn't make any difference.)
I've found the problem - it was in my C program...
I had allocated memory for an array of longs and used fread() to read in the data:
#define PPR 2048
#define BPP 2
long *pix;
pix = malloc(PPR*PPR*sizeof(long));
fread(pix, BPP, PPR*PPR, in);
But since the data comes in 2-byte chunks (BPP=2) while sizeof(long)=4, fread() packs the data densely inside the allocated memory rather than packing it into long-sized parcels. Thus I end up with two rows packed together into one, and the second half of the picture empty.
I've changed it to loop over the number of pixels, read two bytes each time, and store them in the allocated memory instead:
int b1, b2;
long m;
for (m = 0; m < PPR*PPR; m++) {
    b1 = fgetc(in);
    b2 = fgetc(in);
    *(pix+m) = b1 + 256*b2;   /* low byte first, since the file is little-endian */
}
You understand that if StripOffsets is an array, it is an offset to an array of offsets, right? You might not be doing that dereference properly.
What's your platform? What are you trying to do? If you're willing to work in .NET on Windows, my company sells an image processing toolkit that includes a TIFF codec that works on pretty much anything you can throw at it and will return 16 bpp images. We also have many tools that operate natively on 16bpp images.
