I've read through numerous articles on GIF LZW decompression, but I'm still confused about how it works and how to handle, in code, the more fiddly parts.
As I understand it, when I get to the byte stream in the GIF for the LZW compressed data, the stream tells me:
Minimum code size, AKA the number of bits the first codes start off with.
Now, as I understand it, I have to either add one to this for the clear code, or add two for clear code and EOI code. But I'm confused as to which of these it is?
So say I have 3 colour codes (01, 10, 11), with the EOI code assumed to be 00: will the code that follows the minimum code size (of 2) be 2 bits, or will it be 3 bits to factor in the clear code? Or are the clear code and EOI code both already factored into the minimum size?
The second question is: what is the easiest way to read dynamically sized codes from a file? Reading an odd number of bits (3 bits, 12 bits, etc.) out of a stream of 8-bit bytes sounds like it could be messy and buggy.
To start with your second question: yes, you have to read the dynamically sized codes from an 8-bit byte stream. You have to keep track of the code size you are currently reading and of the number of unused bits left over from previous read operations (so that the next byte from the file ends up in the right place).
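A minimal sketch of one way to do that in C, assuming GIF's convention of packing codes least-significant-bit first (names and structure are my own):

#include <stddef.h>
#include <stdint.h>

/* Sketch of an LSB-first bit reader for GIF-style LZW data.
   Assumes the compressed bytes have already been de-blocked into buf. */
typedef struct {
    const uint8_t *buf;   /* LZW data                     */
    size_t len;           /* number of bytes in buf       */
    size_t pos;           /* next byte to consume         */
    uint32_t acc;         /* bit accumulator              */
    int bits;             /* number of valid bits in acc  */
} BitReader;

/* Read `width` bits (1..16) as an integer, or return -1 when out of data. */
int read_code(BitReader *br, int width)
{
    while (br->bits < width) {
        if (br->pos >= br->len)
            return -1;
        br->acc |= (uint32_t)br->buf[br->pos++] << br->bits;
        br->bits += 8;
    }
    int code = (int)(br->acc & ((1u << width) - 1));  /* low `width` bits */
    br->acc >>= width;
    br->bits -= width;
    return code;
}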
IIRC a 256-colour image has a minimum code size of 8 bits, which would give you a clear code of 256 (decimal) and an End Of Information (EOI) code of 257. The first stored code is then 258.
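As a sketch of how those values fall out of the minimum code size field (assuming the usual GIF convention that codes start out one bit wider than the minimum code size; names are mine):

/* Inside the decoder's setup, after reading the LZW minimum code size byte: */
int min_code_size = 8;                  /* e.g. 8 for a 256-colour image     */
int clear_code    = 1 << min_code_size; /* 256                               */
int eoi_code      = clear_code + 1;     /* 257                               */
int next_free     = clear_code + 2;     /* 258: first dictionary entry added */
int code_width    = min_code_size + 1;  /* codes start out 9 bits wide       */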
I am not sure why you haven't looked up the source of one of the public domain graphics libraries. I know I did not, because back in 1989 (!) there were no libraries to use and no internet with complete descriptions. I had to implement a decoder from an example executable (for MS-DOS, from CompuServe) that could display images, plus a few GIF files, so I know it can be done (but it is not the most efficient way of spending your time).
I am working on a project in which I have to merge two 8-bit .wav files using C, and I still have no clue how to do it.
I have read about wav files and I want to start by reading one of the files.
There's one thing I didn't understand:
Let's say I have an 8-bit WAV audio file, and I was able to read (even though I am still trying to) the data that starts after the 44th byte. Logically, I will get numbers between 0 and 255.
My question is:
What do those numbers mean?
If I get 255 or 0 what do they mean?
Are they samples from the wave?
Can anyone please explain?
Thanks in advance
Assuming we're not dealing with file format issues, getting values between 0 and 255 means that the audio samples are of unsigned eight-bit format, as you have put it.
One way of merging data would consist of reading data from files into buffers, arrays a and b and summing them value by value: c[i] = a[i] + b[i]. By doing so, you'd have to take care of the following:
length of the files may not be equal
summing unsigned 8-bit values such as yours will almost certainly overflow
This is usually achieved using a for loop. You first get the sizes of the chunks. Your for loop has to be written in such a way that it neither reads past the array boundary, nor ignores what can be read. For preventing overflows you can either:
divide values by two on reading
or
read (convert) into a format which wouldn't overflow, then normalize and convert the merged data back into the original format or whichever format is desired (a short sketch of this follows below).
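As a rough sketch of that second approach, assuming unsigned 8-bit samples with 128 as the silence level (function and variable names are mine):

#include <stddef.h>
#include <stdint.h>

/* Mix two buffers of unsigned 8-bit samples; out must hold max(na, nb) bytes. */
void mix_u8(const uint8_t *a, size_t na,
            const uint8_t *b, size_t nb,
            uint8_t *out)
{
    size_t n = na > nb ? na : nb;
    for (size_t i = 0; i < n; i++) {
        /* Treat a missing sample as silence (128 for unsigned 8-bit PCM). */
        int sa = i < na ? a[i] : 128;
        int sb = i < nb ? b[i] : 128;
        /* Sum in a wider type around the 128 midpoint, then clamp. */
        int s = (sa - 128) + (sb - 128) + 128;
        if (s < 0)   s = 0;
        if (s > 255) s = 255;
        out[i] = (uint8_t)s;
    }
}

Dividing each input by two on reading instead, as mentioned above, avoids the clamping at the cost of overall volume.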
For all the particulars of reading from and writing to a .wav file you may use one of the existing audio file libraries, or write your own routine. Dealing with the audio file format is not a trivial thing, though. Here's a reference on the .wav format.
Here are a few audio file APIs worth looking at:
libsndfile
sndlib
Hope this can help.
See any good guide to WAVE for information on the format of samples in the data chunk, such as this one I found: http://www.neurophys.wisc.edu/auditory/riff-format.txt
Relevant excerpts:
In a single-channel WAVE file, samples are stored consecutively. For
stereo WAVE files, channel 0 represents the left channel, and channel
1 represents the right channel. The speaker position mapping for more
than two channels is currently undefined. In multiple-channel WAVE
files, samples are interleaved.
Data Format of the Samples
Each sample is contained in an integer i. The size of i is the
smallest number of bytes required to contain the specified sample
size. The least significant byte is stored first. The bits that
represent the sample amplitude are stored in the most significant bits
of i, and the remaining bits are set to zero.
For example, if the sample size (recorded in nBitsPerSample) is 12
bits, then each sample is stored in a two-byte integer. The least
significant four bits of the first (least significant) byte is set to
zero.
The data format and maximum and minimums values for PCM waveform
samples of various sizes are as follows:
Sample Size          Data Format         Maximum Value        Minimum Value
One to eight bits    Unsigned integer    255 (0xFF)           0
Nine or more bits    Signed integer i    Largest positive     Most negative
                                         value of i           value of i
N.B.: Even if the file has >8 bits of audio resolution, you should read the file as an array of unsigned char and reconstitute the larger samples manually as per the above spec. Don't try to do anything like reading the samples directly over an array of native C ints, as their layout and size are platform-dependent and therefore should not be relied upon in any code.
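For instance, a minimal sketch of reconstituting 16-bit little-endian signed samples from raw bytes (names are mine):

#include <stddef.h>
#include <stdint.h>

/* raw points at the data chunk, nbytes is its size, out holds nbytes/2 samples. */
void decode_s16le(const unsigned char *raw, size_t nbytes, int16_t *out)
{
    for (size_t i = 0; i + 1 < nbytes; i += 2) {
        uint16_t u = (uint16_t)(raw[i] | (raw[i + 1] << 8)); /* LSB stored first */
        out[i / 2] = (int16_t)u;  /* nine or more bits: signed integer           */
    }
}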
Note also that the header is not guaranteed to be 44 bytes long: How can I detect whether a WAV file has a 44 or 46-byte header? You need to read the length and process the header based on that, not any assumption.
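A rough sketch of walking the RIFF chunks to find the data chunk instead of assuming a fixed-size header (error handling kept minimal; names are mine):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t read_u32le(FILE *f)
{
    unsigned char b[4] = {0};
    fread(b, 1, 4, f);
    return b[0] | (b[1] << 8) | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

/* Returns the size of the data chunk and leaves f positioned at its first byte,
   or 0 if no data chunk was found. */
uint32_t find_data_chunk(FILE *f)
{
    char id[4];
    fseek(f, 12, SEEK_SET);                     /* skip "RIFF", size, "WAVE"  */
    while (fread(id, 1, 4, f) == 4) {
        uint32_t size = read_u32le(f);
        if (memcmp(id, "data", 4) == 0)
            return size;
        fseek(f, size + (size & 1), SEEK_CUR);  /* chunks are padded to even  */
    }
    return 0;
}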
This idea has been floating around in my head for 3 years, and I am having problems applying it.
I wanted to create a compression algorithm that cuts the file size in half,
e.g. 8 MB to 4 MB.
With some searching and some programming experience, I understood the following.
Let's take a .txt file with the letters (a, b, c, d).
Using the IO.File.ReadAllBytes function, it gives the following array of bytes: (97 | 98 | 99 | 100), which, according to https://en.wikipedia.org/wiki/ASCII#ASCII_control_code_chart, is each letter's decimal value.
What I thought about was how to mathematically cut this 4-member array down to a 2-member array by combining every 2 members into a single member. But you can't simply combine two numbers and then reverse them back, as there are many possibilities, e.g.:
80 | 90: 80+90=170, but there is no way to know that 170 was the result of 80+90 rather than 100+70 or 110+60.
And even if you could overcome that, you would be limited by the maximum value of a byte (255) in a single member of the array.
I understand that most compression algorithms work on the binary level and have been successful, but imagine cutting a file's size in half; I would like to hear your ideas on this.
Best Regards.
It's impossible to make a compression algorithm that makes every file shorter. The proof is called the "counting argument", and it's easy:
There are 256^L possible files of length L.
Let's say there are N(L) possible files with length < L.
If you do the math (N(L) = 256^0 + 256^1 + ... + 256^(L-1), a geometric series), you find that 256^L = 255*N(L) + 1
So. You obviously cannot compress every file of length L, because there just aren't enough shorter files to hold them uniquely. If you made a compressor that always shortened a file of length L, then MANY files would have to compress to the same shorter file, and of course you could only get one of them back on decompression.
In fact, there are more than 255 times as many files of length L as there are shorter files, so you can't even compress most files of length L. Only a small proportion can actually get shorter.
This is explained pretty well (again) in the comp.compression FAQ:
http://www.faqs.org/faqs/compression-faq/part1/section-8.html
EDIT: So maybe you're now wondering what this compression stuff is all about...
Well, the vast majority of those "all possible files of length L" are random garbage. Lossless data compression works by assigning shorter representations (the output files) to the files we actually use.
For example, Huffman encoding works character by character and uses fewer bits to write the most common characters. "e" occurs in text more often than "q", for example, so it might spend only 3 bits to write "e"s, but 7 bits to write "q"s. Bytes that hardly ever occur, like character 131, may be written with 9 or 10 bits -- longer than the 8-bit bytes they came from. On average you can compress simple English text by almost half this way.
LZ and similar compressors (like PKZIP, etc) remember all the strings that occur in the file, and assign shorter encodings to strings that have already occurred, and longer encodings to strings that have not yet been seen. This works even better since it takes into account more information about the context of every character encoded. On average, it will take fewer bits to write "boy" than "boe", because "boy" occurs more often, even though "e" is more common than "y".
Since it's all about predicting the characteristics of the files you actually use, it's a bit of a black art, and different kinds of compressors work better or worse on different kinds of data -- that's why there are so many different algorithms.
I'm implementing a version of LZW. Let's say I start off with 10-bit codes and increase the size whenever I max out on codes. For example, after 1024 codes, I'll need 11 bits to represent code 1024. The issue is in expressing that shift.
How do I tell the decoder that I've changed the code size? I thought about using 00, but the program can't distinguish between 00 as an increment marker and 00 as just two instances of code zero.
Any suggestions?
You don't. You shift to a new size when the dictionary is full. The decoder's dictionary is built synchronized with the encoder's dictionary, so they'll both be full at the same time, and the decoder will shift to the new size exactly when the encoder does.
The time you have to send a code to signal a change is when you've filled the dictionary completely -- you've used all of the largest codes available. In this case, you generally want to continue using the dictionary until/unless the compression rate starts to drop, then clear the dictionary and start over. You do need to put some marker in to tell when that happens. Typically, you reserve the single largest code for this purpose, but any code you don't use for any other purpose will work.
Edit: as an aside, note that you normally want to start with codes exactly one bit larger than the codes for the input, so if you're compressing 8-bit bytes, you want to start with 9-bit codes.
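As a sketch of the bookkeeping both sides share (names are mine; note that implementations differ by exactly one code on when they widen -- the GIF "early change" quirk -- so the encoder and decoder must use the same convention):

#define START_WIDTH 10
#define MAX_WIDTH   16

int code_width = START_WIDTH;
int next_code  = 256;   /* first slot after the single-byte codes (258 if you
                           also reserve clear and end-of-data codes)          */

/* Called by encoder and decoder alike whenever a dictionary entry is added. */
void on_dictionary_add(void)
{
    next_code++;
    /* Widen as soon as the next slot no longer fits in code_width bits. */
    if (next_code == (1 << code_width) && code_width < MAX_WIDTH)
        code_width++;
}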
This is part of the LZW algorithm.
When decompressing you automatically build up the code dictionary again. When a new code exactly fills the current number of bits, the code size has to be increased.
For the details see Wikipedia.
You increase the number of bits when you create the code for 2^n - 1. So when you create the code 1023, increase the bit size immediately. You can get a better description from the GIF compression scheme. Note that this was a patented scheme (which partly caused the creation of PNG). The patent has probably expired by now.
Since the decoder builds the same table as the compressor, its table is full on reaching the last element (so 1023 in your example), and as a consequence, the decoder knows that the next element will be 11 bits.
I have a situation where there is a corrupt WAV file from which I'm trying to recover data.
My colleagues have sliced up the large WAV file into smaller WAV files with proper headers. This has produced some interesting results.
Sliced into 1MB segments we get these results:
The first wave file segment is all noise.
The second wave file segment is distorted.
The third wave file segment is clear.
This pattern is repeated for the entire length of the file (after it's been broken into smaller files).
For 20MB slices:
The first wave file segment is all noise.
The second wave file segment is clear.
The third wave file segment is distorted.
Again, this pattern is repeated for the entire length of the file (after it's been broken into smaller files).
Would anyone know why this is occurring?
Assuming the WAV contains uncompressed (raw) samples, recovery should be easy. You need to know the sample format. For example: 16 bits, two channels, 44100 Hz (which is CD quality). Because one of the segments is okay, you can look at it to figure out what the right values are.
Then just open the WAV using these values in, e.g., Adobe Audition (formerly Cool Edit), or any other wave editor that supports import of raw data.
Edit: Okay, now to answer your question. Some segments are clear because in those the alignment happens to be right. Take the CD-quality example again, as I described before. The bytes of one sample look like this:
left_channel_high | left_channel_low | right_channel_high | right_channel_low
(I'm not sure about the ordering here! But it's just an example.) So the first data byte had better be the most significant byte of the left channel, or else you'll end up with fragments of two samples being interpreted as one whole sample:
left_channel_low | right_channel_high | right_channel_low || left_channel_high
-------------------part of first sample------------------ || --second sample--
You can see that everything "shifted" here, which happens because the size of your file slices is not a multiple of the sample size in bytes.
If you're lucky, this just causes the channels to be swapped. If you're unlucky, high and low bytes get swapped. Interestingly, this does lead to kind-of recognizable, but severely distorted audio.
What puzzles me is that the pattern you report repeats in blocks of three. From the above, I'd expect either two or four. Perhaps you are using an unusual sample format, such as 24-bits (3 bytes)?
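As a rough check of that guess (my own arithmetic, assuming single-channel 3-byte samples): 1,048,576 leaves a remainder of 1 when divided by 3, so each successive 1 MB slice would start one byte further into a sample, and the alignment would repeat every three slices; 20 MB slices shift by 2 bytes each, which likewise repeats every three. That would be consistent with both patterns described.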
I'm trying to read a 16-bit greyscale TIFF file (BitsPerSample=16) using a small C program to convert into an array of floating point numbers for further analysis. The pixel data are, according to the header information, in a single strip of 2048x2048 pixels. Encoding is little-endian.
With that header information, I was expecting to be able to read a single block of 2048x2048x2 bytes and interpret it as 2048x2048 2-byte integers. What I in fact get is a picture split into four quadrants of 1024x1024 pixels each, the lower two of which contain only zeros. Each of the top two quadrants looks like I expected the whole picture to look: http://users.aber.ac.uk/ruw/unlinked/15_inRT_0p457.png
If I read the same file into Gimp or ImageMagick, both tell me that they have to reduce to 8-bit (which doesn't help me - I need the full range), but the pixels turn up in the right places: http://users.aber.ac.uk/ruw/unlinked/15_inRT_0p457_gimp.png
This would suggest that my idea about how the data are arranged within the one strip is wrong. On the other hand, the file must be correctly formatted in terms of the header information as otherwise Gimp wouldn't get it right. Where am I going wrong?
Output from tiffdump:
15_inRT_0p457.tiff:
Magic: 0x4949 Version: 0x2a
Directory 0: offset 8 (0x8) next 0 (0)
ImageWidth (256) LONG (4) 1<2048>
ImageLength (257) LONG (4) 1<2048>
BitsPerSample (258) SHORT (3) 1<16>
Compression (259) SHORT (3) 1<1>
Photometric (262) SHORT (3) 1<1>
StripOffsets (273) LONG (4) 1<4096>
Orientation (274) SHORT (3) 1<1>
RowsPerStrip (278) LONG (4) 1<2048>
StripByteCounts (279) LONG (4) 1<8388608>
XResolution (282) RATIONAL (5) 1<126.582>
YResolution (283) RATIONAL (5) 1<126.582>
ResolutionUnit (296) SHORT (3) 1<3>
34710 (0x8796) LONG (4) 1<0>
(Tag 34710 is camera information; to make sure this doesn't somehow make any difference, I've zeroed the whole range from the end of the image file directory to the start of data at 0x1000, and that in fact doesn't make any difference.)
I've found the problem - it was in my C program...
I had allocated memory for an array of longs and used fread() to read in the data:
#define PPR 2048   /* pixels per row  */
#define BPP 2      /* bytes per pixel */
long *pix;
pix = malloc(PPR*PPR*sizeof(long));
fread(pix, BPP, PPR*PPR, in);
But since the data come in 2-byte chunks (BPP=2) while sizeof(long) is 4, fread() packs the data densely into the allocated memory rather than into long-sized parcels. Thus I end up with two rows packed together into one and the second half of the picture empty.
I've changed it to loop over the number of pixels and read two bytes each time and store them in the allocated memory instead:
int m, b1, b2;
for (m = 0; m < PPR*PPR; m++) {
    b1 = fgetc(in);    /* least significant byte (the file is little-endian) */
    b2 = fgetc(in);    /* most significant byte                              */
    *(pix+m) = b1 + 256*b2;
}
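Alternatively, a sketch of reading the whole strip with a single fread() call by giving it the correct element size (this assumes the host is little-endian like the file, otherwise swap the bytes of each sample afterwards, and that the file is already positioned at the strip offset, 4096 in the dump above):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PPR 2048   /* pixels per row */

/* Returns the 2048x2048 strip as 16-bit samples, or NULL on error. */
uint16_t *read_strip(FILE *in)
{
    size_t n = (size_t)PPR * PPR;
    uint16_t *pix = malloc(n * sizeof *pix);
    if (!pix || fread(pix, sizeof *pix, n, in) != n) {
        free(pix);
        return NULL;
    }
    return pix;
}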
You understand that if StripOffsets is an array, it is an offset to an array of offsets, right? You might not be doing that dereference properly.
What's your platform? What are you trying to do? If you're willing to work in .NET on Windows, my company sells an image processing toolkit that includes a TIFF codec that works on pretty much anything you can throw at it and will return 16 bpp images. We also have many tools that operate natively on 16bpp images.