I have an array of run-length encoded binary data (a PBM image): one color bit + a 7-bit run length per byte. How could you decode this into the original bitmap with multithreading?
Does every thread decode into a separate array, and are they later combined into one? The problem is that you would then iterate over every value again when combining the data. Is there a more efficient approach?
Thanks in advance!
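One way to avoid the combine step, as a minimal sketch assuming the high bit of each byte is the color and the low 7 bits are the run length: first compute each chunk's decoded length (a prefix sum over run lengths), so every thread can write directly into its own slice of a single shared output buffer.

    /* Two-pass parallel RLE decode: pass 1 sums run lengths per chunk
       so each thread knows its output offset; pass 2 lets every thread
       decode straight into its own slice of one shared output buffer,
       so no combine step is needed. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Pass 1: total decoded length of a chunk of tokens (cheap, no writes). */
    size_t chunk_length(const uint8_t *tokens, size_t n)
    {
        size_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += tokens[i] & 0x7F;       /* low 7 bits = run length */
        return total;
    }

    /* Pass 2: decode one chunk into its precomputed slice of the output.
       Safe to run concurrently, because the slices do not overlap. */
    void decode_chunk(const uint8_t *tokens, size_t n, uint8_t *out)
    {
        for (size_t i = 0; i < n; i++) {
            uint8_t color = tokens[i] >> 7;  /* high bit = color */
            size_t  run   = tokens[i] & 0x7F;
            memset(out, color, run);
            out += run;
        }
    }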
I'm trying to figure out how FFmpeg saves data in an AVFrame after the audio has been decoded.
Basically, if I print the data in the AVFrame->data[] array, I get a number of unsigned 8-bit integers that are the audio in raw format.
From what I can understand from the FFmpeg doxygen, the format of the data is expressed in the enum AVSampleFormat, and there are two main categories: interleaved and planar. In the interleaved type, all the data is kept in the first row of the AVFrame->data array, with size AVFrame->linesize[0], while in the planar type each channel of the audio file is kept in a separate row of the AVFrame->data array and each of those arrays has size AVFrame->linesize[0].
Is there a guide/tutorial that explains what the numbers in the array mean for each of the formats?
Values in each of the data arrays (planes) are actual audio samples, according to the specified format. E.g. if the format is AV_SAMPLE_FMT_S16P, the data arrays are actually arrays of int16_t PCM data. If we are dealing with a mono signal, only data[0] is valid; if it is stereo, data[0] and data[1] are valid, and so on.
I'm not sure that there is any guide that explains each particular case, but the described approach is quite simple and easy to understand. Just play with it a bit and things should become clear.
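A minimal sketch of the above, assuming frame->format == AV_SAMPLE_FMT_S16P, so each data[ch] plane is an array of int16_t with nb_samples entries (ch_layout is the current FFmpeg API; older releases expose frame->channels instead):

    #include <stdint.h>
    #include <libavutil/frame.h>

    int16_t peak_sample(const AVFrame *frame)
    {
        int channels = frame->ch_layout.nb_channels;
        int16_t peak = 0;
        for (int ch = 0; ch < channels; ch++) {
            const int16_t *samples = (const int16_t *)frame->data[ch];
            for (int i = 0; i < frame->nb_samples; i++)
                if (samples[i] > peak)
                    peak = samples[i];   /* each entry is one PCM sample */
        }
        return peak;
    }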
I am working on a project in which I have to merge two 8-bit .wav files using C, and I still have no clue how to do it.
I have read about wav files and I want to start by reading one of the files.
There's one thing I didn't understand:
Let's say I have an 8-bit WAV audio file, and I was able to read (even though I am still trying to) the data that starts after the 44th byte; I will get numbers between 0 and 255, logically.
My question is:
What do those numbers mean?
If I get 255 or 0, what do they mean?
Are they samples from the wave?
Can anyone please explain?
Thanks in advance
Assuming we're not dealing with file format issues, getting values between 0 and 255 means that the audio samples are in unsigned eight-bit format, as you have put it.
One way of merging the data would consist of reading the data from the files into buffers, arrays a and b, and summing them value by value: c[i] = a[i] + b[i]. In doing so, you'd have to take care of the following:
the lengths of the files may not be equal
summing unsigned 8-bit buffers such as yours will almost certainly overflow
This is usually achieved using a for loop. You first get the sizes of the data chunks. Your for loop has to be written in such a way that it neither reads past an array boundary nor ignores what can be read. To prevent overflow you can either:
divide the values by two on reading
or
read (convert) into a format which won't overflow, then normalize and convert the merged data back into the original format, or whichever format is desired (see the sketch below).
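A minimal sketch of the second option, assuming unsigned 8-bit samples with the usual 128 zero level (buffer names and lengths are hypothetical; c must hold max(len_a, len_b) samples):

    #include <stddef.h>
    #include <stdint.h>

    /* Widen to int, mix around the 128 midpoint, clamp, write back. */
    void mix_u8(const uint8_t *a, size_t len_a,
                const uint8_t *b, size_t len_b,
                uint8_t *c)
    {
        size_t longer = len_a > len_b ? len_a : len_b;
        for (size_t i = 0; i < longer; i++) {
            /* Treat a missing sample as silence (128). */
            int sa = i < len_a ? a[i] : 128;
            int sb = i < len_b ? b[i] : 128;
            int mixed = sa + sb - 128;   /* sum relative to the midpoint */
            if (mixed < 0)   mixed = 0;
            if (mixed > 255) mixed = 255;
            c[i] = (uint8_t)mixed;
        }
    }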
For all the particulars of reading from and writing to a .wav file you may use one of the existing audio file libraries, or write your own routine. Dealing with an audio file format is not a trivial thing, though. Here's a reference on the .wav format.
Here are a few audio file APIs worth looking at:
libsndfile
sndlib
Hope this can help.
See any good guide to WAVE for information on the format of samples in the data chunk, such as this one I found: http://www.neurophys.wisc.edu/auditory/riff-format.txt
Relevant excerpts:
In a single-channel WAVE file, samples are stored consecutively. For stereo WAVE files, channel 0 represents the left channel, and channel 1 represents the right channel. The speaker position mapping for more than two channels is currently undefined. In multiple-channel WAVE files, samples are interleaved.
Data Format of the Samples
Each sample is contained in an integer i. The size of i is the smallest number of bytes required to contain the specified sample size. The least significant byte is stored first. The bits that represent the sample amplitude are stored in the most significant bits of i, and the remaining bits are set to zero.
For example, if the sample size (recorded in nBitsPerSample) is 12 bits, then each sample is stored in a two-byte integer. The least significant four bits of the first (least significant) byte are set to zero.
The data format and maximum and minimum values for PCM waveform samples of various sizes are as follows:

Sample Size        Data Format       Maximum Value                Minimum Value
One to eight bits  Unsigned integer  255 (0xFF)                   0
Nine or more bits  Signed integer i  Largest positive value of i  Most negative value of i
N.B.: Even if the file has >8 bits of audio resolution, you should read the file as an array of unsigned char and reconstitute the larger samples manually, as per the above spec. Don't try to do anything like reading the samples directly over an array of native C ints, as their layout and size are platform-dependent and therefore should not be relied upon in any code.
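For example, a minimal sketch of reconstituting little-endian 16-bit samples from raw bytes, per the spec excerpt above (names are hypothetical; out must hold nbytes / 2 samples):

    #include <stddef.h>
    #include <stdint.h>

    void bytes_to_s16(const unsigned char *raw, size_t nbytes, int16_t *out)
    {
        for (size_t i = 0; i + 1 < nbytes; i += 2)
            /* Least significant byte first, per the WAVE spec. */
            out[i / 2] = (int16_t)(raw[i] | (raw[i + 1] << 8));
    }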
Note also that the header is not guaranteed to be 44 bytes long: How can I detect whether a WAV file has a 44 or 46-byte header? You need to read the chunk lengths and process the header based on those, not on any assumption.
I've read through numerous articles on GIF LZW decompression, but I'm still confused as to how it works, or how to handle the more fiddly parts in code.
As I understand it, when I get to the byte stream in the GIF for the LZW compressed data, the stream tells me:
Minimum code size, AKA the number of bits the first codes start off with.
Now, as I understand it, I have to either add one to this for the clear code, or add two for the clear code and the EOI code. But I'm confused as to which of these it is.
So say I have 3 colour codes (01, 10, 11) with the EOI code assumed (as 00): will the codes that follow the minimum code size (of 2) be 2 bits, or will they be 3 bits to factor in the clear code? Or are the clear code and EOI code both already factored into the minimum size?
The second question is: what is the easiest way to read dynamically sized groups of bits from a file? Reading odd numbers of bits (3 bits, 12 bits, etc.) out of a stream of 8-bit bytes sounds like it could get messy and buggy.
To start with your second question: yes, you have to read the dynamically sized codes from an 8-bit byte stream. You have to keep track of the size you are reading and of the number of unused bits left over from previous read operations (so you know where to splice in the next byte from the file).
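A minimal sketch of such a reader, assuming GIF's least-significant-bit-first packing (all names here are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const uint8_t *data;   /* the compressed byte stream */
        size_t len;            /* bytes available */
        size_t pos;            /* next byte to consume */
        uint32_t buffer;       /* bits read but not yet handed out */
        int bits_in_buffer;    /* how many bits in buffer are valid */
    } BitReader;

    /* Returns the next code of width nbits, or -1 at end of stream. */
    int read_code(BitReader *br, int nbits)
    {
        while (br->bits_in_buffer < nbits) {
            if (br->pos >= br->len)
                return -1;
            br->buffer |= (uint32_t)br->data[br->pos++] << br->bits_in_buffer;
            br->bits_in_buffer += 8;
        }
        int code = br->buffer & ((1u << nbits) - 1);
        br->buffer >>= nbits;
        br->bits_in_buffer -= nbits;
        return code;
    }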
IIRC, for an 8-bit image the minimum code size is 8 bits, which would give you a clear code of 256 (base 10) and an End Of Input of 257. The first dictionary code is then 258.
I am not sure why you have not looked up the source of one of the public domain graphics libraries. I know I did not, because back in 1989 (!) there were no libraries to use and no internet with complete descriptions. I had to implement a decoder from an example executable (for MS-DOS, from CompuServe) that could display images, plus a few GIF files, so I know it can be done (but it is not the most efficient way of spending your time).
I'm implementing a version of LZW. Let's say I start off with 10-bit codes and increase the width whenever I max out on codes. For example, after 1024 codes, I'll need 11 bits to represent the 1025th (code value 1024). The issue is expressing the shift.
How do I tell the decoder that I've changed the code size? I thought about using 00, but the program can't distinguish between 00 as an increment signal and 00 as just two instances of code zero.
Any suggestions?
You don't. You shift to a new size when the dictionary is full. The decoder's dictionary is built synchronized with the encoder's dictionary, so they'll both be full at the same time, and the decoder will shift to the new size exactly when the encoder does.
The time you have to send a code to signal a change is when you've filled the dictionary completely -- you've used all of the largest codes available. In this case, you generally want to continue using the dictionary until/unless the compression rate starts to drop, then clear the dictionary and start over. You do need to put some marker in to tell when that happens. Typically, you reserve the single largest code for this purpose, but any code you don't use for any other purpose will work.
Edit: as an aside, note that you normally want to start with codes exactly one bit larger than the codes for the input, so if you're compressing 8-bit bytes, you want to start with 9 bit codes.
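A rough sketch of the synchronized width change described above, assuming 8-bit input and a GIF-style 12-bit ceiling (names are hypothetical). Encoder and decoder both run this after adding a dictionary entry, so the width grows in lock step with no signal in the stream:

    static int next_code  = 258;  /* first slot after clear (256) and EOI (257) */
    static int code_width = 9;    /* one bit wider than the 8-bit input */

    static void on_dictionary_insert(void)
    {
        next_code++;
        /* Once the next code to assign no longer fits, widen. */
        if (next_code == (1 << code_width) && code_width < 12)
            code_width++;
    }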
This is part of the LZW algorithm.
When decompressing you automatically build up the code dictionary again. When a new code exactly fills the current number of bits, the code size has to be increased.
For the details see Wikipedia.
You increase the number of bits when you create the code for 2^n - 1. So when you create code 1023, increase the bit size immediately. You can get a better description from the GIF compression scheme. Note that this was a patented scheme (which partly caused the creation of PNG); the patents have since expired (in 2003 in the US).
Since the decoder builds the same table as the compressor, its table is full on reaching the last element (1023 in your example), and as a consequence the decoder knows that the next element will be 11 bits.
I have a situation where there is a corrupt WAV file from which I'm trying to recover data.
My colleagues have sliced up the large WAV file into smaller WAV files with proper headers. This has produced some interesting results.
Sliced into 1MB segments we get these results:
The first wave file segment is all noise.
The second wave file segment is distorted.
The third wave file segment is clear.
This pattern is repeated for the entire length of the file (after it's been broken into smaller files).
For 20MB slices:
The first wave file segment is all noise.
The second wave file segment is clear.
The third wave file segment is distorted.
Again, this pattern is repeated for the entire length of the file (after it's been broken into smaller files).
Would anyone know why this is occurring?
Assuming the WAV contains uncompressed (raw) samples, recovery should be easy. You need to know the sample format, for example: 16 bits, two channels, 44100 Hz (which is CD quality). Because one of the segments is okay, you can examine it to figure out what the right values are.
Then just open the WAV using these values in, e.g., Adobe Audition (formerly Cool Edit), or any other wave editor that supports import of raw data.
Edit: Okay, now to answer your question. Some segments are clear because there the alignment happens to be right. Take CD quality again, as I described before. The bytes of one sample frame look like this (WAV stores the least significant byte first, as the spec excerpt above notes):
left_channel_low | left_channel_high | right_channel_low | right_channel_high
So the first data byte of a slice had better be the least significant byte of the left channel, or else you'll end up with fragments of two sample frames being interpreted as one whole frame:
left_channel_high | right_channel_low | right_channel_high || left_channel_low
-----------------part of first sample------------------- || --second sample--
You can see that everything "shifted" here, which happens because the size of your file slices is not a multiple of the sample frame size in bytes.
If you're lucky, this just causes the channels to be swapped. If you're unlucky, high and low bytes get swapped. Interestingly, this does lead to kind-of recognizable, but severely distorted audio.
What puzzles me is that the pattern you report repeats in blocks of three. From the above, I'd expect either two or four. Perhaps you are using an unusual sample format, such as 24 bits (3 bytes) per sample?
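For what it's worth, a quick check of that guess: with a 3-byte sample frame, successive 1 MB slices start at byte offsets that cycle through 0, 1, 2 modulo the frame size, which would repeat in blocks of three exactly as reported.

    #include <stdio.h>

    int main(void)
    {
        const long slice = 1024L * 1024L;   /* 1 MB slices */
        const int frame = 3;                /* bytes per 24-bit sample */
        for (int i = 0; i < 6; i++)
            /* Offset within a frame at which slice i begins:
               0 = aligned (clear), 1 or 2 = shifted (noise/distorted). */
            printf("slice %d starts %ld bytes into a frame\n",
                   i, (i * slice) % frame);
        return 0;
    }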