I have a problem with decompressing some gzip data. I have an array with pointers to dynamically allocated char strings. Each element of this array is one part of the gzip file that I want to uncompress.
The first thing that comes to my mind is to concatenate those strings into one and then decompress the data, but I want to avoid this method because of all the copying involved.
So the question is: is there any way to decompress data divided into several parts, using the zlib library? I was trying to do it, but when I decompress the first part I get Z_DATA_ERROR - and that's normal, because the data is not complete. Is there any way to "wait" for the rest of the data to decompress?
Yes. You can simply call inflate() successively with each of the strings in the appropriate order. For each call of inflate(), you can provide a different pointer and length for the compressed data. Each time, make sure that you first consume all of the uncompressed data generated and that avail_in is zero before moving on to the next chunk of input.
If you are getting a Z_DATA_ERROR that means that either you are not reassembling the original stream correctly, or that the original stream is not a gzip stream.
Note that to decompress a gzip stream, you need to initialize with inflateInit2() and set the parameters appropriately to request gzip decompression.
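For illustration, here is a minimal sketch of that pattern, assuming the parts are held in an array of pointers with a matching array of lengths (those names, the buffer size, and writing the result to stdout are all just placeholders, not anything from the question). The windowBits value of 15 + 16 passed to inflateInit2() is what requests gzip decoding:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    /* Decompress a gzip stream whose compressed bytes are split across
       several separately allocated chunks, writing the output to stdout. */
    int inflate_parts(unsigned char **parts, size_t *lengths, int nparts)
    {
        unsigned char out[16384];
        z_stream strm;
        int ret = Z_OK;

        memset(&strm, 0, sizeof(strm));            /* zalloc/zfree/opaque = Z_NULL */
        if (inflateInit2(&strm, 15 + 16) != Z_OK)  /* 15 window bits + 16 = gzip */
            return Z_MEM_ERROR;

        for (int i = 0; i < nparts && ret != Z_STREAM_END; i++) {
            strm.next_in = parts[i];
            strm.avail_in = (uInt)lengths[i];

            /* Keep calling inflate() until this chunk is fully consumed, i.e.
               until it no longer fills the entire output buffer. */
            do {
                strm.next_out = out;
                strm.avail_out = sizeof(out);
                ret = inflate(&strm, Z_NO_FLUSH);
                if (ret == Z_NEED_DICT || ret == Z_DATA_ERROR || ret == Z_MEM_ERROR) {
                    inflateEnd(&strm);
                    return ret;
                }
                fwrite(out, 1, sizeof(out) - strm.avail_out, stdout);
            } while (strm.avail_out == 0);
        }

        inflateEnd(&strm);
        return ret == Z_STREAM_END ? Z_OK : Z_DATA_ERROR;  /* incomplete otherwise */
    }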
Do popular compressors such as gzip, 7z, or others using deflate, detect random data strings and skip attempting to compress said strings for sake of speed?
If so, can I switch off this setting?
Otherwise, how can I implement deflate to attempt to compress a random data string?
I've found zlib's deflate, and it does not mention the word "random" in the source; however, I'm concerned that, higher up in zlib, it detects a random block of bits/bytes and skips over it, overriding deflate.
How can I be sure that a compressor, such as zlib, attempts to compress a block of random data?
Can you give an example command-line expression or code?
Unless you request level 0 (no compression), zlib always attempts to compress the data. For every deflate block, it compares the size of that block using dynamic codes, static codes, and stored (no compression), and emits the smallest of the three. That is all.
There is no detection of "random" data, even if such a thing were possible. (It's not possible, of course. For example, encrypted data is definitely not random, but is indistinguishable from random data if you don't know how to decrypt it.)
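If you want to see that for yourself, a small test program along these lines (a sketch, not something from zlib's own examples) compresses a buffer of pseudo-random bytes with compress2() at the highest level and prints the sizes; the output comes out slightly larger than the input precisely because deflate tried, found the stored blocks smallest, and added a few bytes of framing:

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    int main(void)
    {
        const uLong len = 65536;
        unsigned char *in = malloc(len);
        uLong bound = compressBound(len);          /* worst-case output size */
        unsigned char *out = malloc(bound);
        uLongf outlen = bound;

        if (in == NULL || out == NULL)
            return 1;

        srand(1);                                  /* effectively incompressible input */
        for (uLong i = 0; i < len; i++)
            in[i] = (unsigned char)rand();

        if (compress2(out, &outlen, in, len, Z_BEST_COMPRESSION) != Z_OK)
            return 1;

        printf("in: %lu bytes, out: %lu bytes\n",
               (unsigned long)len, (unsigned long)outlen);

        free(in);
        free(out);
        return 0;
    }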
I am trying to use encoding/gob to store data to a file and load it later. I want to be able to append new data to the file and load all saved data later, e.g. after restarting my application. While storing to the file using Encode() there are no problems, but when reading it seems I always get only the item which was stored first, not the subsequently stored items.
Here is a minimal example: https://play.golang.org/p/patGkKDLhM
As you can see, it works to write two times to an encoder and then read it back. But when closing the file and reopening it in append mode, writing seems to work, yet reading works only for the first two elements (which were written previously). The two newly added structs cannot be retrieved; I get the error:
panic: extra data in buffer
I am aware of Append to golang gob in a file on disk and I also read https://groups.google.com/forum/#!topic/golang-nuts/bn6vjC5Abd8
Finally, I also found https://gist.github.com/kjk/8015952 which seems to demonstrate that what I am trying to do does not work. Why? What does this error mean?
I have not used the encoding/gob package yet (looks cool, I might have to find a project for it). But reading the godoc, it would seem to me that each encoding is a single record expected to be decoded from beginning to end. That is, once you Encode a stream, the resulting bytes are a complete set representing the entire stream from start to finish - not able to be appended to later by encoding again.
The godoc states that an encoded gob is self-descriptive. At the beginning of the encoded stream, it describes the entire data set - structs, types, etc. - that will be following, including the field names. Then what follows in the byte stream is the size and byte representation of the value of those Exported fields.
One could then assume that what is omitted from the docs is this: since the stream describes itself at the very beginning, including each field that is about to be passed, that is all the Decoder will care about. The Decoder will not know of any successive bytes added after what was described, as it only sees what was described at the beginning. Therefore, the error message panic: extra data in buffer is accurate.
In your Playground example, you are encoding twice to the same encoder instance and then closing the file. Since you are passing in exactly two records and encoding two records, that may work, because the single encoder instance sees the two Encode calls as one encoded stream. Then, when you close the file's io stream, the gob is complete - and the stream is treated as a single record (even though you sent in two types).
The same goes for the decoding function: you are reading X number of times from the same stream, but what you wrote when closing the file was a single record that happens to contain two types. Hence it works when reading 2, and EXACTLY 2, but fails if reading more than 2.
A solution, if you want to store this in a single file, is to create your own index of each complete "write" or encoder instance/session - some form of your own block method that lets you wrap or delimit each entry written to disk with a "begin" and "end" marker. That way, when reading the file back, you know exactly what buffer to allocate because of the begin/end markers. Once you have a single record in a buffer, you use gob's Decoder to decode it. And close the file after each write.
The pattern I use for such markers is something like:
uint64:uint64
uint64:uint64
...
The first is the starting byte offset, and the second, separated by a colon, is its length. I usually store this in another file, though, appropriately called indexes. That way it can be quickly read into memory, and then I can stream the large file knowing exactly where each start and end address is in the byte stream.
Another option is just to store each gob in its own file, using the file system directory structure to organize as you see fit (or one could even use the directories to define types, for example). Then the existence of each file is a single record. This is how I use my rendered json from Event Sourcing techniques, storing millions of files organized in directories.
In summary, it would seem to me that a gob of data is a complete set of data from beginning to end - a single "record", if you will. If you want to store multiple encodings/multiple gobs, then you will need to create your own index to track the start and size/end of each gob's bytes as you store them. Then, you will want to Decode each entry separately.
I'm trying to figure out if there's a way to calculate a minimum required size for an output buffer, based on the size of the input buffer.
This question is similar to zlib, deflate: How much memory to allocate?, but not the same. I am asking about each chunk in isolation, rather than the entire stream.
So suppose we have two buffers: INPUT and OUTPUT, and we have a BUFFER_SIZE, which is, say, 4096 bytes. (Just a convenient number; no particular reason I chose this size.)
If I deflate using:
deflate(stream, Z_PARTIAL_FLUSH)
so that each chunk is compressed, and immediately flushed to the output buffer, is there a way I can guarantee I'll have enough storage in the output buffer without needing to reallocate?
Superficially, we'd assume that the DEFLATED data will always be smaller than the uncompressed input data (assuming we use a compression level that is greater than 0.)
Of course, that's not always the case - especially for small values. For example, if we deflate a single byte, the deflated data will obviously be larger than the uncompressed data, due to the overhead of things like headers and dictionaries in the LZW stream.
Thinking about how LZW works, it would seem that if our input data is at least 256 bytes (meaning that, in the worst-case scenario, every single byte is different and we can't really compress anything), we should realize that an input size of LESS than 256 bytes + zlib headers could potentially require a LARGER output buffer.
But, generally, real-world applications aren't going to be compressing small sizes like that. So assuming an input/output buffer of something more like 4K, is there some way to GUARANTEE that the output compressed data will be SMALLER than the input data?
(Also, I know about deflateBound, but would rather avoid it because of the overhead.)
Or, to put it another way, is there some minimum buffer size that I can use for input/output buffers that will guarantee that the output data (the compressed stream) will be smaller than the input data? Or is there always some pathological case that can cause the output stream to be larger than the input stream, regardless of size?
Though I can't quite make heads or tails out of your question, I can comment on parts of the question in isolation.
is there some way to GUARANTEE that the output compressed data will be SMALLER than the input data?
Absolutely not. It will always be possible for the compressed output to be larger than some input. Otherwise you wouldn't be able to compress other input.
(Also, I know about deflateBound, but would rather avoid it because of the overhead.)
Almost no overhead. We're talking a fraction of a percent larger than the input buffer for reasonable sizes.
By the way, deflateBound() provides a bound on the size of the entire output stream as a function of the size of the entire input stream. It can't help you when you are in the middle of a bunch of deflate() calls with incomplete input and insufficient output space. For example, you may still have deflate output pending and delivered by the next deflate() call, without providing any new input at all. Then the expansion ratio is infinite for that isolated call.
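For the case where you do have the entire input in memory up front, the usual pattern is a single deflateBound() allocation followed by one deflate(..., Z_FINISH) call. A minimal sketch (the function name and the bare-bones error handling are just illustrative):

    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    /* Compress src in one shot into a buffer sized with deflateBound(),
       so a single deflate(..., Z_FINISH) call is guaranteed to finish. */
    unsigned char *deflate_whole(const unsigned char *src, size_t srclen, size_t *outlen)
    {
        z_stream strm;
        memset(&strm, 0, sizeof(strm));
        if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
            return NULL;

        uLong bound = deflateBound(&strm, (uLong)srclen);  /* input + a tiny fraction */
        unsigned char *out = malloc(bound);
        if (out == NULL) {
            deflateEnd(&strm);
            return NULL;
        }

        strm.next_in = (Bytef *)src;        /* cast drops const for older zlib headers */
        strm.avail_in = (uInt)srclen;
        strm.next_out = out;
        strm.avail_out = (uInt)bound;

        int ret = deflate(&strm, Z_FINISH); /* must return Z_STREAM_END here */
        deflateEnd(&strm);
        if (ret != Z_STREAM_END) {
            free(out);
            return NULL;
        }
        *outlen = bound - strm.avail_out;
        return out;
    }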
due to the overhead of things like headers and dictionaries in the LZW stream.
deflate is not LZW. The approach it uses is called LZ77. It is very different from LZW, which is now obsolete. There are no "dictionaries" stored in compressed deflate data. The "dictionary" is simply the uncompressed data that precedes the data currently being compressed or decompressed.
Or, to put it another way, is there some minimum buffer size that I can use for input/output buffers ...
The whole idea behind the zlib interface is for you to not have to worry about what will fit in the buffers. You just keep calling deflate() or inflate() with more input data and more output space until you're done, and all will be well. It does not matter if you need to make more than one call to consume one buffer of input, or more than one call to fill one buffer of output. You just have loops to make more calls, provide more input when needed, and disposition the output when needed and provide fresh output space.
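That loop looks roughly like the following sketch, modeled on zlib's zpipe.c example (the 4096-byte buffer size is arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    #define CHUNK 4096

    /* Compress from in to out using fixed-size buffers; neither buffer ever
       needs to be big enough to hold the whole stream at once. */
    int def(FILE *in, FILE *out, int level)
    {
        unsigned char inbuf[CHUNK], outbuf[CHUNK];
        z_stream strm;
        int flush, ret;

        memset(&strm, 0, sizeof(strm));
        if (deflateInit(&strm, level) != Z_OK)
            return Z_MEM_ERROR;

        do {
            strm.avail_in = (uInt)fread(inbuf, 1, CHUNK, in);   /* provide more input */
            strm.next_in = inbuf;
            flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;

            do {                                                /* consume the output */
                strm.next_out = outbuf;
                strm.avail_out = CHUNK;
                ret = deflate(&strm, flush);                    /* no bad return here */
                fwrite(outbuf, 1, CHUNK - strm.avail_out, out);
            } while (strm.avail_out == 0);
            /* all of this input chunk has been consumed at this point */
        } while (flush != Z_FINISH);

        deflateEnd(&strm);
        return ret == Z_STREAM_END ? Z_OK : Z_STREAM_ERROR;
    }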
Information theory dictates that there must always be pathological cases which "compress" to something larger.
This page starts off with the worst-case encoding sizes for zlib: it looks like the worst-case growth is 6 bytes, plus 5 bytes for every 16 KB block started. So if you always flush after less than 16 KB, a buffer of 11 bytes plus your flush interval should be safe.
Unless you have tight control over the type of data you're compressing, finding pathological cases isn't hard. Any random number generator will find you some pretty quickly.
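Taking those figures at face value (6 bytes plus 5 bytes per started 16 KB block; note this is just the arithmetic from the answer above, not deflateBound()), a per-flush output buffer could be sized like this:

    #include <stddef.h>

    /* Worst-case compressed size for one flush interval of n input bytes,
       using the 6-byte + 5-bytes-per-started-16KB-block figures above.
       Example: flushing every 4096 bytes gives 4096 + 5*1 + 6 = 4107. */
    static size_t worst_case_flush(size_t n)
    {
        size_t started_blocks = (n + 16383) / 16384;
        return n + 5 * started_blocks + 6;
    }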
This is a question specific to the DEFLATE algorithm, but it relates to gzip and zlib.
Suppose I have a gzip file that I know has several flush points in it. Some of them are made with Z_SYNC_FLUSH and others with Z_FULL_FLUSH. If I scan through the file, I can find all the flush points because they immediately follow the pattern 0000ffff.
I know that I can resume decompression at Z_FULL_FLUSH points because all the information needed to decompress is available (i.e. the dictionary is reset). However, if I try to decompress from a Z_SYNC_FLUSH point, I usually get a "zlib.error: Error -3 while decompressing: invalid distance too far back" error.
The question is this: If I try to decompress from a Z_SYNC_FLUSH point, am I guaranteed to either:
Properly decompress that block and subsequent blocks
Fail with "distance too far" error
In other words, am I guaranteed that I will never silently decompress with bad data (I'm not talking about the CRC32 check at the end of the gzip, but whether zlib will loudly complain)?
Assumptions:
Assume that I am able to identify flush points perfectly. Let's pretend that I don't mis-identify random bits as the sync marker, and that the pattern doesn't just happen to appear inside a type 0 block. This is unrealistic, but just assume it's true.
Assume the file is never corrupted and is always a legitimate gzip file.
If a Z_SYNC_FLUSH results in a subsequent stream that does not give a distance-too-far error, then it is, by accident, equivalent to and indistinguishable from a Z_FULL_FLUSH.
I would not expect this to happen very often.
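One way to probe a candidate sync point is to attempt a raw inflate (negative windowBits, since the bytes after the marker are bare deflate data with no gzip header) starting just past the 00 00 ff ff marker and inspect the result. A sketch, assuming the offset has already been located per the question's assumptions:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    /* Try to decompress raw deflate data starting at a presumed sync point.
       This either produces output or reports Z_DATA_ERROR ("invalid distance
       too far back") when the data refers to history before the resume point. */
    int try_resume(unsigned char *data, size_t len, size_t sync_offset)
    {
        unsigned char out[65536];
        z_stream strm;
        int ret;

        memset(&strm, 0, sizeof(strm));
        if (inflateInit2(&strm, -15) != Z_OK)   /* -15 = raw deflate, no wrapper */
            return Z_MEM_ERROR;

        strm.next_in = data + sync_offset;
        strm.avail_in = (uInt)(len - sync_offset);

        do {
            strm.next_out = out;
            strm.avail_out = sizeof(out);
            ret = inflate(&strm, Z_NO_FLUSH);
            if (ret == Z_DATA_ERROR) {
                fprintf(stderr, "resume failed: %s\n", strm.msg);
                break;
            }
            /* ...hand out[0 .. sizeof(out) - strm.avail_out] to the consumer... */
        } while (strm.avail_out == 0 && ret == Z_OK);

        inflateEnd(&strm);
        return ret;
    }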
If you decompress data with zlib that isn't compressed, does anything happen?
If it does in fact change the data, how do you check whether data is zlib-compressed in the first place?
There would need to be a valid header. It is extremely unlikely that data would have one unless it actually is an accurately structured (compressed) stream, so it would simply be treated as invalid data when you try to inflate it.
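For example, handing uncompress() a buffer of ordinary text simply returns an error instead of producing anything (a sketch; the buffer contents are arbitrary):

    #include <stdio.h>
    #include <zlib.h>

    int main(void)
    {
        /* Plain text, not a zlib stream: there is no valid two-byte zlib header. */
        const unsigned char notzlib[] = "just some ordinary uncompressed bytes";
        unsigned char out[256];
        uLongf outlen = sizeof(out);

        int ret = uncompress(out, &outlen, notzlib, sizeof(notzlib));
        if (ret == Z_DATA_ERROR)
            printf("not zlib data (Z_DATA_ERROR) - nothing was decompressed\n");
        else
            printf("uncompress returned %d\n", ret);
        return 0;
    }

A cheap up-front check is that a zlib stream's first byte has 8 in its low four bits (0x78 is the most common first byte) and that its first two bytes, read as a big-endian 16-bit number, are divisible by 31; anything that fails that test will be rejected with Z_DATA_ERROR anyway.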