In what situation would compressed data be larger than input?

I need to handle compression of data that's largely UTF-8 HTML content in a utility I'm working on. The utility uses zlib and the deflate algorithm to compress data. Is it safe to assume that if the input data size is over 1 kB, the compressed data will always be smaller than the uncompressed input? (Input data below 1 kB will not be compressed.)
I'm trying to think of situations where this assumption would break, but apart from near-perfectly random input, it seems safe to me.
Edit: the reason I'm wondering about this assumption is because I already have a buffer allocated that's as big as the input data. If my assumption holds, I can reuse this same buffer and avoid another memory allocation.

No. You can never assume that the compressed data will always be smaller. In fact, if any sequence is compressed by the algorithm, then you are guaranteed that some other sequence is expanded.
You can use zlib's deflate() function to compress as much as it can into your 1K buffer. Do whatever you need to with that result, then continue with another deflate() call writing into that same buffer.
Alternatively you can allocate a buffer big enough for the largest expansion. The deflateBound() or compressBound() functions will tell you how much that is. It's only a small amount more.
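A minimal sketch of that second option (the helper name and variables here are placeholders, not part of zlib):

    #include <stdlib.h>
    #include <zlib.h>

    /* Sketch: allocate a buffer sized by compressBound() and compress in one
     * shot; returns the compressed length, or 0 on failure. */
    uLong compress_whole(const Bytef *src, uLong srcLen, Bytef **outp) {
        uLongf outLen = compressBound(srcLen);  /* worst-case compressed size */
        Bytef *out = malloc(outLen);
        if (out == NULL || compress(out, &outLen, src, srcLen) != Z_OK) {
            free(out);
            return 0;
        }
        *outp = out;
        return outLen;                          /* actual compressed size <= bound */
    }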

As far as I know, a sequence of 128 bytes with values 0, 1, 2, ..., 127 will not be compressed by zlib. Technically, it's possible to intentionally craft an HTML page that will break your compression scheme, but with normal, innocent HTML data you should be almost perfectly safe.
But almost perfectly is not perfectly. If you already have a buffer of that size, I'd advise attempting the compression with this buffer, and if it turns out that the buffer is not enough (I suppose zlib has a means of indicating that), then allocate a larger buffer or simply store an uncompressed version. And make sure you write these cases into some log so you can see if it ever fires :)
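One hedged sketch of that fallback, using zlib's one-shot compress2(), which reports Z_BUF_ERROR when the output buffer is too small (the function name and the logging call are placeholders):

    #include <string.h>
    #include <zlib.h>

    /* Sketch: try to compress into a buffer the same size as the input; if it
     * doesn't fit, fall back to storing the data uncompressed. */
    int compress_or_store(const Bytef *src, uLong srcLen, Bytef *dst, uLongf *dstLen) {
        *dstLen = srcLen;                       /* dst is assumed to be srcLen bytes */
        int rc = compress2(dst, dstLen, src, srcLen, Z_DEFAULT_COMPRESSION);
        if (rc == Z_OK)
            return 1;                           /* compressed copy fits */
        if (rc == Z_BUF_ERROR) {
            /* log_expansion(srcLen); */        /* rare, but worth recording */
            memcpy(dst, src, srcLen);
            *dstLen = srcLen;
            return 0;                           /* stored uncompressed */
        }
        return -1;                              /* some other zlib error */
    }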

Related

How to decide a buffer's size

I have a program whose purpose is to read from some input text file and filter all the printable chars (i.e., ASCII between 32 and 126) into some other output text file.
I also get an argument "DataAmount", which is the amount of data I need to read. It may be 1B, 1K, 1M, 1G, 80000B, etc. (any natural number can precede the unit).
It is NOT the size of the input file; it is how much I need to read from the input file. If the input file is smaller than DataAmount, I need to re-read the file until I have read exactly DataAmount bytes.
For the filtering, I read from the input file into a buffer, filter the printable chars from that buffer into another buffer, and write from that second buffer to the output file (both buffers are the same size).
The question is, how can I decide what size is best for those two buffers, so that the number of calls to read() and write() is minimal?
(NOTE: I won't write all the data in one go since it may be too big, and I won't write one byte at a time. I write from the output buffer to the output file only when that buffer is full.)
At the moment, I choose the buffer size depending only on the unit:
If it's B or K, the size will be 1024.
If it's M or G, the size will be 4096.
This is not good at all, since for 1B and 100000B I'll have the same buffer size.
How can I improve this?
My personal experience is that the buffer size does not matter much as long as you are using a few kilobytes.
As you noted in your question, there is overhead in doing system calls, so doing I/O one character at a time is not terribly efficient, and you can cut that overhead down by reading and writing larger blocks. However, there are other things that take time, and any reasonable amount of buffering will drop your system call overhead down to the point where it is those other things that are taking most of the time. At that point larger buffers do not make the program significantly faster. There are also problems with making a buffer too large, so you can err in that direction too.
I would not make the buffer size dynamic as you are doing. It introduces needless complexity into the program. You can verify that by running your program with different buffer sizes and seeing what kind of difference it makes.
As for the actual value to use, the stdio.h header file defines the macro BUFSIZ which is the default size for stdio buffers. That macro is a reasonable size to use.
Also note that if you are using the stdio functions to do your I/O, they already provide buffering (if you're not calling the system calls read() and write() directly, you're using stdio.) There isn't really a reason to buffer the data twice, so you can either do the I/O one character at a time and let the stdio buffers take care of it for you, or disable the stdio buffering with setvbuf().
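As a rough sketch of that first option (letting stdio's own BUFSIZ buffering do the work and filtering one character at a time; the file names are placeholders):

    #include <stdio.h>

    int main(void) {
        FILE *in  = fopen("input.txt", "r");
        FILE *out = fopen("output.txt", "w");
        int c;
        if (!in || !out)
            return 1;
        /* stdio buffers reads and writes internally, so fgetc/fputc do not
         * translate into one read()/write() syscall per character */
        while ((c = fgetc(in)) != EOF)
            if (c >= 32 && c <= 126)
                fputc(c, out);
        fclose(in);
        fclose(out);
        return 0;
    }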
If you know the input in advance, you can do some statistics and take the average, so the size is not fixed but an approximation.
But I recommend this: don't worry too much about the read() and write() syscalls. If you read very little data from the input and your buffer is large, you waste some bytes. If you get a big input and have a small buffer, you only have to do some extra iterations.
A medium size for the buffer would be good. For example, 512.
Once you have identified the unit, decide whether the number in front of it should also affect the buffer size. That is, once you have found the B (for example), check the numeric value, so you don't treat every request with the same unit identically.
You can do a switch statement on the unit indicators and then process within each case based on the numeric value for that unit. As an example, for B you could integer-divide the value by some maximum and set the actual buffer size based on the result (again in a switch/case sequence).
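A sketch of that idea; 'count' and 'unit' are assumed to be already parsed from the DataAmount argument:

    #include <stddef.h>

    /* Sketch: pick a buffer size from the unit, but let the count cap it for
     * small byte requests so 1B and 100000B no longer get the same buffer. */
    size_t pick_bufsize(size_t count, char unit) {
        switch (unit) {
        case 'B': return count < 1024 ? count : 1024;
        case 'K': return 1024;
        case 'M':
        case 'G': return 4096;
        default:  return 4096;
        }
    }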

Zlib minimum deflate size

I'm trying to figure out if there's a way to calculate a minimum required size for an output buffer, based on the size of the input buffer.
This question is similar to zlib, deflate: How much memory to allocate?, but not the same. I am asking about each chunk in isolation, rather than the entire stream.
So suppose we have two buffers: INPUT and OUTPUT, and we have a BUFFER_SIZE, which is - say, 4096 bytes. (Just a convenient number, no particular reason I chose this size.)
If I deflate using:
deflate(stream, Z_PARTIAL_FLUSH)
so that each chunk is compressed, and immediately flushed to the output buffer, is there a way I can guarantee I'll have enough storage in the output buffer without needing to reallocate?
Superficially, we'd assume that the DEFLATED data will always be smaller than the uncompressed input data (assuming we use a compression level that is greater than 0).
Of course, that's not always the case - especially for small values. For example, if we deflate a single byte, the deflated data will obviously be larger than the uncompressed data, due to the overhead of things like headers and dictionaries in the LZW stream.
Thinking about how LZW works, it would seem that as long as our input data is at least 256 bytes (worst case scenario: every single byte is different and we can't really compress anything), we should be fine, whereas an input size of LESS than 256 bytes plus the zlib headers could potentially require a LARGER output buffer.
But, generally, real-world applications aren't going to be compressing small sizes like that. So assuming an input/output buffer of something more like 4K, is there some way to GUARANTEE that the output compressed data will be SMALLER than the input data?
(Also, I know about deflateBound, but would rather avoid it because of the overhead.)
Or, to put it another way, is there some minimum buffer size that I can use for input/output buffers that will guarantee that the output data (the compressed stream) will be smaller than the input data? Or is there always some pathological case that can cause the output stream to be larger than the input stream, regardless of size?
Though I can't quite make heads or tails out of your question, I can comment on parts of the question in isolation.
is there some way to GUARANTEE that the output compressed data will be SMALLER than the input data?
Absolutely not. It will always be possible for the compressed output to be larger than some input. Otherwise you wouldn't be able to compress other input.
(Also, I know about deflateBound, but would rather avoid it because of the overhead.)
Almost no overhead. We're talking a fraction of a percent larger than the input buffer for reasonable sizes.
By the way, deflateBound() provides a bound on the size of the entire output stream as a function of the size of the entire input stream. It can't help you when you are in the middle of a bunch of deflate() calls with incomplete input and insufficient output space. For example, you may still have deflate output pending and delivered by the next deflate() call, without providing any new input at all. Then the expansion ratio is infinite for that isolated call.
due to the overhead of things like headers and dictionaries in the LZW stream.
deflate is not LZW. The approach it uses is called LZ77. It is very different from LZW, which is now obsolete. There are no "dictionaries" stored in compressed deflate data. The "dictionary" is simply the uncompressed data that precedes the data currently being compressed or decompressed.
Or, to put it another way, is there some minimum buffer size that I can use for input/output buffers ...
The whole idea behind the zlib interface is for you to not have to worry about what will fit in the buffers. You just keep calling deflate() or inflate() with more input data and more output space until you're done, and all will be well. It does not matter if you need to make more than one call to consume one buffer of input, or more than one call to fill one buffer of output. You just have loops to make more calls, provide more input when needed, and disposition the output when needed and provide fresh output space.
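A minimal sketch of that loop, modeled on zlib's own zpipe.c example (error handling trimmed to keep it short):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    #define CHUNK 4096

    /* Keep calling deflate() until all input is consumed and all output is
     * delivered, no matter how either maps onto the 4 KB buffers. */
    int deflate_file(FILE *source, FILE *dest, int level) {
        unsigned char in[CHUNK], out[CHUNK];
        z_stream strm;
        int flush;

        memset(&strm, 0, sizeof(strm));
        if (deflateInit(&strm, level) != Z_OK)
            return Z_ERRNO;

        do {
            strm.avail_in = (uInt)fread(in, 1, CHUNK, source);
            flush = feof(source) ? Z_FINISH : Z_NO_FLUSH;
            strm.next_in = in;
            do {
                /* run deflate() until it stops filling the output buffer */
                strm.avail_out = CHUNK;
                strm.next_out = out;
                deflate(&strm, flush);
                fwrite(out, 1, CHUNK - strm.avail_out, dest);
            } while (strm.avail_out == 0);
        } while (flush != Z_FINISH);

        deflateEnd(&strm);
        return Z_OK;
    }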
Information theory dictates that there must always be pathological cases which "compress" to something larger.
This page starts off with the worst case encoding sizes for zlib - looks like the worst case growth is 6 bytes, plus 5 bytes per started 16KB block. So if you always flush after less than 16KB, having buffers which are 11 bytes plus your flush interval should be safe.
Unless you have tight control over the type of data you're compressing, finding pathological cases isn't hard. Any random number generator will find you some pretty quickly.
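A worked example of the bound above, assuming you flush after at most 4 KB of input, so that only one 16 KB block is ever started per flush:

    /* 6 bytes of fixed overhead + 5 bytes for the one started 16 KB block */
    #define FLUSH_INTERVAL 4096
    #define OUT_BUF_SIZE   (FLUSH_INTERVAL + 6 + 5)   /* 4107 bytes */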

How to determine the actual usage of a malloc'ed buffer

I have some compressed binary data and an API call to decompress it which requires a pre-allocated target buffer. There is not any means via the API that tells me the size of the decompressed data. So I can malloc an oversized buffer to decompress into but I would like to then resize (or copy this to) a memory buffer of the correct size. So, how do I (indeed can I) determine the actual size of the decompressed binary data in the oversized buffer?
(I do not control the compression of the data so I do not know in advance what size to expect and I cannot write a header for the file.)
As others have said, there is no good way to do this if your API doesn't provide it.
I almost don't want to suggest this for fear that you'll take this suggestion and have some mission-critical piece of your application depend on it, but...
A heuristic would be to fill your buffer with some 'poison' pattern before decompressing into it. Then, after decompression, scan the buffer for the first occurrence of the poison pattern.
This is a heuristic because it's perfectly conceivable that the decompressed data could just happen to have an occurrence of your poison pattern. Unless you have exact domain knowledge of what the data will be, and can choose a pattern specifically that you know cannot exist.
Even still, an imperfect solution at best.
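A sketch of that heuristic; the poison byte 0xA5 is an arbitrary choice of mine:

    #include <stddef.h>
    #include <string.h>

    #define POISON 0xA5   /* arbitrary fill value */

    /* Fill the buffer before decompressing: memset(buf, POISON, cap);
     * Afterwards, scan back from the end for the last non-poison byte. */
    size_t guess_decompressed_size(const unsigned char *buf, size_t cap) {
        size_t n = cap;
        while (n > 0 && buf[n - 1] == POISON)
            n--;
        return n;   /* may under-count if the real data ends in POISON bytes */
    }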
Usually this information is supplied at compression time (take a look at 7-Zip's LZMA SDK for example).
There is no way to know the actual size of the decompressed data (or the size of the part that is actually in use) with the information you are giving now.
If the decompression step doesn't give you the decompressed size as a return value or "out" parameter in some way, you can't.
There is no way to determine how much data was written in the buffer (outside of debugger/valgrind-type checks).
A complex way to answer this problem is by decompressing twice into an over-sized buffer.
Both times, you need a "random pattern". Starting from the end, you count the number of bytes which correspond to the pattern, and detect the end of the decompressed sequence where the buffer starts to differ.
Or does it? Maybe, by chance, one of the final bytes of the decompressed sequence matches the random byte at that exact position. So the final decompressed size might be larger than the detected one. If your pattern is truly random, it should not be off by more than a few bytes.
You then need to fill the buffer again with a random pattern, but a different one. Ensure that, at each position, the new random pattern has a different value than the old one. For speed, you are not obliged to fill the full buffer: you may limit the new pattern to a few bytes before and some more bytes after the first detected end. 32 bytes should be enough, since it is improbable that so many bytes would correspond by chance to the first random pattern.
Decompress a second time. Detect again where the pattern differs. Take the larger of the two end detections. That is your decompressed size.
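A compact sketch of the two-pass detection; the pattern buffers are assumed to hold your two random fill sequences:

    #include <stddef.h>

    /* Scan back from the end and return the index of the last byte that
     * differs from the fill pattern used before this decompression pass. */
    size_t detect_end(const unsigned char *buf, const unsigned char *pattern, size_t cap) {
        size_t n = cap;
        while (n > 0 && buf[n - 1] == pattern[n - 1])
            n--;
        return n;
    }

    /* Pass 1: fill with pattern A, decompress, end1 = detect_end(buf, A, cap);
     * Pass 2: refill (at least around end1) with pattern B, decompress again,
     *         end2 = detect_end(buf, B, cap);
     * The decompressed size is the larger of end1 and end2. */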
You could check how free() works for your compiler/OS and do the same.
free() doesn't take the size of the malloc'ed data, but it somehow knows how much to free, right? ;)
Usually the size is stored just before the allocated buffer; exactly how many bytes before depends on the OS/arch/compiler.

openssl aes256 encryption of a file

I'd like to encrypt a file with aes256 using OpenSSL with C.
I did find a pretty nice example here.
Should I first read the whole file into a memory buffer and then do the aes256, or should I do it in parts with a ~16K buffer?
Any snippets or hints?
Loading the whole file into a buffer can range from inefficient to impossible for larger files - do this only if all your files are below some size limit.
OpenSSL's EVP API (which is also used by the example you linked) has an EVP_EncryptUpdate function, which can be called multiple times, each time providing some more bytes to encrypt. Use this in a loop together with reading in the plaintext from a file into a buffer, and writing out the ciphertext to another file (or the same one). (Analogously for decryption.)
Of course, instead of inventing a new file format (which you are effectively doing here), think about implementing the OpenPGP Message Format (RFC 4880). There are fewer chances to make mistakes which might destroy your security, and as an added bonus, if your program somehow ceases to work, your users can always use the standard tools (PGP or GnuPG) to decrypt the file.
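A minimal sketch of that EVP_EncryptUpdate loop (error handling omitted; how you obtain and store the key and IV is up to you):

    #include <stdio.h>
    #include <openssl/evp.h>

    #define CHUNK 16384   /* ~16K read buffer, a multiple of the 16-byte AES block */

    int encrypt_file(FILE *in, FILE *out,
                     const unsigned char key[32], const unsigned char iv[16]) {
        unsigned char inbuf[CHUNK], outbuf[CHUNK + EVP_MAX_BLOCK_LENGTH];
        int inlen, outlen;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
        while ((inlen = (int)fread(inbuf, 1, CHUNK, in)) > 0) {
            EVP_EncryptUpdate(ctx, outbuf, &outlen, inbuf, inlen);
            fwrite(outbuf, 1, (size_t)outlen, out);
        }
        EVP_EncryptFinal_ex(ctx, outbuf, &outlen);   /* flush final padded block */
        fwrite(outbuf, 1, (size_t)outlen, out);

        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }

(Decryption is the mirror image with EVP_DecryptInit_ex / EVP_DecryptUpdate / EVP_DecryptFinal_ex.)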
It's better to reuse a fixed buffer, unless you know you'll always process small files - but I don't think that fits your backup files definition.
I said better in a non-cryptographic way :-) There won't be any difference at the end (for the encrypted file) but your computer might not like (or even be able) to load several MB (or GB) into memory.
Crypto-wise, the operations are done in blocks; for AES that's 128 bits (16 bytes). So, for simplicity, you'd better use a multiple of 16 bytes for your buffer. Otherwise the choice is yours. I would suggest something between 4 KB and 16 KB but, to be honest, I would test several values.

LZO Decompression Buffer Size

I am using MiniLZO on a project for some really simple compression tasks. I am compressing with one program, and decompressing with another. I'd like to know how much space to allocate for the decompression buffer. I am fine with over-allocating space, if it can save me the trouble of having to annotate my output file with an integer declaring how much space the decompressed data should take. How would I figure out how much space it could possibly take?
After some consideration, I think this question boils down to the following: What is the maximum compression ratio of lzo1x compression?
Since you control both the compressor and the decompressor, I suggest you compress the input in fixed-sized blocks. In my application I compress up to 64KB in each block, then emit the size of the compressed block and the compressed data itself, so the compressed stream actually looks like a series of compressed blocks:
length_of_block_1
block_1
length_of_block_2
block_2
...
The decompressor just reads each compressed block and decompresses it into a 64KB buffer, since I know the block was produced by compressing a 64KB block.
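A sketch of that framing with miniLZO; the worst-case output size formula is the one from LZO's documentation, so treat it as an assumption to verify against your LZO version:

    #include <stdio.h>
    #include "minilzo.h"          /* assumed to be available in the project */

    #define BLOCK       65536                          /* fixed 64 KB input blocks */
    #define BLOCK_WORST (BLOCK + BLOCK / 16 + 64 + 3)  /* LZO's documented worst case */

    static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                              / sizeof(lzo_align_t)];

    /* Compress one block and emit it as: length_of_block_N, block_N.
     * Call lzo_init() once before using this. */
    int write_block(FILE *out, const unsigned char *src, lzo_uint src_len) {
        static unsigned char dst[BLOCK_WORST];
        lzo_uint dst_len = sizeof(dst);
        unsigned long len;

        if (lzo1x_1_compress(src, src_len, dst, &dst_len, wrkmem) != LZO_E_OK)
            return -1;
        len = (unsigned long)dst_len;
        fwrite(&len, sizeof(len), 1, out);   /* length prefix */
        fwrite(dst, 1, dst_len, out);        /* compressed block */
        return 0;
    }

The reader does the reverse: read the length prefix, read that many bytes, and decompress into a fixed 64 KB buffer.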
Hope that helps,
Eric Melski
The max size of the decompressed data is clearly the same as the max size of the data you compressed in the first place.
If there is an upper bound on your input size then I guess you can use it, but I have to say the usual way of doing this is to add a header to your compressed buffer which specifies the uncompressed size.
