Compressing and decompressing very large files using System.IO.Compression.GZipStream - C#

My problem can be described with the following statements:
I would like my program to be able to compress and decompress selected files
I have very large files (20 GB+). It is safe to assume that such a file will never fit into memory
Even after compression, the compressed file might still not fit into memory
I would like to use System.IO.Compression.GzipStream from .NET Framework
I would like my application to be parallel
As I am a newbie to compression / decompression, I had the following idea on how to do it:
I could split the file into chunks and compress each of them separately, then merge them back into a whole compressed file.
Question 1 about this approach - Is compressing multiple chunks and then merging them back together going to give me the proper result, i.e. if I were to reverse the process (starting from the compressed file back to decompressed), will I receive the same original input?
Question 2 about this approach - Does this approach make sense to you? Perhaps you could direct me towards some good reading on the topic? Unfortunately I could not find anything myself.

You do not need to chunk the compression just to limit memory usage. gzip is designed to be a streaming format, and requires on the order of 256KB of RAM to compress. The size of the data does not matter. The input could be one byte, 20 GB, or 100 PB -- the compression will still only need 256KB of RAM. You just read uncompressed data in, and write compressed data out until done.
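For example, a minimal streaming sketch (the file paths are placeholders): memory use stays constant regardless of input size, since data flows through in small buffers.

    using System.IO;
    using System.IO.Compression;

    // Streams the file through GZipStream in small buffers; memory use stays
    // constant no matter how large the input is.
    using (var input = File.OpenRead("huge.bin"))
    using (var output = File.Create("huge.bin.gz"))
    using (var gzip = new GZipStream(output, CompressionMode.Compress))
    {
        input.CopyTo(gzip);  // copies in ~80 KB buffers by default
    }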
The only reason to chunk the input as you describe is to make use of multiple cores for compression, which is a perfectly good reason for your amount of data. Then you can do exactly what you describe. So long as you combine the output in the correct order, the decompression will then reproduce the original input. You can always concatenate valid gzip streams to make a valid gzip stream. I would recommend that you make the chunks relatively large, e.g. megabytes, so that the compression is not noticeably impacted by the chunking.
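Here is a rough sketch of that chunked, parallel approach (the chunk size, batch size, and file paths are assumptions of the sketch, not prescriptions). Chunks are compressed to independent gzip members in parallel, in bounded batches so memory stays limited, then written out in their original order.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.IO.Compression;
    using System.Linq;

    const int ChunkSize = 16 * 1024 * 1024;   // large chunks keep the ratio penalty negligible
    int batchSize = Environment.ProcessorCount;

    using (var input = File.OpenRead("huge.bin"))
    using (var output = File.Create("huge.bin.gz"))
    {
        while (true)
        {
            // read up to one batch of chunks
            var chunks = new List<byte[]>();
            for (int i = 0; i < batchSize; i++)
            {
                var buf = new byte[ChunkSize];
                int filled = 0, r;
                while (filled < ChunkSize &&
                       (r = input.Read(buf, filled, ChunkSize - filled)) > 0)
                    filled += r;
                if (filled == 0) break;
                Array.Resize(ref buf, filled);
                chunks.Add(buf);
            }
            if (chunks.Count == 0) break;

            // compress each chunk to its own gzip member in parallel, preserving order
            var members = chunks.AsParallel().AsOrdered().Select(chunk =>
            {
                var ms = new MemoryStream();
                using (var gz = new GZipStream(ms, CompressionMode.Compress, leaveOpen: true))
                    gz.Write(chunk, 0, chunk.Length);
                return ms.ToArray();
            }).ToList();

            foreach (var member in members)
                output.Write(member, 0, member.Length);
        }
    }

Because each member is a complete gzip stream, the concatenated output decompresses back to the original file.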
Decompression cannot be chunked in this way, but it is much faster, so there would be little to no benefit even if you could. Decompression is usually I/O bound.

Related

Add data to a zip archive until it reaches a given size

I'm trying to zip some data, but also break up the data set into multiple archives so that no single zip file ends up being larger than some maximum.
Since my data is not sourced from the file system, it seems a good idea to use a streaming approach. I thought I could simply write one atomic piece of data at a time while keeping track of the stream position prior to writing each piece. Once I exceed the limit, I truncate the stream to the position before the piece that can't fit, and move on to create the next archive.
I've attempted this with the classes in System.IO.Compression - create an archive, create an entry, use ZipArchiveEntry.Open to get a stream, and write to that stream. The problem is that it seems impossible to tell from this how large the archive is at any point.
I can read the stream's position, but this is tracking uncompressed bytes. Truncating the stream works fine too, so I have this working as intended now with the important exception that the limit applies to how much uncompressed data there is per archive rather than how large the compressed archive becomes.
The data is part compressible text and various blobs (attachments from end users) that will sometimes be very compressible and sometimes not at all.
My questions:
1) Is there something about the deflate algorithm that inherently conflicts with my approach? I know it is a block-based compression scheme and I imagine the algorithm may not decide how to encode the compressed data until the entire archive has been specified.
2) If the answer to (1) above is "yes", what is a good strategy that doesn't introduce far too much overhead?
One idea I have is to assume that compressed data won't be larger than uncompressed data. I can then write to the stream until uncompressed data exceeds the threshold, and then save the archive, calculate the difference between the threshold and the current size, and repeat until full.
In case that wasn't clear, say the limit is 1 MB. I write 1 MB of uncompressed data and save the archive. I then see that the resulting archive is 0.3 MB. I open the archive (and its only entry) again and start over with a new limit of 0.7 MB, since I know I am able to add at least that much uncompressed data to it without overshooting. I imagine this approach is relatively simple to implement, and will test it, but am interested to hear if anyone has better ideas.
You can find out a lower bound on how big the compressed data is by looking at the Length or Position of the underlying FileStream. You can then decide to stop adding entries. The ZIP stream classes tend not to buffer too much, probably on the order of 64 KB.
Truncating the archive at a certain point should be possible. Try flushing the ZIP stream before measuring the Position of the base stream. This is always possible in theory but the actual library you are using might not support it. Test it or look at the source.
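A sketch of that idea (the entry source, names, and size limit are placeholders): in ZipArchiveMode.Create the archive writes through to the base stream as each entry is closed, so the FileStream position gives a lower bound on the archive size so far.

    using System.IO;
    using System.IO.Compression;

    long maxArchiveSize = 1024 * 1024;  // hypothetical 1 MB limit

    using (var fileStream = new FileStream("part1.zip", FileMode.Create))
    using (var archive = new ZipArchive(fileStream, ZipArchiveMode.Create, leaveOpen: true))
    {
        foreach (var item in GetItems())  // hypothetical data source
        {
            var entry = archive.CreateEntry(item.Name);
            using (var entryStream = entry.Open())
                entryStream.Write(item.Data, 0, item.Data.Length);

            // lower bound on the compressed bytes written so far
            if (fileStream.Position > maxArchiveSize)
                break;  // stop here and start the next archive
        }
    }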

Concatenate gzipped byte arrays in C#

I have gzipped data which is stored in DB.
Is there a way to concatenate, say, 50 separate pieces of gzipped data into one gzipped output which can be uncompressed? The result should be the same as decompressing those 50 items, concatenating them, and then gzipping the result.
I would like to avoid the decompression phase.
Is there also some performance benefit to merging already-gzipped data instead of gzipping the whole byte array?
I would assume that merely concatenating any file in a zipped format would prove disastrous as the zipping algorithm has been run on the specific content per file. I think that you would have to manually unzip all, concatenate, then zip again.
Yes, you can concatenate gzip streams, which when decompressed give you the same thing as if you had concatenated the uncompressed data and gzipped it all at once. Specifically:
gzip a
gzip b
cat a.gz b.gz > c.gz
gunzip c.gz
will give you the same c as:
cat a b > c
However, compression will be degraded as compared to gzipping the whole thing at once, especially if each of your 50 pieces is small, e.g. less than several tens of kilobytes. The compressed result will always be different, and a little or a lot larger depending on the size of the pieces.
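In C# the concatenation itself can be as simple as appending the byte arrays (a sketch; gzippedPieces stands in for the rows loaded from the DB). Note the caveats in the next answer about GZipStream's handling of multi-member streams in older .NET versions.

    using System.Collections.Generic;
    using System.IO;

    // Valid gzip members can simply be appended back to back;
    // the result is itself a valid gzip stream.
    static byte[] ConcatGzip(IEnumerable<byte[]> gzippedPieces)
    {
        using (var output = new MemoryStream())
        {
            foreach (var piece in gzippedPieces)
                output.Write(piece, 0, piece.Length);
            return output.ToArray();
        }
    }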
The comment in another answer about GZIPStream should be heeded. I also recommend that you use DotNetZip instead.
GZip is buggy, and decompressing a gzip file which itself has multiple gzip members is even more so... Not all of gzip's bugs have been ironed out even in .NET 4.5.
Furthermore, consider what machine each gzip was created on, i.e. is it BGZF ("Blocked GNU Zip Format")? That complicates the issue at hand.
Furthermore, the resulting gzip file can be bigger than if you had concatenated all the uncompressed individual files together (gzip isn't a very strong compression algorithm).
I recommend you use DotNetZip instead if it isn't too late.
GZipStream is not really built to handle multiple files, however you can use System.IO.BinaryWriter and System.IO.BinaryReader to gain full control, although it can get messy. DotNetZip just works! It is designed to handle multiple files.
P.S. GZipStream works for file sizes up to 8 GB with .NET 4, although earlier versions have a lower limit, e.g. GZipStream works for file sizes up to 4 GB with .NET 3.5.

Does DeflateStream "skip" decompression if the data was not originally compressed?

I'm not familiar with the internals of DeflateStream, but I need to store files in a vendor's DB system that uses DeflateStream on binary attachments. The first thing I noticed was that all of my files were 10-50% BIGGER after compression, but I attribute that to a less sophisticated compression algorithm on top of files that are already highly compressed (in this case they were all PDFs). My question, however, relates to the fact that when I just wrote the original file into the BLOB, the vendor's application had no problem opening it (it opened the attachments I compressed with deflate as well). Is there a header on the compressed data that tells DeflateStream that the data's not compressed and basically to pass it on as-is? This is the specification; can anyone familiar with it point to where this is defined - or am I off base and the vendor is doing some magic behind the scenes?
No, there is no such magic in DeflateStream.
The built-in DeflateStream exhibits a compression anomaly in which previously-compressed data actually increases in size. This has been reported to Microsoft previously, but they declined to fix the problem. It has to do with a naive implementation in DeflateStream of the DEFLATE protocol.
Ways that I know of to avoid the problem:
Use an alternative DeflateStream that does not exhibit this problem. See DotNetZip for one example.
It includes a DeflateStream that just works.
Use the broken DeflateStream, compress the stream, compare sizes, and if the "compressed" stream is larger, fall back to using the "uncompressed" stream (see the sketch after the next paragraph).
If you choose the former case, you still have the condition where you are compressing stuff that has already been compressed. In other words, unnecessary double compression. So you may want to look into avoiding that, regardless of what you choose.
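A minimal sketch of the second option (the one-byte marker is an assumption of this sketch, not part of any standard):

    using System;
    using System.IO;
    using System.IO.Compression;

    // Compress with DeflateStream; if the result is larger than the input,
    // store the input verbatim instead. The leading marker byte records
    // which path was taken so the reader knows whether to inflate.
    static byte[] CompressOrStore(byte[] input)
    {
        using (var ms = new MemoryStream())
        {
            ms.WriteByte(1);  // marker: deflate-compressed
            using (var deflate = new DeflateStream(ms, CompressionMode.Compress, leaveOpen: true))
                deflate.Write(input, 0, input.Length);

            if (ms.Length < input.Length + 1)
                return ms.ToArray();
        }

        var stored = new byte[input.Length + 1];
        stored[0] = 0;  // marker: stored verbatim
        Buffer.BlockCopy(input, 0, stored, 1, input.Length);
        return stored;
    }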
Stream compression is different from file compression. When compressing a file, it's generally possible to make multiple passes over the entire file and determine which compression scheme to use before having to commit to one. When compressing a stream, it's often necessary to start outputting data before the compression routine has processed enough data to know what compression method is going to be optimal.
This effect can be somewhat mitigated by dividing data into blocks, deciding for each block how to represent the data, and including a header at the start of each block identifying how it is stored. Unfortunately, the extra block headers will add to the size of the resulting stream. Further, many compression schemes improve in effectiveness as they process a stream; it may well be that every 1K block in a file would expand if "compressed" individually, even if compressing the whole file would result in considerable space savings (since the compressor could e.g. build up a dictionary of common byte sequences).
It would be possible to design a compress/uncompress pair so that a block of data which would expand would be written out verbatim by the compressor (with a header byte indicating that's what it was), and have the decompressor process that block the same way the compressor could have done, so as to add to the dictionary the same byte sequences that would have been added had the block been stored in "compressed" form. Such an approach would probably be a good one, though it would add considerably to the complexity of the decompressor.
I suspect the biggest problem for DeflateStream, though, is that there may not be any way to improve the worst-case "compression" performance without producing compressed data that is incompatible with the existing "uncompress" code. Suppose one has a string of bytes Q, and one needs a sequence of bytes which, when fed to the "uncompress" code that shipped with .net 2.0, will yield that same sequence. It may well be that for some possible values of Q, there are no such input sequences which aren't a lot bigger than Q. If that's the case, there's no way Microsoft could "fix" the problem without a time machine.
It all depends on how the DEFLATE stream was created.
DEFLATE supports a "non-compressed block" (BTYPE=00) and all data in this block, should it be utilized, is stored verbatim with no compression -- just the block header, length, and raw data. However, a stream can be a valid DEFLATE stream and contain zero (or not enough) "non-compressed" blocks even if this resulted in a sub-par compression ratio.
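For illustration, here is roughly what writing a single stored block looks like per RFC 1951 section 3.2.4 (a sketch for data up to 65,535 bytes; a real encoder emits this as part of a larger stream):

    using System.IO;

    // A final stored block: 3 header bits padded to a byte boundary,
    // then LEN and NLEN (its one's complement) little-endian, then the raw bytes.
    static void WriteStoredBlock(Stream s, byte[] data)  // data.Length <= 65535
    {
        s.WriteByte(0x01);                        // BFINAL=1, BTYPE=00 (stored)
        ushort len = (ushort)data.Length;
        s.WriteByte((byte)(len & 0xFF));          // LEN, low byte first
        s.WriteByte((byte)(len >> 8));
        s.WriteByte((byte)(~len & 0xFF));         // NLEN = ~LEN
        s.WriteByte((byte)((~len >> 8) & 0xFF));
        s.Write(data, 0, data.Length);            // data stored verbatim
    }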
The overall compression ratio will depend upon the data, compressor algorithm/implementation, and amount of effort it puts into performing the compression.
Happy coding.

Read whole file in memory VS read in chunks

I'm relatively new to C# and programming, so please bear with me. I'm working on an application where I need to read some files and process those files in chunks (for example, data is processed in chunks of 48 bytes).
I would like to know which is better, performance-wise: to read the whole file into memory at once and then process it, to read the file in chunks and process them directly, or to read the data in larger chunks (multiple chunks of data which are then processed).
How I understand things so far:
Read whole file in memory
pros:
-It's fast, because the most expensive operation is seeking; once the head is in place it can read quite fast
cons:
-It consumes a lot of memory
-It consumes a lot of memory in a very short time (this is what I am mainly afraid of, because I do not want it to noticeably impact overall system performance)
Read file in chunks
pros:
-It's easier (more intuitive) to implement
    // read a chunk at a time and process it as it arrives
    int bytesRead;
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        ProcessData(buffer, bytesRead);
-It consumes very little memory
cons:
-It could take much more time if the disk has to seek the file again and move the head to the appropriate position, which on average costs around 12 ms.
I know that the answer depends on file size (and hardware). I assume it is better to read the whole file at once, but up to what file size is that true? What is the maximum recommended size to read into memory at once (in bytes, or relative to the hardware - for example, a percentage of RAM)?
Thank you for your answers and time.
It is recommended to read files in buffers of 4K or 8K.
You should really never read a file all at once if you want to write it back to another stream. Just read to a buffer and write the buffer back. This is especially true for web programming.
If you have to load the whole file since your operation (text-processing, etc) needs the whole content of the file, buffering does not really help, so I believe it is preferable to use File.ReadAllText or File.ReadAllBytes.
Why 4KB or 8KB?
This is closer to the underlying Windows operating system buffers. Files in NTFS are normally stored in 4 KB or 8 KB chunks on the disk, although you can choose 32 KB chunks.
Your chunk just needs to be large enough; 48 bytes is of course too small, and 4K is reasonable.
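A sketch of the chunked approach for the 48-byte records described in the question (the file path and ProcessRecord are placeholders): BufferedStream does the 4 KB disk buffering while the loop consumes one record at a time.

    using System.IO;

    const int RecordSize = 48;
    var record = new byte[RecordSize];

    // BufferedStream reads from the disk in 4 KB blocks; we consume 48 bytes
    // at a time. For file streams, a short read only happens at end of file.
    using (var stream = new BufferedStream(File.OpenRead("data.bin"), 4096))
    {
        int filled;
        while ((filled = stream.Read(record, 0, RecordSize)) == RecordSize)
            ProcessRecord(record);  // hypothetical per-record processing
    }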

Compressing/decompressing audio data

I am using the Win32 waveform APIs in a C# app to make a VoIP system. All is going well; however, I need some way of compressing the audio data on the fly.
So basically the audio data comes into a 'record' buffer of size 150 bytes, then this buffer is sent over UDP, and at the remote end the 150 bytes are received and put into a 'play' buffer.
So I need some way of compressing/decompressing the data just before the UDP send and just after the UDP receive. Normal compression algorithms don't work with audio, including the .NET GZip class.
Does anyone know of a library that I can use that will help me do this?
Thanks in advance...
150 bytes is an unbelievably small buffer for audio data - less than 5 milliseconds at e.g. 16 kHz mono. I'm no expert, but I think regardless of the compression scheme you choose, your compression ratio will suffer greatly from using such a small buffer. Besides that, there is significant overhead for each packet you send.
That said, if you are sending speech data, take a look at Speex for lossy compression (I have found it very effective at compressing speech, but the sound quality is terrible for music.)
I would think you'd want to batch up those 150-byte chunks to get better compression.
Although, even at small buffer sizes like that, you can still get some compression.
If the built-in GZipStream isn't working you could try the GZipStream that is included in DotNetZip. There is also a ZlibCodec class available in DotNetZip that implements the Codec pattern - this may facilitate compressing in 150-byte blocks.
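For instance, a sketch of the batching idea with the built-in GZipStream (the collection of 150-byte buffers is a placeholder; lossless gzip will still compress audio poorly compared to a real speech codec):

    using System.Collections.Generic;
    using System.IO;
    using System.IO.Compression;

    // Write several captured buffers through one GZipStream so the compressor
    // sees more context than a single 150-byte block.
    static byte[] CompressBatch(IEnumerable<byte[]> audioChunks)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
                foreach (var chunk in audioChunks)
                    gzip.Write(chunk, 0, chunk.Length);
            return output.ToArray();
        }
    }

The trade-off, as noted in the answer after this one, is latency: each batch must be filled before anything can be sent.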
The component you're looking for is more well-known as a coder/decoder, or codec, and there are many options when it comes to picking one.
As suggested above, I'd look into Speex. It's well supported, and now the de facto standard for Flash Player.
I assume from the size you are setting your buffers that latency is an issue (the bigger the buffer, the bigger the latency), so don't go for a codec that has a large decompressed frame size, because it introduces high latency. This more or less rules out MP3... for voice at a 5 kHz output sample rate (it wouldn't serve much purpose going higher), the minimum decompressed frame size is 576 samples, or ~100 ms of data that must be encoded prior to send. This means a round-trip latency of over 200 ms before you've even considered the network part of the problem.