Simple 2-color differential image compression - C#

Is there an efficient, quick and simple example of doing differential b/w image compression? Or even better, some simple (but lossless - jagged 1bpp images don't look very convincing when compressed using lossy compression) streaming technique which could accept a number of frames as input?
I have a simple b/w image (320x200) stream, displaying something similar to a LED display, which is updated about once a second using AJAX. The images are pretty similar most of the time, so if I subtracted them, the result would compress pretty well (even with simple RLE). Is something like this available?

I don't know of an existing library that does exactly what you're asking, other than just running the frames through gzip or some other lossless compression algorithm. However, since you know that the frames are highly correlated, you could XOR the frames like Conspicuous Compiler suggested and then run gzip on that. If there are few changes between frames, the result of the XOR should have a great deal less entropy than the original frame, which allows gzip or another lossless compression algorithm to achieve a higher compression ratio.
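A minimal sketch of that idea, assuming each 320x200 1bpp frame arrives as an 8000-byte array (the helper names here are made up for illustration):

using System.IO;
using System.IO.Compression;

static byte[] XorFrames(byte[] previous, byte[] current)
{
    var delta = new byte[current.Length];
    for (int i = 0; i < current.Length; i++)
        delta[i] = (byte)(previous[i] ^ current[i]);    // set bits mark changed pixels
    return delta;
}

static byte[] CompressDelta(byte[] delta)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
            gzip.Write(delta, 0, delta.Length);
        return output.ToArray();                        // a mostly-zero delta compresses very well
    }
}

The receiver decompresses the delta and XORs it against its copy of the previous frame to recover the current one.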
You would also want to send a key (non-differential) frame every once in a while so you can resynchronize in the event of errors.
If you are just interested in learning about compression, you could try implementing RLE after XORing the frames. Check out the bit-level RLE discussed here, about halfway down the page. It should be pretty easy to implement, as each byte stores a 7-bit run length and a one-bit value, so it could achieve a best-case compression ratio of 128/8 = 16 if there are no changes between frames.
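A rough sketch of that bit-level RLE applied to the XORed frame; the exact bit packing is a guess at the scheme described (run length minus one in the high 7 bits, bit value in the low bit):

using System.Collections.Generic;

static byte[] RunLengthEncodeBits(byte[] delta)
{
    var output = new List<byte>();
    int currentBit = (delta[0] >> 7) & 1;
    int run = 0;
    for (int i = 0; i < delta.Length * 8; i++)
    {
        int bit = (delta[i / 8] >> (7 - (i % 8))) & 1;
        if (bit == currentBit && run < 128)
        {
            run++;
        }
        else
        {
            output.Add((byte)(((run - 1) << 1) | currentBit));  // 7-bit (length - 1), 1-bit value
            currentBit = bit;
            run = 1;
        }
    }
    output.Add((byte)(((run - 1) << 1) | currentBit));
    return output.ToArray();
}

A run of 128 identical bits collapses to a single byte, which is where the 16:1 best case comes from; a decoder would read the length back as (b >> 1) + 1.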
Another thought is that if there are very few changes, you may want to just encode the bit positions that flipped between frames. You can address the 320x200 image with a 16-bit integer. For instance, if only 100 pixels change, you can just store 100 16-bit integers representing those positions (1600 bits), whereas the RLE discussed above would take 64000/16 = 4000 bits at a minimum (it would probably be quite a bit higher). You could actually switch between this method and RLE depending on the frame content.
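A sketch of that position-list encoding; an encoder could compare its output size against the RLE output and keep whichever is smaller, flagging the choice in a (hypothetical) one-byte header:

using System.Collections.Generic;

static ushort[] ChangedPixelPositions(byte[] delta)
{
    var positions = new List<ushort>();
    for (int i = 0; i < delta.Length * 8; i++)
    {
        int bit = (delta[i / 8] >> (7 - (i % 8))) & 1;
        if (bit != 0)
            positions.Add((ushort)i);   // pixel index 0..63999 fits in 16 bits; 100 changes -> 200 bytes
    }
    return positions.ToArray();
}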
If you wanted to go beyond simple methods, I would suggest using variable-length codes to represent the possible runs during the run-length encoding. You could then assign shorter codes to the runs with the highest probability. This would be similar to the RLE used in JPEG or MPEG after the lossy part of the compression is performed (DCT and quantization).

Compressing and decompressing very large files using System.IO.Compression.GZipStream

My problem can be described with the following statements:
I would like my program to be able to compress and decompress selected files
I have very large files (20 GB+). It is safe to assume that the size will never fit into memory
Even after compression, the compressed file might still not fit into memory
I would like to use System.IO.Compression.GZipStream from the .NET Framework
I would like my application to be parallel
As I am a newbie to compression/decompression, I had the following idea of how to do it:
I could split the files into chunks and compress each of them separately, then merge them back into a whole compressed file.
Question 1 about this approach - Is compressing multiple chunks and then merging them back together going to give me the proper result, i.e. if I were to reverse the process (starting from the compressed file back to the decompressed one), will I receive the same original input?
Question 2 about this approach - Does this approach make sense to you? Perhaps you could direct me towards some good reading on the topic? Unfortunately I could not find anything myself.
You do not need to chunk the compression just to limit memory usage. gzip is designed to be a streaming format, and requires on the order of 256KB of RAM to compress. The size of the data does not matter. The input could be one byte, 20 GB, or 100 PB -- the compression will still only need 256KB of RAM. You just read uncompressed data in, and write compressed data out until done.
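A minimal sketch of that single-stream approach (file names are placeholders; requires System.IO and System.IO.Compression): data is pumped through GZipStream in fixed-size buffers, so memory use stays constant no matter how large the input is.

using (var input = File.OpenRead("huge.bin"))
using (var output = File.Create("huge.bin.gz"))
using (var gzip = new GZipStream(output, CompressionMode.Compress))
{
    var buffer = new byte[81920];               // fixed buffer; total RAM use is bounded
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        gzip.Write(buffer, 0, read);
}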
The only reason to chunk the input as you diagram is to make use of multiple cores for compression. Which is a perfectly good reason for your amount of data. Then you can do exactly what you describe. So long as you combine the output in the correct order, the decompression will then reproduce the original input. You can always concatenate valid gzip streams to make a valid gzip stream. I would recommend that you make the chunks relatively large, e.g. megabytes, so that the compression is not noticeably impacted by the chunking.
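A sketch of the chunked, parallel variant under those assumptions. For clarity it loads every chunk before compressing; a real implementation would bound how many chunks are in flight (e.g. with a producer/consumer queue). Note that while concatenated gzip members form a valid gzip stream, some decompressors (including older versions of GZipStream) stop after the first member, so test with whatever will read the file back.

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

static void CompressChunksInParallel(string inputPath, string outputPath, int chunkSize)
{
    // Split the input into chunks.
    var chunks = new List<byte[]>();
    using (var input = File.OpenRead(inputPath))
    {
        var buffer = new byte[chunkSize];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            var chunk = new byte[read];
            Array.Copy(buffer, chunk, read);
            chunks.Add(chunk);
        }
    }

    // Compress each chunk on its own core; each result keeps its original index.
    var compressed = new byte[chunks.Count][];
    Parallel.For(0, chunks.Count, i =>
    {
        using (var ms = new MemoryStream())
        {
            using (var gzip = new GZipStream(ms, CompressionMode.Compress))
                gzip.Write(chunks[i], 0, chunks[i].Length);
            compressed[i] = ms.ToArray();
        }
    });

    // Concatenate the gzip members in the original order.
    using (var output = File.Create(outputPath))
        foreach (var member in compressed)
            output.Write(member, 0, member.Length);
}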
Decompression cannot be chunked in this way, but it is much faster, so there would be little to no benefit even if you could. Decompression is usually I/O-bound.

Faster Image Compression/Decompression - Latency Compared to Microsoft's Jpeg

This may be a bit subjective, but it's quite a straightforward question.
What is the fastest image compression/decompression (both together)?
And I mean available in C#.
I am pretty sure myself that it's JPEG.
But then again, JPEG has followed a standard for many years and must abide by certain rules so it doesn't break compatibility.
So perhaps there is something better that I don't know of?
And when I say fastest, I mean latency and performance.
Meaning, let's say PNG, for example, takes 1 sec to compress a 1080p frame
and 30 ms to decompress it; then from the bitmap source to the second bitmap there will be a 1.03 sec delay.
JPEG is a lot faster than PNG for many reasons, and it's extremely fast on decompression as well. And as with many other things, the encoder/decoder does most of the job, meaning a bad encoder will produce worse results even if the standard itself allows much better.
I am currently limited to the built-in JPEG encoder/decoder, as I have not fully grasped how to P/Invoke other encoders/decoders (libjpeg etc.), but that's off topic here.
So hopefully this is a valid question, though I think it may be on the edge of that.
EDIT: I noticed that I had asked this before, but under a different term. Now I have written about it more specifically, but I think it's pretty much a duplicate.
I leave it in your hands, moderators.
PNG is extremely slow, as you say. For a 10,000 x 10,000 pixel RGB image I see:
$ time vips copy wtc.jpg x.jpg
real 0m0.915s
user 0m1.652s
sys 0m0.052s
$ time vips copy wtc.png x.png
real 0m28.808s
user 0m32.448s
sys 0m0.272s
That's the time to decompress and recompress; real is wall-clock time, so PNG is about 30x slower.
Most of that is spent in deflate decompression and recompression. PNG has an option to set the compression level, with 0 being no compression, i.e. deflate turned off. It's still far slower than JPEG.
$ time vips copy wtc0.png x.png[compression=0]
real 0m6.552s
user 0m8.528s
sys 0m0.440s
About 7x slower, and of course with compression turned off the file will be much larger. I've no idea why libpng is so incredibly slow; it would be great if someone could make a libpng-turbo.
TIFF is probably the fastest widely-used format. I see:
$ time vips copy wtc.tif x.tif
real 0m0.637s
user 0m0.432s
sys 0m0.344s
So about 50% faster than JPEG, though again the file on disc will be much larger since the image is not compressed.
Formats like PPM are even faster. They are a simple dump of the image data with a small header giving dimensions. I see:
$ time vips copy wtc.ppm x.ppm
real 0m0.336s
user 0m0.196s
sys 0m0.296s
So almost 3x faster than JPEG. Again, the downside is that file will be huge, since there's no compression.
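If you want to compare the built-in .NET encoders on your own frames, here is a rough way to measure the round-trip latency described above (assuming System.Drawing; the file path and formats in the usage comment are just examples):

using System;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static TimeSpan RoundTrip(Bitmap source, ImageFormat format)
{
    var sw = Stopwatch.StartNew();
    using (var ms = new MemoryStream())
    {
        source.Save(ms, format);                 // compress
        ms.Position = 0;
        using (var decoded = new Bitmap(ms))
            decoded.GetPixel(0, 0);              // touch a pixel to force the decode
    }
    return sw.Elapsed;                           // compress + decompress latency
}

// Usage:
// var frame = new Bitmap("frame1080p.bmp");
// Console.WriteLine("JPEG: " + RoundTrip(frame, ImageFormat.Jpeg));
// Console.WriteLine("PNG:  " + RoundTrip(frame, ImageFormat.Png));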

Does DeflateStream "skip" decompression if the data was not originally compressed?

I'm not familiar with the internals of DeflateStream, but I need to store files in a vendor's DB system that uses DeflateStream on binary attachments. The first thing I noticed was that all of my files were 10-50% BIGGER after compression, but I attribute that to a less sophisticated compression algorithm applied on top of files that are already highly compressed (in this case they were all PDFs). My question, however, relates to the fact that when I just wrote the original file into the BLOB, the vendor's application had no problem opening it (it opened the attachments I compressed with deflate as well). Is there a header on the compressed data that tells DeflateStream that the data is not compressed and basically passes it on as-is? This is the specification; can anyone familiar with it point to where this is defined - or am I off base and the vendor is doing some magic behind the scenes?
No, there is no such magic in DeflateStream.
The built-in DeflateStream exhibits a compression anomaly in which previously compressed data actually increases in size. This has been reported to Microsoft before, but they declined to fix the problem. It has to do with a naive implementation of the DEFLATE format in DeflateStream.
Ways that I know of to avoid the problem:
Use an alternative DeflateStream that does not exhibit this problem. See DotNetZip for one example.
It includes a DeflateStream that just works.
Use the broken DeflateStream, compress the stream, compare sizes, and if the "compressed" stream is larger, fall back to using the "uncompressed" stream (a sketch of this follows below).
If you choose the former, you still have the situation where you are compressing data that has already been compressed - in other words, unnecessary double compression - so you may want to look into avoiding that, regardless of which option you choose.
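A sketch of that fallback approach; the leading flag byte is a made-up convention that the matching decompress routine would have to honor:

using System;
using System.IO;
using System.IO.Compression;

static byte[] CompressOrStore(byte[] data)
{
    byte[] compressed;
    using (var ms = new MemoryStream())
    {
        using (var deflate = new DeflateStream(ms, CompressionMode.Compress))
            deflate.Write(data, 0, data.Length);
        compressed = ms.ToArray();
    }

    bool useCompressed = compressed.Length < data.Length;
    byte[] payload = useCompressed ? compressed : data;

    var result = new byte[payload.Length + 1];
    result[0] = (byte)(useCompressed ? 1 : 0);   // 1 = deflated payload, 0 = stored verbatim
    Array.Copy(payload, 0, result, 1, payload.Length);
    return result;
}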
Stream compression is different from file compression. When compressing a file, it's generally possible to make multiple passes over the entire file and determine which compression scheme to use before having to commit to one. When compressing a stream, it's often necessary to start outputting data before the compression routine has processed enough data to know what compression method is going to be optimal.
This effect can be somewhat mitigated by dividing data into blocks, deciding for each block how to represent the data, and including a header at the start of each block identifying how it is stored. Unfortunately, the extra block headers will add to the size of the resulting stream. Further, many compression schemes improve in effectiveness as they process a stream; it may well be that every 1K block in a file would expand if "compressed" individually, even though compressing the whole file would yield a considerable space saving (since the compressor could, e.g., build up a dictionary of common byte sequences).
It would be possible to design a compressor/decompressor pair so that a block of data which would expand is written out verbatim by the compressor (with a header byte indicating that's what it is), while the decompressor processes that block the same way the compressor could have, adding to the dictionary the same byte sequences that would have been added had the block been stored in "compressed" form. Such an approach would probably be a good one, though it would add considerably to the complexity of the decompressor.
I suspect the biggest problem for DeflateStream, though, is that there may not be any way to improve the worst-case "compression" performance without producing compressed data that is incompatible with the existing "uncompress" code. Suppose one has a string of bytes Q, and one needs a sequence of bytes which, when fed to the "uncompress" code that shipped with .NET 2.0, will yield that same sequence. It may well be that for some possible values of Q, there are no such input sequences which aren't a lot bigger than Q. If that's the case, there's no way Microsoft could "fix" the problem without a time machine.
It all depends on how the DEFLATE stream was created.
DEFLATE supports a "non-compressed block" (BTYPE=00), and all data in such a block, should it be used, is stored verbatim with no compression -- just the block header, length, and raw data. However, a stream can be a valid DEFLATE stream and contain zero (or not enough) "non-compressed" blocks even if this results in a sub-par compression ratio.
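For reference, a sketch of the byte layout of such a stored block per RFC 1951, assuming the block header starts on a byte boundary (true when the whole stream consists of stored blocks): a BFINAL bit, BTYPE=00, padding to the next byte, then LEN, its one's complement NLEN, and the literal bytes.

using System;

static byte[] StoredBlock(byte[] data, bool isFinal)
{
    if (data.Length > 0xFFFF)
        throw new ArgumentException("A stored block holds at most 65535 bytes.");

    ushort len = (ushort)data.Length;
    ushort nlen = (ushort)~len;
    var block = new byte[5 + data.Length];
    block[0] = (byte)(isFinal ? 1 : 0);      // bit 0 = BFINAL, bits 1-2 = BTYPE (00), rest padding
    block[1] = (byte)(len & 0xFF);           // LEN, little-endian
    block[2] = (byte)(len >> 8);
    block[3] = (byte)(nlen & 0xFF);          // NLEN = one's complement of LEN
    block[4] = (byte)(nlen >> 8);
    Array.Copy(data, 0, block, 5, data.Length);
    return block;                            // incompressible data grows by only 5 bytes per block
}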
The overall compression ratio will depend upon the data, compressor algorithm/implementation, and amount of effort it puts into performing the compression.
Happy coding.

Fast CRC of a Drawing.Bitmap

On application startup I build a cache of icons (24x24 32bpp pre-multiplied ARGB bitmaps). This cache contains roughly a thousand items. I don't want to store the same image multiple times in this cache, for both memory and performance reasons. I figured the best way would be to create some sort of CRC for each bitmap as it goes into the cache, and to compare new bitmaps against this list of CRCs.
What is a good (and fast) way to create a CRC of a bitmap which is only loaded in memory?
Or am I totally on the wrong track and is there a better way to build a bitmap-cache?
While I would echo what Hans has said, I believe that you can do this, but CRC is a bad algorithm to use.
You can instead create an MD5 hash of the bytes of the generated bitmap. By my calculations your images must be a minimum of about 2 KB in size. To generate a hash you can either calculate it across the whole bitmap, or you can be sneaky and do it on every nth byte, which would be faster on the hash side but probably heavier on memory usage, as you'll have to extract those bytes into a new array.
If you were going to skip every nth byte, I would use 4 or 2 - using 4 means you read one component from each consecutive pixel, using two means you read two components from each consecutive pixel.
However, MD5 is very fast and you might find (and I would benchmark this in a unit test) that just hashing across the whole bitmap will be faster.
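A sketch of hashing the whole bitmap, assuming System.Drawing: LockBits exposes the raw pixel buffer, so there are no per-pixel GetPixel calls, and the resulting string can serve as the cache key.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Security.Cryptography;

static string HashBitmap(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
    try
    {
        var bytes = new byte[data.Stride * data.Height];
        Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);    // copy the pixel rows out of unmanaged memory
        using (var md5 = MD5.Create())
            return Convert.ToBase64String(md5.ComputeHash(bytes));
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}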
The only thing is, I can't see how you can check in advance whether you should generate a given bitmap without knowing its hash in advance, and the only way to know its hash is to generate the bitmap. In which case, by that point you might as well just save the new image. An extra element in your image cache array isn't going to break the universe.
What you really need to be able to do here, to actually save space and startup time, is to know in advance of generating an image whether it's going to be the same as another. Given that these images are generated dynamically, is it the case that, when two identical images are generated, they are generated by the same method call with the same parameters?
If so, you could instead look at tagging each generated image with one or more hash codes: the hash code (using object.GetHashCode()) of the MethodInfo of the method that generates the image (you can get that inside the method itself by calling MethodBase.GetCurrentMethod()), along with the hash code of each parameter that was passed in. The hash code for the method is quite reliable, as it uses the runtime's method handle (which is unique for each method); the only hash code compression that can occur there is on 64-bit machines, where the handle is 64 bits but the hash code is 32. In practice such a collision rarely occurs, since you'd have to have a huge amount of code in the application for the first 32 bits of two separate method handles to be the same.
The hash codes of the individual parameters, of course, are far less reliable unless those parameter types have good hash code functions.
This solution would by no means be perfect (at worst you'd still get some duplicates), but I reckon it would speed things up. Like I say, though, it relies on your duplicated images always being generated by the same calls.
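A sketch of that tagging idea; the key format and the cache lookup are hypothetical, and as noted it only works if identical images always come from identical calls.

using System.Reflection;

static int MakeCacheKey(MethodBase generator, params object[] args)
{
    unchecked
    {
        int key = generator.GetHashCode();                        // derived from the runtime method handle
        foreach (var arg in args)
            key = key * 31 + (arg == null ? 0 : arg.GetHashCode());  // parameter hashes are only as good as their types
        return key;
    }
}

// Inside an image-generating method (illustrative):
// int key = MakeCacheKey(MethodBase.GetCurrentMethod(), size, color);
// Bitmap icon;
// if (!cache.TryGetValue(key, out icon)) { icon = Generate(size, color); cache[key] = icon; }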
A CRC has the same flaw as any hashing function: an equal CRC value does not prove that the images are identical. Your program will randomly, but infrequently, display the wrong image.
You need something else. Like the filename from which you retrieved the bitmap.

Compressing/decompressing audio data

I am using the Win32 waveform APIs in a C# app to make a VoIP system. All is going well; however, I need some way of compressing the audio data on the fly.
So basically the audio data comes into a 'record' buffer of size 150 bytes, then this buffer is sent over UDP, and at the remote end the 150 bytes are received and put into a 'play' buffer.
So I need some way of compressing/decompressing the data just before the udp->send and just after the udp->recv. Normal compression algorithms don't work with audio, including the .NET GZip class.
Does anyone know of a library I can use that will help me do this?
Thanks in advance...
150 bytes is an unbelievably small buffer for audio data -- less than 5 milliseconds at e.g. 16 kHz mono. I'm no expert, but I think regardless of the compression scheme you choose, your compression ratio will suffer greatly for using such a small buffer. Besides that, there is significant overhead for each packet you send.
That said, if you are sending speech data, take a look at Speex for lossy compression (I have found it very effective at compressing speech, but the sound quality is terrible for music.)
I would think you'd want to batch up those 150-byte chunks to get better compression.
Although, even at small buffer sizes like that, you can still get some compression.
If the built-in GZipStream isn't working, you could try the GZipStream that is included in DotNetZip. There is also a ZlibCodec class available in DotNetZip that implements the Codec pattern; this may facilitate compressing in 150-byte blocks.
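A sketch of the batching idea with the built-in GZipStream: accumulate several 150-byte capture buffers and compress them as one UDP payload, trading a little extra latency for a better ratio. The batch size and the SendPacket call are placeholders.

using System.IO;
using System.IO.Compression;

class BatchingSender
{
    private const int BatchSize = 10;                    // ~10 capture buffers per packet
    private readonly MemoryStream _batch = new MemoryStream();
    private int _buffered;

    public void Add(byte[] recordBuffer)                 // called with each 150-byte buffer
    {
        _batch.Write(recordBuffer, 0, recordBuffer.Length);
        if (++_buffered < BatchSize) return;

        byte[] packet;
        using (var ms = new MemoryStream())
        {
            using (var gzip = new GZipStream(ms, CompressionMode.Compress))
                gzip.Write(_batch.GetBuffer(), 0, (int)_batch.Length);
            packet = ms.ToArray();
        }
        SendPacket(packet);                              // hypothetical: hand off to the existing UDP send
        _batch.SetLength(0);
        _buffered = 0;
    }

    private void SendPacket(byte[] packet) { /* existing udp->send goes here */ }
}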
The component you're looking for is more well-known as a coder/decoder, or codec, and there are many options when it comes to picking one.
As suggested above, I'd look into Speex. It's well supported, and now the de facto standard for Flash Player.
I assume from the size you are setting your buffers that latency is an issue (the bigger the buffer, the bigger the latency), so don't go for a codec that has a high decompressed frame size, because it introduces high latency. This more or less rules out MP3: for voice at a 5 kHz output sample rate (it wouldn't serve much purpose going higher), the minimum decompressed frame size is 576 samples, or ~100 ms of data that must be encoded prior to sending. That means a two-way latency of over 200 ms before you've even considered the network part of the problem.
