How to decompress a string in JavaScript that was compressed in C#? [duplicate]

Possible Duplicate:
ZLIB Decompression - Client Side
I'll try to be clear, and I'm sorry for my bad English. This is the question:
In my web application I receive a string that represents an image, compressed with this algorithm written in C#:
public static class Compression
{
    public static string Compress(string text)
    {
        byte[] buffer = Encoding.UTF8.GetBytes(text);
        MemoryStream ms = new MemoryStream();
        using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true))
        {
            zip.Write(buffer, 0, buffer.Length);
        }
        ms.Position = 0;
        MemoryStream outStream = new MemoryStream();
        byte[] compressed = new byte[ms.Length];
        ms.Read(compressed, 0, compressed.Length);
        byte[] gzBuffer = new byte[compressed.Length + 4];
        System.Buffer.BlockCopy(compressed, 0, gzBuffer, 4, compressed.Length);
        System.Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gzBuffer, 0, 4);
        return Convert.ToBase64String(gzBuffer);
    }

    public static string Decompress(string compressedText)
    {
        byte[] gzBuffer = Convert.FromBase64String(compressedText);
        using (MemoryStream ms = new MemoryStream())
        {
            int msgLength = BitConverter.ToInt32(gzBuffer, 0);
            ms.Write(gzBuffer, 4, gzBuffer.Length - 4);
            byte[] buffer = new byte[msgLength];
            ms.Position = 0;
            using (GZipStream zip = new GZipStream(ms, CompressionMode.Decompress))
            {
                zip.Read(buffer, 0, buffer.Length);
            }
            return Encoding.UTF8.GetString(buffer);
        }
    }
}
The Decompress method is used in the server-side application. I receive an XML file with the string that represents the image compressed with the Compress method, and I want to be able to decompress that string in JavaScript within my web app. Is there a way to do that? Are there other solutions? Thanks to everyone!

The best solution might be to translate the decompression function from C# to JavaScript. You could use one that's already available in JavaScript, such as this one, but you would need to change the source of the image or uncompress and recompress at the server, unless it happens to be compatible with the compression you're using.
Another option would be to convert the image into .jpg or .png before you use it, again at the server. This would give you more flexibility in the long run, but might put a load on the server depending on traffic and image size.
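Whichever route you take, note that the base64 string produced by Compress above is not a plain gzip stream: the first four bytes are the little-endian uncompressed length, so any client-side decompressor would have to strip that prefix first. If you go the server-side route instead, a hedged sketch of what that could look like is below (this assumes an ASP.NET generic handler, and that the decompressed string is itself the base64 of the raw image bytes; both the handler and that assumption are illustrative, not from the original question):

// Hypothetical handler sketch: decompress on the server and serve a plain image,
// so the browser (and JavaScript) never sees the compressed payload.
// Assumption: the decompressed string is base64 of the raw image bytes.
public class ImageHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        string compressed = context.Request["img"];               // the compressed string from the XML
        string imageBase64 = Compression.Decompress(compressed);  // reuse the Decompress method above
        byte[] imageBytes = Convert.FromBase64String(imageBase64);

        context.Response.ContentType = "image/png";               // adjust to the real image format
        context.Response.BinaryWrite(imageBytes);
    }

    public bool IsReusable { get { return true; } }
}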

You can use the JSXCompressor library to do the decompression (deflate, unzip).
But if your web server supports compression at the HTTP level, I think you can skip the compression and decompression altogether.

Related

How to compress data in C# to be decompressed with zlib in Python

I have a Python zlib decompressor that uses the default parameters as follows, where data is a string:
import zlib
data_decompressed = zlib.decompress(data)
But I don't know how I can compress a string in C# so that it can be decompressed in Python. I've tried the following piece of code, but when I try to decompress the result, an 'incorrect header check' exception is thrown.
static byte[] ZipContent(string entryName)
{
    // remove whitespace from xml and convert to byte array
    byte[] normalBytes;
    using (StringWriter writer = new StringWriter())
    {
        //xml.Save(writer, SaveOptions.DisableFormatting);
        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        normalBytes = encoding.GetBytes(writer.ToString());
    }
    // zip into new, zipped, byte array
    using (Stream memOutput = new MemoryStream())
    using (ZipOutputStream zipOutput = new ZipOutputStream(memOutput))
    {
        zipOutput.SetLevel(6);
        ZipEntry entry = new ZipEntry(entryName);
        entry.CompressionMethod = CompressionMethod.Deflated;
        entry.DateTime = DateTime.Now;
        zipOutput.PutNextEntry(entry);
        zipOutput.Write(normalBytes, 0, normalBytes.Length);
        zipOutput.Finish();
        byte[] newBytes = new byte[memOutput.Length];
        memOutput.Seek(0, SeekOrigin.Begin);
        memOutput.Read(newBytes, 0, newBytes.Length);
        zipOutput.Close();
        return newBytes;
    }
}
Could anyone help me, please?
Thank you.
UPDATE 1:
I've tried the Deflate function that Shiraz Bhaiji posted:
public static byte[] Deflate(byte[] data)
{
    if (null == data || data.Length < 1) return null;
    byte[] compressedBytes;
    // write into a new memory stream wrapped by a deflate stream
    using (MemoryStream ms = new MemoryStream())
    {
        using (DeflateStream deflateStream = new DeflateStream(ms, CompressionMode.Compress, true))
        {
            // write the byte buffer into the memory stream
            deflateStream.Write(data, 0, data.Length);
            deflateStream.Close();
            // rewind the memory stream and copy out the compressed bytes
            compressedBytes = new byte[ms.Length];
            ms.Seek(0, SeekOrigin.Begin);
            ms.Read(compressedBytes, 0, (int)ms.Length);
        }
    }
    return compressedBytes;
}
The problem is that, for it to work properly in the Python code, I have to pass the -zlib.MAX_WBITS argument to decompress, as follows:
data_decompressed = zlib.decompress(data, -zlib.MAX_WBITS)
So my new question is: is it possible to write a Deflate method in C# whose output can be decompressed with zlib.decompress(data) using the defaults?
In C# the DeflateStream class supports zlib. See:
https://learn.microsoft.com/en-us/dotnet/api/system.io.compression.deflatestream?view=netframework-4.8
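As an aside (not part of the original answers): on .NET 6 and later there is also System.IO.Compression.ZLibStream, which writes the zlib (RFC 1950) format directly, so the Python side can call zlib.decompress(data) with its defaults. A minimal sketch, assuming .NET 6+:

// Sketch for .NET 6+: ZLibStream emits a full zlib stream (header + deflate + Adler-32),
// which zlib.decompress(data) accepts without extra arguments.
public static byte[] ZlibCompress(byte[] data)
{
    using (var ms = new MemoryStream())
    {
        using (var zlib = new System.IO.Compression.ZLibStream(ms, CompressionMode.Compress, true))
        {
            zlib.Write(data, 0, data.Length);
        }
        return ms.ToArray();
    }
}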
As you described in your edit, zlib.decompress(data, -zlib.MAX_WBITS) is the correct way to decompress data from C#'s DeflateStream. There are two formats at play here:
deflate - as in specification RFC 1951 - this is what C# is producing
zlib - as in specification RFC 1950 - this is what Python is expecting by default
What is the difference between the two? It's small, really:
zlib = [compression flag byte] + [flags byte] + deflate + [adler checksum]
(there are also optional dictionary bytes, but we don't have to worry about them)
Therefore, to get the zlib format from deflate, we need to prepend two bytes of flags and append an Adler-32 checksum. Luckily we have an answer on Stack Overflow for the flags (see What does a zlib header look like?), and implementing Adler-32 is not that hard. So suppose you have your MemoryStream ms; we would first write the two flag bytes:
ms.Write(new byte[] { 0x78, 0x9C }, 0, 2);
...then we would do exactly what's in your answer
using (DeflateStream deflateStream = new DeflateStream(ms, CompressionMode.Compress, true))
{
    deflateStream.Write(data, 0, data.Length);
    deflateStream.Close();
}
and, at last, compute the checksum and append it to the end of the stream:
// Adler-32 of the uncompressed data: A starts at 1 and B at 0 (RFC 1950)
uint a = 1;
uint b = 0;
for (int i = 0; i < data.Length; ++i)
{
    a = (a + data[i]) % 65521;
    b = (b + a) % 65521;
}
Sadly, I don't know a pretty way of writing uints into the stream. This is an ugly way:
ms.Write(new byte[] { (byte)(b >> 8),
                      (byte)b,
                      (byte)(a >> 8),
                      (byte)a }, 0, 4);
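Putting the pieces together, a complete method might look like the sketch below (an untested assembly of the steps above, not a drop-in implementation): zlib header, raw deflate body, then the big-endian Adler-32 of the uncompressed data.

// Sketch: zlib (RFC 1950) output built around DeflateStream, so Python's
// zlib.decompress(data) works with its default arguments.
public static byte[] CompressForZlib(byte[] data)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // zlib header: deflate, 32K window, default compression level
        ms.Write(new byte[] { 0x78, 0x9C }, 0, 2);

        // raw deflate body (RFC 1951)
        using (DeflateStream deflateStream = new DeflateStream(ms, CompressionMode.Compress, true))
        {
            deflateStream.Write(data, 0, data.Length);
        }

        // Adler-32 of the uncompressed data, appended big-endian
        uint a = 1, b = 0;
        for (int i = 0; i < data.Length; ++i)
        {
            a = (a + data[i]) % 65521;
            b = (b + a) % 65521;
        }
        ms.Write(new byte[] { (byte)(b >> 8), (byte)b, (byte)(a >> 8), (byte)a }, 0, 4);

        return ms.ToArray();
    }
}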

protobuf-net returns null when calling Deserialize

My end goal is to use protobuf-net and GZipStream in an attempt to compress a List<MyCustomType> object to store in a varbinary(max) field in SQL Server. I'm working on unit tests to understand how everything works and fits together.
Target .NET framework is 3.5.
My current process is:
Serialize the data with protobuf-net (good).
Compress the serialized data from #1 with GZipStream (good).
Convert the compressed data to a base64 string (good).
At this point, the value from step #3 will be stored in a varbinary(max) field. I have no control over this. The steps resume with needing to take a base64 string and deserialize it to a concrete type.
Convert a base 64 string to a byte[] (good).
Decompress the data with GZipStream (good).
Deserialize the data with protobuf-net (bad).
Can someone assist with why the call to Serializer.Deserialize<string> returns null? I'm stuck on this one and hopefully a fresh set of eyes will help.
FWIW, I tried another version of this using List<T>, where T is a custom class I created, and Deserialize<> still returns null.
FWIW 2, data.txt is a 4 MB plaintext file residing on my C: drive.
[Test]
public void ForStackOverflow()
{
    string data = "hi, my name is...";
    //string data = File.ReadAllText(@"C:\Temp\data.txt");
    string serializedBase64;
    using (MemoryStream protobuf = new MemoryStream())
    {
        Serializer.Serialize(protobuf, data);
        using (MemoryStream compressed = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(compressed, CompressionMode.Compress))
            {
                byte[] s = protobuf.ToArray();
                gzip.Write(s, 0, s.Length);
                gzip.Close();
            }
            serializedBase64 = Convert.ToBase64String(compressed.ToArray());
        }
    }
    byte[] base64byteArray = Convert.FromBase64String(serializedBase64);
    using (MemoryStream base64Stream = new MemoryStream(base64byteArray))
    {
        using (GZipStream gzip = new GZipStream(base64Stream, CompressionMode.Decompress))
        {
            using (MemoryStream plainText = new MemoryStream())
            {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                {
                    plainText.Write(buffer, 0, read);
                }
                // why does this call to Deserialize return null?
                string deserialized = Serializer.Deserialize<string>(plainText);
                Assert.IsNotNull(deserialized);
                Assert.AreEqual(data, deserialized);
            }
        }
    }
}
Because you didn't rewind plainText after writing to it. Actually, that entire Stream is unnecessary - this works:
using (MemoryStream base64Stream = new MemoryStream(base64byteArray))
{
    using (GZipStream gzip = new GZipStream(base64Stream, CompressionMode.Decompress))
    {
        string deserialized = Serializer.Deserialize<string>(gzip);
        Assert.IsNotNull(deserialized);
        Assert.AreEqual(data, deserialized);
    }
}
Likewise, this should work for the serialization:
using (MemoryStream compressed = new MemoryStream())
{
    using (GZipStream gzip = new GZipStream(compressed, CompressionMode.Compress, true))
    {
        Serializer.Serialize(gzip, data);
    }
    serializedBase64 = Convert.ToBase64String(compressed.GetBuffer(), 0, (int)compressed.Length);
}
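For reference, the two halves can be folded into a small helper pair; a sketch with the same ingredients as the snippets above (the method names are illustrative, not from the original answer):

// Sketch: protobuf-net + GZip + base64 round trip, built from the snippets above.
public static string PackToBase64(string data)
{
    using (MemoryStream compressed = new MemoryStream())
    {
        using (GZipStream gzip = new GZipStream(compressed, CompressionMode.Compress, true))
        {
            Serializer.Serialize(gzip, data);   // serialize straight into the gzip stream
        }
        return Convert.ToBase64String(compressed.GetBuffer(), 0, (int)compressed.Length);
    }
}

public static string UnpackFromBase64(string base64)
{
    using (MemoryStream ms = new MemoryStream(Convert.FromBase64String(base64)))
    using (GZipStream gzip = new GZipStream(ms, CompressionMode.Decompress))
    {
        return Serializer.Deserialize<string>(gzip);   // deserialize straight from the gzip stream
    }
}

The leaveOpen: true argument matters in the compress half: disposing the GZipStream is what flushes the final gzip block, and leaveOpen keeps the underlying MemoryStream readable afterwards.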

GZipping Javascript from .ashx returns decoding error in browser

Background
I'm setting up a generic handler to:
Combine & compress Javascript and CSS files
Cache a GZip version & a Non-GZip version
Serve the appropriate version based on the request
I'm working in MonoDevelop v2.8.2 on OSX 10.7.2
Problem
Since I want to Cache the GZipped version, I need to GZip without using a response filter
Using this code, I can compress and decompress a string on the server successfully, but when I serve it to the client I get:
Error 330 (net::ERR_CONTENT_DECODING_FAILED): Unknown error. (Chrome)
Cannot decode raw data (Safari)
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression. (Firefox)
Relevant Code
string sCompiled = null;
if (bCanGZip)
{
    context.Response.AddHeader("Content-Encoding", "gzip");
    bHasValue = CurrentCache.CompiledScripts.TryGetValue(context.Request.Url.ToString() + "GZIP", out sCompiled);
}
//...
//Process files if bHasValue is false
//Compress result of file concatenation/minification

//Compression method
public static string CompressString(string text)
{
    UTF8Encoding encoding = new UTF8Encoding(false);
    byte[] buffer = encoding.GetBytes(text);
    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (GZipStream gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
        {
            gZipStream.Write(buffer, 0, buffer.Length);
        }
        memoryStream.Position = 0;
        byte[] compressedData = new byte[memoryStream.Length];
        memoryStream.Read(compressedData, 0, compressedData.Length);
        byte[] gZipBuffer = new byte[compressedData.Length + 4];
        Buffer.BlockCopy(compressedData, 0, gZipBuffer, 4, compressedData.Length);
        Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gZipBuffer, 0, 4);
        return Convert.ToBase64String(gZipBuffer);
    }
}
//...
//Return value
switch (Type)
{
    case FileType.CSS:
        context.Response.ContentType = "text/css";
        break;
    case FileType.JS:
        context.Response.ContentType = "application/javascript";
        break;
}
context.Response.AddHeader("Content-Length", sCompiled.Length.ToString());
context.Response.Clear();
context.Response.Write(sCompiled);
Attempts to Resolve
Since I'm not sure what the lines:
byte[] gZipBuffer = new byte[compressedData.Length + 4];
Buffer.BlockCopy(compressedData, 0, gZipBuffer, 4, compressedData.Length);
Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gZipBuffer, 0, 4);
are accomplishing, I tried removing them.
I tried playing with different Encodings/options.
At this point I'm really not sure how to attack the problem since I don't know the source of the error (Encoding/Compression/other).
Any help would be very appreciated!
Other Resources I've found on the subject
http://beta.blogs.microsoft.co.il/blogs/mneiter/archive/2009/03/24/how-to-compress-and-decompress-using-gzipstream-object.aspx
http://madskristensen.net/post/Compress-and-decompress-strings-in-C.aspx
http://www.codeproject.com/KB/files/GZipStream.aspx
http://www.codeproject.com/KB/aspnet/HttpCombine.aspx
http://webreflection.blogspot.com/2009/01/quick-tip-c-gzip-content.html
http://www.dominicpettifer.co.uk/Blog/17/gzip-compress-your-websites-html-css-script-in-code
This is one of those things where, once you explain your problem, you quickly find the answer.
I need to write out the response as binary. So, modifying the compression algorithm to return a byte array:
public static byte[] CompressStringToArray(string text)
{
    UTF8Encoding encoding = new UTF8Encoding(false);
    byte[] buffer = encoding.GetBytes(text);
    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (GZipStream gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
        {
            gZipStream.Write(buffer, 0, buffer.Length);
        }
        memoryStream.Position = 0;
        byte[] compressedData = new byte[memoryStream.Length];
        memoryStream.Read(compressedData, 0, compressedData.Length);
        return compressedData;
    }
}
and then calling:
//Writes a byte buffer without encoding the response stream
context.Response.BinaryWrite(GZipTools.CompressStringToArray(sCompiled));
solves the issue. Hopefully this helps others who run into the same problem.
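As a follow-up, once the handler caches byte arrays instead of strings, the serving code changes slightly; a hedged sketch is below (CompiledScriptBytes is a hypothetical cache, not part of the original code):

// Hypothetical serving sketch, assuming the cache stores the gzipped byte[] per URL.
byte[] gzippedBytes;
bool bHasValue = CurrentCache.CompiledScriptBytes.TryGetValue(
    context.Request.Url.ToString() + "GZIP", out gzippedBytes);

if (bHasValue)
{
    context.Response.Clear();
    context.Response.ContentType = "application/javascript";
    context.Response.AddHeader("Content-Encoding", "gzip");
    // Content-Length is the byte count of the compressed payload,
    // not the character count of a string.
    context.Response.AddHeader("Content-Length", gzippedBytes.Length.ToString());
    context.Response.BinaryWrite(gzippedBytes);
}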

Copy between two streams in .net 2.0

I have been using the following code to Compress data in .Net 4.0:
public static byte[] CompressData(byte[] data_toCompress)
{
    using (MemoryStream outFile = new MemoryStream())
    {
        using (MemoryStream inFile = new MemoryStream(data_toCompress))
        using (GZipStream Compress = new GZipStream(outFile, CompressionMode.Compress))
        {
            inFile.CopyTo(Compress);
        }
        return outFile.ToArray();
    }
}
However, in .NET 2.0 the Stream.CopyTo method is not available, so I tried making a replacement:
public static byte[] CompressData(byte[] data_toCompress)
{
    using (MemoryStream outFile = new MemoryStream())
    {
        using (MemoryStream inFile = new MemoryStream(data_toCompress))
        using (GZipStream Compress = new GZipStream(outFile, CompressionMode.Compress))
        {
            //inFile.CopyTo(Compress);
            Compress.Write(inFile.GetBuffer(), (int)inFile.Position, (int)(inFile.Length - inFile.Position));
        }
        return outFile.ToArray();
    }
}
The compression fails, though, when using the above attempt - I get an error saying:
MemoryStream's internal buffer cannot be accessed.
Could anyone offer any help on this issue? I'm really not sure what else to do here.
Thank you,
Evan
This is the code straight out of the .NET 4.0 Stream.CopyTo method (bufferSize is 4096):
byte[] buffer = new byte[bufferSize];
int count;
while ((count = this.Read(buffer, 0, buffer.Length)) != 0)
    destination.Write(buffer, 0, count);
Since you have access to the array already, why don't you do this:
using (MemoryStream outFile = new MemoryStream())
{
    using (GZipStream Compress = new GZipStream(outFile, CompressionMode.Compress))
    {
        Compress.Write(data_toCompress, 0, data_toCompress.Length);
    }
    return outFile.ToArray();
}
Most likely, in the sample code you are using, inFile.GetBuffer() will throw an exception because you do not use the right constructor - not all MemoryStream instances allow access to the internal buffer. You have to look for this in the documentation:
Initializes a new instance of the MemoryStream class based on the specified region of a byte array, with the CanWrite property set as specified, and the ability to call GetBuffer set as specified.
This should work - but is not needed anyway in the suggested solution:
using (MemoryStream inFile = new MemoryStream(data_toCompress, 0, data_toCompress.Length, false, true))
Why are you constructing a memory stream with an array and then trying to pull the array back out of the memory stream?
You could just do Compress.Write(data_toCompress, 0, data_toCompress.Length);
If you need to replace the functionality of CopyTo, you can create a buffer array of some length, read data from the source stream and write that data to the destination stream.
You can try:
inFile.WriteTo(Compress);
Try to replace the line:
Compress.Write(inFile.GetBuffer(), (int)inFile.Position, (int)(inFile.Length - inFile.Position));
with:
Compress.Write(data_toCompress, 0, data_toCompress.Length);
You can then get rid of this line completely:
using (MemoryStream inFile = new MemoryStream(data_toCompress))
Edit: find an example here: Why does gzip/deflate compressing a small file result in many trailing zeroes?
You should manually read and write between these 2 streams:
private static void CopyStream(Stream from, Stream to)
{
    int bufSize = 1024, count;
    byte[] buffer = new byte[bufSize];
    count = from.Read(buffer, 0, bufSize);
    while (count > 0)
    {
        to.Write(buffer, 0, count);
        count = from.Read(buffer, 0, bufSize);
    }
}
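Tying it back to the original method, a sketch of CompressData for .NET 2.0 using such a helper (equivalent to just writing data_toCompress directly, as the other answers suggest):

// Sketch: .NET 2.0 CompressData using the manual CopyStream helper above.
public static byte[] CompressData(byte[] data_toCompress)
{
    using (MemoryStream outFile = new MemoryStream())
    {
        using (MemoryStream inFile = new MemoryStream(data_toCompress))
        using (GZipStream Compress = new GZipStream(outFile, CompressionMode.Compress))
        {
            CopyStream(inFile, Compress);
        }
        return outFile.ToArray();
    }
}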
The open-source NuGet package Stream.CopyTo implements Stream.CopyTo for all versions of the .NET Framework.
Available on GitHub and via NuGet (Install-Package Stream.CopyTo)

How to determine size of string, and compress it

I'm currently developing an application in C# that uses Amazon SQS
The size limit for a message is 8kb.
I have a method that is something like:
public void QueueMessage(string message)
Within this method, I'd like to first compress the message (most messages are passed in as JSON, so they are already fairly small).
If the compressed string is still larger than 8kb, I'll store it in S3.
My question is:
How can I easily test the size of a string, and what's the best way to compress it?
I'm not looking for massive reductions in size, just something nice and easy - and easy to decompress the other end.
To know the "size" (in KB) of a string, we need to know the encoding. If we assume UTF-8, then it is (not including BOM etc.) as below (but swap the encoding if it isn't UTF-8):
int len = Encoding.UTF8.GetByteCount(longString);
Re packing it: I would suggest GZip over the UTF-8 bytes, optionally followed by base-64 if it has to be a string:
using (MemoryStream ms = new MemoryStream())
{
    using (GZipStream gzip = new GZipStream(ms, CompressionMode.Compress, true))
    {
        byte[] raw = Encoding.UTF8.GetBytes(longString);
        gzip.Write(raw, 0, raw.Length);
        gzip.Close();
    }
    byte[] zipped = ms.ToArray(); // as a BLOB
    string base64 = Convert.ToBase64String(zipped); // as a string
    // store zipped or base64
}
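And for the other end, a sketch of the matching decompression (the reverse of the packing above: base-64, then gunzip, then UTF-8; the method name is illustrative):

// Sketch: reverse of the packing above.
public static string UnpackString(string base64)
{
    byte[] zipped = Convert.FromBase64String(base64);
    using (MemoryStream ms = new MemoryStream(zipped))
    using (GZipStream gzip = new GZipStream(ms, CompressionMode.Decompress))
    using (MemoryStream plain = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
        {
            plain.Write(buffer, 0, read);
        }
        return Encoding.UTF8.GetString(plain.ToArray());
    }
}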
Pass the zipped bytes to this function. The best I could come up with was:
public static byte[] ZipToUnzipBytes(byte[] bytesContext)
{
    byte[] arrUnZipFile = null;
    if (bytesContext.Length > 100)
    {
        using (var inFile = new MemoryStream(bytesContext))
        {
            using (var decompress = new GZipStream(inFile, CompressionMode.Decompress, false))
            {
                // the last 4 bytes of a gzip stream hold the uncompressed size (mod 2^32),
                // so read them to size the output buffer (with some slack)
                byte[] bufferWrite = new byte[4];
                inFile.Position = (int)inFile.Length - 4;
                inFile.Read(bufferWrite, 0, 4);
                inFile.Position = 0;
                arrUnZipFile = new byte[BitConverter.ToInt32(bufferWrite, 0) + 100];
                decompress.Read(arrUnZipFile, 0, arrUnZipFile.Length);
            }
        }
    }
    return arrUnZipFile;
}
