JSON truncated when compressing HTTP responses - c#

When I apply gzip or deflate compression to my HTTP responses, I seem to be losing the last bracket in my JSON structures. For example:
Result without compression:
{"alist":{"P_1":0,"P_2":0,"P_3":0}}
Result with compression as received by the browser:
{"alist":{"P_1":0,"P_2":0,"P_3":0}
When writing the response without compression I am doing the following:
byte[] buffer = Encoding.UTF8.GetBytes(responseContent);
context.Response.ContentLength64 = buffer.Length;
context.Response.ContentType = ContentTypeJson;
Stream outputStream = context.Response.OutputStream;
outputStream.Write(buffer, 0, buffer.Length);
outputStream.Close();
Alternatively, when the caller provides an Accept-Encoding request header, I try to write the response with compression as follows:
byte[] buffer = Encoding.UTF8.GetBytes(responseContent);
byte[] compressedBuffer;
using (var memoryStream = new MemoryStream())
{
    using (Stream compressionStream = new DeflateStream(memoryStream, CompressionMode.Compress, false))
    {
        compressionStream.Write(buffer, 0, buffer.Length);
        compressedBuffer = memoryStream.ToArray();
        compressionStream.Close();
    }
    memoryStream.Close();
}
context.Response.ContentLength64 = compressedBuffer.Length;
context.Response.ContentType = ContentTypeJson;
Stream outputStream = context.Response.OutputStream;
outputStream.Write(compressedBuffer, 0, compressedBuffer.Length);
outputStream.Close();
If it helps, I am using a System.Net.HttpListener, which is why I have to do this myself. Does anyone have any idea why this truncation may be occurring?

DeflateStream doesn't write everything to its underlying stream immediately when you write into it, but you can be sure it has done so once you close it. So the following will work:
compressionStream.Write(buffer, 0, buffer.Length);
compressionStream.Close();
compressedBuffer = memoryStream.ToArray();
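Putting it together, a minimal sketch of the compressed write path could look like this (whether the client asked for deflate or gzip is up to your Accept-Encoding check; the Content-Encoding header shown assumes deflate):
byte[] buffer = Encoding.UTF8.GetBytes(responseContent);
byte[] compressedBuffer;
using (var memoryStream = new MemoryStream())
{
    using (var compressionStream = new DeflateStream(memoryStream, CompressionMode.Compress, true))
    {
        compressionStream.Write(buffer, 0, buffer.Length);
    }   // disposing the DeflateStream flushes the remaining compressed bytes into memoryStream
    compressedBuffer = memoryStream.ToArray();
}
context.Response.ContentLength64 = compressedBuffer.Length;
context.Response.ContentType = ContentTypeJson;
context.Response.AddHeader("Content-Encoding", "deflate");
using (Stream outputStream = context.Response.OutputStream)
{
    outputStream.Write(compressedBuffer, 0, compressedBuffer.Length);
}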

Is returning a StreamReader from an FTP response good practice?

I have a method that downloads a file over FTP, but I do not save the file locally; rather, I parse it in memory through the FTP response. My question is: is returning a StreamReader after getting the FTP response stream good practice? I ask because I do not want to do the parsing and other work in the same method.
var uri = new Uri(string.Format("ftp://{0}/{1}/{2}", "somevalue", remotefolderpath, remotefilename));
var request = (FtpWebRequest)FtpWebRequest.Create(uri);
request.Credentials = new NetworkCredential(userName, password);
request.Method = WebRequestMethods.Ftp.DownloadFile;
var ftpResponse = (FtpWebResponse)request.GetResponse();
/* Get the FTP Server's Response Stream */
ftpStream = ftpResponse.GetResponseStream();
return responseStream = new StreamReader(ftpStream);
For me there are two disadvantages to using the stream directly; if you can live with them, you shouldn't waste memory or disk space:
In this stream you cannot seek to a specific position; you can only read the contents as they come in.
Your internet connection could drop suddenly and you will get an exception while parsing and processing your file; either split the parsing from the processing, or make sure your processing routine can handle a file being processed a second time (after a failure halfway through the first attempt).
To work around these issues, you could copy the stream to a MemoryStream:
using (var ftpStream = ftpResponse.GetResponseStream())
{
    var memoryStream = new MemoryStream();
    var buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        memoryStream.Write(buffer, 0, bytesRead);
    }
    memoryStream.Flush();
    memoryStream.Position = 0;
    return memoryStream;
}
If you are working with larger files, I prefer writing to a temporary file; this way you minimize the memory footprint of your application:
using (var ftpStream = ftpResponse.GetResponseStream())
{
    // Path.GetTempFileName() already creates the file, so open it with FileMode.Create rather than CreateNew
    var fileStream = new FileStream(Path.GetTempFileName(), FileMode.Create);
    var buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, bytesRead);
    }
    fileStream.Flush();
    fileStream.Position = 0;
    return fileStream;
}
I see returning a responseStream as more practical when you are performing an HttpWebRequest. If you are using FtpWebRequest, it means you are working with files. I would read the responseStream into a byte[] and return the byte content of the downloaded file, so you can easily work with the System.IO.File classes to handle it.
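For example, once the method hands back a byte[], persisting it locally is a one-liner (fileBytes and localPath here are just illustrative names):
File.WriteAllBytes(localPath, fileBytes);   // fileBytes = downloaded contents, localPath = wherever you want the file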
Thanks Carlos, that was really helpful. I just return the byte[]:
byte[] buffer = new byte[16 * 1024];
using (MemoryStream ms = new MemoryStream())
{
    int read;
    while ((read = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        ms.Write(buffer, 0, read);
    }
    return ms.ToArray();
}
and used the byte[] in the method like this:
public async Task ParseReport(byte[] bytesRead)
{
    using (var stream = new MemoryStream(bytesRead))
    using (StreamReader reader = new StreamReader(stream))
    {
        string line;
        while (null != (line = reader.ReadLine()))
        {
            string[] values = line.Split(';');
        }
    }
}

GZipping Javascript from .ashx returns decoding error in browser

Background
I'm setting up a generic handler to:
Combine & compress Javascript and CSS files
Cache a GZip version & a Non-GZip version
Serve the appropriate version based on the request
I'm working in MonoDevelop v2.8.2 on OSX 10.7.2
Problem
Since I want to cache the GZipped version, I need to GZip without using a response filter.
Using this code, I can compress and decompress a string on the server successfully, but when I serve it to the client I get:
Error 330 (net::ERR_CONTENT_DECODING_FAILED): Unknown error. (Chrome)
Cannot decode raw data (Safari)
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression. (Firefox)
Relevant Code
string sCompiled = null;
if (bCanGZip)
{
    context.Response.AddHeader("Content-Encoding", "gzip");
    bHasValue = CurrentCache.CompiledScripts.TryGetValue(context.Request.Url.ToString() + "GZIP", out sCompiled);
}
//...
//Process files if bHasValue is false
//Compress result of file concatenation/minification
//Compression method
public static string CompressString(string text)
{
    UTF8Encoding encoding = new UTF8Encoding(false);
    byte[] buffer = encoding.GetBytes(text);
    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (GZipStream gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
        {
            gZipStream.Write(buffer, 0, buffer.Length);
        }
        memoryStream.Position = 0;
        byte[] compressedData = new byte[memoryStream.Length];
        memoryStream.Read(compressedData, 0, compressedData.Length);
        byte[] gZipBuffer = new byte[compressedData.Length + 4];
        Buffer.BlockCopy(compressedData, 0, gZipBuffer, 4, compressedData.Length);
        Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gZipBuffer, 0, 4);
        return Convert.ToBase64String(gZipBuffer);
    }
}
//...
//Return value
switch (Type)
{
    case FileType.CSS:
        context.Response.ContentType = "text/css";
        break;
    case FileType.JS:
        context.Response.ContentType = "application/javascript";
        break;
}
context.Response.AddHeader("Content-Length", sCompiled.Length.ToString());
context.Response.Clear();
context.Response.Write(sCompiled);
Attempts to Resolve
Since I'm not sure what the lines:
byte[] gZipBuffer = new byte[compressedData.Length + 4];
Buffer.BlockCopy(compressedData, 0, gZipBuffer, 4, compressedData.Length);
Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gZipBuffer, 0, 4);
are accomplishing, I tried removing them.
I tried playing with different Encodings/options.
At this point I'm really not sure how to attack the problem since I don't know the source of the error (Encoding/Compression/other).
Any help would be very appreciated!
Other Resources I've found on the subject
http://beta.blogs.microsoft.co.il/blogs/mneiter/archive/2009/03/24/how-to-compress-and-decompress-using-gzipstream-object.aspx
http://madskristensen.net/post/Compress-and-decompress-strings-in-C.aspx
http://www.codeproject.com/KB/files/GZipStream.aspx
http://www.codeproject.com/KB/aspnet/HttpCombine.aspx
http://webreflection.blogspot.com/2009/01/quick-tip-c-gzip-content.html
http://www.dominicpettifer.co.uk/Blog/17/gzip-compress-your-websites-html-css-script-in-code
This is one of those things where once you explain your problem, you quickly find the answer.
I needed to write out the response as binary. So, modifying the compression algorithm to return a byte array:
public static byte[] CompressStringToArray(string text)
{
    UTF8Encoding encoding = new UTF8Encoding(false);
    byte[] buffer = encoding.GetBytes(text);
    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (GZipStream gZipStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
        {
            gZipStream.Write(buffer, 0, buffer.Length);
        }
        memoryStream.Position = 0;
        byte[] compressedData = new byte[memoryStream.Length];
        memoryStream.Read(compressedData, 0, compressedData.Length);
        return compressedData;
    }
}
and then calling:
//Writes a byte buffer without encoding the response stream
context.Response.BinaryWrite(GZipTools.CompressStringToArray(sCompiled));
Solves the issue. Hopefully this helps others who will face the same problem.
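One related detail worth checking: the original code set the Content-Length header from sCompiled.Length, which was the length of the Base64 string rather than of the compressed payload. When serving the raw bytes, the header should reflect the compressed byte count. A sketch of the serving side under that assumption (using the JS branch of the original switch):
byte[] gzBytes = GZipTools.CompressStringToArray(sCompiled);   // sCompiled = the combined, minified script
context.Response.Clear();
context.Response.ContentType = "application/javascript";
context.Response.AddHeader("Content-Encoding", "gzip");
context.Response.AddHeader("Content-Length", gzBytes.Length.ToString());
context.Response.BinaryWrite(gzBytes);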

How to compress a HttpWebRequest POST

I am trying to post data to a server that accepts compressed data. The code below works just fine, but it is uncompressed. I have not worked with compression or GZip before, so any help is appreciated.
HttpWebRequest request = WebRequest.Create(uri) as HttpWebRequest;
request.Timeout = 600000;
request.Method = verb; // POST
request.Accept = "text/xml";
if (!string.IsNullOrEmpty(data))
{
    request.ContentType = "text/xml";
    byte[] byteData = UTF8Encoding.UTF8.GetBytes(data);
    request.ContentLength = byteData.Length;
    // Here is where I need to compress the above byte array using GZipStream
    using (Stream postStream = request.GetRequestStream())
    {
        postStream.Write(byteData, 0, byteData.Length);
    }
}
XmlDocument xmlDoc = new XmlDocument();
HttpWebResponse response = null;
StreamReader reader = null;
try
{
    response = request.GetResponse() as HttpWebResponse;
    reader = new StreamReader(response.GetResponseStream());
    xmlDoc.LoadXml(reader.ReadToEnd());
}
Do I gzip the entire byte array? Do I need to add other headers or remove the one that is already there?
Thanks!
-Scott
To answer the question you asked: to POST compressed data, all you need to do is wrap the request stream with a GZip stream:
using (Stream postStream = request.GetRequestStream())
{
    using (var zipStream = new GZipStream(postStream, CompressionMode.Compress))
    {
        zipStream.Write(byteData, 0, byteData.Length);
    }
}
This is completely different than requesting a gzip response, which is a much more common thing to do.
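Depending on how the server detects a compressed request body, you may also need to say so explicitly and avoid pre-setting ContentLength to the uncompressed size. A minimal sketch (whether the endpoint keys off a Content-Encoding request header is an assumption about your server):
request.Headers["Content-Encoding"] = "gzip";   // tell the server the body is gzip-compressed, if that is what it expects
request.ContentType = "text/xml";
using (Stream postStream = request.GetRequestStream())
using (var zipStream = new GZipStream(postStream, CompressionMode.Compress))
{
    zipStream.Write(byteData, 0, byteData.Length);
}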
I also received the "Cannot close stream until all bytes are written" error using code similar to tnyfst's. The problem was that I had:
request.ContentLength = binData.Length;
where binData is my raw data before the compression. Obviously the length of the compressed content would be different, so I just removed this line and ended up with this code:
using (GZipStream zipStream = new GZipStream(request.GetRequestStream(), CompressionMode.Compress))
{
    zipStream.Write(binData, 0, binData.Length);
}
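If the server does insist on an accurate Content-Length, an alternative is to compress into a buffer first and set the length from the compressed bytes; a sketch along those lines:
byte[] compressed;
using (var ms = new MemoryStream())
{
    using (var zip = new GZipStream(ms, CompressionMode.Compress, true))
    {
        zip.Write(binData, 0, binData.Length);
    }
    compressed = ms.ToArray();   // the actual compressed payload
}
request.ContentLength = compressed.Length;
using (Stream postStream = request.GetRequestStream())
{
    postStream.Write(compressed, 0, compressed.Length);
}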
In the Page_Load event:
Response.AddHeader("Content-Encoding", "gzip");
And for making compressed requests:
HttpWebRequest and GZip Http Responses by Rick Strahl
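On the client side, HttpWebRequest can also decompress a gzip (or deflate) response transparently; a minimal sketch:
var request = (HttpWebRequest)WebRequest.Create(uri);
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string body = reader.ReadToEnd();   // already decompressed by the framework
}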
Try this extension method.
The stream will be left open (see the GZipStream constructor).
The stream position is set to 0 after the compression is done.
public static void GZip(this Stream stream, byte[] data)
{
    using (var zipStream = new GZipStream(stream, CompressionMode.Compress, true))
    {
        zipStream.Write(data, 0, data.Length);
    }
    stream.Position = 0;
}
You can use the following test:
[Test]
public void Test_gzip_data_is_restored_to_the_original_value()
{
    var stream = new MemoryStream();
    var data = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    stream.GZip(data);
    var decompressed = new GZipStream(stream, CompressionMode.Decompress);
    var data2 = new byte[10];
    decompressed.Read(data2, 0, 10);
    Assert.That(data, Is.EqualTo(data2));
}
For more information see: http://msdn.microsoft.com/en-us/library/hh158301(v=vs.110).aspx

How to convert Video to byte Array in C#?

I am using the C# .NET Compact Framework 3.5 and I want to convert a video file to a byte array so that I can upload it to the server.
I am doing image uploading in a similar manner, and that works successfully.
HttpWebRequest request;
request.ContentType = "image/jpeg";
request.ContentLength = byteArray.Length;
request.Method = "PUT";
imageToByteArray(img).CopyTo(byteArray, 0);
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(byteArray, 0, (int)Fs.Length);
    requestStream.Flush();
    requestStream.Close();
}

public byte[] imageToByteArray(Image imageIn)
{
    MemoryStream ms = new MemoryStream();
    imageIn.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
    return ms.ToArray();
}
How do I do this for video files?
You should copy the stream one block at a time instead of reading the entire file into an array. Otherwise, you'll use a potentially very large amount of memory as video files can grow quite big.
For example:
HttpWebRequest request;
request.Method = "PUT";
using (Stream requestStream = request.GetRequestStream())
using (Stream video = File.OpenRead("Path"))
{
    byte[] buffer = new byte[4096];
    while (true)
    {
        int bytesRead = video.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0) break;
        requestStream.Write(buffer, 0, bytesRead);
    }
}
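If the uploads are very large, note that HttpWebRequest buffers the whole request body in memory by default; a sketch of the usual ways to avoid that (whether your server accepts a chunked PUT is an assumption):
request.AllowWriteStreamBuffering = false;
// Option 1: supply the length up front
request.ContentLength = new FileInfo("Path").Length;
// Option 2: use chunked transfer instead of a Content-Length
// request.SendChunked = true;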

Writing to the compression stream is not supported. Using System.IO.GZipStream

I get an exception when trying to decompress a (.gz) file using the GZipStream class that is included in the .NET framework. I am using the MSDN documentation. This is the exception:
Writing to the compression stream is not supported.
Here is the application source:
try
{
    var infile = new FileStream(@"C:\TarDecomp\TarDecomp\TarDecomp\bin\Debug\nick_blah-2008.tar.gz", FileMode.Open, FileAccess.Read, FileShare.Read);
    byte[] buffer = new byte[infile.Length];
    // Read the file to ensure it is readable.
    int count = infile.Read(buffer, 0, buffer.Length);
    if (count != buffer.Length)
    {
        infile.Close();
        Console.WriteLine("Test Failed: Unable to read data from file");
        return;
    }
    infile.Close();
    MemoryStream ms = new MemoryStream();
    // Use the newly created memory stream for the compressed data.
    GZipStream compressedzipStream = new GZipStream(ms, CompressionMode.Decompress, true);
    Console.WriteLine("Decompression");
    compressedzipStream.Write(buffer, 0, buffer.Length); //<<Throws error here
    // Close the stream.
    compressedzipStream.Close();
    Console.WriteLine("Original size: {0}, Compressed size: {1}", buffer.Length, ms.Length);
}
catch { ... }
The exception is thrown at the compressedzipStream.Write() call.
Any ideas? What is this exception telling me?
It is telling you that you should call Read instead of Write since it's decompression! Also the memory stream should be constructed with the data, or rather you should pass the file stream directly to the GZipStream constructor.
Example of how it should have been done (haven't tried to compile it):
Stream inFile = new FileStream(@"C:\TarDecomp\TarDecomp\TarDecomp\bin\Debug\nick_blah-2008.tar.gz", FileMode.Open, FileAccess.Read, FileShare.Read);
Stream decodedStream = new MemoryStream();
byte[] buffer = new byte[4096];
using (Stream inGzipStream = new GZipStream(inFile, CompressionMode.Decompress))
{
    int bytesRead;
    while ((bytesRead = inGzipStream.Read(buffer, 0, buffer.Length)) > 0)
        decodedStream.Write(buffer, 0, bytesRead);
}
// Now decodedStream contains the decoded data
The compression code doesn't work like encryption - you can't decompress from one stream to another by writing the compressed data. You have to provide a stream which contains the compressed data already and let GZipStream read from it. Something like this:
using (Stream file = File.OpenRead(filename))
using (Stream gzip = new GZipStream(file, CompressionMode.Decompress))
using (MemoryStream memoryStream = new MemoryStream())
{
    CopyStream(gzip, memoryStream);
    return memoryStream.ToArray();
}
CopyStream is a simple utility method to read from one stream and copy all the data to another. Something like this:
static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
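As an aside, on .NET 4 and later the framework ships this helper built in as Stream.CopyTo, so CopyStream(gzip, memoryStream) can be replaced with:
gzip.CopyTo(memoryStream);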
How compression streams work can be puzzling at first.
Reading takes compressed data and gives you uncompressed data; writing takes uncompressed data and produces compressed data. Either way, the stream ensures you only "see" uncompressed data on your side.
The proper way to achieve what you are trying to do is to read from the GZipStream (which decompresses as you go) and write the resulting bytes to wherever they need to end up.
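A small round-trip sketch of that rule: plain bytes go in with Write in Compress mode, and come back out with Read (here via CopyTo) in Decompress mode.
byte[] original = Encoding.UTF8.GetBytes("hello gzip");
byte[] packed;
using (var ms = new MemoryStream())
{
    using (var gz = new GZipStream(ms, CompressionMode.Compress, true))
    {
        gz.Write(original, 0, original.Length);   // write plain data; compressed bytes land in ms
    }
    packed = ms.ToArray();
}
using (var ms = new MemoryStream(packed))
using (var gz = new GZipStream(ms, CompressionMode.Decompress))
using (var plain = new MemoryStream())
{
    gz.CopyTo(plain);                             // read plain data back out (CopyTo needs .NET 4+)
    byte[] roundTripped = plain.ToArray();
}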
