DotNetZip fails with "stream does not support seek operations" - c#

I am using DotNetZip in C# to unzip from a stream as follows:
public static void unzipFromStream(Stream stream, string outdir)
{
    // try/catch block omitted
    using (ZipFile zip = ZipFile.Read(stream))
    {
        foreach (ZipEntry e in zip)
        {
            e.Extract(outdir, ExtractExistingFileAction.OverwriteSilently);
        }
    }
}
The stream is obtained using:
WebClient client = new WebClient();
Stream fs = client.OpenRead(url);
However, I got the following exception
exception during extracting zip from stream System.NotSupportedException: This stream does not support seek operations.
at System.Net.ConnectStream.get_Position()
at Ionic.Zip.ZipFile.Read(Stream zipStream, TextWriter statusMessageWriter, Encoding encoding, EventHandler`1 readProgress)
On the server side (ASP.NET MVC 4), returning either FilePathResult or FileStreamResult causes this exception.
Should I obtain the stream differently on the client side? Or how can I make the server return a "seekable" stream? Thanks!

You'll have to download the data to a file or to memory, and then create a FileStream or a MemoryStream, or some other stream type that supports seeking. For example:
WebClient client = new WebClient();
client.DownloadFile(url, filename);
using (var fs = File.OpenRead(filename))
{
    unzipFromStream(fs, outdir);
}
File.Delete(filename);
Or, if the data will fit into memory:
byte[] data = client.DownloadData(url);
using (var fs = new MemoryStream(data))
{
    unzipFromStream(fs, outdir);
}
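If you'd rather keep unzipFromStream untouched, another option is a small helper that buffers any non-seekable stream into memory before handing it off. This is only a sketch (the EnsureSeekable name is mine, not part of DotNetZip), and it only makes sense when the download fits in memory:
public static Stream EnsureSeekable(Stream input)
{
    // Already seekable (FileStream, MemoryStream, ...): hand it back unchanged.
    if (input.CanSeek)
        return input;

    // Otherwise buffer the whole stream into memory so ZipFile.Read can seek.
    var buffer = new MemoryStream();
    input.CopyTo(buffer);
    buffer.Position = 0;
    return buffer;
}
With that in place, unzipFromStream(EnsureSeekable(client.OpenRead(url)), outdir) should work for responses that fit in memory.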

Related

.Net : Unzipping Gz file : Insufficient memory to continue the execution of the program

So I have been using the method below to unzip .gz files and it's been working really well.
It uses SharpZipLib.
Now I am using larger files, and it seems to be trying to unzip everything in memory, giving me: "Insufficient memory to continue the execution of the program."
Should I be reading each line instead of using ReadToEnd()?
public static void DecompressGZip(String fileRoot, String destRoot)
{
    using FileStream fileStream = new FileStream(fileRoot, FileMode.Open, FileAccess.Read);
    using GZipInputStream zipStream = new GZipInputStream(fileStream);
    using StreamReader sr = new StreamReader(zipStream);

    var data = sr.ReadToEnd();
    File.WriteAllText(destRoot, data);
}
As @alexei-levenkov suggests, CopyTo will chunk the copy without loading the whole file into memory.
Bonus points: use the async version for the threading goodness.
public static async Task DecompressGZipAsync(String fileRoot, String destRoot)
{
    using (Stream zipFileStream = File.OpenRead(fileRoot))
    using (Stream outputFileStream = File.Create(destRoot))
    using (Stream zipStream = new GZipInputStream(zipFileStream))
    {
        await zipStream.CopyToAsync(outputFileStream);
    }
}
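For what it's worth, a call site would look something like this (the paths are just placeholders):
// Decompress a .gz file to a text file without loading it all into memory.
await DecompressGZipAsync(@"C:\logs\big-file.gz", @"C:\logs\big-file.txt");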

Downloading objects from Amazon S3 using AWS SDK - resultant file is corrupt

I have a .Net Core 3.1 Web API which downloads an object (a PDF) from Amazon S3 to disk, using the AWS SDK library.
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System.IO;
using System.Threading.Tasks;

private async Task DownloadObject()
{
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials("MyAccessKey", "MySecretKey");
    IAmazonS3 client = new AmazonS3Client(awsCredentials, Amazon.RegionEndpoint.USEast1);

    GetObjectRequest request = new GetObjectRequest
    {
        BucketName = "mybucket",
        Key = "test.pdf"
    };

    using (GetObjectResponse response = await client.GetObjectAsync(request))
    {
        using (Stream responseStream = response.ResponseStream)
        {
            using (StreamReader reader = new StreamReader(responseStream))
            {
                string responseBody = await reader.ReadToEndAsync();
                File.WriteAllText("C:\\test.pdf", responseBody);
            }
        }
    }
}
When the PDF downloads, the file size is wrong (too big) and if I open the PDF, all the pages are blank. This happens with other file types too. If I download a JPEG for example, I cannot open it - it's corrupt. Is it an encoding issue?
String encoding is not round-trippable for arbitrary binary data. That is to say, treating an arbitrary byte[] array as UTF-8-, ASCII-, or otherwise-encoded text and converting byte -> string -> byte will often produce a different array of bytes than the one you started with. Presumably your PDF file contains binary data.
I recommend that you instead copy directly from one stream to another:
using (GetObjectResponse response = await client.GetObjectAsync(request))
{
    using (Stream responseStream = response.ResponseStream)
    using (FileStream outFile = File.Create("C:\\test.pdf"))
    {
        responseStream.CopyTo(outFile);
    }
}
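If you want to see the corruption for yourself, here is a tiny illustration (the byte values are arbitrary, chosen to be invalid UTF-8):
// Invalid UTF-8 sequences are replaced with U+FFFD when decoded, so re-encoding
// yields a different (here, longer) byte array than the original.
byte[] original  = { 0xFF, 0xFE, 0x00, 0x89 };
string asText    = Encoding.UTF8.GetString(original);
byte[] roundTrip = Encoding.UTF8.GetBytes(asText);
Console.WriteLine($"{original.Length} bytes in, {roundTrip.Length} bytes out");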

Can I copy compressed GZipStream to another stream?

I am writing a proxy for some site using ASP.NET Core 2.0. The proxy works fine as long as all it does is relay the HttpResponseMessage to the browser. My proxy is based on this example. But I need to make some changes to the site content; for instance, some of the href values contain absolute references, so when I click them from my proxy I end up on the original site, which is a problem.
I get access to the target page content using an approach I found here. But when I try to copy the changed content to HttpResponse.Body, I run into a NotSupportedException with the message "GZipStream does not support reading". My code is below:
public static async Task CopyProxyHttpResponse(this HttpContext context, HttpResponseMessage responseMessage)
{
    if (responseMessage == null)
    {
        throw new ArgumentNullException(nameof(responseMessage));
    }

    var response = context.Response;
    response.StatusCode = (int)responseMessage.StatusCode;
    // work with headers

    using (var responseStream = await responseMessage.Content.ReadAsStreamAsync())
    {
        string str;
        using (var gZipStream = new GZipStream(responseStream, CompressionMode.Decompress))
        using (var streamReader = new StreamReader(gZipStream))
        {
            str = await streamReader.ReadToEndAsync();
            // some string changes...
        }

        var bytes = Encoding.UTF8.GetBytes(str);
        using (var msi = new MemoryStream(bytes))
        using (var mso = new MemoryStream())
        {
            using (var gZipStream = new GZipStream(mso, CompressionMode.Compress))
            {
                await msi.CopyToAsync(gZipStream);
                await gZipStream.CopyToAsync(response.Body, StreamCopyBufferSize, context.RequestAborted);
            }
        }

        // the next line works, but it doesn't let me change the content
        // await responseStream.CopyToAsync(response.Body, StreamCopyBufferSize, context.RequestAborted);
    }
}
After some searching, I found out that gZipStream.CanRead is false after compression; it seems to always be false when CompressionMode is Compress. I also tried copying msi into response.Body directly; that doesn't throw, but in the browser I get an empty page (the document response in the Network tab of the browser console is also empty).
Is it possible to copy a compressed GZipStream to another Stream, or is my approach entirely wrong?
GZipStream is not meant to be copied from directly. Your mso Stream is holding the compressed data.
But you can drop the mso stream entirely and copy from your msi stream to the response.Body:
using (var msi = new MemoryStream(bytes))
{
    // declare response.Body as the target for the compressed data
    using (var gZipStream = new GZipStream(response.Body, CompressionMode.Compress))
    {
        // copy the msi stream into response.Body through the gZipStream
        await msi.CopyToAsync(gZipStream, StreamCopyBufferSize, context.RequestAborted);
    }
}
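One thing to double-check, not shown above: since the body is compressed again on the way out, the outgoing response should still advertise gzip, and any Content-Length header copied from the upstream response will no longer be correct. Assuming ASP.NET Core's HttpResponse, something along these lines:
response.Headers["Content-Encoding"] = "gzip";   // body is gzip-compressed again
response.Headers.Remove("Content-Length");       // the recompressed body has a different length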

send GZIP stream over WCF

Below is my code.
I set the Content-Encoding header, write the file stream to a memory stream using gzip compression, and finally return the memory stream.
However, the Android, iOS, and web browser clients all receive corrupt copies of the stream. None of them are able to fully read the decompressed stream on the other side. Which vital part am I missing?
public Stream GetFileStream(String path, String basePath)
{
    FileInfo fi = new FileInfo(basePath + path);
    //WebOperationContext.Current.OutgoingResponse.ContentType = "application/x-gzip";
    WebOperationContext.Current.OutgoingResponse.Headers.Add("Content-Encoding", "gzip");

    MemoryStream ms = new MemoryStream();
    GZipStream CompressStream = new GZipStream(ms, CompressionMode.Compress);

    // Get the stream of the source file.
    FileStream inFile = fi.OpenRead();

    // Prevent compressing hidden and already compressed files.
    if ((File.GetAttributes(fi.FullName) & FileAttributes.Hidden)
        != FileAttributes.Hidden & fi.Extension != ".gz")
    {
        // Copy the source file into the compression stream.
        inFile.CopyTo(CompressStream);
        Log.d(String.Format("Compressed {0} from {1} to {2} bytes.",
            fi.Name, fi.Length.ToString(), ms.Length.ToString()));
    }

    ms.Position = 0;
    inFile.Close();
    return ms;
}
I'd strongly recommend sending a byte array instead. Then, on the client side, create a gzip stream over the received byte array.
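Roughly along these lines (a sketch, and GetFileBytes is a made-up name, not your WCF contract). The key detail is disposing the GZipStream before reading the bytes: your original code returns ms while the compressor is still open, so the buffered data and the gzip footer are never written, which by itself corrupts the download.
public byte[] GetFileBytes(String path, String basePath)
{
    FileInfo fi = new FileInfo(basePath + path);
    using (MemoryStream ms = new MemoryStream())
    {
        // leaveOpen: true keeps ms usable after the compressor is disposed.
        using (GZipStream compress = new GZipStream(ms, CompressionMode.Compress, leaveOpen: true))
        using (FileStream inFile = fi.OpenRead())
        {
            inFile.CopyTo(compress);
        } // disposing the GZipStream flushes the remaining data and writes the gzip footer

        return ms.ToArray();
    }
}
The client then wraps the received byte array in a MemoryStream and reads it through a GZipStream in Decompress mode.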

unable to save dynamically created MemoryStream with rebex sftp

I'm using a StreamWriter to generate a dynamic file and holding it in a MemoryStream. Everything appears to be fine until I go to save the file using Rebex SFTP.
The example they give on their site works fine:
// upload a text using a MemoryStream
string message = "Hello from Rebex FTP for .NET!";
byte[] data = System.Text.Encoding.Default.GetBytes(message);
System.IO.MemoryStream ms = new System.IO.MemoryStream(data);
client.PutFile(ms, "message.txt");
However the code below does not:
using (var stream = new MemoryStream())
{
    using (var writer = new StreamWriter(stream))
    {
        writer.AutoFlush = true;
        writer.Write("test");
    }
    client.PutFile(stream, "test.txt");
}
The file "test.txt" is saved, however it is empty. Do I need to do more than just enable AutoFlush for this to work?
After writing to the MemoryStream, the stream is positioned at the end. The PutFile method reads from the current position to the end. That's exactly 0 bytes.
You need to position the stream at the beginning before passing it to PutFile:
...
}
stream.Seek(0, SeekOrigin.Begin);
client.PutFile(stream, "test.txt");
You may also need to prevent the StreamWriter from disposing the MemoryStream:
var writer = new StreamWriter(stream);
writer.Write("test");
writer.Flush();
stream.Seek(0, SeekOrigin.Begin);
client.PutFile(stream, "test.txt");
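On .NET 4.5 or later there is also a StreamWriter constructor overload with a leaveOpen parameter, so the writer can be disposed (which flushes it) without closing the underlying MemoryStream. A variation on the same idea:
using (var stream = new MemoryStream())
{
    using (var writer = new StreamWriter(stream, Encoding.UTF8, 1024, leaveOpen: true))
    {
        writer.Write("test");
    } // disposing the writer flushes it but leaves the MemoryStream open

    stream.Seek(0, SeekOrigin.Begin);
    client.PutFile(stream, "test.txt");
}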
