Adding large files to IO.Compression.ZipArchiveEntry throws OutOfMemoryException - C#

I am trying to add a large video file (~500 MB) to a ZipArchiveEntry using this code:
using (var zipFile = ZipFile.Open(outputZipFile, ZipArchiveMode.Update))
{
    var zipEntry = zipFile.CreateEntry("largeVideoFile.avi");
    using (var writer = new BinaryWriter(zipEntry.Open()))
    {
        using (FileStream fs = File.Open(@"largeVideoFile.avi", FileMode.Open))
        {
            var buffer = new byte[16 * 1024];
            using (var data = new BinaryReader(fs))
            {
                int read;
                while ((read = data.Read(buffer, 0, buffer.Length)) > 0)
                {
                    writer.Write(buffer, 0, read);
                }
            }
        }
    }
}
I am getting the error
System.OutOfMemoryException
when writer.Write is called, although I used an intermediate buffer.
Any idea how to solve this?

Build the application as Any CPU and run it on an x64 machine (or build it directly as x64). This should fix the issue.
Video files normally cannot be compressed much, and in ZipArchiveMode.Update the archive contents are held in memory until the archive is completely written, so a 32-bit process quickly exhausts its address space.
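If changing the bitness is not an option, a sketch of an alternative (assuming outputZipFile does not already exist): open the archive with ZipArchiveMode.Create instead of Update, since Create streams each entry straight to disk rather than holding the archive in memory:
using (var zipFile = ZipFile.Open(outputZipFile, ZipArchiveMode.Create))
{
    var zipEntry = zipFile.CreateEntry("largeVideoFile.avi");
    using (var entryStream = zipEntry.Open())
    using (var fs = File.Open(@"largeVideoFile.avi", FileMode.Open))
    {
        // CopyTo copies in fixed-size chunks, so memory use stays flat
        fs.CopyTo(entryStream);
    }
}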

Related

Invalid C# Zip File After Compressing

I am writing code to compress a file into a ZIP in C# using the built-in .NET library:
using System.IO.Compression;
using System.IO;
However, when the compression finishes, the code outputs an invalid zip file. It seems that somewhere along the way the file either was not written properly or was not closed fully. I have used Dispose and Close to release the resources.
public bool CompressFile(FileInfo theFile)
{
    StringBuilder compressSuccess = new StringBuilder();
    FileStream sourceFile = File.OpenRead(theFile.FullName.ToString());
    FileStream destinationFile = File.Create(theFile.FullName + ".zip");
    byte[] buffer = new byte[sourceFile.Length];
    sourceFile.Read(buffer, 0, buffer.Length);
    using (GZipStream output = new GZipStream(destinationFile,
        CompressionMode.Compress, true))
    {
        output.Write(buffer, 0, buffer.Length);
    }
    sourceFile.Dispose();
    destinationFile.Dispose();
    sourceFile.Close();
    destinationFile.Close();
    return true;
}
What could I be doing wrong? Is it because I am forcing the ".zip" extension?
Following the link suggested by FrankJames, this is an example of code to create a zip file:
var zipFile = @"e:\temp\outputFile.zip";
var theFile = @"e:\temp\sourceFile.txt";
using (var zipToCreate = new FileStream(zipFile, FileMode.Create))
{
    using (var archive = new ZipArchive(zipToCreate, ZipArchiveMode.Create))
    {
        var fileEntry = archive.CreateEntry("FileNameInsideTheZip.txt");
        using (var sourceStream = File.OpenRead(theFile))
        using (var destStream = fileEntry.Open())
        {
            var buffer = new byte[sourceStream.Length];
            sourceStream.Read(buffer, 0, buffer.Length);
            destStream.Write(buffer, 0, buffer.Length);
            destStream.Flush();
        }
    }
}
I see you never used the Flush method, which forces buffered data to be written to the stream. Note also that GZipStream produces gzip data, not a zip archive, so the output will not open as a valid ".zip" file no matter how it is flushed; ".gz" would be the honest extension.
I rewrote your code in a better style (using blocks instead of explicit Dispose calls).
Try it and let us know:
public bool CompressFile(FileInfo theFile)
{
    using (var sourceStream = File.OpenRead(theFile.FullName))
    using (var destStream = File.Create(theFile.FullName + ".zip"))
    {
        var buffer = new byte[sourceStream.Length];
        sourceStream.Read(buffer, 0, buffer.Length);
        using (var zipStream = new GZipStream(destStream, CompressionMode.Compress, true))
        {
            zipStream.Write(buffer, 0, buffer.Length);
            zipStream.Flush();
        }
        destStream.Flush();
    }
    return true;
}
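For large inputs, a sketch of the same method that streams instead of reading the whole file into a single buffer (Stream.CopyTo is available from .NET 4.0 onward); note it writes a ".gz" file, because GZipStream produces gzip data, not a zip archive:
public bool CompressFile(FileInfo theFile)
{
    using (var sourceStream = File.OpenRead(theFile.FullName))
    using (var destStream = File.Create(theFile.FullName + ".gz"))
    using (var zipStream = new GZipStream(destStream, CompressionMode.Compress))
    {
        // copies in chunks; no byte[sourceStream.Length] allocation
        sourceStream.CopyTo(zipStream);
    }
    return true;
}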

C# compress file: System out of memory

I'm developing a service to compress some files. While testing it, I hit a major failure on bigger files: using a 6 GB Outlook file as a test, I get an out-of-memory error after compressing about 500 MB.
This is my code:
using (FileStream zipToOpen = new FileStream(@dir + ZipName, FileMode.Open))
{
    using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Update))
    {
        foreach (string file in files)
        {
            if (File.GetCreationTime(@dir + file).AddSeconds(FileAge) < DateTime.Now)
            {
                ZipArchiveEntry fileEntry = archive.CreateEntry(file);
                using (BinaryWriter writer = new BinaryWriter(fileEntry.Open()))
                {
                    using (FileStream sr = new FileStream(@dir + file, FileMode.Open, FileAccess.Read))
                    {
                        byte[] block = new byte[1024];
                        int bytesRead = 0;
                        while ((bytesRead = sr.Read(block, 0, block.Length)) > 0)
                        {
                            writer.Write(block, 0, bytesRead);
                        }
                    }
                }
                File.Delete(@dir + file);
            }
        }
    }
}
Any idea how I can solve it?
Thank you in advance.
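As in the first question above, ZipArchiveMode.Update holds the whole archive in memory until it is disposed, which cannot work for a 6 GB input in a 32-bit process. A sketch of the same loop using ZipArchiveMode.Create, which writes entries straight through to disk (assuming the archive is created fresh and the same dir, ZipName, files and FileAge variables):
using (FileStream zipToOpen = new FileStream(@dir + ZipName, FileMode.Create))
using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Create))
{
    foreach (string file in files)
    {
        if (File.GetCreationTime(@dir + file).AddSeconds(FileAge) < DateTime.Now)
        {
            ZipArchiveEntry fileEntry = archive.CreateEntry(file);
            using (Stream entryStream = fileEntry.Open())
            using (FileStream sr = new FileStream(@dir + file, FileMode.Open, FileAccess.Read))
            {
                // stream the file into the entry; nothing is buffered beyond CopyTo's chunk
                sr.CopyTo(entryStream);
            }
            File.Delete(@dir + file);
        }
    }
}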

Cannot create DotNetZip ZipFile from file download via HTTP Response

I am trying to download a zip file via a URL and extract files from it. I would rather not save it to a temp file (which works fine) but keep it in memory instead - it is not very big. For example, if I try to download this file:
http://phs.googlecode.com/files/Download%20File%20Test.zip
using this code:
using Ionic.Zip;
...
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(URL);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.ContentLength > 0)
{
    using (MemoryStream zipms = new MemoryStream())
    {
        int bytesRead;
        byte[] buffer = new byte[32768];
        using (Stream stream = response.GetResponseStream())
        {
            while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                zipms.Write(buffer, 0, bytesRead);
            ZipFile zip = ZipFile.Read(stream); // <-- ERROR: "This stream does not support seek operations."
        }
        using (ZipFile zip = ZipFile.Read(zipms)) // <-- ERROR: "Could not read block - no data! (position 0x00000000)"
        using (MemoryStream txtms = new MemoryStream())
        {
            ZipEntry csentry = zip["Download File Test.cs"];
            csentry.Extract(txtms);
            txtms.Position = 0;
            using (StreamReader reader = new StreamReader(txtms))
            {
                string csText = reader.ReadToEnd();
            }
        }
    }
}
...
Note where I flagged the errors I am receiving. With the first one, it does not like the System.Net.ConnectStream. If I comment that line out and let it hit the line where I note the second error, it does not like the MemoryStream. I did see this posting: https://stackoverflow.com/a/6377099/1324284, but I am having the same issue others mention about there not being more than 4 overloads of the Read method, so I cannot try the WebClient.
However, if I do everything via a FileStream and save it to a temp location first, then point ZipFile.Read at that temp location, everything works, including extracting any contained files into a MemoryStream.
Thanks for any help.
You need to Flush() your MemoryStream and set its Position back to 0 before you read from it; otherwise you are trying to read from the current position (the end of what you just wrote), where there is nothing. The first error occurs because the response stream itself does not support seeking, so ZipFile.Read cannot work on it directly.
For your code:
ZipFile zip;
using (Stream stream = response.GetResponseStream())
{
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        zipms.Write(buffer, 0, bytesRead);
    zipms.Flush();
    zipms.Position = 0;
    zip = ZipFile.Read(zipms);
}
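For completeness, a sketch of the whole download rewritten with HttpClient (System.Net.Http) and CopyTo instead of the manual read loop; the URL variable and the entry name come from the question, and this is an untested outline rather than a drop-in replacement:
using (var client = new HttpClient())
using (var zipms = new MemoryStream())
{
    using (Stream stream = client.GetStreamAsync(URL).Result)
    {
        // the response stream is not seekable, so buffer it into memory first
        stream.CopyTo(zipms);
    }
    zipms.Position = 0; // rewind before handing the buffer to DotNetZip
    using (ZipFile zip = ZipFile.Read(zipms))
    using (MemoryStream txtms = new MemoryStream())
    {
        ZipEntry csentry = zip["Download File Test.cs"];
        csentry.Extract(txtms);
        txtms.Position = 0;
        using (StreamReader reader = new StreamReader(txtms))
        {
            string csText = reader.ReadToEnd();
        }
    }
}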

C# write an uploaded file to a UNC with FileStream, read it later sometimes doesn't work

I've got a rare case where a file cannot be read from a UNC path immediately after it was written. Here's the workflow:
1. plupload sends a large file in chunks to a WebAPI method.
2. The method writes the chunks to a UNC path (a storage server). This loops until the file is completely uploaded.
3. After a few other operations, the same method tries to read the file again, and sometimes it cannot find it.
It only seems to happen after our servers have been idle for a while. If I repeat the upload a few times, it starts to work.
I thought it might be a network configuration issue, or something to do with the file not completely closing before being read again.
Here's part of the code that writes the file (is the FileStream OK in this case?):
SaveStream(stream, new FileStream(fileName, FileMode.Append, FileAccess.Write));
Here's SaveStream definition:
private static void SaveStream(Stream stream, FileStream fileStream)
{
    using (var fs = fileStream)
    {
        var buffer = new byte[1024];
        var l = stream.Read(buffer, 0, 1024);
        while (l > 0)
        {
            fs.Write(buffer, 0, l);
            l = stream.Read(buffer, 0, 1024);
        }
        fs.Flush();
        fs.Close();
    }
}
Here's the code that reads the file:
var fileInfo = new FileInfo(fileName);
var exists = fileInfo.Exists;
It's the fileInfo.Exists that is returning false.
Thank you
These kinds of errors are mostly due to files not being closed yet.
Try passing the fileName to SaveStream and then use it as follows:
private static void SaveStream(Stream stream, string fileName)
{
    using (var fs = new FileStream(fileName, FileMode.Append, FileAccess.Write))
    {
        var buffer = new byte[1024];
        var l = stream.Read(buffer, 0, 1024);
        while (l > 0)
        {
            fs.Write(buffer, 0, l);
            l = stream.Read(buffer, 0, 1024);
        }
        fs.Flush();
    } // the end of the using block closes and disposes fs properly
}
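The call site from the question then becomes (same stream and fileName variables as above):
SaveStream(stream, fileName); // the FileStream is created, flushed and disposed inside the method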

IsolatedStorage causes Memory to run out

Hey. I'm reading an image from Isolated Storage when the user clicks on an item, like this:
using (IsolatedStorageFile currentIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (var img = currentIsolatedStorage.OpenFile(fileName, FileMode.Open))
    {
        byte[] buffer = new byte[img.Length];
        imgStream = new MemoryStream(buffer);
        // read the image stream into the byte array
        int read;
        while ((read = img.Read(buffer, 0, buffer.Length)) > 0)
        {
            imgStream.Write(buffer, 0, read);
        }
        img.Close();
    }
}
This works fine, but if I click back and forth between two images, the memory consumption keeps increasing until the app runs out of memory. Is there a more efficient way of reading images from Isolated Storage? I could cache a few images in memory, but with hundreds of results it would end up using too much memory anyway. Any suggestions?
Are you disposing the MemoryStream at some point? That is the only leak I could find.
Also, Stream has a CopyTo() method, so your code could be rewritten like this:
using (IsolatedStorageFile currentIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (var img = currentIsolatedStorage.OpenFile(fileName, FileMode.Open))
    {
        var imgStream = new MemoryStream((int)img.Length);
        img.CopyTo(imgStream);
        return imgStream;
    }
}
This will save many memory allocations.
EDIT:
For Windows Phone (which does not define CopyTo()), here is the same code with the CopyTo() call replaced by its implementation:
using (IsolatedStorageFile currentIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (var img = currentIsolatedStorage.OpenFile(fileName, FileMode.Open))
    {
        var imgStream = new MemoryStream((int)img.Length);
        var buffer = new byte[Math.Min(1024, img.Length)];
        int read;
        while ((read = img.Read(buffer, 0, buffer.Length)) != 0)
            imgStream.Write(buffer, 0, read);
        return imgStream;
    }
}
The main difference here is that the buffer is kept relatively small (1 KB). There is also an optimization: the MemoryStream constructor is given the length of the image, which makes MemoryStream pre-allocate the necessary space.
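One caveat: the returned MemoryStream is exactly what leaks if the caller never disposes it. Assuming the snippet above is wrapped in a hypothetical helper, say LoadImage(fileName), the caller should look like this:
using (var imgStream = LoadImage(fileName)) // LoadImage: hypothetical wrapper around the code above
{
    // use imgStream here (e.g. set it as an image source), then let the using block dispose it
}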
