I need help converting a VERY LARGE binary file (ZIP file) to a Base64String and back again. The files are too large to be loaded into memory all at once (they throw OutOfMemoryExceptions) otherwise this would be a simple task. I do not want to process the contents of the ZIP file individually, I want to process the entire ZIP file.
The problem:
I can convert the entire ZIP file (test sizes vary from 1 MB to 800 MB at present) to Base64String, but when I convert it back, it is corrupted. The new ZIP file is the correct size, it is recognized as a ZIP file by Windows and WinRAR/7-Zip, etc., and I can even look inside the ZIP file and see the contents with the correct sizes/properties, but when I attempt to extract from the ZIP file, I get: "Error: 0x80004005" which is a general error code.
I am not sure where or why the corruption is happening. I have done some investigating, and I have noticed the following:
If you have a large text file, you can convert it to Base64String incrementally without issue. If calling Convert.ToBase64String on the entire file yielded: "abcdefghijklmnopqrstuvwx", then calling it on the file in two pieces would yield: "abcdefghijkl" and "mnopqrstuvwx".
Unfortunately, if the file is binary, the result is different. While the entire file might yield: "abcdefghijklmnopqrstuvwx", trying to process it in two pieces would yield something like: "oiweh87yakgb" and "kyckshfguywp".
Is there a way to incrementally Base64-encode a binary file while avoiding this corruption?
My code:
private void ConvertLargeFile()
{
    FileStream inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read);
    byte[] buffer = new byte[MultipleOfThree];
    int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
    while (bytesRead > 0)
    {
        byte[] secondaryBuffer = new byte[buffer.Length];
        int secondaryBufferBytesRead = bytesRead;
        Array.Copy(buffer, secondaryBuffer, buffer.Length);
        bool isFinalChunk = false;
        Array.Clear(buffer, 0, buffer.Length);
        bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0)
        {
            isFinalChunk = true;
            buffer = new byte[secondaryBufferBytesRead];
            Array.Copy(secondaryBuffer, buffer, buffer.Length);
        }
        String base64String = Convert.ToBase64String(isFinalChunk ? buffer : secondaryBuffer);
        File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
    }
    inputStream.Dispose();
}
The decoding is more of the same. I use the size of the base64String variable above (which varies depending on the original buffer size that I test with), as the buffer size for decoding. Then, instead of Convert.ToBase64String(), I call Convert.FromBase64String() and write to a different file name/path.
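For reference, a minimal sketch of such a decode loop (hypothetical paths; it assumes the Base64 text was written without line breaks, and the chunk size must be a multiple of 4 characters, since each 4-character group decodes independently):

private void ConvertLargeFileBack()
{
    using (var reader = new StreamReader("C:\\Users\\test\\Desktop\\Base64Zip"))
    using (var output = new FileStream("C:\\Users\\test\\Desktop\\restored.zip", FileMode.Create))
    {
        char[] chunk = new char[4096]; // any multiple of 4
        int charsRead;
        // ReadBlock keeps reading until the chunk is full or the text ends, so
        // charsRead stays a multiple of 4 except on the final read, where the
        // file's total length is itself a multiple of 4.
        while ((charsRead = reader.ReadBlock(chunk, 0, chunk.Length)) > 0)
        {
            byte[] bytes = Convert.FromBase64CharArray(chunk, 0, charsRead);
            output.Write(bytes, 0, bytes.Length);
        }
    }
}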
EDIT:
In my haste to reduce the code (I refactored it into a new project, separate from other processing, to eliminate code that isn't central to the issue), I introduced a bug. The Base64 conversion should be performed on secondaryBuffer for all iterations save the last (identified by isFinalChunk), when buffer should be used. I have corrected the code above.
EDIT #2:
Thank you all for your comments/feedback. After correcting the bug (see the above edit), I re-tested my code, and it is actually working now. I intend to test and implement @rene's solution, as it appears to be the best, but I thought that I should let everyone know of my discovery as well.
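The underlying property that makes the corrected chunking work: each 3-byte group maps to 4 Base64 output characters independently, so chunks whose sizes are multiples of 3 concatenate cleanly. A quick illustration:

byte[] data = { 1, 2, 3, 4, 5, 6 };
// Encoding all six bytes at once...
string whole = Convert.ToBase64String(data); // "AQIDBAUG"
// ...equals the concatenation of two 3-byte chunks:
string parts = Convert.ToBase64String(data, 0, 3)
             + Convert.ToBase64String(data, 3, 3); // also "AQIDBAUG"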
Based on the code shown in the blog from Wiktor Zychla, the following code works. This same solution is indicated in the remarks section of Convert.ToBase64String, as pointed out by Ivan Stoev.
// using System.Security.Cryptography
private void ConvertLargeFile()
{
    // encode
    var filein = @"C:\Users\test\Desktop\my.zip";
    var fileout = @"C:\Users\test\Desktop\Base64Zip";
    using (FileStream fs = File.Open(fileout, FileMode.Create))
    using (var cs = new CryptoStream(fs, new ToBase64Transform(),
                                     CryptoStreamMode.Write))
    using (var fi = File.Open(filein, FileMode.Open))
    {
        fi.CopyTo(cs);
    }
    // the zip file is now stored in base64zip

    // and decode
    using (FileStream f64 = File.Open(fileout, FileMode.Open))
    using (var cs = new CryptoStream(f64, new FromBase64Transform(),
                                     CryptoStreamMode.Read))
    using (var fo = File.Open(filein + ".orig", FileMode.Create))
    {
        cs.CopyTo(fo);
    }
    // the original file is in my.zip.orig
    // use the command-line tool
    //   fc my.zip my.zip.orig
    // to verify that the start file and the encoded-and-decoded file
    // are the same
}
The code uses standard classes from the System.Security.Cryptography namespace: a CryptoStream together with FromBase64Transform and its counterpart ToBase64Transform.
You can avoid using a secondary buffer by passing offset and length to Convert.ToBase64String, like this:
private void ConvertLargeFile()
{
    using (var inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[MultipleOfThree];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            String base64String = Convert.ToBase64String(buffer, 0, bytesRead);
            File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
The above should work, but I think Rene's answer is actually the better solution.
Use this code:
public void ConvertLargeFile(string source, string destination)
{
    using (FileStream inputStream = new FileStream(source, FileMode.Open, FileAccess.Read))
    {
        int buffer_size = 30000; // or any multiple of 3
        byte[] buffer = new byte[buffer_size];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            byte[] buffer2 = buffer;
            if (bytesRead < buffer_size)
            {
                buffer2 = new byte[bytesRead];
                Buffer.BlockCopy(buffer, 0, buffer2, 0, bytesRead);
            }
            string base64String = System.Convert.ToBase64String(buffer2);
            File.AppendAllText(destination, base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
I am trying to convert a Stream object to byte[] using the method below:
public static byte[] ReadFully(System.IO.Stream input)
{
    byte[] buffer = new byte[16 * 1024];
    using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}
However, the input stream comes from a large file (about 2 GB), and for such files the code never enters the while loop and the file is not converted to a byte array.
For smaller files it works fine.
That's what a Stream is for.
You don't load the whole content into a byte[]: you read a small buffer from the Stream into memory, handle it, and then read the next buffer.
If you still need to use a byte[]:
It seems your app can't handle more than 2^32 bytes of memory, meaning it's running as a 32-bit process.
Try changing it to 64-bit (in Project Properties, go to Build and disable "Prefer 32-bit").
A single object is limited to less than 2^31 bytes (that is why all indexing uses int).
How about using a list of byte arrays to deal with the entire data?
public List<byte[]> ReadBytesList(string fileName)
{
    const int MaxChunk = 2100000000; // stay just under the ~2 GB single-array limit
    List<byte[]> rawDataBytes = new List<byte[]>();
    long numBytes = new FileInfo(fileName).Length;
    using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    using (BinaryReader br = new BinaryReader(fs))
    {
        int arrayCount = (int)(numBytes / MaxChunk); // number of full-size chunks
        int arrayRest = (int)(numBytes % MaxChunk);  // size of the final partial chunk
        for (int i = 0; i < arrayCount; i++)
        {
            rawDataBytes.Add(br.ReadBytes(MaxChunk));
        }
        if (arrayRest > 0)
        {
            rawDataBytes.Add(br.ReadBytes(arrayRest));
        }
    }
    return rawDataBytes;
}
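A hypothetical usage sketch (paths are placeholders) that writes the chunks back out; note the whole file still sits in memory at once, just split across several arrays:

List<byte[]> chunks = ReadBytesList(@"C:\data\huge.bin");
using (var output = new FileStream(@"C:\data\huge.copy.bin", FileMode.Create))
{
    foreach (byte[] chunk in chunks)
    {
        output.Write(chunk, 0, chunk.Length);
    }
}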
For context: I am trying to consume a streamed response from a SOAP API, which should output a CSV file. The response is a Base64-encoded string, which I must write into the CSV file.
The API documentation says that the response must be read to a destination buffer by buffer, but I am unfamiliar with C#, so I am unsure how to replicate the byte-buffer read/write loop correctly.
Here is the code I am trying to replicate, as provided by the API's documentation:
byte[] buffer = new byte[4000];
bool endOfStream = false;
int bytesRead = 0;
long totalBytes = 0; // running total of bytes written
using (FileStream localFileStream = new FileStream(destinationPath, FileMode.Create, FileAccess.Write))
{
    using (Stream remoteStream = client.DownloadFile(jobId, chkFormatAsXml.Unchecked))
    {
        while (!endOfStream)
        {
            bytesRead = remoteStream.Read(buffer, 0, buffer.Length);
            if (bytesRead > 0)
            {
                localFileStream.Write(buffer, 0, bytesRead);
                totalBytes += bytesRead;
            }
            else
            {
                endOfStream = true;
            }
        }
    }
}
My current Python code looks like this:
buffer = ???
bytesread = 0
with open('csvfile.csv', 'w') as file:
    # opens pickle file
    with open("data2.pkl", 'r+b') as openfile:
        print openfile
        bytesread = len(openfile.read(4000))
        if bytesread > 0:
            ?????
Any help would be greatly appreciated, even if it is just to point me in the right direction. I have also asked a few questions about this same problem:
Write Streamed Response(file-like object) to CSV file Byte by Byte in Python
How to replicate C# .read(buffer, 0, buffer.Length) in Python
UPDATE:
My code now looks like:
import shutil

with open("data2.pkl", 'r') as pklfile:
    with open('csvfile4.csv', 'wb') as csvfile:
        shutil.copyfileobj(pklfile, csvfile, 4000)
Unfortunately, this just writes the Base64 text literally to the CSV file, and I'm not sure how to decode it properly.
I'm implementing a WCF service that accepts image streams. However, I'm currently getting an exception when I run it, because it tries to get the length of the stream before the stream is complete. What I'd like to do is buffer the stream until it's complete; however, I can't find any examples of how to do this.
Can anyone help?
My code so far:
public String uploadUserImage(Stream stream)
{
    Stream fs = stream;
    BinaryReader br = new BinaryReader(fs);
    Byte[] bytes = br.ReadBytes((Int32)fs.Length); // this causes the exception
    File.WriteAllBytes(filepath, bytes);
}
Rather than try to fetch the length, you should read from the stream until it returns that it's "done". In .NET 4, this is really easy:
// Assuming we *really* want to read it into memory first...
MemoryStream memoryStream = new MemoryStream();
stream.CopyTo(memoryStream);
memoryStream.Position = 0;
File.WriteAllBytes(filepath, memoryStream.ToArray()); // WriteAllBytes takes a byte[], not a stream
In .NET 3.5 there's no CopyTo method, but you can write something similar yourself:
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
However, now we've got something to copy a stream, why bother reading it all into memory first? Let's just write it straight to a file:
using (FileStream output = File.OpenWrite(filepath))
{
    CopyStream(stream, output); // Or stream.CopyTo(output);
}
I'm not sure what you are returning (or not returning), but something like this might work for you:
public String uploadUserImage(Stream stream) {
    const int KB = 1024;
    Byte[] bytes = new Byte[KB];
    StringBuilder sb = new StringBuilder();
    using (BinaryReader br = new BinaryReader(stream)) {
        int len;
        do {
            len = br.Read(bytes, 0, KB);
            string readData = Encoding.UTF8.GetString(bytes, 0, len); // decode only the bytes actually read
            sb.Append(readData);
        } while (len == KB);
    }
    //File.WriteAllBytes(filepath, bytes);
    return sb.ToString();
}
A string can hold up to 2 GB, I believe.
Try this:
using (StreamWriter sw = File.CreateText(filepath))
{
    stream.CopyTo(sw.BaseStream); // CopyTo expects a Stream, so use the writer's BaseStream
    sw.Close();
}
Jon Skeet's answer for .NET 3.5 and below, using a buffered read, is actually done incorrectly.
The buffer isn't cleared between reads, which can result in issues on any read that returns less than 8192 bytes. For example, if the second read returned 192 bytes, the 8000 last bytes from the first read would STILL be in the buffer, which would then be returned to the stream.
With my code below, you supply it a Stream and it returns an IEnumerable of byte arrays.
Using this, you can foreach over it, write each chunk to a MemoryStream, and then use .ToArray() (which, unlike .GetBuffer(), copies exactly the bytes written) to end up with a merged byte[].
private IEnumerable<byte[]> ReadFullStream(Stream stream) {
    while (true) {
        byte[] buffer = new byte[8192]; // since this is created every loop, its buffer is cleared
        int bytesRead = stream.Read(buffer, 0, buffer.Length); // read up to 8192 bytes into buffer
        if (bytesRead == 0) { // if we read nothing, the stream is finished
            break;
        }
        if (bytesRead < buffer.Length) {
            // if we read LESS than 8192 bytes, resize the buffer to remove everything after
            // what was read; otherwise you will have null bytes (0x00) at the end of your buffer
            Array.Resize(ref buffer, bytesRead);
        }
        yield return buffer; // yield return the buffer data
    } // loop here until a read returns 0 (end of stream)
}
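For example, merging the yielded chunks into a single byte[] might look like this (a sketch, assuming a stream variable in scope):

using (var ms = new MemoryStream())
{
    foreach (byte[] chunk in ReadFullStream(stream))
    {
        ms.Write(chunk, 0, chunk.Length);
    }
    byte[] merged = ms.ToArray(); // copies exactly ms.Length bytes
}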
I am using .NET 3.5 ASP.NET. Currently my web site serves a PDF file in the following manner:
context.Response.WriteFile(@"c:\blah\blah.pdf");
This works great. However, I'd like to serve it via the context.Response.Write(char [], int, int) method.
So I tried sending out the file via
byte [] byteContent = File.ReadAllBytes(ReportPath);
ASCIIEncoding encoding = new ASCIIEncoding();
char[] charContent = encoding.GetChars(byteContent);
context.Response.Write(charContent, 0, charContent.Length);
That did not work (e.g. browser's PDF plugin complains that the file is corrupted).
So I tried the Unicode approach:
byte [] byteContent = File.ReadAllBytes(ReportPath);
UnicodeEncoding encoding = new UnicodeEncoding();
char[] charContent = encoding.GetChars(byteContent);
context.Response.Write(charContent, 0, charContent.Length);
which also did not work.
What am I missing?
You should not convert the bytes into characters; that is why the file becomes "corrupted". Even though ASCII characters are stored in bytes, the actual ASCII character set is limited to 7 bits. Thus, converting a byte stream with ASCIIEncoding will effectively remove the 8th bit from each byte.
The bytes should be written to the OutputStream stream of the Response instance.
Instead of loading all bytes from the file upfront, which could possibly consume a lot of memory, reading the file in chunks from a stream is a better approach. Here's a sample of how to read from one stream and then write to another:
void LoadStreamToStream(Stream inputStream, Stream outputStream)
{
    const int bufferSize = 64 * 1024;
    var buffer = new byte[bufferSize];
    while (true)
    {
        var bytesRead = inputStream.Read(buffer, 0, bufferSize);
        if (bytesRead > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
        }
        if ((bytesRead == 0) || (bytesRead < bufferSize))
            break;
    }
}
You can then use this method to load the contents of your file directly to the Response.OutputStream
LoadStreamToStream(fileStream, Response.OutputStream);
Better still, here's a method opening a file and loading its contents to a stream:
void LoadFileToStream(string inputFile, Stream outputStream)
{
    using (var streamInput = new FileStream(inputFile, FileMode.Open, FileAccess.Read))
    {
        LoadStreamToStream(streamInput, outputStream);
        streamInput.Close();
    }
}
You may also need to set the ContentType by doing something like this:
Response.ContentType = "application/octet-stream";
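Putting the pieces together, a handler for the PDF case might look like this (a minimal sketch; the path comes from the question, and "application/pdf" is an assumed MIME type for inline viewing):

public void ProcessRequest(HttpContext context)
{
    context.Response.ContentType = "application/pdf"; // assumption: serve inline as PDF
    LoadFileToStream(@"c:\blah\blah.pdf", context.Response.OutputStream);
}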
Building upon Peter Lillevold's answer, I made some extension methods out of his functions above.
public static void WriteTo(this Stream inputStream, Stream outputStream)
{
    const int bufferSize = 64 * 1024;
    var buffer = new byte[bufferSize];
    while (true)
    {
        var bytesRead = inputStream.Read(buffer, 0, bufferSize);
        if (bytesRead > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
        }
        if ((bytesRead == 0) || (bytesRead < bufferSize)) break;
    }
}

public static void WriteToFromFile(this Stream outputStream, string inputFile)
{
    using (var inputStream = new FileStream(inputFile, FileMode.Open, FileAccess.Read))
    {
        inputStream.WriteTo(outputStream);
        inputStream.Close();
    }
}
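Usage then presumably reduces to a one-liner (path taken from the question above):

Response.OutputStream.WriteToFromFile(@"c:\blah\blah.pdf");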
I get an exception when trying to decompress a (.gz) file using the GZipStream class that is included in the .NET framework. I am using the MSDN documentation. This is the exception:
Writing to the compression stream is not supported.
Here is the application source:
try
{
    var infile = new FileStream(@"C:\TarDecomp\TarDecomp\TarDecomp\bin\Debug\nick_blah-2008.tar.gz", FileMode.Open, FileAccess.Read, FileShare.Read);
    byte[] buffer = new byte[infile.Length];
    // Read the file to ensure it is readable.
    int count = infile.Read(buffer, 0, buffer.Length);
    if (count != buffer.Length)
    {
        infile.Close();
        Console.WriteLine("Test Failed: Unable to read data from file");
        return;
    }
    infile.Close();
    MemoryStream ms = new MemoryStream();
    // Use the newly created memory stream for the compressed data.
    GZipStream compressedzipStream = new GZipStream(ms, CompressionMode.Decompress, true);
    Console.WriteLine("Decompression");
    compressedzipStream.Write(buffer, 0, buffer.Length); // << throws the exception here
    // Close the stream.
    compressedzipStream.Close();
    Console.WriteLine("Original size: {0}, Compressed size: {1}", buffer.Length, ms.Length);
} catch {...}
The exception is thrown at the compressedzipStream.Write() call.
Any ideas? What is this exception telling me?
It is telling you that you should call Read instead of Write, since it's decompression! Also, the memory stream should be constructed with the data; or rather, you should pass the file stream directly to the GZipStream constructor.
Example of how it should have been done (haven't tried to compile it):
Stream inFile = new FileStream(@"C:\TarDecomp\TarDecomp\TarDecomp\bin\Debug\nick_blah-2008.tar.gz", FileMode.Open, FileAccess.Read, FileShare.Read);
Stream decodedStream = new MemoryStream();
byte[] buffer = new byte[4096];
using (Stream inGzipStream = new GZipStream(inFile, CompressionMode.Decompress))
{
    int bytesRead;
    while ((bytesRead = inGzipStream.Read(buffer, 0, buffer.Length)) > 0)
        decodedStream.Write(buffer, 0, bytesRead);
}
// Now decodedStream contains the decoded data
The compression code doesn't work like encryption - you can't decompress from one stream to another by writing the compressed data. You have to provide a stream which contains the compressed data already and let GZipStream read from it. Something like this:
using (Stream file = File.OpenRead(filename))
using (Stream gzip = new GZipStream(file, CompressionMode.Decompress))
using (MemoryStream memoryStream = new MemoryStream()) // declared as MemoryStream so ToArray() is available
{
    CopyStream(gzip, memoryStream);
    return memoryStream.ToArray();
}
CopyStream is a simple utility method to read from one stream and copy all the data to another. Something like this:
static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
How compression streams work can be puzzling at first.
Reading gives you uncompressed data (the stream consumes compressed bytes from the underlying stream), and writing takes uncompressed data (the stream emits compressed bytes to the underlying stream). All in all, the stream ensures you only "see" uncompressed data.
The proper way to achieve what you are trying to do is to read from the GZipStream when decompressing, and to write to a GZipStream when compressing.
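For the compression direction the pattern simply inverts: you write uncompressed bytes into a GZipStream opened in Compress mode, and the compressed bytes land in the underlying stream. A minimal sketch reusing the CopyStream helper above (file names are placeholders):

using (Stream inFile = File.OpenRead("data.tar"))
using (Stream outFile = File.Create("data.tar.gz"))
using (Stream gzip = new GZipStream(outFile, CompressionMode.Compress))
{
    CopyStream(inFile, gzip); // uncompressed bytes in, compressed bytes out to data.tar.gz
}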