HttpListenerResponse OutputStream Write is slow - C#

I'm working on an HTTP service, reading a file from disk and writing it to the output stream.
The code that's taking a long time looks like this:
using (FileStream fileStream = fileInfo.OpenRead())
{
    context.Response.ContentType = "application/octet-stream";
    context.Response.ContentLength64 = fileStream.Length;
    byte[] buffer = new byte[10 * 1024];
    int bytesRead = 0;
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        context.Response.OutputStream.Write(buffer, 0, bytesRead);
    }
    context.Response.Close();
}
I tried changing the buffer size and came to the conclusion that a larger buffer size gives fewer writes in the loop and a shorter transfer time. I also measured the time for the write operation to complete and it takes ~40ms for each write.
Is there any way to write the data faster? If I read all the bytes and write them to the output stream in a single call, the transfer is instant. However, the files can be large and I don't want to read the whole file into memory.
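For reference, the variant I'd compare against is letting Stream.CopyTo drive the read/write loop with a larger buffer (available on .NET 4 and later; the 64 KB size here is just an assumption to experiment with, not a recommendation):

using (FileStream fileStream = fileInfo.OpenRead())
{
    context.Response.ContentType = "application/octet-stream";
    context.Response.ContentLength64 = fileStream.Length;
    // CopyTo runs the same read/write loop internally, using the buffer size we choose
    fileStream.CopyTo(context.Response.OutputStream, 64 * 1024);
    context.Response.Close();
}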

Related

Streaming large multi-part uploads to S3 using AmazonS3Client

An upload stream given to AmazonS3Client must be seekable. You can make a Stream seekable using AmazonS3Util.MakeStreamSeekable. However, the source of this reveals that it will not perform well with large streams:
public static System.IO.Stream MakeStreamSeekable(System.IO.Stream input)
{
    System.IO.MemoryStream output = new System.IO.MemoryStream();
    const int readSize = 32 * 1024;
    byte[] buffer = new byte[readSize];
    int count = 0;
    using (input)
    {
        while ((count = input.Read(buffer, 0, readSize)) > 0)
        {
            output.Write(buffer, 0, count);
        }
    }
    output.Position = 0;
    return output;
}
So, what approaches are available to upload a Stream to S3 without copying the entire contents into memory?
Each chunk must be a seekable Stream, but you can have many chunks. The solution is to split up the input Stream.
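A rough sketch of that splitting, assuming each chunk is then handed to a multipart upload call; the helper names here are mine, not part of the AWS SDK:

// using System.IO; using System.Collections.Generic;
public static IEnumerable<MemoryStream> SplitIntoSeekableChunks(Stream input, int chunkSize)
{
    byte[] buffer = new byte[chunkSize];
    int count;
    while ((count = ReadFully(input, buffer)) > 0)
    {
        // each MemoryStream wraps a fully buffered chunk, so it is seekable
        yield return new MemoryStream(buffer, 0, count, false);
        buffer = new byte[chunkSize]; // fresh buffer: the previous one is owned by the chunk
    }
}

// Read until the buffer is full or the stream ends, because Stream.Read
// may return fewer bytes than requested.
private static int ReadFully(Stream input, byte[] buffer)
{
    int total = 0;
    int read;
    while (total < buffer.Length &&
           (read = input.Read(buffer, total, buffer.Length - total)) > 0)
    {
        total += read;
    }
    return total;
}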

Convert a VERY LARGE binary file into a Base64String incrementally

I need help converting a VERY LARGE binary file (ZIP file) to a Base64String and back again. The files are too large to be loaded into memory all at once (they throw OutOfMemoryExceptions) otherwise this would be a simple task. I do not want to process the contents of the ZIP file individually, I want to process the entire ZIP file.
The problem:
I can convert the entire ZIP file (test sizes vary from 1 MB to 800 MB at present) to Base64String, but when I convert it back, it is corrupted. The new ZIP file is the correct size, it is recognized as a ZIP file by Windows and WinRAR/7-Zip, etc., and I can even look inside the ZIP file and see the contents with the correct sizes/properties, but when I attempt to extract from the ZIP file, I get: "Error: 0x80004005" which is a general error code.
I am not sure where or why the corruption is happening. I have done some investigating, and I have noticed the following:
If you have a large text file, you can convert it to Base64String incrementally without issue. If calling Convert.ToBase64String on the entire file yielded: "abcdefghijklmnopqrstuvwx", then calling it on the file in two pieces would yield: "abcdefghijkl" and "mnopqrstuvwx".
Unfortunately, if the file is a binary then the result is different. While the entire file might yield: "abcdefghijklmnopqrstuvwx", trying to process this in two pieces would yield something like: "oiweh87yakgb" and "kyckshfguywp".
Is there a way to base64-encode a binary file incrementally while avoiding this corruption?
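The property being relied on here: base64 encodes every 3 input bytes as 4 output characters, so chunks whose length is a multiple of 3 should concatenate cleanly, with padding appearing only in the final chunk. A quick illustration:

byte[] data = { 1, 2, 3, 4, 5, 6 };
string whole = Convert.ToBase64String(data);       // "AQIDBAUG"
string part1 = Convert.ToBase64String(data, 0, 3); // "AQID"
string part2 = Convert.ToBase64String(data, 3, 3); // "BAUG"
Console.WriteLine(whole == part1 + part2);         // True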
My code:
private void ConvertLargeFile()
{
    FileStream inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read);
    byte[] buffer = new byte[MultipleOfThree];
    int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
    while (bytesRead > 0)
    {
        byte[] secondaryBuffer = new byte[buffer.Length];
        int secondaryBufferBytesRead = bytesRead;
        Array.Copy(buffer, secondaryBuffer, buffer.Length);
        bool isFinalChunk = false;
        Array.Clear(buffer, 0, buffer.Length);
        bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0)
        {
            isFinalChunk = true;
            buffer = new byte[secondaryBufferBytesRead];
            Array.Copy(secondaryBuffer, buffer, buffer.Length);
        }
        String base64String = Convert.ToBase64String(isFinalChunk ? buffer : secondaryBuffer);
        File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
    }
    inputStream.Dispose();
}
The decoding is more of the same. I use the size of the base64String variable above (which varies depending on the original buffer size that I test with), as the buffer size for decoding. Then, instead of Convert.ToBase64String(), I call Convert.FromBase64String() and write to a different file name/path.
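For reference, a sketch of what that decode loop might look like under the same assumptions (the output path is made up here; the chunk size must be a multiple of 4 so that every chunk read is a complete, standalone base64 block):

private void DecodeLargeFile()
{
    const int charsPerChunk = 4096; // any multiple of 4
    using (var reader = new StreamReader("C:\\Users\\test\\Desktop\\Base64Zip"))
    using (var output = new FileStream("C:\\Users\\test\\Desktop\\my_decoded.zip", FileMode.Create, FileAccess.Write))
    {
        char[] chars = new char[charsPerChunk];
        int charsRead;
        while ((charsRead = reader.ReadBlock(chars, 0, chars.Length)) > 0)
        {
            // a full chunk is always a valid base64 unit because 4096 % 4 == 0
            byte[] bytes = Convert.FromBase64String(new string(chars, 0, charsRead));
            output.Write(bytes, 0, bytes.Length);
        }
    }
}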
EDIT:
In my haste to reduce the code (I refactored it into a new project, separate from other processing, to eliminate code that isn't central to the issue) I introduced a bug. The base64 conversion should be performed on secondaryBuffer for all iterations save the last (identified by isFinalChunk), when buffer should be used. I have corrected the code above.
EDIT #2:
Thank you all for your comments/feedback. After correcting the bug (see the above edit), I re-tested my code, and it is actually working now. I intend to test and implement @rene's solution, as it appears to be the best, but I thought that I should let everyone know of my discovery as well.
Based on the code shown in the blog from Wiktor Zychla, the following code works. The same solution is indicated in the Remarks section of Convert.ToBase64String, as pointed out by Ivan Stoev.
// using System.Security.Cryptography
private void ConvertLargeFile()
{
    // encode
    var filein = @"C:\Users\test\Desktop\my.zip";
    var fileout = @"C:\Users\test\Desktop\Base64Zip";
    using (FileStream fs = File.Open(fileout, FileMode.Create))
    using (var cs = new CryptoStream(fs, new ToBase64Transform(), CryptoStreamMode.Write))
    using (var fi = File.Open(filein, FileMode.Open))
    {
        fi.CopyTo(cs);
    }
    // the zip file is now stored in Base64Zip

    // and decode
    using (FileStream f64 = File.Open(fileout, FileMode.Open))
    using (var cs = new CryptoStream(f64, new FromBase64Transform(), CryptoStreamMode.Read))
    using (var fo = File.Open(filein + ".orig", FileMode.Create))
    {
        cs.CopyTo(fo);
    }
    // the original file is in my.zip.orig
    // use the command-line tool
    //   fc my.zip my.zip.orig
    // to verify that the start file and the encoded-then-decoded file
    // are the same
}
The code uses standard classes from the System.Security.Cryptography namespace: a CryptoStream combined with ToBase64Transform and its counterpart FromBase64Transform.
You can avoid using a secondary buffer by passing offset and length to Convert.ToBase64String, like this:
private void ConvertLargeFile()
{
    using (var inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[MultipleOfThree];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            String base64String = Convert.ToBase64String(buffer, 0, bytesRead);
            File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
The above should work, but I think Rene's answer is actually the better solution.
Use this code:
public void ConvertLargeFile(string source, string destination)
{
    using (FileStream inputStream = new FileStream(source, FileMode.Open, FileAccess.Read))
    {
        int buffer_size = 30000; // or any multiple of 3
        byte[] buffer = new byte[buffer_size];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            byte[] buffer2 = buffer;
            if (bytesRead < buffer_size)
            {
                buffer2 = new byte[bytesRead];
                Buffer.BlockCopy(buffer, 0, buffer2, 0, bytesRead);
            }
            string base64String = System.Convert.ToBase64String(buffer2);
            File.AppendAllText(destination, base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}

Trying to understand Data transfer in byte[] Array C#

First, I would like to say that I tried to research how to understand what I'm about to ask, but I just couldn't find what I was looking for. That said, I thought I would ask some of you crazy smart people to explain this in layman's terms for me as best as possible.
My issue is that I have perfectly good "copy pasted" code to download a file using FtpWebRequest. I will paste the code below:
public static void DownloadFiles()
{
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://****/test.zip");
    request.Credentials = new NetworkCredential("****", "*****");
    request.UseBinary = true;
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    FtpWebResponse response = (FtpWebResponse)request.GetResponse();
    Stream responseStream = response.GetResponseStream();
    FileStream writer = new FileStream("***/test.zip", FileMode.Create);
    int bufferSize = 2048;
    int readCount;
    byte[] buffer = new byte[2048];
    readCount = responseStream.Read(buffer, 0, bufferSize);
    while (readCount > 0)
    {
        writer.Write(buffer, 0, readCount);
        readCount = responseStream.Read(buffer, 0, bufferSize);
    }
    responseStream.Close();
    response.Close();
    writer.Close();
}
Like I said, this works perfectly and downloads the zip file without any issues. What I'm trying to understand (because I think copy-pasted code is a horrible idea) is how this actually functions. The only part that I do not understand is what is actually happening between these points:
int bufferSize = 2048;
int readCount;
byte[] buffer = new byte[2048];
readCount = responseStream.Read(buffer, 0, bufferSize);
while (readCount > 0)
{
    writer.Write(buffer, 0, readCount);
    readCount = responseStream.Read(buffer, 0, bufferSize);
}
If someone would be so kind as to explain what is actually happening in the buffer, why we set it to 2048, and what the while loop is actually doing, it would be very appreciated. Just as a side note, I have gone through the code and put message boxes and other debugging in to try to understand what's going on, but without success. Thank you in advance, and I apologize if this seems very elementary.
Data is read from the stream in 2 KB chunks and written to the file:
int bufferSize = 2048;
int readCount;
byte[] buffer = new byte[2048]; // a buffer is created

// Bytes are read from the stream into the buffer. readCount contains the
// number of bytes actually read; it can be less than 2048 for the last chunk.
// E.g. if there are 3000 bytes of data, the first Read will read 2048 bytes
// and the second will read 952 bytes.
readCount = responseStream.Read(buffer, 0, bufferSize);
while (readCount > 0) // as long as we have read some data
{
    writer.Write(buffer, 0, readCount); // write that many bytes from buffer to file
    readCount = responseStream.Read(buffer, 0, bufferSize); // then read the next chunk
}
The code reads the contents of the responseStream into the buffer array and writes it out to the writer. The while loop is necessary because when you call Read() on a stream, there's no guarantee it will read enough bytes to fill the buffer. The stream only guarantees that, if there's anything left to read (in other words, you haven't reached the end of the stream), it will read at least one byte. Whatever it does, it returns the actual number of bytes read. So you have to keep calling Read() until it returns 0, meaning you've reached the end of the stream. The size of the buffer is arbitrary: you could use a buffer of length 1, but there may be performance benefits to using something around 2-4 KB (due to typical sizes of network packets and disk sectors).

how to buffer an input stream until it is complete

I'm implementing a WCF service that accepts image streams. However, I'm currently getting an exception when I run it, as it tries to get the length of the stream before the stream is complete. So what I'd like to do is buffer the stream until it's complete. However, I can't find any examples of how to do this...
Can anyone help?
My code so far:
public String uploadUserImage(Stream stream)
{
    Stream fs = stream;
    BinaryReader br = new BinaryReader(fs);
    Byte[] bytes = br.ReadBytes((Int32)fs.Length); // this causes the exception
    File.WriteAllBytes(filepath, bytes);
    return filepath;
}
Rather than try to fetch the length, you should read from the stream until it returns that it's "done". In .NET 4, this is really easy:
// Assuming we *really* want to read it into memory first...
MemoryStream memoryStream = new MemoryStream();
stream.CopyTo(memoryStream);
memoryStream.Position = 0;
File.WriteAllBytes(filepath, memoryStream.ToArray());
In .NET 3.5 there's no CopyTo method, but you can write something similar yourself:
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
However, now we've got something to copy a stream, why bother reading it all into memory first? Let's just write it straight to a file:
using (FileStream output = File.OpenWrite(filepath))
{
    CopyStream(stream, output); // Or stream.CopyTo(output);
}
I'm not sure what you are returning (or not returning), but something like this might work for you:
public String uploadUserImage(Stream stream)
{
    const int KB = 1024;
    Byte[] bytes = new Byte[KB];
    StringBuilder sb = new StringBuilder();
    using (BinaryReader br = new BinaryReader(stream))
    {
        int len;
        do
        {
            len = br.Read(bytes, 0, KB);
            // decode only the bytes actually read, not the whole buffer
            string readData = Encoding.UTF8.GetString(bytes, 0, len);
            sb.Append(readData);
        } while (len == KB);
    }
    //File.WriteAllBytes(filepath, bytes);
    return sb.ToString();
}
A string can hold up to 2 GB, I believe.
Try this (note that CopyTo needs a Stream target, so use File.Create rather than File.CreateText):
using (FileStream fs = File.Create(filepath))
{
    stream.CopyTo(fs);
}
Jon Skeet's answer for .NET 3.5 and below, using a buffered Read, is actually done incorrectly.
The buffer isn't cleared between reads, which can result in issues on any read that returns less than 8192 bytes; for example, if the 2nd read read 192 bytes, the 8000 last bytes from the first read would STILL be in the buffer, which would then be returned to the stream.
With my code below, you supply it a Stream and it will return an IEnumerable of byte arrays.
Using this, you can foreach over it and write to a MemoryStream, and then use .GetBuffer() to end up with a compiled, merged byte[] (a usage sketch follows the code below).
private IEnumerable<byte[]> ReadFullStream(Stream stream)
{
    while (true)
    {
        // since this is created every loop, its buffer is cleared
        byte[] buffer = new byte[8192];
        // read up to 8192 bytes into the buffer
        int bytesRead = stream.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0) // if we read nothing, the stream is finished
        {
            break;
        }
        if (bytesRead < buffer.Length)
        {
            // if we read LESS than 8192 bytes, resize the buffer to remove
            // everything after what was read; otherwise you would have
            // null bytes (0x00) at the end of your buffer
            Array.Resize(ref buffer, bytesRead);
        }
        yield return buffer; // yield return the buffer data
    } // loop here until a read returns 0 (end of stream)
}
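For example, a small consumer along the lines described above; I've returned ToArray() here rather than GetBuffer(), since GetBuffer() exposes the underlying buffer, which may include unused trailing capacity:

// using System.IO; using System.Collections.Generic;
private byte[] ReadAllBytes(Stream stream)
{
    using (var ms = new MemoryStream())
    {
        foreach (byte[] chunk in ReadFullStream(stream))
        {
            ms.Write(chunk, 0, chunk.Length); // append each chunk in order
        }
        return ms.ToArray(); // exactly-sized merged byte[]
    }
}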

Uncompress data file with DeflateStream

I'm having trouble reading a compressed (deflated) data file using C# .NET DeflateStream(..., CompressionMode.Decompress). The file was written earlier using DeflateStream(..., CompressionMode.Compress), and it seems to be just fine (I can even decompress it using a Java program).
However, the first Read() call on the input stream to decompress/inflate the compressed data returns a length of zero (end of file).
Here's the main driver, which is used for both compression and decompression:
public void Main(...)
{
    Stream inp;
    Stream outp;
    bool compr;
    ...
    inp = new FileStream(inName, FileMode.Open, FileAccess.Read);
    outp = new FileStream(outName, FileMode.Create, FileAccess.Write);
    if (compr)
        Compress(inp, outp);
    else
        Decompress(inp, outp);
    inp.Close();
    outp.Close();
}
Here's the basic code for decompression, which is what is failing:
public long Decompress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;
    // Decompress the contents of the input file
    inp = new DeflateStream(inp, CompressionMode.Decompress);
    for (;;)
    {
        int len;
        // Read a data block from the input stream
        len = inp.Read(buf, 0, buf.Length); // <<FAILS
        if (len <= 0)
            break;
        // Write the data block to the decompressed output stream
        outp.Write(buf, 0, len);
        nBytes += len;
    }
    // Done
    outp.Flush();
    return nBytes;
}
The call marked FAILS always returns zero. Why? I know it's got to be something simple, but I'm just not seeing it.
Here's the basic code for compression, which works just fine, and is almost exactly the same as the decompression method with the names swapped:
public long Compress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;
    // Compress the contents of the input file
    outp = new DeflateStream(outp, CompressionMode.Compress);
    for (;;)
    {
        int len;
        // Read a data block from the input stream
        len = inp.Read(buf, 0, buf.Length);
        if (len <= 0)
            break;
        // Write the data block to the compressed output stream
        outp.Write(buf, 0, len);
        nBytes += len;
    }
    // Done
    outp.Flush();
    return nBytes;
}
Solved
After seeing the correct solution, the constructor statement should be changed to:
inp = new DeflateStream(inp, CompressionMode.Decompress, true);
which keeps the underlying input stream open, and the following line needs to be added after the outp.Flush() call:
inp.Close();
The Close() call forces the deflate stream to flush its internal buffers. The true flag prevents it from closing the underlying stream, which is closed later in Main(). The same changes should also be made to the Compress() method.
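For completeness, the Compress() method with those same two changes applied would look roughly like this:

public long Compress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;
    // leaveOpen: true, so Main() can still close the underlying FileStream
    outp = new DeflateStream(outp, CompressionMode.Compress, true);
    int len;
    while ((len = inp.Read(buf, 0, buf.Length)) > 0)
    {
        outp.Write(buf, 0, len);
        nBytes += len;
    }
    outp.Flush();
    outp.Close(); // forces the deflater to emit its final compressed block
    return nBytes;
}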
In your Decompress method, you are reassigning inp to a new Stream (a DeflateStream). You never close that DeflateStream, but you do close the underlying file stream in Main(). A similar thing is going on in the Compress method.
I think the problem is that the underlying file streams are being closed before the deflate streams' finalizers automatically close them.
I added one line of code to your Decompress and Compress methods:
inp.Close();  // in the Decompress method
outp.Close(); // in the Compress method
A better practice would be to enclose the streams in using blocks.
Here's an alternative way to write your Decompress method (I tested it, and it works):
public static long Decompress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;
    // Decompress the contents of the input file
    using (inp = new DeflateStream(inp, CompressionMode.Decompress))
    {
        int len;
        while ((len = inp.Read(buf, 0, buf.Length)) > 0)
        {
            // Write the data block to the decompressed output stream
            outp.Write(buf, 0, len);
            nBytes += len;
        }
    }
    // Done
    return nBytes;
}
I had the same problem with GZipStream. Since we had the original length stored, I had to rewrite the code to read only the number of bytes expected from the original file.
Hopefully I'm about to learn that there was a better answer (fingers crossed).
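A sketch of that workaround, assuming the original uncompressed length is stored somewhere alongside the file (originalLength below stands in for that stored value):

// using System.IO; using System.IO.Compression;
public static void DecompressKnownLength(Stream compressed, Stream output, long originalLength)
{
    using (var gzip = new GZipStream(compressed, CompressionMode.Decompress, true))
    {
        byte[] buf = new byte[8192];
        long remaining = originalLength;
        while (remaining > 0)
        {
            // never ask for more than is left, so we stop at the expected length
            int toRead = (int)Math.Min(buf.Length, remaining);
            int read = gzip.Read(buf, 0, toRead);
            if (read <= 0)
                break; // stream ended earlier than expected
            output.Write(buf, 0, read);
            remaining -= read;
        }
    }
}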
