FileResult buffered to memory - c#

I'm trying to return large files via a controller ActionResult and have implemented a custom FileResult class like the following.
public class StreamedFileResult : FileResult
{
    private string _FilePath;

    public StreamedFileResult(string filePath, string contentType)
        : base(contentType)
    {
        _FilePath = filePath;
    }

    protected override void WriteFile(System.Web.HttpResponseBase response)
    {
        using (FileStream fs = new FileStream(_FilePath, FileMode.Open, FileAccess.Read))
        {
            int bufferLength = 65536;
            byte[] buffer = new byte[bufferLength];
            int bytesRead = 0;

            while (true)
            {
                bytesRead = fs.Read(buffer, 0, bufferLength);
                if (bytesRead == 0)
                {
                    break;
                }
                response.OutputStream.Write(buffer, 0, bytesRead);
            }
        }
    }
}
However, the problem I am having is that the entire file appears to be buffered into memory. What would I need to do to prevent this?

You need to flush the response in order to prevent buffering. However, if you keep writing without setting Content-Length, the user will not see any progress. So, in order for users to see proper progress, IIS buffers the entire content, calculates the Content-Length, applies compression and then sends the response. We have adopted the following procedure to deliver files to the client with high performance.
FileInfo path = new FileInfo(filePath);

// the user will not see any progress if Content-Length is not specified
response.AddHeader("Content-Length", path.Length.ToString());
response.Flush(); // do not add any more headers after this...

byte[] buffer = new byte[4 * 1024]; // 4 KB is a good network chunk size

using (FileStream fs = path.OpenRead())
{
    int count = 0;
    while ((count = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        if (!response.IsClientConnected)
        {
            // network connection broke for some reason...
            break;
        }
        response.OutputStream.Write(buffer, 0, count);
        response.Flush(); // this will prevent buffering...
    }
}
You can change the buffer size, but 4 KB is ideal, as the lower-level file system also reads in chunks of 4 KB.

Akash Kava is partly right and partly wrong. You DO NOT need to add the Content-Length header or do the flush right after it. But you DO need to periodically flush response.OutputStream and then response. ASP.NET MVC (at least version 5) will automatically convert this into a "Transfer-Encoding: chunked" response.
byte[] buffer = new byte[4 * 1024]; // 4 KB is a good network chunk size

using (FileStream fs = path.OpenRead())
{
    int count = 0;
    while ((count = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        if (!response.IsClientConnected)
        {
            // network connection broke for some reason...
            break;
        }
        response.OutputStream.Write(buffer, 0, count);
        response.OutputStream.Flush();
        response.Flush(); // this will prevent buffering...
    }
}
I tested it and it works.
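For reference, here is a sketch of how the flush-per-chunk loop above might be folded back into the question's StreamedFileResult. It simply combines the class from the question with the pattern from this answer and is an illustration rather than tested code:

public class StreamedFileResult : FileResult
{
    private readonly string _filePath;

    public StreamedFileResult(string filePath, string contentType)
        : base(contentType)
    {
        _filePath = filePath;
    }

    protected override void WriteFile(System.Web.HttpResponseBase response)
    {
        byte[] buffer = new byte[4 * 1024];

        using (FileStream fs = new FileStream(_filePath, FileMode.Open, FileAccess.Read))
        {
            int count;
            while ((count = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Stop writing if the client has disconnected.
                if (!response.IsClientConnected)
                {
                    break;
                }
                response.OutputStream.Write(buffer, 0, count);

                // Flushing the output stream and then the response keeps
                // ASP.NET from accumulating the whole file in memory.
                response.OutputStream.Flush();
                response.Flush();
            }
        }
    }
}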

Related

FileStream returns Length = 0

I'm trying to read a local file and upload it to an FTP server. When I read an image file everything is OK, but when I read a .doc or .docx file, FileStream returns Length = 0.
I checked with some other files; it appears it only works with images and returns 0 for any other file. Here is my code:
if (!ftpClient.FileExists(fileName))
{
    try
    {
        ftpClient.ValidateCertificate += (control, e) => { e.Accept = true; };
        const int BUFFER_SIZE = 64 * 1024; // 64KB buffer
        byte[] buffer = new byte[BUFFER_SIZE];

        using (Stream readStream = new FileStream(tempFilePath, FileMode.Open, FileAccess.Read))
        using (Stream writeStream = ftpClient.OpenWrite(fileName))
        {
            while (readStream.Position < readStream.Length)
            {
                buffer.Initialize();
                int bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE);
                writeStream.Write(buffer, 0, bytesRead);
            }

            readStream.Flush();
            readStream.Close();
            writeStream.Flush();
            writeStream.Close();

            DeleteTempFile(tempFilePath);
            return true;
        }
    }
    catch (Exception ex)
    {
        return false;
    }
}
I couldn't find what's wrong with it. Could you please help me?
While this doesn't answer your specific question, you don't actually need to know the length of your stream. Just keep reading until you hit a zero-length read. A zero-byte read is guaranteed to indicate the end of any stream.
Return Value
Type: System.Int32
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
while (true)
{
    int bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE);
    if (bytesRead == 0)
    {
        break;
    }
    writeStream.Write(buffer, 0, bytesRead);
}
Alternatively:
readStream.CopyTo(writeStream);
is probably the most concise way of stating your goal...
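Applied to the upload code from the question, the whole manual loop collapses into a single CopyTo call. This is just a sketch that assumes the same ftpClient, tempFilePath and fileName as above:

ftpClient.ValidateCertificate += (control, e) => { e.Accept = true; };

using (Stream readStream = new FileStream(tempFilePath, FileMode.Open, FileAccess.Read))
using (Stream writeStream = ftpClient.OpenWrite(fileName))
{
    // CopyTo loops internally until a zero-byte read signals the end of the stream.
    readStream.CopyTo(writeStream);
}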
It was just a silly mistake: I have two FileUpload controls and I had saved the other one, which created a zero-length file. As it turns out, the code works fine.
Thanks everyone.

Serving large files with C# HttpListener

I'm trying to use HttpListener to serve static files, and this works well with small files. When file sizes grow larger (tested with 350 and 600MB files), the server chokes with one of the following exceptions:
HttpListenerException: The I/O operation has been aborted because of either a thread exit or an application request, or:
HttpListenerException: The semaphore timeout period has expired.
What needs to be changed to get rid of the exceptions, and let it run stable/reliable (and fast)?
Here's some further elaboration: this is basically a follow-up question to this earlier question. The code is slightly extended to show the effect. Content writing is done in a loop with (hopefully reasonable) chunk sizes, 64 kB in my case, but changing the value made no difference except in speed (see the older question mentioned above).
using (FileStream fs = File.OpenRead(@"C:\test\largefile.exe"))
{
    //response is HttpListenerContext.Response...
    response.ContentLength64 = fs.Length;
    response.SendChunked = false;
    response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
    response.AddHeader("Content-disposition", "attachment; filename=largefile.EXE");

    byte[] buffer = new byte[64 * 1024];
    int read;

    using (BinaryWriter bw = new BinaryWriter(response.OutputStream))
    {
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            Thread.Sleep(200); //take this out and it will not run
            bw.Write(buffer, 0, read);
            bw.Flush(); //seems to have no effect
        }
        bw.Close();
    }

    response.StatusCode = (int)HttpStatusCode.OK;
    response.StatusDescription = "OK";
    response.OutputStream.Close();
}
I'm trying the download in a browser and also in a C# program using HttpWebRequest; it makes no difference.
Based on my research, I suppose that HttpListener is not really able to flush contents to the client, or at least does so at its own pace. I have also left out the BinaryWriter and written directly to the stream - no difference. I introduced a BufferedStream around the base stream - no difference. Funnily enough, if a Thread.Sleep(200) or slightly larger is introduced in the loop, it works on my box, but I doubt that is stable enough for a real solution. This question gives the impression that there's no chance at all of getting it to run correctly (besides moving to IIS/ASP.NET, which I would resort to if necessary, but would rather stay away from if possible).
You didn't show us the other critical part: how you initialized HttpListener. So I tried your code with the initialization below, and it worked.
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();

Task.Factory.StartNew(() =>
{
    while (true)
    {
        HttpListenerContext context = listener.GetContext();
        Task.Factory.StartNew((ctx) =>
        {
            WriteFile((HttpListenerContext)ctx, @"C:\LargeFile.zip");
        }, context, TaskCreationOptions.LongRunning);
    }
}, TaskCreationOptions.LongRunning);
WriteFile is your code with Thread.Sleep(200); removed. Here is the full code of it:
void WriteFile(HttpListenerContext ctx, string path)
{
    var response = ctx.Response;
    using (FileStream fs = File.OpenRead(path))
    {
        string filename = Path.GetFileName(path);
        //response is HttpListenerContext.Response...
        response.ContentLength64 = fs.Length;
        response.SendChunked = false;
        response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
        response.AddHeader("Content-disposition", "attachment; filename=" + filename);

        byte[] buffer = new byte[64 * 1024];
        int read;

        using (BinaryWriter bw = new BinaryWriter(response.OutputStream))
        {
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                bw.Write(buffer, 0, read);
                bw.Flush(); //seems to have no effect
            }
            bw.Close();
        }

        response.StatusCode = (int)HttpStatusCode.OK;
        response.StatusDescription = "OK";
        response.OutputStream.Close();
    }
}
Here is my SendFile function
void SendFile(Stream output, string fileName)
{
    // output is the HttpListenerResponse.OutputStream and fileName is the name of the file you want to send
    FileStream file = new FileStream(fileName, FileMode.Open, FileAccess.Read);
    file.CopyTo(output);
    output.Close();
    file.Close();
}
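For context, here is a rough sketch of how SendFile might be wired into a listener loop like the one above. The prefix and file path are placeholders, and ContentLength64 is set from the file size so the client sees download progress:

HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();

while (true)
{
    HttpListenerContext context = listener.GetContext();
    string path = @"C:\LargeFile.zip"; // placeholder path

    context.Response.ContentLength64 = new FileInfo(path).Length;
    context.Response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;

    // SendFile copies the file to the output stream and closes both streams.
    SendFile(context.Response.OutputStream, path);
}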

how to buffer an input stream until it is complete

I'm implementing a WCF service that accepts image streams. However, I'm currently getting an exception when I run it, because it tries to get the length of the stream before the stream is complete. So what I'd like to do is buffer the stream until it's complete, but I can't find any examples of how to do this...
Can anyone help?
My code so far:
public String uploadUserImage(Stream stream)
{
    Stream fs = stream;
    BinaryReader br = new BinaryReader(fs);
    Byte[] bytes = br.ReadBytes((Int32)fs.Length); // this causes the exception
    File.WriteAllBytes(filepath, bytes);
}
Rather than try to fetch the length, you should read from the stream until it returns that it's "done". In .NET 4, this is really easy:
// Assuming we *really* want to read it into memory first...
MemoryStream memoryStream = new MemoryStream();
stream.CopyTo(memoryStream);
memoryStream.Position = 0;
File.WriteAllBytes(filepath, memoryStream.ToArray());
In .NET 3.5 there's no CopyTo method, but you can write something similar yourself:
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
However, now we've got something to copy a stream, why bother reading it all into memory first? Let's just write it straight to a file:
using (FileStream output = File.OpenWrite(filepath))
{
    CopyStream(stream, output); // Or stream.CopyTo(output);
}
I'm not sure what you are returning (or not returning), but something like this might work for you:
public String uploadUserImage(Stream stream)
{
    const int KB = 1024;
    Byte[] bytes = new Byte[KB];
    StringBuilder sb = new StringBuilder();

    using (BinaryReader br = new BinaryReader(stream))
    {
        int len;
        do
        {
            len = br.Read(bytes, 0, KB);
            // only decode the bytes that were actually read this iteration
            string readData = Encoding.UTF8.GetString(bytes, 0, len);
            sb.Append(readData);
        } while (len == KB);
    }
    //File.WriteAllBytes(filepath, bytes);
    return sb.ToString();
}
A string can hold up to 2 GB, I believe.
Try this:
using (FileStream fileStream = File.Create(filepath))
{
    stream.CopyTo(fileStream);
    fileStream.Close();
}
Jon Skeet's answer for .NET 3.5 and below using a buffered read is actually done incorrectly.
The buffer isn't cleared between reads, which can result in issues on any read that returns less than 8192 bytes: for example, if the second read read 192 bytes, the last 8000 bytes from the first read would STILL be in the buffer and would then be returned to the stream.
With my code below, you supply it a Stream and it will return an IEnumerable of byte arrays.
Using this you can foreach over it, write each chunk to a MemoryStream, and then use .ToArray() to end up with a single merged byte[].
private IEnumerable<byte[]> ReadFullStream(Stream stream)
{
    while (true)
    {
        byte[] buffer = new byte[8192]; // since this is created every loop, its buffer is cleared
        int bytesRead = stream.Read(buffer, 0, buffer.Length); // read up to 8192 bytes into buffer
        if (bytesRead == 0)
        {
            // if we read nothing, the stream is finished
            break;
        }
        if (bytesRead < buffer.Length)
        {
            // if we read LESS than 8192 bytes, resize the buffer to remove everything after what was read,
            // otherwise you will have null bytes (0x00) at the end of your buffer
            Array.Resize(ref buffer, bytesRead);
        }
        yield return buffer; // yield return the buffer data
    } // loop here until we reach a read == 0 (end of stream)
}
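As a usage sketch of the description above (the filepath variable is assumed from the question's code), the yielded chunks can be gathered into a MemoryStream and merged:

byte[] ReadAllBytes(Stream stream)
{
    using (var memoryStream = new MemoryStream())
    {
        // write each chunk from ReadFullStream into the MemoryStream
        foreach (byte[] chunk in ReadFullStream(stream))
        {
            memoryStream.Write(chunk, 0, chunk.Length);
        }
        // ToArray returns only the bytes that were written, with no trailing padding
        return memoryStream.ToArray();
    }
}

// e.g. File.WriteAllBytes(filepath, ReadAllBytes(stream));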

Created Custom File Downloads. MD5 Hash Doesn't Match

Using ASP.NET MVC I was creating a file downloader. The problem with the built-in ASP.NET MVC functions is that they don't work on extremely large file downloads, and in some browsers they don't pop up the save-as dialog. So I rolled my own using an article from MSDN: http://support.microsoft.com/kb/812406. The problem now is that the files download perfectly, but the MD5 checksums don't match because the file size is slightly different on the server than in the download (even though 1000 tests show that the downloads execute just fine). Here is the code:
public class CustomFileResult : ActionResult
{
    public string File { get; set; }

    public CustomFileResult(string file)
    {
        this.File = file;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        Stream iStream = null;

        // Buffer to read 10K bytes in chunk:
        byte[] buffer = new Byte[10000];

        // Length of the file:
        int length;

        // Total bytes to read:
        long dataToRead;

        // Identify the file name.
        string filename = System.IO.Path.GetFileName(this.File);

        try
        {
            // Open the file.
            iStream = new System.IO.FileStream(this.File, System.IO.FileMode.Open,
                System.IO.FileAccess.Read, System.IO.FileShare.Read);

            // Total bytes to read:
            dataToRead = iStream.Length;

            context.HttpContext.Response.ContentType = "application/octet-stream";
            context.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

            // Read the bytes.
            while (dataToRead > 0)
            {
                // Verify that the client is connected.
                if (context.HttpContext.Response.IsClientConnected)
                {
                    // Read the data in buffer.
                    length = iStream.Read(buffer, 0, 10000);

                    // Write the data to the current output stream.
                    context.HttpContext.Response.OutputStream.Write(buffer, 0, length);

                    // Flush the data to the HTML output.
                    context.HttpContext.Response.Flush();

                    buffer = new Byte[10000];
                    dataToRead = dataToRead - length;
                }
                else
                {
                    //prevent infinite loop if user disconnects
                    dataToRead = -1;
                }
            }
        }
        catch (Exception ex)
        {
            // Trap the error, if any.
            context.HttpContext.Response.Write("Error : " + ex.Message);
        }
        finally
        {
            if (iStream != null)
            {
                //Close the file.
                iStream.Close();
            }
            context.HttpContext.Response.Close();
        }
    }
}
And the execution:
return new CustomFileResult(file.FullName);
Try using the
Response.TransmitFile(string fileName)
method.
It's really good and has some things built in to avoid OutOfMemory exceptions as well.
http://msdn.microsoft.com/en-us/library/12s31dhy(v=vs.80).aspx
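If you go that route, a minimal sketch of what ExecuteResult could look like built around TransmitFile is shown below. This is only an illustration of the suggestion above, reusing the File property and headers from the question's class:

public override void ExecuteResult(ControllerContext context)
{
    var response = context.HttpContext.Response;
    string filename = System.IO.Path.GetFileName(this.File);

    response.ContentType = "application/octet-stream";
    response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

    // TransmitFile writes the file directly to the response
    // without buffering it in server memory.
    response.TransmitFile(this.File);
}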
Turns out the issue was a missing header.
context.HttpContext.Response.AddHeader("Content-Length", iStream.Length.ToString());
Adding that header solved the problem.
Once you start writing to the OutputStream, try flushing the OutputStream itself instead of flushing the response:
context.HttpContext.Response.OutputStream.Flush()
Your problem is here:
length = iStream.Read(buffer, 0, 10000);
// Write the data to the current output stream.
context.HttpContext.Response.OutputStream.Write(buffer, 0, length);
Every loop, you read into a buffer of exactly 10,000 bytes and write that to the stream. That means every file someone downloads will be a multiple of 10,000 bytes in size. So if I were to download a file that is 9,998 bytes from your site, the file I got would be 10,000 bytes; the hash would never match, and my copy would have 2 null bytes at the end of it.
You need to add a check to make sure that the amount of data left to read is >= 10,000 and, if it is not, resize your byte array to the exact amount that is left and transmit that. That should fix the hash mismatch.
Try something like this:
if (context.HttpContext.Response.IsClientConnected)
{
// Read the data in buffer.
if (dataToRead>=10000)
{
byte[] buffer = new byte[10000];
length = 10000
context.HttpContext.Response.OutputStream.Write(buffer, 0, length);
}
else
{
byte[] buffer = new byte[dataToRead];
length = buffer.Length;
context.HttpContext.Response.OutputStream.Write(buffer, 0, length);
}
// Flush the data to the HTML output.
context.HttpContext.Response.Flush();
dataToRead = dataToRead - length;
}

Uncompress data file with DeflateStream

I'm having trouble reading a compressed (deflated) data file using C# .NET DeflateStream(..., CompressionMode.Decompress). The file was written earlier using DeflateStream(..., CompressionMode.Compress), and it seems to be just fine (I can even decompress it using a Java program).
However, the first Read() call on the input stream to decompress/inflate the compressed data returns a length of zero (end of file).
Here's the main driver, which is used for both compression and decompression:
public void Main(...)
{
    Stream inp;
    Stream outp;
    bool compr;
    ...
    inp = new FileStream(inName, FileMode.Open, FileAccess.Read);
    outp = new FileStream(outName, FileMode.Create, FileAccess.Write);

    if (compr)
        Compress(inp, outp);
    else
        Decompress(inp, outp);

    inp.Close();
    outp.Close();
}
Here's the basic code for decompression, which is what is failing:
public long Decompress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;

    // Decompress the contents of the input file
    inp = new DeflateStream(inp, CompressionMode.Decompress);

    for (;;)
    {
        int len;

        // Read a data block from the input stream
        len = inp.Read(buf, 0, buf.Length); //<<FAILS
        if (len <= 0)
            break;

        // Write the data block to the decompressed output stream
        outp.Write(buf, 0, len);
        nBytes += len;
    }

    // Done
    outp.Flush();
    return nBytes;
}
The call marked FAILS always returns zero. Why? I know it's got to be something simple, but I'm just not seeing it.
Here's the basic code for compression, which works just fine, and is almost exactly the same as the decompression method with the names swapped:
public long Compress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;

    // Compress the contents of the input file
    outp = new DeflateStream(outp, CompressionMode.Compress);

    for (;;)
    {
        int len;

        // Read a data block from the input stream
        len = inp.Read(buf, 0, buf.Length);
        if (len <= 0)
            break;

        // Write the data block to the compressed output stream
        outp.Write(buf, 0, len);
        nBytes += len;
    }

    // Done
    outp.Flush();
    return nBytes;
}
Solved
After seeing the correct solution, the constructor statement should be changed to:
inp = new DeflateStream(inp, CompressionMode.Decompress, true);
which keeps the underlying input stream open, and the following line needs to be added after the outp.Flush() call:
inp.Close();
The Close() call forces the DeflateStream to flush its internal buffers. The true flag prevents it from closing the underlying stream, which is closed later in Main(). The same changes should also be made to the Compress() method.
In your Decompress method, you are reassigning inp to a new Stream (a DeflateStream). You never close that DeflateStream, but you do close the underlying file stream in Main(). A similar thing is going on in the Compress method.
I think the problem is that the underlying file stream is being closed before the DeflateStream's finalizer gets a chance to close it.
I added one line of code to each of your Decompress and Compress methods:
inp.Close();  // in the Decompress method
outp.Close(); // in the Compress method
A better practice would be to enclose the streams in using blocks.
Here's an alternative way to write your Decompress method (I tested, and it works)
public static long Decompress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;

    // Decompress the contents of the input file
    using (inp = new DeflateStream(inp, CompressionMode.Decompress))
    {
        int len;
        while ((len = inp.Read(buf, 0, buf.Length)) > 0)
        {
            // Write the data block to the decompressed output stream
            outp.Write(buf, 0, len);
            nBytes += len;
        }
    }

    // Done
    return nBytes;
}
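Since the same change should also be made to the Compress() method, here is a sketch of Compress written the same way. The true (leaveOpen) argument keeps the underlying output stream open so it can still be closed later in Main():

public static long Compress(Stream inp, Stream outp)
{
    byte[] buf = new byte[BUF_SIZE];
    long nBytes = 0;

    // Compress the contents of the input file; leaveOpen: true keeps the
    // underlying output stream open so Main() can close it later.
    using (DeflateStream defl = new DeflateStream(outp, CompressionMode.Compress, true))
    {
        int len;
        while ((len = inp.Read(buf, 0, buf.Length)) > 0)
        {
            // Write the data block to the compressed output stream
            defl.Write(buf, 0, len);
            nBytes += len;
        }
    }
    // Disposing the DeflateStream flushes its internal buffers to outp.

    return nBytes;
}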
I had the same problem with GZipStream. Since we had the original length stored, I had to rewrite the code to read only the number of bytes expected in the original file.
Hopefully I'm about to learn that there was a better answer (fingers crossed).
