Serving large files with C# HttpListener

I'm trying to use HttpListener to serve static files, and this works well with small files. When file sizes grow larger (tested with 350MB and 600MB files), the server chokes with one of the following exceptions:
HttpListenerException: The I/O operation has been aborted because of either a thread exit or an application request, or:
HttpListenerException: The semaphore timeout period has expired.
What needs to change to get rid of the exceptions and make it run stably and reliably (and fast)?
Here's some further elaboration: this is basically a follow-up to this earlier question. The code is slightly extended to show the effect. Content is written in a loop with (hopefully reasonable) chunk sizes, 64 kB in my case, but changing the value made no difference except to speed (see the older question mentioned above).
using( FileStream fs = File.OpenRead( @"C:\test\largefile.exe" ) ) {
    // response is HttpListenerContext.Response...
    response.ContentLength64 = fs.Length;
    response.SendChunked = false;
    response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
    response.AddHeader( "Content-disposition", "attachment; filename=largefile.EXE" );
    byte[] buffer = new byte[ 64 * 1024 ];
    int read;
    using( BinaryWriter bw = new BinaryWriter( response.OutputStream ) ) {
        while( ( read = fs.Read( buffer, 0, buffer.Length ) ) > 0 ) {
            Thread.Sleep( 200 ); // take this out and it will not run
            bw.Write( buffer, 0, read );
            bw.Flush(); // seems to have no effect
        }
        bw.Close();
    }
    response.StatusCode = ( int )HttpStatusCode.OK;
    response.StatusDescription = "OK";
    response.OutputStream.Close();
}
I've tried the download both in a browser and in a C# program using HttpWebRequest; it makes no difference.
Based on my research, I suppose that HttpListener is not really able to flush contents to the client, or at least does so at its own pace. I have also left out the BinaryWriter and written directly to the stream - no difference. I introduced a BufferedStream around the base stream - no difference. Funnily enough, if a Thread.Sleep(200) or slightly larger is introduced in the loop, it works on my box; however, I doubt that is stable enough for a real solution. This question gives the impression that there's no chance at all of getting it to run correctly (besides moving to IIS/ASP.NET, which I would resort to, but would rather stay away from if possible).

You didn't show us the other critical part: how you initialized HttpListener. I therefore tried your code with the initialization below, and it worked:
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();
Task.Factory.StartNew(() =>
{
    while (true)
    {
        HttpListenerContext context = listener.GetContext();
        Task.Factory.StartNew((ctx) =>
        {
            WriteFile((HttpListenerContext)ctx, @"C:\LargeFile.zip");
        }, context, TaskCreationOptions.LongRunning);
    }
}, TaskCreationOptions.LongRunning);
WriteFile is your code with the Thread.Sleep( 200 ); removed. In case you want to see it, here is the full code:
void WriteFile(HttpListenerContext ctx, string path)
{
    var response = ctx.Response;
    using (FileStream fs = File.OpenRead(path))
    {
        string filename = Path.GetFileName(path);
        // response is HttpListenerContext.Response...
        response.ContentLength64 = fs.Length;
        response.SendChunked = false;
        response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
        response.AddHeader("Content-disposition", "attachment; filename=" + filename);
        byte[] buffer = new byte[64 * 1024];
        int read;
        using (BinaryWriter bw = new BinaryWriter(response.OutputStream))
        {
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                bw.Write(buffer, 0, read);
                bw.Flush(); // seems to have no effect
            }
            bw.Close();
        }
        response.StatusCode = (int)HttpStatusCode.OK;
        response.StatusDescription = "OK";
        response.OutputStream.Close();
    }
}
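As an aside (not part of the original answer): on .NET 4.5 or later, the same accept-and-dispatch pattern can be written with async/await via GetContextAsync. A minimal sketch, assuming the WriteFile method above:
// Hedged sketch: an async variant of the accept loop (assumes .NET 4.5+).
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();
Task.Run(async () =>
{
    while (true)
    {
        // Await the next incoming request without blocking a thread.
        HttpListenerContext context = await listener.GetContextAsync();
        // Hand each request off to the thread pool so the accept loop keeps running.
        Task.Run(() => WriteFile(context, @"C:\LargeFile.zip"));
    }
});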

Here is my SendFile function:
void SendFile(Stream output, string fileName)
{
    // output is the HttpListenerResponse.OutputStream and fileName is the file you want to send
    using (FileStream file = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        file.CopyTo(output);
    }
    output.Close();
}
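For completeness, a hedged usage sketch (the context variable and file path are assumptions, not part of the answer); setting ContentLength64 before writing lets the client show download progress:
// Hypothetical usage: context comes from listener.GetContext() as shown earlier.
HttpListenerResponse response = context.Response;
FileInfo info = new FileInfo(@"C:\test\largefile.exe");
response.ContentLength64 = info.Length; // lets the browser display progress
SendFile(response.OutputStream, info.FullName);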

Related

Request stream fail to write

I have to upload a large file to the server with the following code snippet:
static async Task LordNoBugAsync(string token, string filePath, string uri)
{
    HttpWebRequest fileWebRequest = (HttpWebRequest)WebRequest.Create(uri);
    fileWebRequest.Method = "PATCH";
    fileWebRequest.AllowWriteStreamBuffering = false; // this line tells it to upload in chunks
    fileWebRequest.ContentType = "application/x-www-form-urlencoded";
    fileWebRequest.Headers["Authorization"] = "PHOENIX-TOKEN " + token;
    fileWebRequest.KeepAlive = false;
    fileWebRequest.Timeout = System.Threading.Timeout.Infinite;
    fileWebRequest.Proxy = null;
    using (FileStream fileStream = File.OpenRead(filePath))
    {
        fileWebRequest.ContentLength = fileStream.Length; // length must be provided in order to upload in chunks
        int bufferSize = 512000;
        byte[] buffer = new byte[bufferSize];
        int lastBytesRead = 0;
        int byteCount = 0;
        Stream requestStream = fileWebRequest.GetRequestStream();
        requestStream.WriteTimeout = System.Threading.Timeout.Infinite;
        while ((lastBytesRead = fileStream.Read(buffer, 0, bufferSize)) != 0)
        {
            if (lastBytesRead > 0)
            {
                await requestStream.WriteAsync(buffer, 0, lastBytesRead);
                // for some reason this doesn't really write to the stream, although the buffer has content, >60MB
                byteCount += lastBytesRead;
            }
        }
        requestStream.Flush();
        try
        {
            requestStream.Close();
            requestStream.Dispose();
        }
        catch
        {
            Console.Write("Error");
        }
        try
        {
            fileStream.Close();
            fileStream.Dispose();
        }
        catch
        {
            Console.Write("Error");
        }
    }
    // ...getting response parts...
}
In the code, I make an HttpWebRequest and push the content to the server in chunks. The code works perfectly for any file under 60MB.
I tried a 70MB pdf. The buffer array has different content on each read, yet the request stream does not seem to be getting written. The byteCount also reaches 70M, showing that the file is read properly.
Edit (more info): I set a breakpoint at requestStream.Close(). Writing the request stream clearly takes ~2 minutes for 60MB files, but only ~2ms for 70MB files.
My calling code:
Task magic = LordNoBugAsync(token, nameofFile, path);
magic.Wait();
I am sure the calling code is correct (it works for files from 0B to 60MB).
Any advice or suggestion is much appreciated.

FileStream returns Length = 0

I'm trying to read a local file and upload it to an FTP server. When I read an image file, everything is OK, but when I read a doc or docx file, FileStream returns Length = 0. I checked with some other files; it appears it only works with images and returns 0 for any other file type. Here is my code:
if (!ftpClient.FileExists(fileName))
{
    try
    {
        ftpClient.ValidateCertificate += (control, e) => { e.Accept = true; };
        const int BUFFER_SIZE = 64 * 1024; // 64KB buffer
        byte[] buffer = new byte[BUFFER_SIZE];
        using (Stream readStream = new FileStream(tempFilePath, FileMode.Open, FileAccess.Read))
        using (Stream writeStream = ftpClient.OpenWrite(fileName))
        {
            while (readStream.Position < readStream.Length)
            {
                buffer.Initialize();
                int bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE);
                writeStream.Write(buffer, 0, bytesRead);
            }
            readStream.Flush();
            readStream.Close();
            writeStream.Flush();
            writeStream.Close();
            DeleteTempFile(tempFilePath);
            return true;
        }
    }
    catch (Exception ex)
    {
        return false;
    }
}
I couldn't find what's wrong with it. Could you please help me?
While this doesn't answer your specific question, you don't actually need to know the length of your stream. Just keep reading until you hit a zero-length read; a zero-byte read is guaranteed to indicate the end of any stream.
Return Value
Type: System.Int32
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
while (true)
{
    int bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE);
    if (bytesRead == 0)
    {
        break;
    }
    writeStream.Write(buffer, 0, bytesRead);
}
Alternatively:
readStream.CopyTo(writeStream);
is probably the most concise method of stating your goal...
It was just a silly mistake: I have two file uploads, and I had saved the other one, which created a zero-length file. As it turns out, the code works fine.
Thanks everyone.

The specified argument is outside the range of valid values

I keep getting this error:
The specified argument is outside the range of valid values.
When I run this code in C#:
string sourceURL = "http://192.168.1.253/nphMotionJpeg?Resolution=320x240&Quality=Standard";
byte[] buffer = new byte[200000];
int read, total = 0;
// create HTTP request
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("username", "password");
// get response
WebResponse resp = req.GetResponse();
// get response stream
// Make sure the stream gets closed once we're done with it
using (Stream stream = resp.GetResponseStream())
{
    // A larger buffer size would be beneficial, but it's not going
    // to make a significant difference.
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }
}
// get bitmap
Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
pictureBox1.Image = bmp;
This line:
while ((read = stream.Read(buffer, total, 1000)) != 0)
Does anybody know what could cause this error or how to fix it?
Thanks in advance
Does anybody know what could cause this error?
I suspect total (or rather, total + 1000) has gone outside the range of the array - you'll get this error if you try to read more than 200K of data.
Personally I'd approach it differently - I'd create a MemoryStream to write to, and a much smaller buffer to read into, always reading as much data as you can, at the start of the buffer - and then copying that many bytes into the stream. Then just rewind the stream (set Position to 0) before loading it as a bitmap.
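A minimal sketch of that manual-loop approach (the variable names and the 8 KB buffer size are my own, not from the answer), assuming resp is the WebResponse from the question:
// Read into a small buffer at offset 0, copy each chunk into a MemoryStream,
// then rewind before decoding the bitmap.
MemoryStream ms = new MemoryStream();
byte[] chunk = new byte[8 * 1024];
using (Stream stream = resp.GetResponseStream())
{
    int n;
    while ((n = stream.Read(chunk, 0, chunk.Length)) != 0)
    {
        ms.Write(chunk, 0, n); // copy exactly the bytes read this iteration
    }
}
ms.Position = 0; // rewind so the decoder reads from the start
Bitmap bmp = (Bitmap)Bitmap.FromStream(ms);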
Or just use Stream.CopyTo if you're using .NET 4 or higher:
Stream output = new MemoryStream();
using (Stream input = resp.GetResponseStream())
{
    input.CopyTo(output);
}
output.Position = 0;
Bitmap bmp = (Bitmap)Bitmap.FromStream(output);

Is returning a stream reader from an FTP response good practice?

I have a method that downloads a file over FTP, but I do not save the file locally; rather, I parse it in memory via the FTP response. My question is: is returning a StreamReader after getting the FTP response stream good practice? I don't want to do the parsing and other work in the same method.
var uri = new Uri(string.Format("ftp://{0}/{1}/{2}", "somevalue", remotefolderpath, remotefilename));
var request = (FtpWebRequest)FtpWebRequest.Create(uri);
request.Credentials = new NetworkCredential(userName, password);
request.Method = WebRequestMethods.Ftp.DownloadFile;
var ftpResponse = (FtpWebResponse)request.GetResponse();
/* Get the FTP Server's Response Stream */
ftpStream = ftpResponse.GetResponseStream();
return responseStream = new StreamReader(ftpStream);
For me there are two disadvantages to using the stream directly; if you can live with them, you shouldn't waste memory or disk space:
In this stream you cannot seek to a specific position; you can only read the contents as they come in;
Your internet connection could suddenly drop, and you would get an exception while parsing and processing your file; either split the parsing and processing, or make sure your processing routine can handle a file being processed a second time (after a failure halfway through the first attempt).
To work around these issues, you could copy the stream to a MemoryStream:
using (var ftpStream = ftpResponse.GetResponseStream())
{
var memoryStream = new MemoryStream()
while ((bytesRead = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
{
memoryStream.Write(buffer, 0, bytesRead);
}
memoryStream.Flush();
memoryStream.Position = 0;
return memoryStream;
}
If you are working with larger files, I prefer writing to a temporary file; this way you minimize the memory footprint of your application:
using (var ftpStream = ftpResponse.GetResponseStream())
{
    var fileStream = new FileStream(Path.GetTempFileName(), FileMode.CreateNew);
    var buffer = new byte[16 * 1024];
    int bytesRead;
    while ((bytesRead = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, bytesRead);
    }
    fileStream.Flush();
    fileStream.Position = 0;
    return fileStream;
}
I find returning a responseStream more practical when you are performing an HttpWebRequest. If you are using FtpWebRequest, it means you are working with files. I would read the responseStream into a byte[] and return the file content of the downloaded file, so you can easily work with the System.IO.File classes to handle the file.
Thanks Carlos, that was really helpful. I just return the byte[]:
byte[] buffer = new byte[16 * 1024];
using (MemoryStream ms = new MemoryStream())
{
    int read;
    while ((read = ftpStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        ms.Write(buffer, 0, read);
    }
    return ms.ToArray(); // snapshot the buffered bytes
}
and used the byte[] in the method like this:
public async Task ParseReport(byte[] bytesRead)
{
    using (Stream stream = new MemoryStream(bytesRead))
    using (StreamReader reader = new StreamReader(stream))
    {
        string line = null;
        while (null != (line = reader.ReadLine()))
        {
            string[] values = line.Split(';');
        }
    }
}

FileResult buffered to memory

I'm trying to return large files via a controller ActionResult and have implemented a custom FileResult class like the following.
public class StreamedFileResult : FileResult
{
    private string _FilePath;

    public StreamedFileResult(string filePath, string contentType)
        : base(contentType)
    {
        _FilePath = filePath;
    }

    protected override void WriteFile(System.Web.HttpResponseBase response)
    {
        using (FileStream fs = new FileStream(_FilePath, FileMode.Open, FileAccess.Read))
        {
            int bufferLength = 65536;
            byte[] buffer = new byte[bufferLength];
            int bytesRead = 0;
            while (true)
            {
                bytesRead = fs.Read(buffer, 0, bufferLength);
                if (bytesRead == 0)
                {
                    break;
                }
                response.OutputStream.Write(buffer, 0, bytesRead);
            }
        }
    }
}
However, the problem I am having is that the entire file appears to be buffered into memory. What would I need to do to prevent this?
You need to flush the response in order to prevent buffering. However, if the response has no Content-Length set, the user will not see any download progress. So, in order for users to see proper progress, IIS buffers the entire content, calculates the Content-Length, applies compression, and then sends the response. We have adopted the following procedure to deliver files to the client with high performance:
FileInfo path = new FileInfo(filePath);
// user will not see progress if Content-Length is not specified
response.AddHeader("Content-Length", path.Length.ToString());
response.Flush(); // do not add any more headers after this...
byte[] buffer = new byte[4 * 1024]; // 4kb is good for a network chunk
using (FileStream fs = path.OpenRead())
{
    int count = 0;
    while ((count = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        if (!response.IsClientConnected)
        {
            // network connection broke for some reason..
            break;
        }
        response.OutputStream.Write(buffer, 0, count);
        response.Flush(); // this will prevent buffering...
    }
}
You can change the buffer size, but 4kb is ideal, as the lower-level file system also reads in chunks of 4kb.
Akash Kava is partly right and partly wrong. You DO NOT need to add the Content-Length header or do the flush afterward. But you DO need to periodically flush response.OutputStream and then response. ASP.NET MVC (at least version 5) will automatically convert this into a "Transfer-Encoding: chunked" response.
byte[] buffer = new byte[4 * 1024]; // 4kb is good for a network chunk
using (FileStream fs = path.OpenRead())
{
    int count = 0;
    while ((count = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        if (!response.IsClientConnected)
        {
            // network connection broke for some reason..
            break;
        }
        response.OutputStream.Write(buffer, 0, count);
        response.OutputStream.Flush();
        response.Flush(); // this will prevent buffering...
    }
}
I tested it and it works.
