I'm writing an application similar to HFS, an HTTP file server with customizable themes in HTML/CSS/JS, and I would like to be able to serve my files in multiple parts, because most download managers connect to the server through multiple connections and download the file in, say, 8 pieces. That feature ultimately boosts the download speed and also makes the download resumable and pausable.
As far as I know, HTTP Partial Content makes this possible. I've looked around the web but couldn't find any good example of how to implement it in my code, where I use HttpListener to serve web pages and files.
I've seen somewhere that someone suggested using TcpListener instead, but my whole app is built on HttpListener and I haven't found any good examples of serving partial content with TcpListener either, so switching doesn't seem worthwhile.
The web server is multi-threaded and has no problem handling many requests over different connections simultaneously.
But whenever I download a huge file with IDM, it just serves the content through a single connection, and IDM shows that the server isn't capable of serving 206 (HTTP Partial Content).
Here's the code that I'm currently using to serve files:
context.Response.ContentType = GetMeme(filename);
Stream input = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
context.Response.ContentLength64 = input.Length;
byte[] buffer = new byte[1024 * 16];
int nbytes;
while ((nbytes = input.Read(buffer, 0, buffer.Length)) > 0)
context.Response.OutputStream.Write(buffer, 0, nbytes);
input.Close();
context.Response.StatusCode = (int)HttpStatusCode.OK;
context.Response.OutputStream.Flush();
context.Response.OutputStream.Close();
I tried to use an offset taken from the HTTP headers when reading into the buffer, but then closing the stream fails because of the offset and throws: Cannot close stream until all bytes are written.
Is there any better alternative?
Can HttpListener even handle HTTP 206 correctly?
How would partial content work on TcpListener?
Any useful links and information would be much appreciated.
Disclaimer: this answer does not pretend to be complete; there are just too many things to talk about in this context. But as a beginning...
The listener you use has nothing to do with it. Your code should be aware of the Range HTTP header. When Range is supplied, read and serve the part of the file specified in that header and send 206. Otherwise, serve the entire file and send 200.
It's not the buffer offset you get, but the file offset.
First set the response code and other metadata (headers), and write to the stream as the last step.
And you'll probably have to completely change the way you actually serve files. For instance, call CopyToAsync() on the FileStream you have.
And it's not Meme, it's MIME.
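To make that concrete, here is a minimal, untested sketch of Range handling on top of HttpListener. It only deals with a simple bytes=start-end (or bytes=start-) request, ignores suffix and multiple ranges, and GetMimeType is a placeholder for whatever MIME lookup you already have:

using System;
using System.IO;
using System.Net;

// Sketch only: single "bytes=start-end" ranges, no validation of bad or
// overlapping ranges; GetMimeType is a placeholder for your own helper.
static void ServeFile(HttpListenerContext context, string filename)
{
    using (var input = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        long start = 0, end = input.Length - 1;
        string range = context.Request.Headers["Range"];          // e.g. "bytes=0-1023"

        if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
        {
            string[] parts = range.Substring(6).Split('-');
            if (parts[0].Length > 0) start = long.Parse(parts[0]);
            if (parts.Length > 1 && parts[1].Length > 0) end = long.Parse(parts[1]);

            context.Response.StatusCode = (int)HttpStatusCode.PartialContent;   // 206
            context.Response.Headers.Add("Content-Range",
                string.Format("bytes {0}-{1}/{2}", start, end, input.Length));
        }
        else
        {
            context.Response.StatusCode = (int)HttpStatusCode.OK;               // 200
        }

        // Status and headers are set first, range support is advertised, and
        // only then is the body written.
        context.Response.Headers.Add("Accept-Ranges", "bytes");
        context.Response.ContentType = GetMimeType(filename);
        context.Response.ContentLength64 = end - start + 1;

        input.Seek(start, SeekOrigin.Begin);
        byte[] buffer = new byte[16 * 1024];
        long remaining = end - start + 1;
        int read;
        while (remaining > 0 &&
               (read = input.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining))) > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, read);
            remaining -= read;
        }
    }
    context.Response.OutputStream.Close();
}

Once the server advertises Accept-Ranges and answers Range requests with 206 plus a matching Content-Range, download managers like IDM should start opening their parallel connections on their own.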
Related
On the server side, I have the following simple statement which includes the contents of the actual file in bytes (AttachmentFile).
MemoryStream stream = new MemoryStream(attachment.AttachmentFile);
All I want to do is to send the file to the client to open it up in a web browser. I've searched the web, but cannot seem to find the right solution.
Can someone please give me some code to accomplish this?
I think this is quite a good article about transferring files with WCF:
http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
Sadly I cannot add this message as a comment.
Using ASP.Net Web API, I am developing a service which (amongst other things) retrieves data from Azure, and returns it to the client.
One way of doing this would be to read the entire blob into a buffer and then write that buffer to the response. However, I'd rather stream the contents for better performance.
This is simple with the Azure API:
CloudBlobContainer container = BlobClient.GetContainerReference(containerName);
CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
using (var buffer = new MemoryStream())
{
    await blob.DownloadToStreamAsync(buffer);
}
And elsewhere in the code, this is returned to the client:
HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.OK);
response.Content = new StreamContent(buffer);
But can I be certain that the MemoryStream won't be closed/disposed before the client finishes reading?
As long as you don't wrap your memory stream in a "using" statement you will be fine. If you do use "using" you end up with a weird race condition where it works sometimes and fails at other times.
I have code like yours in production and it works fine.
The only thing to be mindful of is that the whole blob is copied into memory before anything is sent to the client. This may cause memory pressure on your server and initial lag, depending on the size of the file.
If that is a concern, you have a couple of options.
One is to create a "lease" on the blob and give the user a URL to read it directly from blob storage for a limited time. That only works for low-security scenarios, though.
Alternatively, you can use chunked transfer encoding. Basically, you read the file from blob storage in chunks and send it to the client in those chunks. That saves memory, but I have not been able to make it work async, so you are trading memory for threads. Which solution is right for you will depend on your specific circumstances.
(I have not got the code to hand, post a comment if you want it and I'll try to dig it out, even if it's a bit old).
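For illustration only (this is not the code referred to above), the streaming idea could be sketched with Web API's PushStreamContent and the older Microsoft.WindowsAzure.Storage client roughly like this; GetBlob is a hypothetical action, BlobClient comes from the question's own code, and the synchronous DownloadToStream call is where a thread gets traded for memory:

using System.Net;
using System.Net.Http;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch of a Web API controller action that streams a blob to the client.
public HttpResponseMessage GetBlob(string containerName, string blobName)
{
    CloudBlobContainer container = BlobClient.GetContainerReference(containerName);
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.OK);

    // PushStreamContent runs the callback once the client starts reading, so
    // each chunk goes from blob storage straight to the network stream
    // instead of being buffered in a MemoryStream first.
    response.Content = new PushStreamContent((outputStream, content, transportContext) =>
    {
        try
        {
            blob.DownloadToStream(outputStream);   // synchronous: one thread per download
        }
        finally
        {
            outputStream.Close();                  // closing the stream completes the response
        }
    }, "application/octet-stream");

    return response;
}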
This is the scenario.
There is a file on a remote file server (say I have a file hosted on DropBox)
I want to offer that file as a download on my web application (c# asp.net 4.0)
I want to hide the location of the original file 100% (I want it to appear to come from my server).
I do not want to write this file to memory or to disk on my server.
I had assumed that I would want to use a stream-to-stream copy, for example:
Stream inputStream = response.GetResponseStream();
inputStream.CopyTo(Response.OutputStream, 4096);
inputStream.Flush();
Response.Flush();
Response.End();
This however copies the entire stream into memory before writing it out to the client browser. Any ideas would be awesome.
I need my server to basically just act as a proxy and shield the original file location
Thanks for any help.
The following code is what I ended up with (the buffer size will change in production)
This gets a file from a url, and starts streaming it chunk by chunk to a client through my server. The part that took me a bit to figure out was flushing the Response after each chunk was written.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(digitalAsset.FullFilePath);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
    Response.ClearHeaders();
    Response.ClearContent();
    Response.Clear();
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment;filename=" + digitalAsset.FileName);
    Stream inputStream = response.GetResponseStream();
    byte[] buffer = new byte[512];
    int read;
    while ((read = inputStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, read);
        Response.Flush();
    }
    Response.End();
}
This can stream a file of any size through my server without waiting for the server to first store it in memory or on disk. The real advantage is that it uses very little memory, since only the buffered chunk is held at any time. To the client, the download begins instantly. (This is probably obvious to most, but it was very cool for this novice to see.) So we are simultaneously downloading the file from one location and uploading it to another, using the server as a sort of proxy.
While I don't see a reason why CopyTo would read the whole stream into memory (there is no intermediate stream anywhere), you can write the same copy manually to be sure it behaves the way you want.
Consider using the async versions of Read/Write, and if you can, use async/await (C# 5.0) to keep the code readable.
Old way: Asynchronous Stream Processing; new way: Asynchronous File I/O.
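As a rough sketch of what that manual copy could look like with async/await (the 4 KB buffer size is arbitrary):

using System.IO;
using System.Threading.Tasks;

// Copies the source to the destination one buffer at a time, so only
// bufferSize bytes are ever held in memory.
static async Task CopyStreamAsync(Stream input, Stream output, int bufferSize = 4096)
{
    byte[] buffer = new byte[bufferSize];
    int read;
    while ((read = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        await output.WriteAsync(buffer, 0, read);
    }
}

Called as await CopyStreamAsync(inputStream, Response.OutputStream), it does the same job as CopyTo but releases the thread while waiting on I/O.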
I have a WCF service that returns a byte array containing a Zip file (50MB) to any client that requests it. If the Zip is very small (say 1MB), the SOAP response comes back from WCF with the byte array embedded in it, but the response is huge even for a 1MB file. If I try to transfer the 50MB file, the service hangs and throws an out-of-memory exception, because the SOAP response becomes enormous.
What is the best option available with WCF / web services for transferring large files (mainly ZIP format), given that I am currently sending back a byte array? Is there a better approach for sending back the file?
Is WCF / a web service the best way to transfer large files to any client, or is there another option/technology that would give interoperability and scalability for 10,000 users?
My code is below:
String pathfordownload = @"D:\New Folder.zip";
FileStream F2D = new FileStream(pathfordownload, FileMode.Open, FileAccess.Read);
BinaryReader binReader = new BinaryReader(F2D);
binReader.BaseStream.Position = 0;
byte[] binFile = binReader.ReadBytes(Convert.ToInt32(binReader.BaseStream.Length));
binReader.Close();
return binFile;
A working, real-world piece of information would be really helpful, as I have been struggling with everything Google turns up and have had no good results for the last week.
You can transfer a Stream through WCF, and then you can send files of (almost) unlimited length.
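To sketch the shape of that (the names and the root path are illustrative, and the binding also needs transferMode set to Streamed or StreamedResponse in configuration):

using System.IO;
using System.ServiceModel;

// Sketch only: a streamed download contract that returns a Stream
// instead of a byte[].
[ServiceContract]
public interface IFileTransferService
{
    [OperationContract]
    Stream DownloadZip(string fileName);
}

public class FileTransferService : IFileTransferService
{
    public Stream DownloadZip(string fileName)
    {
        // WCF reads and sends the returned stream in chunks, so the 50MB
        // zip is never materialised as a single byte array on the server.
        return new FileStream(Path.Combine(@"D:\", fileName),
                              FileMode.Open, FileAccess.Read);
    }
}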
I've faced the exact same problem. The out-of-memory error is inevitable because you are using byte arrays.
What we did is flush the data to the hard drive, so instead of being limited by your virtual memory, your capacity for concurrent transactions is limited only by HD space.
Then, for the transfer, we just placed the file on the other computer. Of course, in our case it was a server-to-server file transfer. If you want to be decoupled from the peer, you can offer the file as an HTTP download.
So instead of responding with the file, your service could respond with an HTTP URL to the file's location. Then, when the client has successfully downloaded it from the server with a standard HttpRequest or WebClient, it calls a method to delete the file. In SOAP that could be Delete(string url); in REST that would be a DELETE on the resource.
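The client side of that flow could be sketched like this (IFileService, GetDownloadUrl and Delete are hypothetical stand-ins for whatever your contract actually exposes):

using System.Net;

// Hypothetical contract exposed by the service.
public interface IFileService
{
    string GetDownloadUrl(string fileId);   // returns a temporary HTTP URL
    void Delete(string url);                // lets the server clean up the file
}

static void DownloadAndCleanUp(IFileService service, string fileId, string localPath)
{
    // 1. Ask the service where the file can be fetched from.
    string url = service.GetDownloadUrl(fileId);

    // 2. Pull the file over plain HTTP, outside the SOAP envelope.
    using (var client = new WebClient())
    {
        client.DownloadFile(url, localPath);
    }

    // 3. Tell the service the file has arrived so it can delete it.
    service.Delete(url);
}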
I hope this makes sense to you. The most important part is to understand that in scalable software, especially if you are looking at 10,000 (concurrent?) clients, you cannot rely on limited resources like memory streams or byte arrays. Rely instead on large and easily expandable resources like a hard drive partition, which could eventually live on a SAN that IT can grow as needed.
I'm having a problem that's been bugging me for a while.
I'm downloading files from an FTP server in .NET, and randomly (and I insist, it is completely random) I get the following error:
System.Net.WebException: The remote server returned an error: (550) File unavailable (e.g., file not found, no access).
Our code in .NET implements a retry mechanism, so when this error happens the code will download all the files again. Sometimes it succeeds; other times the 550 error happens on another file, or sometimes on the same file. It is completely random.
Here is a snippet of the DownloadFile method that is called for each file to be downloaded:
byte[] byWork = new byte[2047];
...
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(new Uri(_uri.ToString() + "/" + filename));
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential(_Username, _Password);
using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
{
    using (Stream rs = response.GetResponseStream())
    {
        using (FileStream fs = new FileStream(destination, FileMode.Create))
        {
            do
            {
                iWork = rs.Read(byWork, 0, byWork.Length);
                fs.Write(byWork, 0, iWork);
            } while (iWork != 0);
            fs.Flush();
        }
    }
}
Again, the thing that bugs me is that if there were an error in this code, the 550 error would happen every time. However, we can try to download a file, get the error, try to download the same file with the same parameters again, and it will work. And it seems to happen more frequently with larger files. Any ideas?
Please note, the below is just anecdotal, I don't have anything except vague memories and assumptions to back it up. So rather than a real solution, just take it as a "cheer up, it might not be your fault at all".
I think 550 errors are more likely to be due to some issue with the server than with the client. I remember getting 550 errors quite often when using an old ISP's badly maintained FTP server, and I tried various clients without it making any real difference. I also remember seeing other people post messages about similar problems with the same and other servers.
I think the best way to handle it is to just retry the download automatically; hopefully after a few tries you'll get it, though obviously this means you waste some bandwidth.
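A simple retry wrapper is one way to do that (a minimal sketch; the attempt count and delay are arbitrary, and the 550 surfaces as a WebException as shown in the question):

using System;
using System.Net;
using System.Threading;

// Retries the download a few times before giving up; only WebExceptions
// (which is how the 550 surfaces) trigger a retry.
static void DownloadWithRetry(Action download, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            download();
            return;                                // success
        }
        catch (WebException)
        {
            if (attempt >= maxAttempts)
                throw;                             // give up after the last attempt
            Thread.Sleep(TimeSpan.FromSeconds(2)); // brief pause before retrying
        }
    }
}

It could then wrap the existing call, for example DownloadWithRetry(() => DownloadFile(filename, destination)); the exact DownloadFile signature is assumed here.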