I'm having a problem that's been bugging me for a while.
I'm downloading files from an FTP server in .NET, and randomly (and I insist, it is completely random), I get the following error:
System.Net.WebException: The remote server returned an error: (550) File unavailable (e.g., file not found, no access).
Our .NET code implements a retry mechanism, so when this error happens, the code will try to download all the files again. Sometimes it will succeed; other times the 550 error will happen on another file, sometimes on the same file. It is completely random.
Here is a snippet of the DownloadFile method that is called for each file to be downloaded:
byte[] byWork = new byte[2047];
...
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(new Uri(_uri.ToString() + "/" + filename));
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential(_Username, _Password);
using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
{
    using (Stream rs = response.GetResponseStream())
    {
        using (FileStream fs = new FileStream(destination, FileMode.Create))
        {
            do
            {
                iWork = rs.Read(byWork, 0, byWork.Length);
                fs.Write(byWork, 0, iWork);
            } while (iWork != 0);
            fs.Flush();
        }
    }
}
Again, the thing that bugs me is that if there were an error in this code, the 550 error would happen every time. However, we can try to download a file, get the error, try to download the same file with the same parameters again, and it will work. And it seems to happen more frequently with larger files. Any idea?
Please note, the following is just anecdotal; I don't have anything except vague memories and assumptions to back it up. So rather than a real solution, just take it as a "cheer up, it might not be your fault at all".
I think 550 errors are more likely to be due to some issue with the server rather than the client. I remember getting 550 errors quite often when using an old ISP's badly maintained FTP server, and I tried various clients without it making any real difference. I also remember seeing other people post about similar problems with the same and other servers.
I think the best way to handle it is to just retry the download automatically; hopefully after a few tries you'll get it, though obviously this means you waste bandwidth.
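The automatic retry suggested above could be sketched as a small wrapper like the following. This is a hypothetical helper (not from the original post): it runs an action, swallows WebException for all but the last attempt, and optionally backs off between tries.

```csharp
using System;
using System.Net;
using System.Threading;

static class FtpRetry
{
    // Run an action, retrying on WebException up to maxAttempts times.
    // Returns the number of attempts actually used.
    public static int RunWithRetry(Action action, int maxAttempts, int delayMs = 0)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return attempt;
            }
            catch (WebException) when (attempt < maxAttempts)
            {
                // Transient 550s often clear up on a later attempt,
                // so back off briefly and try again.
                Thread.Sleep(delayMs);
            }
        }
    }
}
```

You would then call something like `FtpRetry.RunWithRetry(() => DownloadFile(filename, destination), 3, 1000);` so only the failed file is retried, instead of re-downloading the whole batch.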
I'm writing an application similar to HFS, an HTTP file server with customizable themes in HTML/CSS/JS, and I would like to be able to serve my files in multiple parts, because most download managers connect to the server through multiple connections and download the file in, say, 8 pieces. That feature ultimately boosts the download speed, and it makes the download resumable and pausable.
As far as I know, HTTP Partial Content makes this possible. I've looked around the web but couldn't find any good example of how to implement it in my code, where I use HttpListener to serve webpages and files.
I've seen somewhere that someone suggested using TcpListener instead, but my whole app works on HttpListener and I haven't really found any good examples of serving partial content with TcpListener either, so I don't want to switch.
The webserver is multi-threaded and doesn't have any problem handling many requests through different connections simultaneously.
But whenever I download a huge file with IDM, it just serves the content through a single connection, and IDM shows that the server isn't capable of serving 206 (HTTP Partial Content).
Here's the code that i'm currently using to serve files:
context.Response.ContentType = GetMeme(filename);
Stream input = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
context.Response.ContentLength64 = input.Length;
byte[] buffer = new byte[1024 * 16];
int nbytes;
while ((nbytes = input.Read(buffer, 0, buffer.Length)) > 0)
context.Response.OutputStream.Write(buffer, 0, nbytes);
input.Close();
context.Response.StatusCode = (int)HttpStatusCode.OK;
context.Response.OutputStream.Flush();
context.Response.OutputStream.Close();
I tried to get the buffer offset from the HTTP headers, but then it fails to close the stream due to the offset and says "Cannot close stream until all bytes are written".
Is there any better alternative?
Can HttpListener even handle HTTP 206 correctly?
How would partial content work on TcpListener?
Any useful links and information would be much appreciated.
Disclaimer: this answer does not pretend to be complete; there are just too many things to talk about in this context. But as a beginning...
The listener you use has no bearing on this. Your code should be aware of the Range HTTP header. When Range is supplied, read and serve the slice of the file specified in that header and send 206. Otherwise, serve the entire file and send 200.
It's not a buffer offset you get, but a file offset.
Set the response code and other metadata (headers) first, and write to the stream as the last step.
And you'll probably have to completely change the way you actually serve files. For instance, call CopyToAsync() on the FileStream you have.
And it's not Meme, it's MIME.
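The steps above could be sketched roughly as follows. This is a minimal sketch, not a complete implementation: `TryParseRange` and `ServeFile` are hypothetical helpers, and only the common single-range `bytes=start-end` / `bytes=start-` forms are handled (multi-range and suffix ranges are left out).

```csharp
using System;
using System.IO;
using System.Net;

static class RangeServing
{
    // Parse a simple single-range header like "bytes=0-499" or "bytes=500-".
    // Returns false when the header is absent or uses a form this sketch skips.
    public static bool TryParseRange(string header, long fileLength,
                                     out long start, out long end)
    {
        start = 0; end = fileLength - 1;
        if (string.IsNullOrEmpty(header) || !header.StartsWith("bytes=")) return false;
        string[] parts = header.Substring(6).Split('-');
        if (parts.Length != 2 || !long.TryParse(parts[0], out start)) return false;
        if (parts[1].Length > 0 && !long.TryParse(parts[1], out end)) return false;
        if (parts[1].Length == 0) end = fileLength - 1; // open-ended: to EOF
        return start <= end && end < fileLength;
    }

    // Serve a file honoring the Range header: set status and headers first,
    // then write the requested slice as the last step.
    public static void ServeFile(HttpListenerContext context, string path)
    {
        using (FileStream fs = File.OpenRead(path))
        {
            long start, end;
            bool partial = TryParseRange(context.Request.Headers["Range"],
                                         fs.Length, out start, out end);
            if (partial)
            {
                context.Response.StatusCode = (int)HttpStatusCode.PartialContent; // 206
                context.Response.Headers["Content-Range"] =
                    "bytes " + start + "-" + end + "/" + fs.Length;
            }
            context.Response.Headers["Accept-Ranges"] = "bytes"; // advertise support
            long remaining = partial ? (end - start + 1) : fs.Length;
            context.Response.ContentLength64 = remaining;
            fs.Seek(partial ? start : 0, SeekOrigin.Begin);
            byte[] buffer = new byte[16 * 1024];
            int n;
            while (remaining > 0 &&
                   (n = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining))) > 0)
            {
                context.Response.OutputStream.Write(buffer, 0, n);
                remaining -= n;
            }
        }
        context.Response.OutputStream.Close();
    }
}
```

Note the `Accept-Ranges: bytes` header: download managers like IDM look for it (or a successful probe request) before splitting the download into multiple connections.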
I have a page that uploads .txt files using the PLUpload library (PLUpload). It works when I test on a client computer in every browser: IE, Chrome, FF... But when I test on the Windows Server hosting this website, it throws this error:
The process cannot access the file 'SystemPath\Test.txt' because it is
being used by another process.
The website is written in ASP.NET, and I think the root cause is something about security on the Windows Server. The error code is Error #-200: HTTP Error.
Here is the code when upload :
using System.IO;
MemoryStream uploadStream = new MemoryStream();
using (FileStream source = File.Open(tempFile, FileMode.Open))
{
    source.CopyTo(uploadStream);
}
Question: Why does IE throw that error only on Windows Server, and how do I fix it?
There are a variety of processes that could lock up the file.
Eric Lippert suggested that it could be the antivirus: C# file is being used by another process
João Sousa recommends checking your code to make sure it disposes of all connections to the file when it is done: Process cannot access the file because it is being used by another process
Because the error is coming from the file system, anything that interacts with the file system could be locking the file. These may not be the cause of your error, but they are good places to start looking.
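If the lock turns out to be transient (an antivirus scanner or the upload handler still holding the freshly written file), a common workaround is to retry the open with a short delay. The helper below is a hypothetical sketch along those lines, not code from the original post:

```csharp
using System;
using System.IO;
using System.Threading;

static class LockedFile
{
    // Try to open a file for reading, retrying while another process
    // (antivirus, the upload handler, ...) still holds it.
    public static FileStream OpenWithRetry(string path, int maxAttempts = 5,
                                           int delayMs = 200)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // FileShare.Read lets other readers keep their handles open.
                return File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read);
            }
            catch (IOException) when (attempt < maxAttempts)
            {
                Thread.Sleep(delayMs); // wait for the other process to let go
            }
        }
    }
}
```

In the upload code above, `File.Open(tempFile, FileMode.Open)` could then be replaced with `LockedFile.OpenWithRetry(tempFile)`.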
I have tried many methods to deserialise this XML from a URL, but none were successful, due to what I believe is an encoding issue.
If I right-click and download it, then deserialise it from my C: drive, it works fine.
So I decided to try downloading the file first and then processing it. But the file it downloads via code is in the wrong encoding as well!
I don't know where to start, but I'm thinking maybe of forcing a UTF-8 or UTF-16 encoding when downloading?
Here is the download code:
using (var client = new WebClient())
{
client.DownloadFile("http://example.com/my.xml", "my.xml");
}
Try this:
using (var client = new WebClient())
{
client.Encoding = System.Text.Encoding.UTF8;
client.DownloadFile("http://example.com/my.xml", "my.xml");
}
The file was in fact in gzip format, despite being an XML URL.
My connection must have been accepting gzip, so the server responded with it, even though I tried a few different methods with different variations (downloading, string streaming, parsing a string from the URL, etc.).
The solution for me was to download the file, then uncompress the gzip before deserialising. Telling the server not to send gzip didn't work for me, but it may be a possibility for some.
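The decompression step described above could look something like the sketch below. The helper names are made up for this example; the gzip magic-byte check is a standard way to confirm the downloaded bytes are really compressed before deserialising:

```csharp
using System;
using System.IO;
using System.IO.Compression;

static class GzipXml
{
    // Gzip streams start with the two magic bytes 0x1F 0x8B.
    public static bool LooksLikeGzip(byte[] data)
    {
        return data.Length >= 2 && data[0] == 0x1F && data[1] == 0x8B;
    }

    // Decompress a gzip byte array back to its original bytes.
    public static byte[] Decompress(byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }
}
```

After `client.DownloadFile(...)`, you would read the bytes back, call `LooksLikeGzip`, and run `Decompress` before handing the result to the XML deserialiser.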
Maybe this question seems like a duplicate to you, but I still want the exact cause of, and solution for, my problem.
The problem is that I am requesting a file from FTP with a username and password. After the FTP connection initialization, it throws an exception: WebException: Cannot open passive data connection.
But I am able to download the same file using a web browser like Chrome with the same username and password.
It is a Unity 3D game where user info is requested and some user-related files are downloaded. I am using MonoDevelop to code.
The server is an AWS server. Its IP address was recently changed and it was restarted. I am using the new IP and getting a list of files in XML format. I am able to parse the XML data and request the file.
Here is the code sample I am using for FTP.
reqFTP = (FtpWebRequest)FtpWebRequest.CreateDefault(new Uri("ftp://" + serverIP + ":" + serverPort + "/" + this.receiveInfo.fileDirectory + "/" + downloadLoc));
reqFTP.Method = WebRequestMethods.Ftp.DownloadFile;
Debug.Log (reqFTP.RequestUri);
reqFTP.UseBinary = true;
reqFTP.UsePassive = true;
reqFTP.Credentials = new NetworkCredential(this.receiveInfo.ftpUsername , this.receiveInfo.ftpPassword);
object state = new object ();
FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse();
Please help me to solve this issue. At least some useful information would be much appreciated, as I am very new to this kind of stuff.
I found the answer to my question.
The problem was with the FTP configuration. When the server was restarted, the old FTP settings were imposed on the server, so it was blocking all passive FTP connections from clients. An AWS support person changed the configuration, and then it started working.
After the config file was changed, the issue was solved. There was no need to change the code; the code is correct.
I've been trying to finish up a web scraper for a website of mine and have hit a stumbling block on the folder creation for image storage.
My code looks like this:
//ftpUser and ftpPass are set at the head of the class
FtpWebRequest lrgRequest = (FtpWebRequest) FtpWebRequest.Create("ftp://ftp.mysite.com/httpdocs/images/large/imgFolder");
lrgRequest.Credentials = new NetworkCredential(ftpUser, ftpPass);
lrgRequest.KeepAlive = false;
lrgRequest.Method = WebRequestMethods.Ftp.MakeDirectory;
FtpWebResponse response = (FtpWebResponse) lrgRequest.GetResponse();
Console.WriteLine(response);
When I run this code, it gets to the response and throws error 550, saying the folder isn't found.
I've compared my approach to a number of examples, and by the standard approach it should work. The FTP address is valid and has been checked, so I'm wondering: is there an issue with my server that is stopping this, or is my C# causing the problem?
If anyone needs any more info please just say
As always any help is greatly appreciated
Regards
Barry
Definition of FTP 550:
Requested action not taken. File unavailable (e.g., file not found, no
access).
I'd check that you have the appropriate permissions (or your app does) and verify that the directory path does indeed exist.
Since you are getting a response code, I doubt your code above is the cause of the problem. However you could always check the AuthenticationLevel and ImpersonationLevel to see if these provide any useful information.
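One common cause of 550 on MakeDirectory is that an intermediate parent directory (here, `httpdocs/images/large`) doesn't exist yet: FTP servers generally won't create parents implicitly. A hedged sketch of a fix is to issue MakeDirectory for each cumulative path segment in turn, ignoring errors for segments that already exist. The helper below is hypothetical and only shows the path-splitting half; the FTP loop is described in the note after it.

```csharp
using System;
using System.Collections.Generic;

static class FtpPaths
{
    // Expand "httpdocs/images/large/imgFolder" into each parent path in order,
    // so MakeDirectory can be issued for each one.
    public static List<string> CumulativeSegments(string path)
    {
        var result = new List<string>();
        string current = "";
        foreach (string part in path.Trim('/').Split('/'))
        {
            if (part.Length == 0) continue; // skip duplicate slashes
            current = current.Length == 0 ? part : current + "/" + part;
            result.Add(current);
        }
        return result;
    }
}
```

You would then loop over these segments, send a `WebRequestMethods.Ftp.MakeDirectory` request for each, and catch the `WebException` thrown for segments that already exist (checking `FtpWebResponse.StatusCode` to distinguish "already exists" from a real permission failure).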