Problem with deleting a file over FTP - C#

I am using C# to upload files to an FTP server. If the file already existed, the FtpWebRequest timed out, so I thought I would delete it first.
However, the WebRequestMethods.Ftp.DeleteFile request also always times out. Am I doing something wrong?
Here is my code:
FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create(address);
request.Credentials = new NetworkCredential(Username, Password);
request.KeepAlive = false;
request.Method = WebRequestMethods.Ftp.DeleteFile;
try
{
    FtpWebResponse resp = (FtpWebResponse)request.GetResponse();
}
catch (Exception e)
{
    ...
}
EDIT: Oh, and it doesn't matter which file I am trying to delete. As long as the file exists, the request will always time out. If the file doesn't exist, a different exception is thrown.
Nothing is wrong with the credentials; I can do other operations (upload/download) without a problem. It's not a server problem either: if I connect to it with a client (FileZilla) using the same username/password, everything works as it should.
Thank you for your help.

The thing I have found using FTP via FtpWebRequest is that it is inherently a lot slower, since it goes through the general-purpose WebRequest abstraction rather than a dedicated FTP stack, and it drives me crazy because FileZilla (talking native FTP over ports 20/21) can do the same thing a lot quicker. There is an open source FTP component found here; I do not know if it will work for you, but it is worth a shot.
I know this is a subjective answer that may get downvoted, but personally I find FtpWebRequest bound to be a lot slower, especially on file operations like the one you are trying to achieve.

Do you have access to the logs of the FTP server? If so, have a look at what commands the FtpWebRequest is sending. It could be that it is trying to list the directory before deleting the file.
Another issue may be that the server requires passive mode. I believe FileZilla may automagically detect this; check the connection in FileZilla to see.
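If passive mode does turn out to be the issue, FtpWebRequest lets you toggle it explicitly via the UsePassive property. A minimal sketch, with a placeholder host and credentials:

```csharp
using System;
using System.Net;

class DeleteOverFtp
{
    static void Main()
    {
        // Placeholders: substitute your own address and credentials.
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.org/file.txt");
        request.Credentials = new NetworkCredential("username", "password");
        request.Method = WebRequestMethods.Ftp.DeleteFile;
        request.KeepAlive = false;
        request.UsePassive = true; // try false as well; some servers/firewalls only accept one mode

        using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusDescription);
        }
    }
}
```

Trying both UsePassive values quickly narrows down whether the data-connection mode is what is stalling the request.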

Knowing what commands are sent between the client and the FTP server could help find out what is causing the timeout. Would it be possible to use a packet analyzer such as Ethereal (now Wireshark) to capture the communication log?
An alternative approach would be to use a third-party FTP component and enable logging in it. The following code uses our Rebex FTP:
// create client
Ftp client = new Ftp();
// enable logging
client.LogWriter = new Rebex.FileLogWriter(@"c:\temp\log.txt", Rebex.LogLevel.Debug);
// connect
client.Connect("ftp.example.org");
client.Login("username", "password");
// browse directories, transfer files
client.DeleteFile("file.txt");
// disconnect
client.Disconnect();

Related

Unable to read data from the transport connection partway through download of file

I am running into an interesting problem when trying to use the HttpClient GetAsync function. The specified url worked up until this past weekend with no changes on our side, and everything runs fine locally, but when deployed to our test or production servers it fails midway through the download with Unable to read data from the transport connection.
Currently I retrieve the response using
response = await client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead);
and once I have the response message (since it is just a headers read) I then download it to a file on the local file system.
//Write the response to file
using (Stream streamToReadFrom = await httpResponse.Content.ReadAsStreamAsync())
{
using (Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create))
{
await streamToReadFrom.CopyToAsync(streamToWriteTo);
}
}
The issue is, while it is streaming the data it will randomly throw an "Unable to read data from the transport connection: The connection was closed" error. Since the file is being streamed, I can see the contents and size of the file on disk, and it only downloads anywhere from 20-30% of the file. I am able to download the file using a browser (Chrome, Firefox and Edge), Postman, and by running the program locally on my own machine. I am also able to download it via Edge on the test server to the file system, which to me rules out any firewall issue.
I've tried
Changing to HttpCompletionOption.ResponseContentRead, but it fails with the same error, since it tries to get the entire content before writing it to a file.
Added the following config options (the download timeout is 4 hours):
client.DefaultRequestHeaders.Add("Connection", "Keep-Alive");
client.DefaultRequestHeaders.Add("Keep-Alive", DOWNLOAD_TIMEOUT.TotalSeconds.ToString());
client.Timeout = DOWNLOAD_TIMEOUT; //Set the max download wait time for the client
System.Net.ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12; //Accept TLS 1.1 and 1.2
When doing the file stream, I get files that are filled up to 13120KB, 13296KB, 7536KB, etc. The actual size, matching what I download from Postman and the browser, is 50654KB.
Not sure what causes it to fail mid download when all other methods still work (especially on my local box). I don't believe that it is a Tls/Ssl error since it IS able to connect and download part of the file, and I don't believe that it is a firewall issue on the test and production server since I am able to hit that endpoint and download it on the respective servers.
Any help would be appreciated!
EDIT: The thing that makes this unique compared to other SO questions, is that I am able to connect and download ~30% of the file before I get the "Unable to read data from the ..." error. A lot of the other questions have that when trying to connect to the url whereas I am getting it after it successfully connected and started the download of the file.
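One way to cope with a connection that keeps dropping partway through is to retry with an HTTP Range header, so each attempt resumes where the previous one stopped. This is only a sketch: the URL and file path are placeholders, and it assumes the server honors byte-range requests.

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ResumableDownload
{
    static async Task Main()
    {
        // Placeholders: substitute your real endpoint and target path.
        string url = "https://example.org/bigfile.bin";
        string path = @"C:\temp\bigfile.bin";

        using (var client = new HttpClient())
        {
            // If a partial file exists, ask the server for the remaining bytes only.
            long existing = File.Exists(path) ? new FileInfo(path).Length : 0;

            var request = new HttpRequestMessage(HttpMethod.Get, url);
            if (existing > 0)
                request.Headers.Range = new RangeHeaderValue(existing, null); // open-ended range: "bytes=existing-"

            using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
            using (var stream = await response.Content.ReadAsStreamAsync())
            using (var file = new FileStream(path, FileMode.Append))
            {
                await stream.CopyToAsync(file);
            }
        }
    }
}
```

Wrapped in a retry loop, this turns a fatal mid-stream disconnect into a resumable one, provided the server supports ranges (check for an Accept-Ranges: bytes response header).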
If you want to be resilient to infrastructure issues, you could give a hardened HttpClient wrapper a try. Downloader looks quite promising; it manages to resume downloads after interrupted network connections.
await DownloadBuilder.New().WithUrl(@"https://file-examples-com.github.io/uploads/2018/04/file_example_AVI_1920_2_3MG.avi").WithDirectory(@"C:\temp").Build().StartAsync();
Hard to say, but:
it could be some kind of antivirus activity (disable it and repeat the test),
maybe Windows got updated (server / client side)? Is it Windows? :)
check certificates (expiration etc.)?
check this behavior from another machine on the network
disable SSL validation in .NET
use another library like RestSharp and see if the error still exists
enable network sniffing (Fiddler)?
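On the "disable SSL validation" suggestion: this is for diagnosis only, never production. On .NET Core or .NET Framework 4.7.1+, it can be done per-client roughly like this:

```csharp
using System.Net.Http;

class InsecureClientSketch
{
    static HttpClient Create()
    {
        // Diagnostic only: accepts ANY server certificate, which defeats TLS security.
        var handler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true
        };
        return new HttpClient(handler);
    }
}
```

If the error disappears with validation off, the problem is likely certificate-related; fix the certificate chain rather than shipping this.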

FtpWebRequest unable to list files/directories on server .NET Framework [duplicate]

I am having a problem connecting a Windows service to an FTP site.
I inherited a Windows service from another developer. The service connects to a 3rd party server, downloads a csv file and then processes it. For some reason, the service stopped working (well over a year ago, before I was given the project).
So I went back to basics, created a console app and tried the connection/ file download function only in that app. I have tried many different methods to connect to the FTP, but all of them return the same error to my application:
The remote server returned an error: 227 Entering Passive Mode ()
This is one of the many methods I've tried:
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftpaddress/filename.csv");
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential("username", "password");
request.UsePassive = true;
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
StreamReader reader = new StreamReader(responseStream);
Console.WriteLine(reader.ReadToEnd());
Console.WriteLine("Download Complete, status {0}", response.StatusDescription);
reader.Close();
response.Close();
But it falls down on this part:
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
I read in several forums that setting the UsePassive property to False fixes these errors, but all that happened to me was that I got a syntax error instead, as below:
The remote server returned an error: (500) Syntax error, command unrecognized.
The file is hosted on a 3rd party FTP server I have no control over. I can paste the URL into a browser, and I am prompted for a username and password, which then allows me through and I can download the file.
To eliminate our firewall as the cause of the problem, I ran the app on both the internal network and the WiFi (which isn't behind the firewall), and it makes no difference. I also connected through FileZilla in Default, Active and Passive modes, and it worked every time. So no problem there.
So then I ran Wireshark. Here is an image of the wire capture using Filezilla (i.e. a successful one), in Passive mode:
And here is the capture when connecting (and failing) using the app, with passive set to true:
So as you can see in the failed connection above, I can log in to the server just fine. Then for whatever reason an extra request is sent, namely "TYPE I", which prompts the response "Switching to binary mode." Below that, I get the following:
500 oops: vsf_sysutil_recv_peek: no data
In addition, I also ran it again after setting the Passive property to false, and this is what I got that time:
So my question is twofold;
1, if I somehow get past the UsePassive issue and set that property to false, will that solve my problem?
2, ignoring the UsePassive property, why can't I download the file from the app, but can from everywhere else?
The issue is now resolved. It turned out to be Kaspersky's built-in firewall that was blocking the connection. It's annoying that it didn't present me with a warning when I tried to connect, but reassuring to know my PC is safe.
The clue was in the detail of the 227 return:
10051 – A socket operation was attempted to an unreachable network
Also, for anyone reaching this via Google etc., the remote server was configured to only allow passive connections, which is why I was getting the 500 syntax error. Studying a Wireshark capture when downloading a file revealed that FileZilla actually reverts to passive mode automatically if Active is selected but fails.
The code in my original post works fine now.

SshConnectionException from SshNet in C#

My script is trying to connect to the Unix server to download a file, but it is getting the error below.
Renci.SshNet.Common.SshConnectionException : Client not connected.
I can connect properly to that server from WinScp by using the same credentials.
Not sure what's going wrong here. Any idea/pointer ?
Code
using (var client = new ScpClient(Config.UnixServer, Config.UnixUsername, Config.UnixPassword))
{
client.Connect();
client.Upload(new FileInfo(fileUpload), fileName);
client.Disconnect();
}
Error
Renci.SshNet.Common.SshConnectionException : Client not connected.
at Renci.SshNet.Session.WaitOnHandle(WaitHandle waitHandle)
at Renci.SshNet.Session.Connect()
at Renci.SshNet.BaseClient.Connect()
WinSCP Session Log
The session log shows that WinSCP is using the SFTP protocol (WinSCP supports both the SCP and SFTP protocols). Not all SFTP servers will accept SCP connections. Switch to the SftpClient class and use the UploadFile method.
I also suspect you meant to call OpenRead() on your FileInfo instance to get a stream.
using (var client = new SftpClient(Config.UnixServer, Config.UnixUsername, Config.UnixPassword))
{
client.Connect();
client.UploadFile(new FileInfo(fileUpload).OpenRead(), fileName);
client.Disconnect();
}
Maybe that helps?
Maybe something to do with Unix paths? Check client.RemotePathTransformation = RemotePathTransformation.ShellQuote;
Have a quick read here:
By default, SSH.NET applies a simple quoting mechanism for remote paths in ScpClient. More precisely, any remote path is enclosed in double quotes before it is passed to the scp command on the remote host.
Perhaps the trust isn't there? In the original logs, the OP has this:
2017-09-14 09:10:17.495 Host key matches cached key.
The GitHub page https://github.com/sshnet/SSH.NET has information about how to establish an expected fingerprint.
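With SSH.NET, pinning an expected fingerprint can be done via the HostKeyReceived event. A sketch, where the host, credentials, and fingerprint bytes are all placeholders:

```csharp
using System.Linq;
using Renci.SshNet;

class HostKeyPinSketch
{
    static SftpClient Create()
    {
        // Placeholder: replace with the MD5 fingerprint you expect from the server.
        byte[] expectedFingerprint = { 0x12, 0x34, 0x56, 0x78 };

        var client = new SftpClient("host", "username", "password");
        client.HostKeyReceived += (sender, e) =>
        {
            // Only trust the server if its key fingerprint matches the pinned value.
            e.CanTrust = e.FingerPrint.SequenceEqual(expectedFingerprint);
        };
        return client;
    }
}
```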
The other issue I run into is when a server has a whitelist of IP addresses and the server is on it, but my test environment is not.
@Lee, the log from the application performing the connection is most helpful in figuring out these kinds of things, as Martin has pointed out.
Updating the Renci reference lib solved the issue.

Trying to create a directory on another machine, getting exceptions

Here is my code:
FtpWebRequest reqFTP = null;
Stream ftpStream = null;
string currentDir = string.Format("ftp://{0}", "10.10.10.46/E:/SERVER");
//string currentDir=string.Format("ftp://{0}","10.10.10.21/var/www/webdav/SERVER");
reqFTP = (FtpWebRequest)FtpWebRequest.Create(currentDir);
reqFTP.Method = WebRequestMethods.Ftp.MakeDirectory;
reqFTP.UseBinary = true;
reqFTP.Credentials = new NetworkCredential("core", "c0relynx");
FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse();
ftpStream = response.GetResponseStream();
ftpStream.Close();
response.Close();
Whenever I execute this code I get the exception "System error".
If I use the url "10.10.10.21/var/www/webdav/SERVER" I get the exception "The remote server returned an error: (530) Not logged in".
10.10.10.46 is a windows PC and 10.10.10.21 has Ubuntu.
Can anyone tell me where I have gone wrong and what I must do to resolve this?
Your code itself is not the problem. It's not how I generally do things, but it does work for me on my test rig. The problem is most likely related to the server configuration and/or credentials.
The (530) Not logged in error is almost always because your credentials are wrong. Check that the username and password are correct for the Ubuntu server by using a standard FTP client like Filezilla or even the Windows command-line ftp client.
As for the other message, it is most likely a generic 500 response returned by the other server. There are too many possible reasons for this and not enough information.
The only thing I can suggest is again to try the operation with the same credentials in some other FTP client to see what is actually happening. Unless you're keen to use a packet trace, and know enough about the FTP protocol to understand what the packets are telling you.
At this point you're going to have to do some diagnostics on the FTP server, which is outside the scope of Stack Overflow. The superuser site is better suited for this.
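As a starting point for that diagnosis, catching the WebException and reading the FTP status from the failed response usually gives more detail than a bare "System error". A sketch, where the host, path, and credentials are placeholders (note that the path in an FTP URL is relative to the FTP root, so a Windows drive letter like E:/ is generally not valid there):

```csharp
using System;
using System.Net;

class MakeDirectorySketch
{
    static void Main()
    {
        // Placeholders: substitute your own host, path, and credentials.
        var request = (FtpWebRequest)WebRequest.Create("ftp://10.10.10.46/SERVER");
        request.Method = WebRequestMethods.Ftp.MakeDirectory;
        request.Credentials = new NetworkCredential("username", "password");

        try
        {
            using (var response = (FtpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Created: {0}", response.StatusDescription);
            }
        }
        catch (WebException ex)
        {
            // The server's actual FTP reply is attached to the exception.
            var failed = ex.Response as FtpWebResponse;
            Console.WriteLine("Failed: {0}", failed != null ? failed.StatusDescription : ex.Message);
        }
    }
}
```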

HttpWebRequest not returning, connection closing

I have a web application that is polling a web service on another server. The server is located on the same network, and is referenced by an internal IP, running on port 8080.
Every 15 seconds, a request is sent out, which receives an XML response with job information. 95% of the time this works well; however, at random times the response from the server is null, and it reports a "response forcibly closed by remote host" error.
Researching this issue, others have set KeepAlive = false. This has not solved the issue. The web server is running .NET 3.5 SP1.
Uri serverPath = new Uri(_Url);
// create the request and set the login credentials
_Req = (HttpWebRequest)WebRequest.Create(serverPath);
_Req.KeepAlive = false;
_Req.Credentials = new NetworkCredential(username, password);
_Req.Method = this._Method;
Call to the response:
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
_ResponseStream = response.GetResponseStream();
The method for this is GET. I tried changing the timeout, but the default is large enough to take this into account.
The other request we perform is a POST to post data to the server, and we are getting the same issue randomly as well. There are no firewalls affecting this, and we ruled out the Virus scanner. Any ideas to help solving this is greatly appreciated!
Are you closing the response stream and disposing of the response itself? That's the most frequent cause of "hangs" with WebRequest - there's a limit to how many connections you can open to the same machine at the same time. The GC will finalize the connections eventually, but if you dispose them properly it's not a problem.
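The disposal pattern being suggested looks roughly like this (the URL is a placeholder); the using blocks guarantee the connection is released back to the pool even if reading throws:

```csharp
using System;
using System.IO;
using System.Net;

class DisposalPatternSketch
{
    static void Main()
    {
        // Placeholder URL: substitute the service endpoint you are polling.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.org/service");

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (StreamReader reader = new StreamReader(stream))
        {
            string body = reader.ReadToEnd();
            Console.WriteLine(body.Length);
        } // response, stream, and reader are all disposed here, freeing the connection
    }
}
```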
I wouldn't rule out network issues as a possible reason for problems. Have you run a ping to your server to see if you get dropped packets that correspond to the same times as your failed requests?
Set the Timeout property of the FtpWebRequest object to the maximum; I tried it with a 4 GB file and it worked great.
