I observed that one of my Windows Services was not connecting to an FTP location on a Unix server. I ran the executable on my PC because the developer didn't log any errors, and I was getting a timeout error when trying to get a response from an FtpWebRequest object in C#.
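For reference, the kind of FTPS request involved looks roughly like this (a sketch only, not the actual service code; the URI, path and credentials are placeholders):

using System;
using System.Net;

var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/outbound/file.txt");
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential("user", "password");  // placeholders
request.EnableSsl = true;   // explicit FTPS (FTP over TLS), not to be confused with SFTP
request.Timeout = 30000;    // milliseconds
using (var response = (FtpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusDescription);
}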
On trying to access the FTP location using FileZilla, I get the error
GnuTLS error -110: The TLS connection was non-properly terminated.
Using SFTP does not give this error, and neither does using FTP in plain text (insecure).
I really do not understand this; note that the application has been running fine for years and this suddenly started happening on about 4 servers.
GnuTLS error -110: The TLS connection was non-properly terminated.
That means the peer simply closed the socket without doing a proper TLS shutdown. Some broken clients or servers do this. Assuming this message relates to a data transfer, you can usually ignore it, because the transfer had finished anyway, so no data was lost.
Using SFTP does not give this error, and neither does using FTP in plain text (insecure).
Of course you don't get it there: SFTP uses the SSH protocol instead of TLS, and plain FTP does no encryption at all, so no TLS either. If there is no TLS involved, you cannot get any TLS errors.
I really do not understand this; note that the application has been running fine for years and this suddenly started happening on about 4 servers.
It might simply be that the servers changed, i.e. either they never supported FTPS (i.e. FTP with TLS, not to be confused with SFTP) before or they now switched to a broken implementation.
If you are connecting to a cPanel server then you can temporarily fix this issue by enabling "Broken Client Compatibility" in Pure-FTP settings in WHM.
An older version of FileZilla may be responsible for the said error. I faced the same error on version 3.4; the issue was resolved after downloading version 3.6.
I'm accessing the Dynamics 365 Business Central OData API from a C# application. Accessing data from my local system works fine, but when we deployed the application to the client's server, they randomly (about 50% of the time) get the error "The server committed a protocol violation. Section=ResponseStatusLine".
I have checked the article https://techcommunity.microsoft.com/t5/iis-support-blog/protocol-violation-section-responsestatusline/ba-p/1227792 and applied all 3 suggestions, but none of them worked.
I'm able to access the client's Business Central API integration application from my local system without any issues.
I assume this is a client firewall or load balancer issue, but I am not able to find a solution.
There are many possible causes behind this issue, and most of the workarounds avoid the problem rather than actually fix it. One of the most common causes of this error is corrupt or missing headers in the request.
Solutions:
The server responds to Expect: 100-continue in an incorrect way. Set Expect100Continue to false and reduce the socket idle time to two seconds:
HttpRequestObj.ServicePoint.Expect100Continue = false;       // stop sending the Expect: 100-continue header
HttpRequestObj.ServicePoint.MaxServicePointIdleTime = 2000;  // idle time in milliseconds (2 seconds)
Ignore corrupted/missing headers (see the config sketch after this list). Ref: Link
Stop Skype if it is running on the machine.
If the webserver uses a UTF-8 encoding that outputs the byte-order mark (BOM), this error can occur. For example, the default Encoding.UTF8 instance outputs the BOM, and it is easy to forget this. The webpages will work correctly in Firefox and Chrome, but HttpWebRequest will choke on them. A quick fix is to change the webserver to use a UTF-8 encoding that doesn't output the BOM (see the sketch after this list).
Check the endpoint of the server you are requesting, e.g. https vs. http.
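The usual way to ignore corrupted or missing headers in .NET Framework (second item above) is the useUnsafeHeaderParsing switch in the application's configuration file. A sketch of just the relevant section:

<configuration>
  <system.net>
    <settings>
      <!-- tells HttpWebRequest to tolerate responses that violate the HTTP spec -->
      <httpWebRequest useUnsafeHeaderParsing="true" />
    </settings>
  </system.net>
</configuration>

For the BOM case, the fix comes down to which UTF8Encoding instance the server-side code writes the response with. A minimal C# sketch of the difference:

using System.Text;

// Encoding.UTF8 emits a byte-order mark; a UTF8Encoding constructed with
// encoderShouldEmitUTF8Identifier set to false does not.
byte[] withBom = Encoding.UTF8.GetPreamble();              // EF BB BF
byte[] withoutBom = new UTF8Encoding(false).GetPreamble(); // empty array

// Write the response with the BOM-less encoding so HttpWebRequest clients
// don't trip over the three preamble bytes.
byte[] payload = new UTF8Encoding(false).GetBytes("<html>...</html>");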
I am having issues uploading files from an ASP.NET server to our CDN (Akamai) using Renci SSH.NET SFTP. Before the launch that started the issues we had been using FTP to upload media, and anything over ~50MB would start to have 502 Bad Gateway issues. Since we wanted to switch to SFTP for security anyway, we swapped out our FTP code for SFTP to see if the problem persisted. In our dev and QA environments this seemed to fix that issue, but admittedly with much lower traffic than our prod server. Once in production, we had around 3k uploads, and about 30% of them failed with the following error, originating in Renci SSH.NET:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
The interesting thing is this only happens when the code is being executed by ASP.NET. I made a console application that calls the Services.dll, provides the path to the media files, and provides a decent-sized list of test media ranging in size from 800 KB to 187 MB. When running on the server as a console application, not a single error was thrown. Therefore I think the issue is somewhere in ASP.NET, IIS, or some server setting I am not aware of.
Our firewall is set up to allow communication to Akamai through port 22 for SSH, which I had my team verify for each of the load-balanced servers. Since the console application runs fine I don't think this is the issue, but it is worth mentioning that I looked into it.
The admin panel that is called to upload media items is on an aspx page which I'll call "upload.aspx" for now. In the web.config we extended the execution timeout for upload.aspx like this:
<location path="path/to/upload.aspx">
  <system.web>
    <httpRuntime maxRequestLength="204800" executionTimeout="2700" />
  </system.web>
</location>
Here is the code making the connection
using (var client = new SftpClient(GetConnectionInfo()))
{
    client.ConnectionInfo.Timeout = TimeSpan.FromMinutes(5);

    // This is the line that is hit before the error.
    // It attempts to connect for about 20 seconds and throws an error.
    // Happens with or without ConnectionInfo.Timeout being set.
    client.Connect();

    // perform logic to upload file, then disconnect
    client.Disconnect();
}
One last note: uploading any given file is significantly slower than it was with FTP. I've heard of this happening for other companies and am not surprised by it, but it may help in troubleshooting the issue.
Does anyone have any ideas what could be causing this?
You need to change the TCP Max SYN Retransmissions setting of the operating system; with the Windows default, a failed connection attempt gives up after roughly 21 seconds, which matches the ~20 seconds you are seeing. With client.ConnectionInfo.Timeout you can only lower the timeout below that OS limit, not raise it.
More info here: https://github.com/sshnet/SSH.NET/issues/183#issuecomment-299283600
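A rough sketch of working within that limit with SSH.NET: cap the connect timeout well below the OS default so a stalled handshake surfaces quickly, and retry a couple of times. The retry policy is only a suggestion (it is not taken from the linked issue), and GetConnectionInfo() and the upload logic are the question's own code.

using System;
using System.Net.Sockets;
using System.Threading;
using Renci.SshNet;
using Renci.SshNet.Common;

const int maxAttempts = 3;
for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        using (var client = new SftpClient(GetConnectionInfo()))
        {
            // Fail fast instead of waiting out the ~21 second OS connect timeout.
            client.ConnectionInfo.Timeout = TimeSpan.FromSeconds(10);
            client.Connect();
            // ... upload the file ...
            client.Disconnect();
            break;
        }
    }
    catch (Exception ex)
    {
        // The reported error text is a SocketException message; SSH.NET may also
        // surface it as an SshConnectionException. Anything else is rethrown.
        if (!(ex is SocketException || ex is SshConnectionException) || attempt == maxAttempts)
            throw;
        Thread.Sleep(TimeSpan.FromSeconds(2));  // brief back-off before retrying
    }
}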
While calling a WCF service via kSOAP2 from Android:
htp.call(SOAP_ACTION, soapEnvelop);
I am getting this exception.
java.net.ConnectException: failed to connect to /10.0.2.2 (port 52442) after 20000ms: connect failed: ENETUNREACH (Network is unreachable)
My code was working fine until last night, but now it's not.
Thanks in advance
It's obviously not a router/firewall-related problem, since you are on the same network, so there are only three possibilities:
There's nothing listening on that port on that IP
There's a local firewall on that machine that is blocking that connection attempt
You are not using Wi-Fi, so you're not on the same network.
Can you open that URL manually from the browser on your computer? If yes, I'd suggest using a debugging tool to trace TCP packets (I don't know what operating system you use on the destination machine either; if it's a Linux distribution, tcpdump might help).
All that assuming you have the android.permission.INTERNET permission in your AndroidManifest.xml file.
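For reference, that permission is declared in AndroidManifest.xml like this:

<uses-permission android:name="android.permission.INTERNET" />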
I have a project that connects to an RMS file system through Attunity (version 1.0.0.8). The RMS file is on a different server. The connection pool size on both the client and the service is 10 (max connection pool size). When we hit the server from the client, we sometimes get the error:
C014: Client connection limit reached - try later.
I would like to understand whether this error is related to the server being overloaded or to some issue on the client side. I am sure that the client code I am using to connect to the server properly opens and disposes the connection.
This sounds like a problem we were having. We were running Attunity on OpenVMS and were maxing out the number of DECnet connections to Attunity between our nodes; the underlying problem was with our clients. The clients would hold a long-standing transaction or had problems releasing their connections back to the pool. We fixed the issue by eliminating the long-standing transactions and then finding the bug where the clients would not release their connection. Unfortunately, all of our clients are implemented in Java and Cobol, so I don't have any .NET-specific advice.
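For what it's worth, the same fix translates to .NET: make sure every connection is disposed so it goes back to the pool, and don't let a transaction outlive its unit of work. A generic sketch, assuming the RMS source is reached through an ADO.NET provider (OLE DB is used here purely as an example; the connection string and query are placeholders, not real Attunity values):

using System.Data.OleDb;

var connectionString = "Provider=...;Data Source=...";  // placeholder
using (var connection = new OleDbConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        using (var command = new OleDbCommand("SELECT ...", connection, transaction))
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // process the row
            }
        }                      // reader and command are closed here
        transaction.Commit();  // commit promptly; don't leave the transaction standing
    }
}                              // Dispose() returns the connection to the pool, even on exceptions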
Firstly, there are a number of other similar questions and I have read them all; they describe a similar problem, but they differ from mine:
In a WPF client application I am making several HttpWebRequests to various websites. On one particular machine and for one particular website I receive the following error, however only intermittently, approximately once every 4 or 5 requests:
System.Net.WebException
The underlying connection was closed: An unexpected error occurred on a receive.
at System.Net.HttpWebRequest.GetResponse()
and the status on the Request is ReceiveFailure.
Here's what I've tried:
Changing the timeout to be very high, turning off keep-alive, and using HTTP 1.0 instead of 1.1, none of which changed the problem
Running through a proxy to see what was going on, but for some strange reason the problem doesn't happen when I run the exact same code through a proxy (by setting the proxy object on the HttpWebRequest)
I can access the site with no problem in IE and Chrome
Tried another machine inside my home network and had no problems; similarly with another machine outside of the network.
This problem started after upgrading this machine to Win7; however, after going back to WinXP the problem is still occurring.
Any help is greatly appreciated. All of the other posts were for people talking to their own web services, where it was an ASP configuration problem or something like that. The site I am making requests to is an ASP site, but I don't believe it's a problem with the 3rd-party site, as this exact same code works on other machines, just not this particular one.
A ReceiveFailure status means that your application started receiving the response but the connection was closed before the complete message was received for some reason.
Since you've already changed the timeout etc. with no success, the problem might be that the server is closing the response stream before the response is complete for some reason. Being isolated to a specific machine might indicate a hardware issue or a configuration problem on that particular machine. The fact that it only happens for one website from that machine points to configuration rather than hardware. Is there something different on that machine? Updates? IIS configuration?
It might be working through IE, Chrome and the proxy because those have some error handling already built in (IE and Chrome would definitely retry failed requests, and I assume the proxy would too). You may have to build some of this error handling into your own code.
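Along those lines, a rough sketch of what retry handling around HttpWebRequest could look like (a suggestion only, not the poster's code; the URL is a placeholder, and the keep-alive/HTTP 1.0 settings are the ones the question already mentions trying):

using System;
using System.IO;
using System.Net;
using System.Threading;

string GetWithRetry(string url, int maxAttempts)
{
    for (var attempt = 1; ; attempt++)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.KeepAlive = false;                        // avoid reusing a half-closed connection
        request.ProtocolVersion = HttpVersion.Version10;
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
        catch (WebException ex)
        {
            // Only retry the intermittent receive failure; rethrow anything else.
            if (ex.Status != WebExceptionStatus.ReceiveFailure || attempt == maxAttempts)
                throw;
            Thread.Sleep(TimeSpan.FromSeconds(1));  // roughly what a browser or proxy retry does for you
        }
    }
}

// usage: var html = GetWithRetry("http://thirdparty.example.com/page.aspx", 3);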