We have been using Cloudant NoSQL on the IBM Cloud for some time and have been extremely happy with the speed, simplicity and reliability. BUT a few weeks ago our front-end server, which stores data in a Cloudant database, started periodically logging exceptions: "The remote name could not be resolved: '[A unique ID]-bluemix.cloudant.com'" at System.Net.HttpWebRequest.EndGetRequestStream.
I added a DNS lookup when the error occurs which logs: "This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server" at System.Net.Dns.GetAddrInfo(String name).
This relaxed error message suggests the failure is harmless, but for us it is not.
We see the error for 1-3 minutes every 30-120 minutes on servers but not while debugging locally (this could be lack of patience and/or traffic).
Below is one of the seven methods that fail:
using (HttpClientHandler handler = new HttpClientHandler())
{
    handler.Credentials = new NetworkCredential(Configuration.Cloudant.ApiKey, Configuration.Cloudant.ApiPassword);
    using (var client = new HttpClient(handler))
    {
        var uri = new Uri(Configuration.Cloudant.Url); // {https://[A unique ID]-bluemix.cloudant.com/[Our Product]/_find}
        var stringContent = new StringContent(QueryFromResource(),
                                              UnicodeEncoding.UTF8,
                                              "application/json");
        var task = TaskEx.Run(async () => await client.PostAsync(uri, stringContent));
        task.Wait(); // <------ Exception here
        if (task.Result.StatusCode == HttpStatusCode.OK)
        {
            // Handle response deleted
        }
    }
}
We have updated our .NET Framework, experimented with DnsRefreshTimeout, refactored code and extended caching, but we keep seeing the issue.
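For reference, the DnsRefreshTimeout experiment was along these lines (a minimal sketch; the 60-second value is an arbitrary choice for illustration, not a recommendation):

```csharp
using System;
using System.Net;

class DnsSettings
{
    static void Main()
    {
        // DnsRefreshTimeout controls how long .NET caches a resolved
        // DNS entry; the default is 120000 ms (2 minutes).
        Console.WriteLine(ServicePointManager.DnsRefreshTimeout);

        // Lowering it forces more frequent re-resolution, which we hoped
        // would help when the authoritative server answers only intermittently.
        ServicePointManager.DnsRefreshTimeout = 60000; // 1 minute
    }
}
```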
We also added a DNS lookup to Google when the error occurs and this is consistently successful.
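The diagnostic lookup described above is roughly the following (a sketch; the hostnames are placeholders for our actual Cloudant host and the Google control host):

```csharp
using System;
using System.Net;

static class DnsDiagnostics
{
    // Hypothetical hostnames; substitute your own Cloudant host.
    const string CloudantHost = "example-bluemix.cloudant.com";
    const string ControlHost = "www.google.com";

    // Called from the catch block when the HTTP request fails.
    public static void LogDnsState()
    {
        TryResolve(CloudantHost); // fails intermittently in our case
        TryResolve(ControlHost);  // consistently succeeds
    }

    static void TryResolve(string host)
    {
        try
        {
            var entry = Dns.GetHostEntry(host);
            Console.WriteLine("{0} -> {1} address(es)", host, entry.AddressList.Length);
        }
        catch (Exception ex)
        {
            Console.WriteLine("{0} failed: {1}", host, ex.Message);
        }
    }
}
```

Comparing the two results when the error fires is what tells us the failure is specific to the Cloudant hostname rather than to DNS in general.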
Initially we thought load might be an issue, but we see the error even when there is no traffic.
Suggestions are much appreciated!
EDIT - Due to probable misinterpretation:
This is not about the server side of HTTP/2 - it's about a client HTTP/2 request from an older server OS. Also, I got it to work using Python (gobiko.apns), so it seems to me it should be possible.
EDIT 2
It seems this question does not have much to do with HTTP/2, but rather with the cipher required by Apple. TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 is not supported by SslStream on versions before Win10. However, since it can be done using Python, it seems to me that it should be possible. Any help would be appreciated.
We found some code here and there to get our connection to APNS working in our development environment. We are using the .p8 certificate and sign a token as authorization (not the 'old' interface).
This works on my dev PC (Win10), but when I transfer it to a Server 2008 R2 machine it gives a weird error. It seems to have to do with the setup of the TLS connection; however, I'm not too familiar with that area. I really searched, but the only thing I can come up with is that Server 2008 R2 will not support it due to ciphers or something (which seems unreasonable to me).
The code that is working from my PC (using the NuGet packages HttpTwo and Newtonsoft.Json):
public static async void Send2(string jwt, string deviceToken)
{
    var uri = new Uri($"https://{host}:443/3/device/{deviceToken}");
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
    ServicePointManager.ServerCertificateValidationCallback = delegate
    {
        Console.WriteLine("ServerCertificateValidationCallback");
        return true;
    };
    string payloadData = Newtonsoft.Json.JsonConvert.SerializeObject(new
    {
        aps = new
        {
            alert = new
            {
                title = "hi",
                body = "works"
            }
        }
    });
    // Payload data always in UTF8 encoding
    byte[] data = System.Text.Encoding.UTF8.GetBytes(payloadData);
    var httpClient = new Http2Client(uri);
    var headers = new NameValueCollection();
    headers.Add("authorization", string.Format("bearer {0}", jwt));
    headers.Add("apns-id", Guid.NewGuid().ToString());
    headers.Add("apns-expiration", "0");
    headers.Add("apns-priority", "10");
    headers.Add("apns-topic", bundleId);
    try
    {
        var responseMessage = await httpClient.Send(uri, HttpMethod.Post, headers, data);
        if (responseMessage.Status == System.Net.HttpStatusCode.OK)
        {
            Console.WriteLine("Send Success");
            return;
        }
        else
        {
            Console.WriteLine("failure {0}", responseMessage.Status);
            return;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("ex");
        Console.WriteLine(ex.ToString());
        return;
    }
}
is throwing
System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception. ---> System.ComponentModel.Win32Exception: The message received was unexpected or badly formatted
from server 2008R2.
I also tried it with a WinHttpHandler, which also works from my PC, but throws
System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.Http.WinHttpException: A security error occurred
Stack traces are mostly async machinery, but it boils down to HttpTwo.Http2Connection.<Connect> for the HttpTwo implementation and System.Net.Http.WinHttpHandler.<StartRequest> for the WinHttpHandler.
Is there something I have to add to the server in order to make this work / will we get it to work?
UPDATE
I included the source files from HttpTwo in my project and debugged it. The exception occurs on
await sslStream.AuthenticateAsClientAsync(
    ConnectionSettings.Host,
    ConnectionSettings.Certificates ?? new X509CertificateCollection(),
    System.Security.Authentication.SslProtocols.Tls12,
    false).ConfigureAwait(false);
on my Win8 test PC. Now, when I use the method overload with only the host argument on my own PC, it throws the same exception - I guess because the explicit TLS protocol selection is off then.
According to this GitHub issue it could have to do with the ciphers. I had some problems before in that area, but it seems to me that at least a Win8 PC must be able to agree on secure enough ciphers, right?
Schannel is complaining about "A fatal alert was received from the remote endpoint. The TLS protocol defined fatal alert code is 40." (alert 40 is handshake_failure), so that also points in that direction, AFAIK.
It's the cipher... The ciphers Apple uses are not included in Schannel below Win10/Server 2016.
(see Edit 2)
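To verify the cipher theory on a given machine, a raw SslStream handshake against the APNs host can be attempted and the negotiated cipher logged (a sketch; api.push.apple.com is the production APNs endpoint, and the call is expected to throw on OSes whose Schannel lacks the ECDHE-GCM suites Apple requires):

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;

class CipherCheck
{
    static void Main()
    {
        const string host = "api.push.apple.com"; // production APNs endpoint
        using (var tcp = new TcpClient(host, 443))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            // Throws AuthenticationException when no common cipher
            // suite can be negotiated with the server.
            ssl.AuthenticateAsClient(host, null, SslProtocols.Tls12, false);
            Console.WriteLine("Cipher: {0}, strength: {1}",
                ssl.CipherAlgorithm, ssl.CipherStrength);
        }
    }
}
```

Running this on Win10 versus Server 2008 R2 should make the difference visible without any APNs-specific code involved.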
I have a C# app and I am accessing some data over REST so I pass in a URL to get a JSON payload back. I access a few different URLs programmatically and they all work fine using this code below except one call.
Here is my code:
var url = "http://theRESTURL.com/rest/API/myRequest";
var results = GetHTTPClient().GetStringAsync(url).Result;
var restResponse = new RestSharp.RestResponse();
restResponse.Content = results;
var _deserializer = new JsonDeserializer();
where GetHTTPClient() is using this code below:
private HttpClient GetHTTPClient()
{
    var httpClient = new HttpClient(new HttpClientHandler()
    {
        Credentials = new System.Net.NetworkCredential("usr", "pwd"),
        UseDefaultCredentials = false,
        UseProxy = true,
        Proxy = new WebProxy(new Uri("http://myproxy.com:8080")),
        AllowAutoRedirect = false
    });
    httpClient.Timeout = new TimeSpan(0, 0, 3500);
    return httpClient;
}
So as I said, the above code works fine for a bunch of different requests, but for one particular request I get an exception inside the
.GetStringAsync(url).Result
call with the error:
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host
I get that error after waiting for about 10 minutes. What is interesting is that if I put the same failing URL into Internet Explorer directly, I do get the JSON payload back (after about 10 minutes as well). So I am confused why:
It works fine directly from the browser but fails when using the code above.
It fails on this one request while other requests using the same code work fine programmatically.
Any suggestions for things to try or things I should ask the owner of the server to check out on their end to help diagnose what is going on?
I think the timeout is not an issue here, as the error states that the connection was closed remotely, and the timeout you set is about 58 minutes - more than enough compared to your other figures.
Have you tried looking at the requests themselves? You might want to edit your question with those results.
If you remove the line httpClient.Timeout = new TimeSpan(0,0,3500); the issue should be solved, but if the request genuinely takes 20 minutes you will still have to wait the whole time.
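For clarity, the TimeSpan(hours, minutes, seconds) constructor makes the timeout line above mean 3500 seconds, roughly 58 minutes; a small sketch of the values involved (the alternatives shown are illustrative, not a recommendation):

```csharp
using System;
using System.Net.Http;
using System.Threading;

class TimeoutExamples
{
    static void Main()
    {
        // (hours, minutes, seconds) => 3500 seconds
        Console.WriteLine(new TimeSpan(0, 0, 3500)); // 00:58:20

        var client = new HttpClient();
        // The default HttpClient timeout is 100 seconds.
        Console.WriteLine(client.Timeout); // 00:01:40

        // If you really do want to wait indefinitely, say so explicitly
        // rather than picking a large arbitrary number:
        // client.Timeout = Timeout.InfiniteTimeSpan;
    }
}
```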
I am constantly getting NameResolutionFailure errors when I make web requests (GET) in my MVVM Cross Android app.
I've tried much of the advice provided in forums on this issue, as it's a common one, but I have been unable to fix it.
My current attempt uses the NuGet package ModernHttpClient to perform web requests. The same error occurs - the domain cannot be resolved to an IP - however the error message is slightly different from what I was getting with HttpWebRequest, so I guess that's a slight improvement?
java.net.UnknownHostException: Unable to resolve host "jsonplaceholder.typicode.com": No address associated with hostname
Can you provide advice on why this is always failing? Maybe it's my method that's not truly utilising ModernHttpClient?
The following code is part of my IRestService class located in the Core PCL and not in the Android project.
public async Task MakeRequest(WebHeaderCollection headers = null)
{
    var handler = new NativeMessageHandler();
    string requestUrl = "http://jsonplaceholder.typicode.com/posts/1";
    try
    {
        using (var client = new HttpClient(handler))
        {
            //client.BaseAddress = new Uri(baseAddress);
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            var response = await client.GetAsync(requestUrl);
            if (response.IsSuccessStatusCode)
            {
                var stream = await response.Content.ReadAsStreamAsync();
                var reader = new StreamReader(stream);
                Mvx.Trace("Stream: {0}", reader.ReadToEnd());
            }
        }
    }
    catch (WebException ex)
    {
        Mvx.Trace("MakeRequest Error: '{0}'", ex.Message);
    }
    return;
}
PS: I have also attempted to use Cheesebaron's ModernHttpClient MvvmCross plugin, but it causes compile errors in release mode and there is no documentation about what methods and classes it has - maybe it's not supported anymore?
PPS: And yes, my manifest has Internet permission (I checked the options and the actual manifest file to confirm).
The ModernHttpClient plugin for MvvmCross is not needed anymore, so don't use it.
So since you have Internet permission set in the AndroidManifest, the problem is something else. I've experienced on some Android devices that the first call to some Internet resource fails with the same error you get. The way I've usually worked around that is to retry the call.
There are various ways to do so.
You can create your own HttpClientHandler which wraps the one coming from ModernHttpClient and create your own retry handling in there.
You can retry using a library such as Polly
I tend to do the latter. So if you add the Polly NuGet you can pretty quickly test out if this solves the problem for you:
var policy = Policy.WaitAndRetryAsync(
    5,
    retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
    (ex, span) =>
    {
        Mvx.Trace("Retried because of {0}", ex);
    });
Then retry your task like:
await policy.ExecuteAsync(() => MakeRequest(someHeaders)).ConfigureAwait(false);
Usually on the second try the exception goes away.
I've seen this problem on a Nexus 5, a Nexus 7 and a Samsung Galaxy SII, but not on other devices. It might also help to toggle the WiFi on the device prior to debugging.
In any case, your app should have some kind of retry logic, as Internet connections can be spotty at times.
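The first option mentioned above, a handler with built-in retry, could be sketched as a DelegatingHandler wrapping the ModernHttpClient handler (a sketch only; the retry count and backoff are arbitrary choices, and re-sending is only safe here because the request has no body):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Retries transient failures (e.g. the first-call NameResolutionFailure)
// a fixed number of times with an increasing delay between attempts.
public class RetryHandler : DelegatingHandler
{
    const int MaxAttempts = 3;

    public RetryHandler(HttpMessageHandler inner) : base(inner) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await base.SendAsync(request, cancellationToken);
            }
            catch (HttpRequestException) when (attempt < MaxAttempts)
            {
                await Task.Delay(TimeSpan.FromSeconds(attempt), cancellationToken);
            }
        }
    }
}

// Usage (NativeMessageHandler comes from ModernHttpClient):
// var client = new HttpClient(new RetryHandler(new NativeMessageHandler()));
```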
I have a Windows Phone 8 app containing code as follows:
using (var client = new HttpClient())
{
    var httpRequest = new HttpRequestMessage(method, uri);
    try
    {
        var response = client.SendAsync(httpRequest);
        var httpResponse = await response;
        if (httpResponse.IsSuccessStatusCode)
        {
            var result = await httpResponse.Content.ReadAsStringAsync();
            return result;
        }
        else
        {
            HandleError(httpResponse);
            return null;
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
If the client successfully connects to the server, I will get the expected results, including all the appropriate HTTP status codes and reason phrases.
If the client is unable to contact the server (e.g. incorrect domain/IP/port in the URL), the awaited task completes after some delay, with a 404 status and no reason phrase. The task never throws an exception. The task (the response variable in the code snippet) has the status of "ran to completion". Nothing about the result is indicative of the actual problem - be it networking failure, unreachable server, etc.
How can I capture more meaningful errors in the case where the URL points to a non-existent or unreachable server, socket connection refused, etc.? Shouldn't SendAsync be throwing specific exceptions for such cases?
FWIW, the client code is built into a PCL using VS 2013 Update 3 and runs in the Windows Phone emulator, with System.Net.Http 2.2.28 from NuGet.
Windows Phone's implementation of the .NET HttpClient differs from the desktop implementation (WinINet vs. a custom stack). This is one of the incompatibilities you need to be aware of, if the semantic difference matters to you.
In my simple test I do get a ReasonPhrase of "Not Found" on the phone emulator. Also you can see that the Headers collection is empty whereas if the server was found and actually returned a real 404 then there would be headers.
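Building on that observation, one hedged workaround is to treat a 404 with an empty header collection as "host unreachable" rather than a genuine Not Found (a sketch of the heuristic only, not an official API guarantee):

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;

static class WpHttpHeuristics
{
    // On the WP8 stack an unreachable host surfaces as a 404 with no
    // headers, while a real 404 from a live server carries response
    // headers (Date, Server, etc.).
    public static bool LooksLikeUnreachableHost(HttpResponseMessage response)
    {
        return response.StatusCode == HttpStatusCode.NotFound
            && !response.Headers.Any();
    }
}
```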
When using the System.Net.WebClient.DownloadData() method, I'm getting an unreasonably slow response time.
When fetching a URL using the WebClient class in .NET, it takes around 10 seconds before I get a response, while the same page is fetched by my browser in under 1 second.
And this is with data that's 0.5kB or smaller in size.
The request involves POST/GET parameters and a user agent header if perhaps that could cause problems.
I haven't (yet) tried whether other ways of downloading data in .NET give me the same problem, but I suspect the results would be similar. (I've always had a feeling web requests in .NET are unusually slow...)
What could be the cause of this?
Edit:
I tried doing the exact same thing using System.Net.HttpWebRequest instead, using the following method, and all requests finish in under 1 second.
public static string DownloadText(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    var response = (HttpWebResponse)request.GetResponse();
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
While this (old) method using System.Net.WebClient takes 15-30s for each request to finish:
public static string DownloadText(string url)
{
    var client = new WebClient();
    byte[] data = client.DownloadData(url);
    return client.Encoding.GetString(data);
}
I had that problem with WebRequest. Try setting Proxy = null:
WebClient wc = new WebClient();
wc.Proxy = null;
By default, WebClient and WebRequest try to determine what proxy to use from the IE settings, which can result in a delay of around 5 seconds before the actual request is sent.
This applies to all classes that use WebRequest, including WCF services with HTTP binding.
In general you can use this static code at application startup:
WebRequest.DefaultWebProxy = null;
Download Wireshark here http://www.wireshark.org/
Capture the network packets and filter the "http" packets.
It should give you the answer right away.
There is nothing inherently slow about .NET web requests; that code should be fine. I regularly use WebClient and it works very quickly.
How big is the payload in each direction? Silly question maybe, but is it simply a bandwidth limitation?
IMO the most likely explanation is that your website has spun down, and when you hit the URL the site is slow to respond. That is not the fault of the client. It is also possible that DNS is slow for some reason (in which case you could hard-code the IP into your hosts file), or that some proxy server in the middle is slow.
If the web-site isn't yours, it is also possible that they are detecting atypical usage and deliberately injecting a delay to annoy scrapers.
I would grab Fiddler (a free, simple web inspector) and look at the timings.
WebClient may be slow on some workstations when Automatic Proxy Settings in checked in the IE settings (Connections tab - LAN Settings).
Setting WebRequest.DefaultWebProxy = null; or client.Proxy = null didn't do anything for me, using Xamarin on iOS.
I did two things to fix this:
I wrote a downloadString function which does not use WebRequest and System.Net:
public static async Task<string> FnDownloadStringWithoutWebRequest(string url)
{
    using (var client = new HttpClient())
    {
        // Define headers
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        var response = await client.GetAsync(url);
        if (response.IsSuccessStatusCode)
        {
            string responseContent = await response.Content.ReadAsStringAsync();
            //dynamic json = Newtonsoft.Json.JsonConvert.DeserializeObject(responseContent);
            return responseContent;
        }
        Logger.DefaultLogger.LogError(LogLevel.NORMAL, "GoogleLoginManager.FnDownloadString", "error fetching string, code: " + response.StatusCode);
        return "";
    }
}
This is however still slow with Managed HttpClient.
So secondly, in Visual Studio Community for Mac, right-click your project in the solution -> Options -> set the HttpClient implementation to NSUrlSession instead of Managed.
Managed is not fully integrated into iOS, doesn't support TLS 1.2, and thus does not support the ATS standards set as default in iOS9+, see here:
https://learn.microsoft.com/en-us/xamarin/ios/app-fundamentals/ats
With both these changes, string downloads are always very fast (<<1s).
Without both of these changes, downloadString took over a minute on every second or third try.
Just FYI, there's one more thing you could try, though it shouldn't be necessary anymore:
//var authgoogle = new OAuth2Authenticator(...);
//authgoogle.Completed...
if (authgoogle.IsUsingNativeUI)
{
    // Step 2.1 Creating Login UI
    // In order to access the SFSafariViewController API the cast is necessary
    SafariServices.SFSafariViewController c = null;
    c = (SafariServices.SFSafariViewController)ui_object;
    PresentViewController(c, true, null);
}
else
{
    PresentViewController(ui_object, true, null);
}
Though in my experience, you probably don't need the SafariController.
Another alternative (also free) to Wireshark is Microsoft Network Monitor.
What browser are you using to test?
Try using the default IE install. System.Net.WebClient uses the local IE settings (proxy etc.). Maybe those have been mangled?
Another cause of extremely slow WebClient downloads is the destination medium to which you are downloading. If it is a slow device like a USB key, this can massively impact download speed. To my HDD I could download at 6 MB/s, but to my USB key only 700 kB/s, even though I can copy files to that same USB key at 5 MB/s from another drive. wget shows the same behavior. This is also reported here:
https://superuser.com/questions/413750/why-is-downloading-over-usb-so-slow
So if this is your scenario, an alternative solution is to download to the HDD first and then copy the files to the slow medium after the download completes.