Using a Keep-Alive connection in WinRT's HttpClient class? - c#

Our WinRT app is incredibly slow when opening connections to our servers. Requests take ~500ms to run. This is blocking some of our scenarios.
When debugging, we noticed that when Fiddler is active, the requests are much faster - ~100ms per request. Some searches later we understood that was because Fiddler was using Keep-Alive connections when proxying calls, which makes our proxied calls much faster.
We double-checked this in two ways.
We set UseProxy to false and observed that the request went back to being slow.
We turned off Fiddler's "reuse connections" option and observed that the requests went back to being slow.
We tried enabling keep-alive through the Connection header (.Connection.Add("Keep-Alive")), but this does not seem to have any effect - in fact, the header seems to be ignored outright by the .NET component and is not sent on the request (verified again by inspecting the traffic in Fiddler).
Does anyone know how to set keep-alive on requests made with the HttpClient class in Windows 8 / WinRT?

The following sets the correct headers to turn on keep-alive for me (client is an HttpClient)
client.DefaultRequestHeaders.Connection.Clear();
client.DefaultRequestHeaders.ConnectionClose = false;
// The next line isn't needed in HTTP/1.1
client.DefaultRequestHeaders.Connection.Add("Keep-Alive");
If you want to turn keep-alive off, use
client.DefaultRequestHeaders.Connection.Clear();
client.DefaultRequestHeaders.ConnectionClose = true;
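If you want a quick sanity check that the connection is really being reused, one option is to time two consecutive requests through the same client - a rough sketch (the URL is only a placeholder):
var client = new HttpClient();
client.DefaultRequestHeaders.ConnectionClose = false;

var sw = System.Diagnostics.Stopwatch.StartNew();
await client.GetAsync("http://example.com/");   // first call pays the connection setup cost
Debug.WriteLine("first: {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
await client.GetAsync("http://example.com/");   // should reuse the kept-alive connection
Debug.WriteLine("second: {0} ms", sw.ElapsedMilliseconds);
The second timing should land roughly in the range you saw with Fiddler's connection reuse enabled.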

Try using the HttpContent class to add the headers - something like this (untested), based on http://social.msdn.microsoft.com/Forums/en-CA/winappswithcsharp/thread/ce2563d1-cd96-4380-ad41-6b0257164130
Behind the scenes, HttpClient uses HttpWebRequest, which would give you direct access to KeepAlive; but since you are going through HttpClient, you can't reach that property directly.
public static async Task<string> KeepAliveRequest(string url)
{
    var handler = new HttpClientHandler();
    var client = new HttpClient(handler);
    // Put your POST body here if doing a POST
    HttpContent content = new StringContent("<post data here if doing a post>");
    content.Headers.Add("Keep-Alive", "true");
    // Choose your content type depending on what you are sending to the server
    content.Headers.ContentType = new MediaTypeHeaderValue("application/x-www-form-urlencoded");
    HttpResponseMessage response = await client.PostAsync(url, content);
    Stream stream = await response.Content.ReadAsStreamAsync();
    return new StreamReader(stream).ReadToEnd();
}
EDIT
Since you only want GET, you can do that with:
public static async Task<string> KeepAliveRequest(string url)
{
    var client = new HttpClient();
    var request = new HttpRequestMessage()
    {
        RequestUri = new Uri(url),
        Method = HttpMethod.Get,
    };
    request.Headers.Add("Connection", new string[] { "Keep-Alive" });
    var responseMessage = await client.SendAsync(request);
    return await responseMessage.Content.ReadAsStringAsync();
}

Related

C# - AWS EC2 Outgoing Traffic Does Not Initiate

We are using Cloudinary and an AWS S3 bucket as the CDN for our application.
On my local development machine the Cloudinary .NET and AWS SDKs work as expected and can upload files. However, when I publish the application to the EC2 instance, it never even initiates the HTTP(S) connection. I can see that through the WireShark and Microsoft Network Monitor tools: no TCP requests are made, and the code never reaches the lines after the network calls (the SDKs' methods).
To test things out, I even tried to implement the API calls using C#'s HttpClient, to no avail. Somehow the HTTP calls from C# are not processed at all, with no exception thrown. It acts like an infinite timeout.
Since I get no errors at all, I have no idea what I am supposed to do.
NOTE
The EC2 instance's Security Group allows ALL outgoing traffic, by the way. It also allows incoming traffic on ephemeral ports (whatever that is).
Any directions are appreciated.
Here is the code snippet for HttpClient:
using (HttpClient c = new HttpClient())
{
    var fileBytes = new byte[model.FileStream.Length];
    _logger.LogInformation("Reading bytes from incoming file stream...");
    await model.FileStream.ReadAsync(fileBytes, 0, fileBytes.Length);
    model.FileStream.Close();
    _logger.LogInformation("Read bytes from incoming file stream...");

    MultipartFormDataContent form = new MultipartFormDataContent();
    // Also tried including an actual file as ByteArrayContent
    //form.Add(new ByteArrayContent(fileBytes, 0, fileBytes.Length), "file");
    form.Add(new StringContent("SOME_PUBLICLY_ACCESSABLE_URL"), "file");
    form.Add(new StringContent(_cloudinarySettings.ApiKey), "api_key");
    form.Add(new StringContent(timestamp.ToString()), "timestamp");
    form.Add(new StringContent(signature), "signature");

    HttpResponseMessage response = await c.PostAsync(
        $"https://api.cloudinary.com/v1_1/{_cloudinarySettings.Cloud}/image/upload",
        form);
    _logger.LogInformation("RESPONSE: {0}", await response.Content.ReadAsStringAsync());
    return null;
}
For Cloudinary:
var uploadParams = new ImageUploadParams()
{
File = new FileDescription(fileName, model.FileStream),
PublicId = publicId,
Overwrite = true,
// TODO:
NotificationUrl = model.CallbackUrl
};
_logger.LogInformation("Trying to upload to CDN");
var uploadResult = await _cloudinary.UploadAsync(uploadParams);
_logger.LogInformation("Uploaded image to CDN");
var url = _cloudinary.Api.UrlImgUp
.Format(uploadResult.Format)
.Transform(new Transformation().FetchFormat("auto"))
.BuildUrl(uploadResult.PublicId);
AWS SDK:
var fileTransferUtility =
new TransferUtility(_amazonS3Client);
var objectId = $"{folder}/{fileName}";
var fileTransferUtilityRequest = new TransferUtilityUploadRequest
{
BucketName = _awsS3Settings.BucketName,
InputStream = model.FileStream,
StorageClass = S3StorageClass.Standard,
PartSize = 6291456, // 6 MB.
Key = objectId,
CannedACL = S3CannedACL.PublicRead
};
fileTransferUtilityRequest.Metadata.Add("X-FileExtension", Path.GetExtension(model.FileName));
fileTransferUtilityRequest.Metadata.Add("X-OriginalFileName", model.FileName);
fileTransferUtilityRequest.Metadata.Add("X-GeneratedFileId", fileId.ToString());
fileTransferUtilityRequest.Metadata.Add("X-GeneratedFileName", fileName);
await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
BONUS
I even included a simple GET call to google.com, again with no success:
using (HttpClient c = new HttpClient())
{
    _logger.LogInformation("Sending request to google...");
    c.Timeout = TimeSpan.FromSeconds(2);
    HttpResponseMessage response = await c.GetAsync(
        $"https://google.com");
    _logger.LogInformation("RESPONSE: {0}", await response.Content.ReadAsStringAsync());
    return null;
}
This is weird, but it seems like the default HttpClient somehow uses a system-level default proxy or something?
The answer to the question Web request from HttpClient stuck is kind of correct. The solution I came up with was forking the CloudinaryDotNet repository and injecting a new HttpClientHandler with Proxy explicitly set to null and UseProxy explicitly set to false. This isn't supported out of the box because the library creates its HttpClient once, in ApiShared.Proxy.cs, and decides internally, based on the target framework, whether to use an HttpClientHandler at all.
For reference, this is the code change that makes it work:
ApiShared.Proxy.cs Line 17
Remove
public HttpClient Client = new HttpClient();
Add
public HttpClient Client = new HttpClient(new HttpClientHandler
{
UseProxy = false,
Proxy = null
});
Obviously this will not work for everyone and is not an ideal solution. The ideal solution would probably involve digging deeper into how EC2 handles default proxies, and maybe passing proxy options into the SDKs. But I'm posting it anyway in case someone else has a similar problem.
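For what it's worth, on .NET Core 3.0 and later there is also a process-wide hook that might avoid the fork: HttpClient.DefaultProxy can be replaced at startup. Whether the SDK's internal client picks it up depends on how it constructs its handler, so treat this as an untested sketch:
using System.Net;
using System.Net.Http;

// Untested sketch (assumes .NET Core 3.0+): replace the process-wide default proxy
// with an empty one so handlers that rely on the default no longer go through
// proxy auto-detection.
HttpClient.DefaultProxy = new WebProxy();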

HttpClient takes excessive amount of time (unless Http version is set to 1.0) when making multiple requests in parallel

While testing with the .NET HttpClient I'm having the following issue: it will do the first 5-6 requests just fine (within 200 ms or so), but after that it's a full 60 seconds before the rest complete - and they all complete nearly at once.
Here is how I've been testing
var tasks = new List<Task<HttpResponseMessage>>();
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", "secret_jwt");
client.BaseAddress = new Uri("http://myapi/api/");
for (int i = 0; i < 50; i++)
{
    tasks.Add(ProcessUrlAsync("organizations/id/185", client));
}
await Task.WhenAll(tasks);
-
static private async Task<HttpResponseMessage> ProcessUrlAsync(string url, HttpClient client)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    var message = await client.GetAsync(url);
    sw.Stop();
    Console.WriteLine(sw.Elapsed.TotalMilliseconds.ToString());
    return message;
}
and my output is typically
174.5346
127.0873
141.9458
141.7396
153.6638
153.3449
61241.5598
61241.8476
61283.9076
61287.406
61326.0361
61328.7341
61368.6317
etc.
The API I'm using isn't the issue - I can point the HttpClient at any address and the same thing happens.
If I write my own GetAsync method that sets the Http version to 1.0...
public new async Task<HttpResponseMessage> GetAsync(string url)
{
    using (var request = new HttpRequestMessage(HttpMethod.Get, url))
    {
        request.Version = HttpVersion.Version10; //removing this will reproduce the issue!
        return await SendAsync(request);
    }
}
It works fine (all complete within a few hundred ms). Why is this, and what can I do to fix it while still using HTTP/1.1? I'm assuming it has something to do with HTTP/1.0 using Connection: close and 1.1 using Connection: keep-alive.
I feel rather silly, the solution is as simple as
ServicePointManager.DefaultConnectionLimit = 100;
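For context, the default limit for a client app on .NET Framework is two concurrent connections per host, so the remaining requests queue on the ServicePoint. The limit just has to be raised before the first request goes out - a minimal sketch relative to the test code above (base address taken from the question):
// Raise the per-host connection limit before creating the client or sending anything.
ServicePointManager.DefaultConnectionLimit = 100;

var client = new HttpClient();
client.BaseAddress = new Uri("http://myapi/api/");
// ... queue the 50 ProcessUrlAsync calls as before; they can now run in parallel.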
This looks like Socket exhaustion with HttpClient. There's lots of info available online if you search for that.
One solution is to only create one HttpClient and re-use that.
Another option is to add the header Connection: close to your request
This would explain why switching to HTTP/1.0 seemed to solve the issue -- as leaving connections open was added in 1.1.
As a side note to anyone who comes across this, you can't send the same HttpRequestMessage twice. You will need to create a new one for the second request.
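If a request does need to go out more than once, the simplest pattern is to build a fresh message per send - an illustrative sketch (the helper name is made up, and client is the instance from the question):
// HttpRequestMessage is single-use, so create a new one for each send.
static HttpRequestMessage CreateRequest(string url) =>
    new HttpRequestMessage(HttpMethod.Get, url);

var first = await client.SendAsync(CreateRequest("organizations/id/185"));
var second = await client.SendAsync(CreateRequest("organizations/id/185"));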

PostAsync in UWP App not working (no content sent)

I'm a bit stuck. I am trying to create a UWP app that will post XML content to a web service. I can get this to work in a regular .NET console app without an issue, but re-creating it in UWP is proving to be tricky. Using Fiddler, I've narrowed it down: the web service endpoint isn't receiving my content. It looks like the headers are set up properly and the content length is sent correctly, but the actual content isn't sent. Here is the heart of the code; it throws an exception after:
HttpResponseMessage ResponseMessage = await request.PostAsync(requestUri, httpContent2).ContinueWith(
(postTask) => postTask.Result.EnsureSuccessStatusCode());
When I try to execute the PostAsync, looking at Fiddler, I'm getting:
HTTP/1.1 408 Request body incomplete
Date: Mon, 14 Nov 2016 15:38:53 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Cache-Control: no-cache, must-revalidate
Timestamp: 10:38:53.430
The request body did not contain the specified number of bytes. Got 0, expected 617
I'm positive that the content I'm posting is correct (I read it from a file and write it to the debug window to verify). I think it might have to do with the HttpContent httpContent2 - in regular .NET I've never needed to use this, but with PostAsync I do.
Any thoughts would be appreciated, thank you!
public async void PostWebService()
{
    string filePath = "Data\\postbody.txt";
    string url = "https://outlook.office365.com/EWS/Exchange.asmx";
    Uri requestUri = new Uri(url); //replace your Url
    var myClientHandler = new HttpClientHandler();
    myClientHandler.Credentials = new NetworkCredential("user#acme.com", "password");
    HttpClient request = new HttpClient(myClientHandler);
    string contents = await ReadFileContentsAsync(filePath);
    Debug.WriteLine(contents);
    HttpContent httpContent2 = new StringContent(contents, Encoding.UTF8, "text/xml");
    string s = await httpContent2.ReadAsStringAsync();
    Debug.WriteLine(s); //just checking to see if httpContent has the correct data
    //HttpResponseMessage ResponseMessage = await request.PostAsync(requestUri, httpContent);
    request.MaxResponseContentBufferSize = 65000;
    HttpResponseMessage ResponseMessage = await request.PostAsync(requestUri, httpContent2).ContinueWith(
        (postTask) => postTask.Result.EnsureSuccessStatusCode());
    Debug.WriteLine(ResponseMessage.ToString());
}
Well, it seems I found the root cause of my problem. This appears to be a known bug with System.Net.Http.HttpClient when using network authentication. See this article here
My initial mistake was that I wasn't catching the exceptions thrown by PostAsync. Once I wrapped it in a try/catch block, I got the following exception:
“This IRandomAccessStream does not support the GetInputStreamAt method because it requires cloning and this stream does not support cloning.”
The first paragraph of the article I linked to above states:
When you use the System.Net.Http.HttpClient class from a .NET
framework based Universal Windows Platform (UWP) app and send a
HTTP(s) PUT or POST request to a URI which requires Integrated Windows
Authentication – such as Negotiate/NTLM, an exception will be thrown.
The thrown exception will have an InnerException property set to the
message:
“This IRandomAccessStream does not support the GetInputStreamAt method
because it requires cloning and this stream does not support cloning.”
The problem happens because the request as well as the entity body of
the POST/PUT request needs to be resubmitted during the authentication
challenge. The above problem does not happen for HTTP verbs such as
GET which do not require an entity body.
This is a known issue in the RTM release of the Windows 10 SDK and we
are tracking a fix for this issue for a subsequent release.
The recommended workaround that worked for me was to use Windows.Web.Http.HttpClient instead of System.Net.Http.HttpClient.
Using that recommendation, the following code worked for me:
string filePath = "Data\\postbody.txt";
string url = "https://outlook.office365.com/EWS/Exchange.asmx";
Uri requestUri = new Uri(url); //replace your Url
string contents = await ReadFileContentsAsync(filePath);
string search_str = txtSearch.Text;
Debug.WriteLine("Search query:" + search_str);
contents = contents.Replace("%SEARCH%", search_str);
Windows.Web.Http.Filters.HttpBaseProtocolFilter hbpf = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
Windows.Security.Credentials.PasswordCredential pcred = new Windows.Security.Credentials.PasswordCredential(url, "username#acme.com", "password");
hbpf.ServerCredential = pcred;
HttpClient request = new HttpClient(hbpf);
Windows.Web.Http.HttpRequestMessage hreqm = new Windows.Web.Http.HttpRequestMessage(Windows.Web.Http.HttpMethod.Post, new Uri(url));
Windows.Web.Http.HttpStringContent hstr = new Windows.Web.Http.HttpStringContent(contents, Windows.Storage.Streams.UnicodeEncoding.Utf8, "text/xml");
hreqm.Content = hstr;
// consume the HttpResponseMessage and the remainder of your code logic from here.
try
{
    Windows.Web.Http.HttpResponseMessage hrespm = await request.SendRequestAsync(hreqm);
    Debug.WriteLine(hrespm.Content);
    String respcontent = await hrespm.Content.ReadAsStringAsync();
}
catch (Exception ex)
{
    string e = ex.Message;
    Debug.WriteLine(e);
}
Hopefully this is helpful to someone else hitting this issue.

HttpClient: Conditionally set AcceptEncoding compression at runtime

We are trying to implement user-determined (on a settings screen) optional gzip compression in our client which uses HttpClient, so we can log and compare performance across a number of different calls over a period of time. Our first attempt was to simply conditionally add the header as follows:
HttpRequestMessage request = new HttpRequestMessage(Method, Uri);
if (AcceptGzipEncoding)
{
_client.DefaultRequestHeaders.AcceptEncoding.Add(new System.Net.Http.Headers.StringWithQualityHeaderValue("gzip"));
}
//Send to the server
result = await _client.SendAsync(request);
//Read the content of the result response from the server
content = await result.Content.ReadAsStringAsync();
This created the correct request, but the gzipped response was not decompressed on return, resulting in a garbled response. I found that we had to include the HttpClientHandler when constructing the HttpClient:
HttpClient _client = new HttpClient(new HttpClientHandler
{
AutomaticDecompression = DecompressionMethods.GZip
});
This all works well, but we'd like to change whether the client sends the Accept-Encoding: gzip header at runtime, and there doesn't appear to be any way to access or change the HttpClientHandler after it's passed to the HttpClient constructor. In addition, altering the headers of the HttpRequestMessage object doesn't have any effect on the headers of the request if they are defined by the HttpClientHandler.
Is there any way to do this without recreating the HttpClient each time this changes?
Edit: I've also tried to modify a reference to the HttpClientHandler to change AutomaticDecompression at runtime, but that's throwing this exception:
This instance has already started one or more requests. Properties can only be modified before sending the first request.
You're almost there with the first example; you just need to decompress the stream yourself. .NET's GZipStream will help with this:
HttpRequestMessage request = new HttpRequestMessage(Method, Uri);
if (AcceptGzipEncoding)
{
_client.DefaultRequestHeaders.AcceptEncoding.Add(new System.Net.Http.Headers.StringWithQualityHeaderValue("gzip"));
}
//Send to the server
result = await _client.SendAsync(request);
//Read the content of the result response from the server
using (Stream stream = await result.Content.ReadAsStreamAsync())
using (Stream decompressed = new GZipStream(stream, CompressionMode.Decompress))
using (StreamReader reader = new StreamReader(decompressed))
{
content = reader.ReadToEnd();
}
If you want to use the same HttpClient and only enable compression for some requests, you cannot use automatic decompression. When automatic decompression is enabled, the framework also strips the Content-Encoding header from the response, so you cannot tell whether the response was actually compressed. Incidentally, with automatic decompression turned on, the Content-Length header of the response also reflects the size of the decompressed content.
So you need to decompress the content manually. The following sample shows an implementation for gzip-compressed content (as also shown in #ToddMenier's response):
private async Task<string> ReadContentAsString(HttpResponseMessage response)
{
    // Check whether response is compressed
    if (response.Content.Headers.ContentEncoding.Any(x => x == "gzip"))
    {
        // Decompress manually
        using (var s = await response.Content.ReadAsStreamAsync())
        {
            using (var decompressed = new GZipStream(s, CompressionMode.Decompress))
            {
                using (var rdr = new StreamReader(decompressed))
                {
                    return await rdr.ReadToEndAsync();
                }
            }
        }
    }
    else
    {
        // Use standard implementation if not compressed
        return await response.Content.ReadAsStringAsync();
    }
}
As per the comments above, recreating the HttpClient is really the only (robust) way to do this. Manual decompression can be achieved but it seems to be very difficult to reliably/efficiently determine whether the content has been encoded or not, to determine whether to apply decoding.
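One pragmatic middle ground, if recreating the client on every toggle is the concern, is to build both variants once up front and pick one per request - a sketch of the idea, not tested against the original app:
// Two long-lived clients; choose at call time based on the user setting.
private static readonly HttpClient _plainClient = new HttpClient();
private static readonly HttpClient _gzipClient = new HttpClient(new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.GZip
});

private HttpClient CurrentClient => AcceptGzipEncoding ? _gzipClient : _plainClient;
When AutomaticDecompression is enabled, the handler adds the Accept-Encoding header on its own, so neither path needs to set it per request.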

401 Unauthorized on SECOND HttpClient/HttpWebRequest call

I have an application that uses the SharePoint 2010 REST API.
In the process of creating an item, multiple requests are made one after another:
1 Call: Getting Items from List: Success
2 Call: Create Item: 401 Unauthorized
It is the same if I do it like this:
1 Call: Create Item: Success
2 Call: Delete Item: 401 Unauthorized
What I know is that my functions work separately; they DON'T work when they are called one after another.
When I close the application (a Windows Phone 8.1 app) after creating an item and, after restarting, try to delete the item, it works.
First I thought it had to do with the way I handle my fields, so I set them to null in a finally block, but that didn't help.
public async Task<bool> CreateNewItem(NewItem myNewItem)
{
    try
    {
        StatusBar statusBar = await MyStatusBar.ShowStatusBar("Creating new List Item.");
        //Retrieving Settings from Saved file
        mySettings = await MyCredentials.GetMySettings();
        myCred = new NetworkCredential(mySettings.UserName, mySettings.Password, mySettings.Domain);
        using (var handler = new HttpClientHandler { Credentials = myCred })
        {
            HttpClient client = new HttpClient(handler);
            client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
            NewItem newItem = myNewItem;
            var jsonObject = JsonConvert.SerializeObject(newItem);
            HttpResponseMessage response = await client.PostAsync(new Uri(baseUrl + listNameHourRegistration), new StringContent(jsonObject.ToString(), Encoding.Unicode, "application/json"));
            response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
            response.EnsureSuccessStatusCode();
            string responseMessage = await response.Content.ReadAsStringAsync();
            client.Dispose();
            if (responseMessage.Length > 0)
                return true;
        }
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
        return false;
    }
    finally
    {
        request = null;
        response = null;
        myCred = null;
        mySettings = null;
    }
    return false;
}
Just ran into the same problem.
The 2nd request does not follow the same authentication procedure, even if you initialize a new HttpClient object - I sniffed the HTTP traffic.
After the 1st request I tried another one with different credentials. That also ends in a 401. I am really confused...
It seems the NTLM handshake gets stuck at the 2nd of the 6 steps described here:
http://www.innovation.ch/personal/ronald/ntlm.html
Edit:
You may want to use the CSOM.
http://social.msdn.microsoft.com/Forums/office/en-US/efd12f11-cdb3-4b28-a9e0-32bfab71a419/windows-phone-81-sdk-for-sharepoint-csom?forum=sharepointdevelopment
While I still don't know what the actual problem is, at least I found a workaround: Use the WebRequest class instead of HttpClient.
I was running into this same error and realized I was adding the headers each time I called the endpoint. Hopefully this will help someone.
Instead, I initialized the HttpClient instance in my class constructor and set the headers there. I also learned it is better practice to use a single instance rather than recreating the client with "using" (see this article: https://www.aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/).
I'm invoking CallApiAsync from another class in a loop.
Here's my final solution:
class ApiShared
{
    private HttpClient client;

    public ApiShared() {
        client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", ConfigurationManager.AppSettings["ApiKey"]);
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    }

    public async Task<ApiResponse_Root> CallApiAsync(string endpoint)
    {
        // Make API call
        Uri endpointUri = new Uri(endpoint);
        var stringTask = client.GetStringAsync(endpointUri);
        var data = JsonConvert.DeserializeObject<ApiResponse_Root>(await stringTask);
        return data;
    }
}
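The calling side can then reuse that single instance across iterations, for example (illustrative only; the endpoint list is made up):
var api = new ApiShared();
foreach (var endpoint in endpoints)
{
    var result = await api.CallApiAsync(endpoint);
    // ... consume result
}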
On a windows machine you can resolve this with this registry setting change:
Go to the following Registry entry:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
Now add a new DWORD to the Lsa folder called: DisableLoopBackCheck and set this to 1
I see that this question was posted long ago, but I don't see a correctly working solution in this thread yet.
I faced exactly the same issue, where subsequent requests kept failing with 401 Unauthorized.
Using Fiddler, I figured out that from the SECOND request onwards a cookie was added to the request, presumably the result of a Set-Cookie header the server sent along with the first response.
So here's how I tackled the situation - Make UseCookies false:
new HttpClientHandler { Credentials = myCred, UseCookies = false }
This should resolve your issue. Hope this helps someone who's looking for a solution to a similar issue.
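Applied to the CreateNewItem code above, that roughly means building the handler and client once and reusing them for the create and delete calls, instead of disposing the client per request - a sketch based on the question's own field names:
// One long-lived client with cookies disabled, so every request re-runs the
// NTLM handshake with the supplied credentials instead of replaying a cookie
// from the first response.
var handler = new HttpClientHandler
{
    Credentials = new NetworkCredential(mySettings.UserName, mySettings.Password, mySettings.Domain),
    UseCookies = false
};
var client = new HttpClient(handler);
// reuse 'client' for CreateNewItem and the subsequent delete call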
