We have stumbled onto a problem while using WebRequestHandler and HttpRequestCachePolicy where the cached entry seems to get corrupted after a precise sequence of requests. The scenario goes as follows:
1. A request returns a 200 response with Cache-Control: public, must-revalidate, max-age=0 and a Last-Modified header.
2. A second request is made with If-Modified-Since and returns a 500 response with Cache-Control: private. In our case, this happens when the IIS server hits an error while processing the request.
3. A third request is sent. It contains an If-Modified-Since header, which causes the server to respond with 304.
The HttpClient result resolves to the 500 response.
My assumption is that the If-Modified-Since header in the third request should not have been present and that this is a bug.
The test was made in a sample console application in .NET 4.5 while using the HttpRequestCacheLevel.Default cache policy. Trying the same test in a browser doesn't seem to reproduce the problem when navigating to the URL (I verified that max-age=0 was not present in the request).
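For reference, the client setup in that test looked roughly like this (a minimal sketch; the handler configuration matches the description above, but the URL is a placeholder):

using System;
using System.Net.Cache;
using System.Net.Http;

class CacheRepro
{
    static void Main()
    {
        // WebRequestHandler with the default cache policy, as described above.
        var handler = new WebRequestHandler
        {
            CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default)
        };

        using (var client = new HttpClient(handler))
        {
            // Each run issues one GET against the placeholder URL; running it
            // three times walks through the sequence above when the server
            // responds as described (200, then 500, then 304).
            var response = client.GetAsync("http://example.org/resource").Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}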
What I am trying to understand is whether this is a bug in the .NET Framework, whether we used the API incorrectly, or whether we used the response headers incorrectly.
I am working on a tool that lets users check their API. One of the features is to show the request headers that were actually sent.
I am having trouble getting these headers, though, as the Headers property doesn't seem to include them all. I tried looking at trace listeners, but these seem to be oriented more toward debugging, and the configuration is global, so it applies to all web requests sent by the app, which is not what I want.
When I run this code on .NET Framework 4.8 (on .NET Core I seem to get 0 headers back):
// Create a new 'HttpWebRequest' object for the mentioned URL.
HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create("http://www.contoso.com");
// Assign the response object of 'HttpWebRequest' to an 'HttpWebResponse' variable.
HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
Console.WriteLine("\nThe HttpHeaders are \n\n\tName\t\tValue\n{0}", myHttpWebRequest.Headers);
I get the following output
The HttpHeaders are
Name Value
Host: www.microsoft.com
However, in Fiddler and with trace listeners I see these headers:
Host: www.contoso.com
Connection: Keep-Alive
Why can't I see the Connection header?
Now I see there is some redirecting going on. WebRequest seems to show only the headers sent in the LAST request, which didn't have the Connection header.
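If the goal is to inspect the headers of the very first request rather than the last one after redirection, one option might be to turn off automatic redirects; a minimal sketch (same placeholder URL as above):

using System;
using System.Net;

class HeaderDump
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://www.contoso.com");
        // Don't follow redirects silently; GetResponse then returns the 3xx
        // response, and Headers reflects the request that was actually sent.
        request.AllowAutoRedirect = false;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);     // e.g. 301/302
            Console.WriteLine(response.Headers["Location"]); // redirect target
            Console.WriteLine(request.Headers);              // headers sent
        }
    }
}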
I am working on a chatbot in Slack that sends a POST request to http://localhost:44331/values/api, a .NET Core API that I built in C#. The POST request contains a response_url in the body that I can use to send back the needed information.
I have been trying to make this work for about two weeks now and used Fiddler to mimic the request, so I can make some changes to the body and the headers to see if that makes a difference.
After a lot of errors, I have come down to two specific errors that haven't changed for a long while.
Sent with the Slack chatbot: curl_error_56
There really isn't much I can change here except the URL I want to send the request to.
In fact, this request has never even reached the POST method in my API.
That's what the Slackbot answers.
Sent with Fiddler: HTTP error 400
I used RequestBin to get the information that was sent by the bot and copied it into the Composer in Fiddler.
I am a total novice at web programming of any kind, so I really don't know what these two errors have in common.
Are those errors occurring because I am using localhost?
What am I missing?
Here is the request so you can copy it if needed:
host: localhost:44331
Accept: application/json,*/*
Accept-Encoding: gzip,deflate
Content-Type: application/x-www-form-urlencoded
User-Agent: Slackbot 1.0 (+https://api.slack.com/robots)
X-Slack-Request-Timestamp: 1569238196
X-Slack-Signature: v0=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Content-Length: 381
Connection: keep-alive
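For context, a minimal controller that could receive a request like the one above might look roughly like this (the route follows the URL mentioned earlier; the response_url form field name is an assumption based on the description of the body):

using Microsoft.AspNetCore.Mvc;

[Route("values")]
[ApiController]
public class ValuesController : ControllerBase
{
    // POST /values/api — the endpoint the Slack bot is supposed to hit.
    [HttpPost("api")]
    public IActionResult Post([FromForm(Name = "response_url")] string responseUrl)
    {
        // response_url could later be used to send the reply back to Slack.
        return Ok();
    }
}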
Alright, I got some help from a friend, and he explained to me that Slack is unable to access my localhost because Slack is not a local app. It takes its information from the web, so making it send a request to my localhost is useless because localhost is not a static, publicly reachable IP.
What I need is a publicly reachable endpoint.
I can get one by requesting it from my ISP (Internet Service Provider) or by getting myself a server that already has a static IP.
Thanks for any help you wanted to provide.
I'm posting telemetry data to the Application Insights data collector endpoint at https://dc.services.visualstudio.com/v2/track using HttpClient.
This works fine when using .NET Core 2.0. However, after upgrading to .NET Core 2.1 all requests fail with HTTP status 500 Internal Server Error and an HTML response body that looks like this:
(It's an IIS 8.5 Detailed Error page)
In .NET Core 2.1 they have changed how HttpClientHandler works and added an AppContext switch that allows us to keep the old behavior:
AppContext.SetSwitch("System.Net.Http.UseSocketsHttpHandler", false);
Indeed, setting this switch fixes the problem!
But I'd like to avoid using that switch:
Firstly, because I'd like to use the new stuff. It must be better :-)
Secondly, because that switch changes the behavior for all new HttpClientHandler instances, not just the one I'm using to post data to the Application Insights endpoint.
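One way to keep the old behavior scoped to this single client might be to construct it with an explicit WinHttpHandler (from the System.Net.Http.WinHttpHandler package) instead of flipping the global switch; a sketch:

using System;
using System.Net.Http;

class TelemetrySender
{
    // Only this client uses the legacy Windows HTTP stack; all other
    // HttpClientHandler instances keep the new SocketsHttpHandler behavior.
    static readonly HttpClient Client = new HttpClient(new WinHttpHandler());

    static void Send(HttpContent payload)
    {
        var response = Client
            .PostAsync("https://dc.services.visualstudio.com/v2/track", payload)
            .GetAwaiter().GetResult();
        Console.WriteLine(response.StatusCode);
    }
}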
I've tried to figure out what's causing this by looking at the traffic using Wireshark. Trying to interpret HTTPS traffic on that low level is beyond me. I did find something suspicious in the TLS handshake though.
Both the SocketsHttpHandler (which doesn't work) and the legacy WinHttpHandler (which does work) use TLS 1.2. However, only the WinHttpHandler uses the TLS Certificate Status Request extension. Could that be the reason why requests are failing?
In any case, the TLS connection is set up, a request is sent, and a response is retrieved. Apparently there is an error on the server side while processing the request.
I'd like to figure out what's causing this problem. Any help is appreciated!
As suggested by @mjwills, I changed my code so that it uses HTTP (not SSL) and posts to a local program that allows me to capture the request (I didn't figure out how to do this in a clean way in Wireshark, so I ended up writing my own little capture tool based on TcpListener):
Capture when using the new SocketsHttpHandler:
POST / HTTP/1.1
Transfer-Encoding: chunked
Content-Type: application/x-json-stream
Content-Encoding: gzip
Host: localhost:12345
A
[...10 bytes of data...]
F4
[...244 bytes of data...]
0
Capture when using the legacy WinHttpHandler:
POST / HTTP/1.1
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/x-json-stream
Content-Encoding: gzip
Host: localhost:12345
a
[...10 bytes of data...]
f4
[...244 bytes of data...]
0
These are the differences:
WinHttpHandler wants to keep the connection alive while SocketsHttpHandler does not.
WinHttpHandler uses lower-case hex for the chunk lengths, while SocketsHttpHandler uses upper-case.
The first 10 bytes contain the gzip header. The remaining 244 bytes contain the gzip-encoded JSON telemetry data.
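For completeness, the little capture tool mentioned above can be sketched roughly like this (port 12345 matches the Host header in the captures; it only dumps what the client sends and never responds, which is enough for this kind of inspection):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class RawRequestCapture
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 12345);
        listener.Start();
        Console.WriteLine("Listening on 127.0.0.1:12345 ...");

        using (var client = listener.AcceptTcpClient())
        using (var stream = client.GetStream())
        {
            var buffer = new byte[8192];
            int read;
            // Dump the raw bytes of the request to the console. No response is
            // sent, so the posting side will eventually time out; that's fine
            // when the only goal is to see exactly what went over the wire.
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                Console.Write(Encoding.ASCII.GetString(buffer, 0, read));
            }
        }
    }
}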
I am using HttpClient to make a POST request. I get back 405 Method Not Allowed. When capturing a trace in Fiddler, the request goes out as GET instead of POST!
using (var client = new HttpClient())
{
    var url = AppSettingsUtil.GetString("url");
    var response = client.PostAsJsonAsync(url, transaction).Result;
}
I am aware of the async/await issues. This is a simplified sample to show the issue.
Is there some sort of web.config or machine.config setting that could be affecting this? Other requests (sent through RestSharp) send POSTs correctly.
Here is what Fiddler captures. Rerunning the trace in Fiddler also returns the 405 (as expected). Manually switching the method to POST and rerunning it works from Fiddler.
Also, perhaps because the method was switched to GET, there is no body captured in Fiddler; I had to paste the JSON in manually.
GET /*URL*/ HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: /*host*/
Connection: Keep-Alive
The problem appears to be that someone changed the URL without telling us and put a redirect in place. HttpClient is following the redirect, but ends up sending the request to the final destination as a GET.
This seems like a bug in HttpClient to me: it should either send the ultimate request as a POST or throw an exception saying it can't do what I asked.
See Forwarding a response from another server using JAX-RS
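One way to confirm the redirect and work around it might be to disable automatic redirects and inspect the 3xx response yourself; a sketch (the URL and payload are placeholders):

using System;
using System.Net.Http;
using System.Text;

class RedirectCheck
{
    static void Main()
    {
        var handler = new HttpClientHandler { AllowAutoRedirect = false };
        using (var client = new HttpClient(handler))
        {
            var content = new StringContent("{}", Encoding.UTF8, "application/json");
            var response = client.PostAsync("http://example.org/old-url", content).Result;

            // A 301/302/307 here exposes the redirect; Location contains the
            // new URL, which can then be POSTed to directly.
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(response.Headers.Location);
        }
    }
}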
I'm using HttpClient 0.6.0 from NuGet.
I have the following C# code:
var client = new HttpClient(new WebRequestHandler
{
    CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.CacheIfAvailable)
});
client.GetAsync("http://myservice/asdf");
The service (this time CouchDB) returns an ETag value and status code 200 OK. The response also contains a Cache-Control header with the value must-revalidate.
Update: here are the response headers from CouchDB (taken from the Visual Studio debugger):
Server: CouchDB/1.1.1 (Erlang OTP/R14B04)
Etag: "1-27964df653cea4316d0acbab10fd9c04"
Date: Fri, 09 Dec 2011 11:56:07 GMT
Cache-Control: must-revalidate
Next time I do the exact same request, HttpClient does a conditional request and gets back 304 Not Modified. Which is right.
However, if I use the low-level HttpWebRequest class with the same CachePolicy, the request isn't even made the second time. This is how I would want HttpClient to behave as well.
Is it the must-revalidate header value, or why is HttpClient behaving differently? I would like to make only one request and then serve the rest from the cache without the conditional request.
(Also, as a side note: when debugging, the response status code is shown as 200 OK, even though the service returns 304 Not Modified.)
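For comparison, the HttpWebRequest variant can be sketched like this (same placeholder URL; IsFromCache just shows whether the second run was served from the cache without touching the network):

using System;
using System.Net;
using System.Net.Cache;

class CacheComparison
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://myservice/asdf");
        // Same cache policy as the HttpClient example above.
        request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.CacheIfAvailable);

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("{0}, from cache: {1}",
                (int)response.StatusCode, response.IsFromCache);
        }
    }
}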
Both clients behave correctly.
must-revalidate only applies to stale responses.
When the must-revalidate directive is present in a response received by a cache, that cache MUST NOT use the entry after it becomes stale to respond to a subsequent request without first revalidating it with the origin server. (I.e., the cache MUST do an end-to-end revalidation every time, if, based solely on the origin server's Expires or max-age value, the cached response is stale.)
Since you do not provide explicit expiration, caches are allowed to use heuristics to determine freshness.
Since you do not provide Last-Modified, caches do not need to warn the client that a heuristic was used.
If none of Expires, Cache-Control: max-age, or Cache-Control: s-maxage (see section 14.9.3) appears in the response, and the response does not include other restrictions on caching, the cache MAY compute a freshness lifetime using a heuristic. The cache MUST attach Warning 113 to any response whose age is more than 24 hours if such warning has not already been added.
The response age is calculated based on the Date header, since Age is not present.
If the response is still fresh according to heuristic expiration, caches may use the stored response.
One explanation is that HttpWebRequest uses heuristics and that there was a stored response with status code 200 that was still fresh.
Answering my own question...
According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.4 I would say that
a "Cache-Control: must-revalidate" without expiration states that the resource should be validated on every request.
In this case it means a conditional GET should be done every time the resource is requested. So in this case System.Net.Http.HttpClient is behaving correctly and the legacy (Http)WebRequest is behaving incorrectly.