I have some code running within an ApiController (ASP.Net Web API) that itself wants to make a GET request to another web service. The web service (also part of my app) returns Cache-Control headers indicating an expiry time for the content it returns.
I am using the new System.Net.Http.HttpClient, configured with a WebRequestHandler in order to use client-side caching (the default HttpClientHandler does not support cache configuration, although it does use System.Net.WebRequest as its underlying HTTP implementation):
var client = new HttpClient(new WebRequestHandler {
UseDefaultCredentials = true,
CachePolicy = new RequestCachePolicy(RequestCacheLevel.Default)
});
var response = client.GetAsync("someUri").Result;
response.EnsureSuccessStatusCode();
On the server I am enabling caching within my controller action via...
var response = new HttpResponseMessage(HttpStatusCode.OK);
response.Headers.CacheControl = new CacheControlHeaderValue {
Public = true,
MaxAge = new TimeSpan(0, 5, 0) // Five minutes in this case
};
// Omitted, some content is added to the response
return response;
The above (abbreviated) code works correctly within a test; I make multiple calls to the service in this way and only the first call actually contacts the service (observed via log messages on the service in IIS); subsequent calls use the cache.
However, running the same code hosted in IIS itself, the HttpClient seems to ignore the cached result and calls the service every time (I have also set up my IoC container so that only one instance of the HttpClient exists in the AppDomain). This is running as the AppPoolIdentity.
Interestingly, if I change the app pool to run as NetworkService, the response comes back as 401 Unauthorized (I have tried setting PreAuthenticate = true on the WebRequestHandler, but the status code is still 401). The same is true if I change the app pool to run under my own credentials.
So, is there something about running the app pool under the NetworkService identity, or the virtual AppPoolIdentity, that prevents client-side caching from being used? And where does the content cached by WebRequest physically live, anyway?
WinInet caching is not supported when running under IIS; see the following Microsoft support article: http://support.microsoft.com/kb/238425
I don't see any reason why the cache should not work under IIS. The cache is implemented by WinINetProxy and is the same cache that is used by Internet Explorer.
Try setting the max-age instead of the expiry time.
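To illustrate that suggestion on the client side, the cache policy can be expressed in terms of max-age rather than an absolute expiry by using HttpRequestCachePolicy (a subclass of RequestCachePolicy). A minimal sketch, assuming a five-minute freshness window is what you want; the URI is a placeholder:
// Requires System.Net.Cache and System.Net.Http
var handler = new WebRequestHandler
{
    UseDefaultCredentials = true,
    // Treat cached content as fresh for up to five minutes, independent of the server's Expires header.
    CachePolicy = new HttpRequestCachePolicy(HttpCacheAgeControl.MaxAge, TimeSpan.FromMinutes(5))
};
var client = new HttpClient(handler);
var response = client.GetAsync("someUri").Result;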
Related
I've written my first API, which is a basic validation API. A client (.NET Core Desktop) application logs in with a username and password. I then check valid licences against a database and return licence details. If the (corporate) user has never used the software on this PC before, some shared initial JSON settings are downloaded.
The thing is, there is a different set of JSON settings downloaded depending on the client's site. When I update these settings in the SQL database, this change does not reflect in the returned data. Something, somewhere is caching.
This morning, I attempted a test: in the database I replaced the JSON settings value with just the letters a-f. When I call the API, however, I still get the full original JSON settings field returned!
From the Client side, I'm performing the request like this:
public async Task<IEnumerable<ISetting>> GetSettingsAsync(Guid licenceId)
{
SettingsRequest request = new() { LicenceId = licenceId };
IEnumerable<SettingsResponse> settingsResponse = null;
using (HttpClient client = Client)
{
var responseMessage = await client.PostAsJsonAsync<SettingsRequest>("api/Settings", request).ConfigureAwait(false);
...
I understand a little about server-side caching and have attempted to disable it with both:
[Authorize]
[Route("api/[controller]")]
[ApiController]
[ResponseCache(NoStore = true, Location = ResponseCacheLocation.None)]
public class SettingsController : ControllerBase
...
at the start of my controllers, and in the API's ConfigureServices method I have called:
services.AddMvc(o =>
{
o.Filters.Add(new ResponseCacheAttribute { NoStore = true, Location = ResponseCacheLocation.None });
});
But I still end up with the same result.
Another important factor: if I publish to IIS Express or Azure, I get the expected results on the first run.
But from the second run onwards on IIS Express or the Azure App Service after publishing, I receive quite old, cached results.
Something, somewhere is caching my results but I'm at a loss to know what it is. Any help much appreciated.
EDITS based on discussions below:
Response headers in Postman contain no-store, no-cache on both the first and subsequent requests. This applies when my API runs on both Azure and IIS Express.
I am using EF Core with an Azure SQL Database behind the repository pattern. I have tried adding AsNoTracking() to my database queries to eliminate the possibility of EF doing the caching (see the sketch after this list).
A quick and dirty desktop app put together to query the SQL database directly returns expected data every time.
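For reference, a minimal sketch of what that no-tracking query looks like in the repository; the DbContext, entity, and property names here are placeholders, not the real schema:
// Requires Microsoft.EntityFrameworkCore; _context is the injected DbContext.
public async Task<List<SettingsResponse>> GetSettingsAsync(Guid licenceId)
{
    return await _context.Settings
        .AsNoTracking() // bypass the change tracker so EF Core cannot hand back previously tracked (possibly stale) entities
        .Where(s => s.LicenceId == licenceId)
        .Select(s => new SettingsResponse { Json = s.Json })
        .ToListAsync();
}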
I am trying to make a GET request to a RESTful service, but the .NET HttpClient times out, whereas Postman returns the expected response (an error response, since I'm not yet "logged in") within seconds.
What I've tried:
I have checked the URLs are the same and they are.
I thought perhaps Postman's automagical headers are the issue, but after setting these in the HttpClient I still receive a timeout.
If I increase the timeout on the HttpClient, I still get a timeout from the server (504 Gateway Timeout).
Client = new HttpClient(new HttpClientHandler()
{
AutomaticDecompression = System.Net.DecompressionMethods.GZip
});
var objTask = Client.GetStringAsync(objUrl);
objTask.Wait();
strResponse = objTask.Result;
// Equivalent one-liner form:
strResponse = Client.GetStringAsync(objUrl).Result;
Why am I receiving a timeout via HttpClient?
Note:
Our WebApi calls our WCF service, which in turn calls the 3rd party RESTful service via HttpClient (for security reasons I can't call the 3rd party API directly from our WebApi controller). The website is based on the .NET 4.0 Framework.
With lots of help from @jdweng and @gunr2171, I realised the issue was caused by the proxy configured in the App.config of the self-hosted WCF services. If I commented that out, the timeout went away.
The HttpClient constructor can take an HttpClientHandler, which has a Proxy property that lets you bypass the local proxy like so:
return new HttpClient(new HttpClientHandler()
{
AutomaticDecompression = System.Net.DecompressionMethods.GZip,
Proxy = new WebProxy()
{
BypassProxyOnLocal = true
}
})
{
BaseAddress = new Uri("https://my.service")
};
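If the proxy from App.config isn't needed at all for these calls, another option (a sketch, not verified against the original WCF host configuration) is to switch proxy use off entirely on the handler rather than only bypassing it for local addresses:
return new HttpClient(new HttpClientHandler()
{
    AutomaticDecompression = System.Net.DecompressionMethods.GZip,
    // Ignore the system/App.config proxy completely for requests made through this client.
    UseProxy = false
})
{
    BaseAddress = new Uri("https://my.service")
};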
I'm using an ASP.NET Core 3 reverse proxy in Azure Kubernetes Service, loosely based on ProxyKit, which worked just fine in a previous cluster. On the new cluster (the only difference I can see is that the old one used kubenet and the new one uses an Azure virtual network), I constantly get TaskCanceledExceptions from HttpClient.SendAsync when an upstream request takes more than a few seconds.
This is the relevant method that throws the exception:
return await _httpClient.SendAsync(
UpstreamRequest,
HttpCompletionOption.ResponseContentRead,
HttpContext.RequestAborted)
.ConfigureAwait(false);
The HttpClient is provided by HttpClientFactory using the typed AddHttpClient middleware.
Things I've tried so far:
explicitly setting a 30-second timeout on the HttpClient
passing no CancellationToken to the SendAsync method
implementing custom timeout handling as suggested in this article
This is how the HttpClientFactory was configured before:
var httpClientBuilder = services
.AddHttpClient<ProxyKitClient>()
.ConfigurePrimaryHttpMessageHandler(sp => new HttpClientHandler
{
AllowAutoRedirect = false,
UseCookies = false
});
And this is the configuration right now:
var httpClientBuilder = services
.AddHttpClient<ProxyKitClient>(o => o.Timeout = Timeout.InfiniteTimeSpan)
.ConfigurePrimaryHttpMessageHandler(sp => new TimeoutHandler
{
InnerHandler = new HttpClientHandler
{
AllowAutoRedirect = false,
UseCookies = false
}
});
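For context, the TimeoutHandler referenced above isn't shown in the question; a minimal sketch of a per-request timeout handler along the lines described in the linked article (the DefaultTimeout property and the exception message are illustrative) might look like this:
public class TimeoutHandler : DelegatingHandler
{
    // Per-request timeout enforced by this handler instead of HttpClient.Timeout.
    public TimeSpan DefaultTimeout { get; set; } = TimeSpan.FromSeconds(100);

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        using (var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken))
        {
            cts.CancelAfter(DefaultTimeout);
            try
            {
                return await base.SendAsync(request, cts.Token);
            }
            catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
            {
                // Only our linked token fired, so this was a timeout rather than a caller-initiated cancellation.
                throw new TimeoutException($"Request timed out after {DefaultTimeout}.");
            }
        }
    }
}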
With either configuration, the behavior did not change at all.
How can I make sure that HttpClient waits for the upstream request to finish? The Kestrel and HttpClient default timeouts are far higher than the requests that are currently being aborted.
As a side note, when I revert to ASP.NET Core 2.2 the behavior is exactly the same.
I commented:
Task cancellation for something like SendAsync is going to occur when the client closes the connection. As such, I think you're looking in the wrong place. You need to figure out why the clients are closing the connection prematurely.
As a result, the OP was able to determine that the issue lay with Azure Application Gateway:
Makes sense, and you are totally right. Was pulling my remaining hair out the whole day and the culprit is Azure Application Gateway with a request timeout of 1 second.
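One way to confirm that diagnosis inside the proxy itself is to check whether the request-abort token had fired when the exception is caught; a minimal sketch around the SendAsync call shown earlier (the _logger call is a placeholder):
try
{
    return await _httpClient.SendAsync(
        UpstreamRequest,
        HttpCompletionOption.ResponseContentRead,
        HttpContext.RequestAborted)
        .ConfigureAwait(false);
}
catch (OperationCanceledException) when (HttpContext.RequestAborted.IsCancellationRequested)
{
    // The downstream client (or something in front of it, such as a gateway) dropped the
    // connection before the upstream call finished; this is not an HttpClient timeout.
    _logger.LogWarning("Request aborted by the caller before the upstream response arrived.");
    throw;
}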
Basically, I need to be able to make an HTTP Request to a Website on the same machine I am on, without modifying the host file to create a pointer to the domain name.
For example.
I am running the code on one website, let's say www.bobsoft.com which is on a server.
I need to make an HTTP request to www.tedsoft.com which is on the same server.
How can I make a call using a C# HttpClient without modifying the host file? Take into account that the websites are routed by bindings in IIS. I do know the domain I am going to use ahead of time, I just have to make it all internal in the code without server changes.
Thanks!
IIS bindings on the same port but with different hostnames are routed based on the HTTP Host header. The best solution here is really to configure local DNS so that requests made to www.tedsoft.com don't leave the machine. That said, if that kind of configuration isn't an option, you can easily set the Host header as part of your HttpRequestMessage.
I have 2 test sites configured on IIS.
Default Web Site - returns text "test1"
Default Web Site 2 - returns text "test2"
The following code uses http://127.0.0.1 (http://localhost also works) and sets the host header appropriately based on the IIS bindings to get the result you're looking for.
class Program
{
static HttpClient httpClient = new HttpClient();
static void Main(string[] args)
{
string test1 = GetContentFromHost("test1"); // gets content from Default Web Site - "test1"
string test2 = GetContentFromHost("test2"); // gets content from Default Web Site 2 - "test2"
}
static string GetContentFromHost(string host)
{
HttpRequestMessage msg = new HttpRequestMessage(HttpMethod.Get, "http://127.0.0.1");
msg.Headers.Add("Host", host);
return httpClient.SendAsync(msg).Result.Content.ReadAsStringAsync().Result;
}
}
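As a side note, the same thing can be expressed through the typed property on HttpRequestHeaders instead of adding the header by name:
// Equivalent to msg.Headers.Add("Host", host)
msg.Headers.Host = host;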
I have two applications.
Both on the same server
Both running as the same service account
Both require windows Auth
I'm trying to use HttpClient to call from one app to the other with a simple POST request; however, the identity doesn't seem to be used.
What I'm using looks like this:
var testIdentity = System.Security.Principal.WindowsIdentity.GetCurrent();
var handler = new HttpClientHandler()
{
UseDefaultCredentials = true
};
using (var client = new HttpClient(handler))
{
//...
HttpResponseMessage response = await client.PostAsJsonAsync("api/controller/Method", request);
response.EnsureSuccessStatusCode(); // Exception here!
//...
}
I've verified testIdentity is the service account I want to be running as, but it doesn't seem to make it. I always get a 401 response back.
I've also tested the application sending the request locally (but same domain), and the WebAPI on the server, but that doesn't work either (same 401 response).
If I have both applications local then it works as expected.
Any idea what I may be missing?
I'm a little hesitant to accept this as the answer, as I don't know the underlying cause yet; however, the issue I ran into was fixed by impersonating an account on a different domain.
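For anyone hitting the same 401, one way to take the process identity out of the equation while investigating is to supply explicit credentials instead of UseDefaultCredentials. A minimal sketch; the account, domain, and URL are placeholders, and this is not the fix the poster above settled on:
// Explicit Negotiate (Kerberos/NTLM) credentials for the target service (requires System.Net).
var credCache = new CredentialCache();
credCache.Add(new Uri("https://other-app.example.com/"), "Negotiate",
    new NetworkCredential("svcAccount", "password", "OTHERDOMAIN"));

var handler = new HttpClientHandler { Credentials = credCache };
using (var client = new HttpClient(handler) { BaseAddress = new Uri("https://other-app.example.com/") })
{
    var response = await client.PostAsJsonAsync("api/controller/Method", request);
    response.EnsureSuccessStatusCode();
}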