I've written my first API, which is a basic validation API. A client (.NET Core Desktop) application logs in with a username and password. I then check valid licences against a database and return licence details. If the (corporate) user has never used the software on this PC before, some shared initial JSON settings are downloaded.
The thing is, there is a different set of JSON settings downloaded depending on the client's site. When I update these settings in the SQL database, this change does not reflect in the returned data. Something, somewhere is caching.
This morning, I attempted a test: in the database, I replaced the JSON settings field with the letters a-f. When I call the API, however, I still get the full JSON settings field returned!
From the Client side, I'm performing the request like this:
public async Task<IEnumerable<ISetting>> GetSettingsAsync(Guid licenceId)
{
    SettingsRequest request = new() { LicenceId = licenceId };
    IEnumerable<SettingsResponse> settingsResponse = null;

    using (HttpClient client = Client)
    {
        var responseMessage = await client.PostAsJsonAsync<SettingsRequest>("api/Settings", request).ConfigureAwait(false);
        ...
I understand a little about server-side caching and have attempted to disable this with both:
[Authorize]
[Route("api/[controller]")]
[ApiController]
[ResponseCache(NoStore = true, Location = ResponseCacheLocation.None)]
public class SettingsController : ControllerBase
...
at the start of my controllers, and in my ConfigureServices method in the API I have called:
services.AddMvc(o =>
{
    o.Filters.Add(new ResponseCacheAttribute { NoStore = true, Location = ResponseCacheLocation.None });
});
But I still end up with the same result.
Another important factor: when I publish to IIS Express or Azure, I get the expected results on the first run.
From the second run onwards on IIS Express or Azure App Service after publishing, however, I receive quite old, cached results.
Something, somewhere is caching my results but I'm at a loss to know what it is. Any help much appreciated.
EDITS based on discussions below:
Response headers in Postman contain no-store, no-cache on both the first and subsequent requests. This applies when my API runs on both Azure and IIS Express.
I am using EF Core against an Azure SQL Database, with the repository pattern. I have tried adding AsNoTracking() to my database queries to eliminate the possibility of EF doing the caching.
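For reference, this is roughly how the repository reads the settings with tracking disabled (a simplified sketch; the entity and context names are placeholders, not my real code):
// _context is the injected EF Core DbContext; Setting is a placeholder entity.
public async Task<List<Setting>> GetSettingsAsync(Guid licenceId)
{
    return await _context.Settings
        .AsNoTracking()                       // EF Core will not track or reuse these entities
        .Where(s => s.LicenceId == licenceId)
        .ToListAsync();
}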
A quick and dirty desktop app put together to query the SQL database directly returns expected data every time.
Related
I am working on an email program for a WordPress WooCommerce store. I have added a webhook into the WooCommerce settings in order to hit a C# WebAPI application that I have created. I am trying to run an email process every time an order is created.
I created an API controller that accepts the JSON data sent over from WooCommerce, and when testing it locally with a Postman request, the controller is hit with no problem.
I am trying to publish the application on Azure, and when I do, I am getting a 400 error from both the WooCommerce settings as well as any Postman requests.
I have tried setting
<system.webServer>
    <httpErrors existingResponse="PassThrough"/>
</system.webServer>
in the web.config file, which then returned a 502 error from Postman, but still the same error on the WooCommerce webhook side.
I also tried re-deploying which did not work either.
I am able to view the homepage as well as the API controller link on the standard MVC homepage/menu view.
The controller is a standard API controller inheriting from the APIController class:
public class OrdersController : ApiController
and it contains one method, ProcessOrders, with an [HttpGet, HttpPost] attribute on it, and a route of /api/orders/callback.
The controller is responsible for inserting the payload from the webhook into the database, and then it uses some other classes to email out some specific information about the product.
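Roughly, the controller shape is this (a sketch only; the payload model name and return type are placeholders for my actual code):
public class OrdersController : ApiController
{
    [HttpGet, HttpPost]
    [Route("api/orders/callback")]
    public IHttpActionResult ProcessOrders(WooCommerceOrder payload) // WooCommerceOrder: placeholder for the webhook JSON model
    {
        // Insert the webhook payload into the database,
        // then use the mailing classes to send the product information.
        return Ok();
    }
}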
Is there some kind of setting that needs to be set in the Azure portal or in the web.config file? I am not very experienced with Azure, so I am not sure whether anything else needs to be done for this.
Azure Information
I am deploying to Azure using quick publish inside Visual Studio
A Pay-As-You-Go resource group
.Net Framework version 4.7
Remote debugging enabled
Web App app service
The only App service setting is WEBSITE_NODE_DEFAULT_VERSION set to 6.9.1
I have tried both Release and Debug configurations. The connection is valid when trying the Validate Connection option inside the configuration, and the File Publish Options are all unchecked (Remove additional files at destination, Precompile during publishing, Exclude files from the App_Data folder). The publish works fine and the app is accessible, except for the API controller.
The project is built as a .Net framework version 4.6.1 with Web API.
Thanks in advance for any help!
Edit
I have removed the parameter that was part of the original method being hit by the API, and I am now getting the error "No HTTP resource was found that matches the request URI". I have tried changing the Route data annotation, as well as making it just HttpGet to see if it was accessible, and it is not working. I created a second action on the controller just to test returning a string, and that worked without a problem, so I am not sure why the original is not accessible. The intro to the method is as follows:
[HttpGet]
[Route("api/orders/callback")]
public string Callback() { return "Test"; }
I also updated this method to try just returning a simple string, and it does not work. I also adjusted the second test method to accept a string POSTed to it, and it returns "The requested resource does not support http method 'POST'".
This method looks like the following:
[HttpPost]
[Route("api/orders/new")]
public string SecondCallback(string payload)
{
    return payload;
}
I am using an Azure App Service and database for my C# OData API, as the backend of my phone app.
I only have one App Service, which hosts tens of endpoints. There are times when I need to publish new versions, and I don't want any incoming requests during the deployment.
I don't mind that users are not able to finish their requests during the maintenance.
Is there anything in Azure or the API that can let me:
1. Turn off the API/App Service manually?
2. Inform the user that maintenance is in progress?
This is what I have tried:
The only thing I can come up with is this: users always have "odata" in their request URLs (e.g. https://myserverl/odata/Users),
which is set up in WebApiConfig.cs like this:
config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
I put the routePrefix (the second "odata" argument) into the web.config.
When I need to turn off access, I change my web.config (which I can still edit manually even after publishing the code to Azure) to be like this:
<add key="odata" value="noaccess" />
and in my WebApiConfig.cs:
string odata = ConfigurationManager.AppSettings["odata"].ToString();
config.MapODataServiceRoute("odata", odata, builder.GetEdmModel());
and then save the web.config, which recycles the app; all incoming requests containing "odata" will then result in an error. I can always set it back later.
This method will stop the users from sending requests during maintenance but will not let them know what is going on.
I figured it out.
When I call the server from my client, I verify that the response status code is between 200 and 299 before parsing results or doing any further processing.
So now I also check whether the response from the server is 403 (access is denied) or 503 (service unavailable). That's where I can add code to notify the user.
In Azure, simply stopping the App Service will generate one of those two error codes.
Note: you must check for both 403 and 503.
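A minimal sketch of that client-side check (this goes inside an async method; the URL is just the placeholder from above):
using (HttpClient client = new HttpClient())
{
    HttpResponseMessage response = await client.GetAsync("https://myserverl/odata/Users");

    if (response.IsSuccessStatusCode) // 200-299: parse the results as usual
    {
        string json = await response.Content.ReadAsStringAsync();
        // ...process the OData payload...
    }
    else if (response.StatusCode == HttpStatusCode.Forbidden ||        // 403
             response.StatusCode == HttpStatusCode.ServiceUnavailable) // 503
    {
        // The App Service is stopped for maintenance: notify the user here.
    }
}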
I am using a singleton dictionary collection to store a collection of users, so that users can call the API with their GUID to access their credit card details.
Is there a more secure way to store this data than a singleton dictionary collection? It's only for temporary storage; no database is involved.
I would probably question a single GUID being the keys to the kingdom in terms of credit card details... In fact, I would also question giving out full credit card details from an API under any circumstances.
But let's pretend the question is more about temporary storage in general.
In-memory storage is fine, but you will run into two problems:
1. When your app is restarted, the cache is "lost".
2. You will be unable to scale horizontally, as the cache belongs to a single app. Other machines within your server pool will have to build their own cache (and you will have other things like sticky sessions to worry about, etc.).
A better option is indeed Redis. You will need to download and install Redis (assuming you are on Windows). Obviously, when deployed, you can use something like Azure Redis. And for the code, luckily Microsoft have done most of the work for you. The following is taken mostly from this post: http://dotnetcoretutorials.com/2017/01/06/using-redis-cache-net-core/
Install this package in your .NET Core app:
Install-Package Microsoft.Extensions.Caching.Redis
In the ConfigureServices method in your Startup.cs, you need to add Redis as a cache service. You do it like so:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddDistributedRedisCache(option =>
    {
        option.Configuration = "127.0.0.1";
        option.InstanceName = "master";
    });
}
Then you can just ask for the IDistributedCache in your controllers/services. Something like the following:
public class HomeController : Controller
{
    private readonly IDistributedCache _distributedCache;

    public HomeController(IDistributedCache distributedCache)
    {
        _distributedCache = distributedCache;
    }

    [HttpGet]
    public async Task<string> Get()
    {
        // You can access _distributedCache here, e.g. write and then read back a value.
        await _distributedCache.SetStringAsync("myKey", "myValue");
        return await _distributedCache.GetStringAsync("myKey");
    }
}
What happens when the application restarts? Don't you lose the data after that? If not (i.e. your list is hardcoded), I think there is nothing wrong with a collection. But if you lose your data after each app restart and need to re-enter it again, you can try something like https://redis.io/ for caching the user collection. After an application restart you won't lose your collection data.
I have an MVC application that also includes an ASP.NET WebForm to host the MS ReportViewer Web Control. We make extensive use of WebAPI to allow posting from Knockout viewmodels on the client side.
In this application, we're making use of cookies to maintain a few minor pieces of user data--a GUID, an int, and a bool.
What we see at present is that the application works correctly until a user opens the ReportViewer. At that point, we're no longer able to read any cookies from the request headers. This has been consistently reproduced in several browsers.
Examination with Fiddler has revealed that the cookies are properly posted to the server. In the first case, the cookie value is as follows:
theCulture=en-US; ASP.NET_SessionId=uhmquapd1bgghpmfgy24oodf; .ASPXAUTH=6BC2F53F9CA0CF5A437998B206B564B28B5AB362153E6E0629C9142F9E3A0285494F674716A126E4632A932BCE12CE094FE590911CE5E97EA42D0C610A44D8462A15BA9A54760883DDF712B5B199C136413667954F094FEBA2A57826BC84702A4D90D7382E360594ABC2F9EBDCEE696B4662077F; special=theId=1077b59a-100d-429b-b223-f8f0508fdc27&staffingId=77096&isBackupUser=False
In the second case, after opening the ReportViewer, our cookies are as follows:
theCulture=en-US; ASP.NET_SessionId=uhmquapd1bgghpmfgy24oodf; .ASPXAUTH=6BC2F53F9CA0CF5A437998B206B564B28B5AB362153E6E0629C9142F9E3A0285494F674716A126E4632A932BCE12CE094FE590911CE5E97EA42D0C610A44D8462A15BA9A54760883DDF712B5B199C136413667954F094FEBA2A57826BC84702A4D90D7382E360594ABC2F9EBDCEE696B4662077F; special=theId=1077b59a-100d-429b-b223-f8f0508fdc27&staffingId=77096&isBackupUser=False; /Reserved.ReportViewerWebControl.axd%3FOpType%3DSessionKeepAlive%26ControlID%3Dc3b959ab1a7c42e6a9fed5d2762a8c86_SKA=1
At which point we can no longer read them from WebAPI. The method that reads the cookie in the WebAPI controller is this:
public OurType GetApproverInfo()
{
    OurType data = new OurType();

    CookieHeaderValue cookie = Request.Headers.GetCookies("special").FirstOrDefault();
    CookieState cookieState = cookie["special"];

    data.Id = Guid.Parse(cookieState["theId"]);
    data.StaffingId = Int32.Parse(cookieState["staffingId"]);
    data.IsBackupUser = bool.Parse(cookieState["isBackupUser"]);

    return data;
}
Anyone else seen something like this?
UPDATE: I've just learned that creating a cookie with a leading / in the name causes the same behavior in WebAPI.
What I see is that the ReportViewer is messing up the cookie. Probably you're setting up the cookie somewhere in WebApi that does not have the same pipeline execution sequence as the ReportViewer handler.
Here's the WebApi life cycle: ASP.NET Web API: HTTP Message Lifecycle
But you can override the ReportViewer server connection, using IReportServerConnection, and persist your cookie pattern there.
Take a look here: Web.config Settings for ReportViewer
Another way would be to set up the cookie in a custom IHttpModule, integrated into the ASP.NET pipeline, which would then target both WebApi and the ReportViewer.
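A rough sketch of such a module (untested; the cookie name and value are placeholders, and it must be registered in the modules section of web.config):
using System.Web;

public class SpecialCookieModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            HttpContext httpContext = ((HttpApplication)sender).Context;

            if (httpContext.Request.Cookies["special"] == null)
            {
                // Set (or restore) the cookie your application expects;
                // the value here is only a placeholder.
                httpContext.Response.Cookies.Add(new HttpCookie("special", "placeholder"));
            }
        };
    }

    public void Dispose() { }
}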
I'm developing a Web API RESTful service on the Azure platform.
I thought that the default client-side caching behavior would be to cache GET requests (since GET is idempotent and all).
To my surprise when I deployed the service to Azure all responses were sent with a Cache-Control: private header or other cache-disallowing header.
I tried the solution suggested in this question; it did work locally in IIS, but did not work once we deployed to Azure. I could not find anything in the documentation about this ability, which I thought was very basic in a RESTful service. I really hope that I'm missing something obvious; in MVC it was very easy.
tl;dr
We need to cache GET requests on the client side when using Azure and Web API.
I don't believe Azure is doing anything to you in this respect. It's a matter of you needing to specify exactly what caching properties you want for your resource(s).
With WebAPI you can control what caching properties your response has via the CacheControlHeaderValue which is accessible via the myHttpResponseMessage.Headers.CacheControl property.
Assuming you had a controller action like this:
public Foo Get(int id)
{
    Foo myFoo = LoadSomeFooById(id);
    return myFoo;
}
You'll need to do something like this to take control of the caching explicitly:
public HttpResponseMessage Get(int id)
{
    Foo myFoo = LoadSomeFooById(id);

    HttpResponseMessage myHttpResponseMessage = this.Request.CreateResponse(HttpStatusCode.OK, myFoo);

    CacheControlHeaderValue cacheControlHeaderValue = new CacheControlHeaderValue();
    cacheControlHeaderValue.Public = true;
    cacheControlHeaderValue.MaxAge = TimeSpan.FromMinutes(30);

    myHttpResponseMessage.Headers.CacheControl = cacheControlHeaderValue;

    return myHttpResponseMessage;
}
Many of the other caching-related properties you'd expect are also available on the CacheControlHeaderValue class; this is just the most basic example.
Also, bear in mind my example is extremely brute-force/simplistic in that all the caching behavior/logic sits right there in the action method. A much cleaner implementation might be an ActionFilterAttribute that contains all the caching logic, based on attribute settings, and applies it to the HttpResponseMessage. Then you could revert to the more model-centric action method signature because you would, in this case, no longer need access to the HttpResponseMessage at that level. As usual, there are many ways to skin the cat, and you have to determine which works best for your specific problem domain.
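Something along these lines (an untested sketch against Web API 2; the attribute name and defaults are made up for illustration):
using System;
using System.Net.Http.Headers;
using System.Web.Http.Filters;

public class ClientCacheAttribute : ActionFilterAttribute
{
    public int MaxAgeMinutes { get; set; }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        if (actionExecutedContext.Response != null)
        {
            // Apply the caching policy declared on the action to the outgoing response.
            actionExecutedContext.Response.Headers.CacheControl = new CacheControlHeaderValue
            {
                Public = true,
                MaxAge = TimeSpan.FromMinutes(MaxAgeMinutes)
            };
        }

        base.OnActionExecuted(actionExecutedContext);
    }
}

// Usage: keep the model-centric signature and decorate the action instead, e.g.
// [ClientCache(MaxAgeMinutes = 30)]
// public Foo Get(int id) { ... }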
Take a look at this post: http://forums.asp.net/post/4939481.aspx. It implements caching as an attribute that modifies the HTTP response.
Disclaimer: I haven't tried it.
I would recommend this: https://github.com/filipw/AspNetWebApi-OutputCache
It's simple, quick, and has various options for caching.
Hope that helps.