We are building an MVC application in which a large amount of static data has to be loaded when the user logs in for the first time.
Luckily, most of the data loaded during login is master data and doesn't change for any user.
But since the master data is huge, we felt it best to implement caching on the server side, as the browser might not be able to hold all of it.
I read a CodeProject post on OutputCache by a Microsoft MVP, in which he clearly explained what the cache does and what to keep in mind while using it.
So I implemented all that he suggested in my controller by just adding the line
[OutputCache(Duration = 10, VaryByParam = "none",
    Location = OutputCacheLocation.Server, NoStore = true)]
above my action method.
But I could not tell whether the data is being served from the cache or whether another server hit is happening.
So my first question is: how do I verify whether the output cache is working or not?
And then, in our previous MVC applications we used HttpContext.Current.Cache, which worked absolutely fine.
So here is my second question: why should I prefer OutputCache over HttpContext.Current.Cache, or vice versa?
What difference do they make when caching an application?
EDIT 1:
This is the method in my login view's controller:
public ActionResult GetRegions(string Ids)
{
    objRegionsResult = GetRegionsList();
    if (!string.IsNullOrEmpty(Ids))
        objRegionsResult = objRegionsResult.Where(x => Ids.Split(',').Contains(x.Type.ToString())).ToList();
    return Json(objRegionsResult, JsonRequestBehavior.AllowGet);
}

private List<MORegionMaster> GetRegionsList()
{
    RequestUri = "Home/GetRegions";
    HttpResponseMessage response = ConnectAPI(RequestUri);
    if (response.IsSuccessStatusCode)
    {
        objRegionsResult = response.Content.ReadAsAsync<List<MORegionMaster>>().Result;
    }
    return objRegionsResult;
}
The above method is where I hit the API controller, which in turn hits the business logic class and subsequently the database, and returns the data.
We use OutputCache when we want to cache the result of an action (not static files, but the result of the business logic). We use it when we want to serve the same result to all users for a particular duration.
We use HttpContext.Current.Cache when we want to cache a data object ourselves in code, e.g. caching the "current logged-in user" object to avoid repeated DB hits.
Note that neither cache is limited to the current HTTP request: the output cache holds the rendered action result for the configured duration, and HttpContext.Current.Cache is the application-wide data cache (the same store as HttpRuntime.Cache), whose entries live until they expire or are evicted. For state that should live only for the current request, HttpContext.Current.Items is the per-request store.
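To make the distinction concrete, here is a minimal sketch (the cache key, duration, and helper names are illustrative, not from the question):

// Declarative: cache the rendered JSON result of this action for all users.
[OutputCache(Duration = 600, VaryByParam = "none")]
public ActionResult GetRegionsCached()
{
    return Json(GetRegionsList(), JsonRequestBehavior.AllowGet);
}

// Manual: cache the data object itself in the application-wide cache.
private List<MORegionMaster> GetRegionsListCached()
{
    var regions = System.Web.HttpContext.Current.Cache["RegionsList"] as List<MORegionMaster>;
    if (regions == null)
    {
        regions = GetRegionsList(); // the expensive API/database call
        System.Web.HttpContext.Current.Cache.Insert("RegionsList", regions, null,
            DateTime.UtcNow.AddMinutes(10), System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return regions;
}

As for the first question (is the output cache working?), one easy check is to include DateTime.Now in the cached action's response, or to set a breakpoint inside the action: while the cached copy is being served, repeated requests within the cache duration return the identical timestamp and the breakpoint is never hit.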
Related
I have a system where, at some point, the user will be locked to a single page. In this situation his account is locked and he cannot be redirected to any other page; this happens after authentication.
The verification is done using page filters accessing the database. To improve performance I have used the memory cache.
However, the result wasn't as expected, because once the cache entry is set for a single user it affects all the others.
As far as I know you can separate caching per user with tag helpers, but I have no idea if this is possible in code.
public async Task<IActionResult> Iniciar(int paragemId, string paragem)
{
    var registoId = Convert.ToInt32(User.GetRegistoId());
    if (await _paragemService.IsParagemOnGoingAsync(registoId))
    {
        return new JsonResult(new { started = false, message = "Já existe uma paragem a decorrer..." });
    }
    else
    {
        await _paragemService.RegistarInicioParagemAsync(paragemId, paragem, registoId);
        _registoService.UpdateParagem(new ProducaoRegisto(registoId)
        {
            IsParado = true
        });
        await _registoService.SaveChangesAsync();
        // This entry is shared by every user, which is the root of the problem.
        _cache.Set(CustomCacheEntries.RecordIsParado, true, DateTimeOffset.Now.AddHours(8));
        return new JsonResult(new { started = true, message = "Paragem Iniciada." });
    }
}
Here I only check whether the user's account is blocked in the database, without checking the cache first, and then create the cache entry.
Every user gets locked because of this.
So my point is: is there a way to achieve this in code, like the tag helpers do?
The CacheTagHelper is different from the cache in general. It works via the request and can therefore vary on things like headers or cookie values. Using MemoryCache or IDistributedCache directly is lower-level; you're just adding values for keys, so there's nothing there to "vary" on.
That said, you can compose your key using something like the authenticated user's id, which would then give each user a unique entry in the cache, i.e. something like:
var cacheKey = $"myawesomecachekey-{User.FindFirstValue(ClaimTypes.NameIdentifier)}";
Short of that, you should use session storage, which is automatically unique to the user, because it's per session.
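A sketch of that approach, assuming session support is enabled in Startup (the key name is illustrative):

// Store the flag in this user's session.
HttpContext.Session.SetString("IsParado", "true");

// On a later request from the same user, read it back.
var isParado = HttpContext.Session.GetString("IsParado") == "true";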
There are several alternatives to the cache. Please see this link, which describes them in greater detail.
Session State
An alternative would be to store the value in session state. This way, the session of one user does not interfere with those of others.
However, there are some downsides to this approach. If the session state is kept in memory, you cannot run your application in a server farm, because one server does not know about the session memory of the others. So you would need to keep the session state in a cache (Redis?) or a database.
In addition, as session memory is stored on the server, users cannot change it and thereby avoid the redirection you are trying to implement. The downside is that this reduces the number of users your server can handle, because the server needs a certain amount of memory per user.
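For the server-farm case, here is a sketch of wiring session state to Redis in Startup, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package (the connection string is illustrative):

public void ConfigureServices(IServiceCollection services)
{
    // Back the session store with Redis so every server in the farm shares it.
    services.AddStackExchangeRedisCache(options =>
        options.Configuration = "localhost:6379");
    services.AddSession();
}

public void Configure(IApplicationBuilder app)
{
    app.UseSession(); // must run before the endpoints that read the session
    // ... rest of the pipeline
}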
Cookies
You can send a cookie to the client and check for this cookie when the next request arrives at your server. The downside of this approach is that the user can delete the cookie. If the only consequence of a missing cookie is an extra request to the database, this is negligible.
You can use session cookies, which the browser discards when the browser session ends.
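A sketch of both sides in a controller (the cookie name is illustrative):

// Issue a session cookie (no Expires set, so the browser drops it on close).
Response.Cookies.Append("paragem-locked", "1", new CookieOptions
{
    HttpOnly = true // not readable from client-side JavaScript
});

// On a later request, check it before going to the database.
if (Request.Cookies.TryGetValue("paragem-locked", out var locked) && locked == "1")
{
    // treat the user as locked without a DB round-trip
}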
General
Another hint: you need to clear the state when a user signs out, so that on the next sign-in the state is correctly set up for the new user.
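Combined with the per-user cache key from above, that clean-up might look like this (a sketch; the action and redirect are illustrative):

public async Task<IActionResult> Logout()
{
    // Drop this user's cached flag so the next sign-in starts clean.
    var cacheKey = $"myawesomecachekey-{User.FindFirstValue(ClaimTypes.NameIdentifier)}";
    _cache.Remove(cacheKey);

    await HttpContext.SignOutAsync();
    return RedirectToAction("Index", "Home");
}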
In a data-driven web application, I have several Razor views containing tables that were created using Tabulator and are updated by self-calling AJAX functions on an interval. In most of the views the number of tables is no more than three, and network performance and the lag between updates are fairly good. I am currently creating a view that houses over six tables, and am seeing an uptick in dropped requests, stagnation in the data, and general slowness that seems directly attributable to the increased number of concurrent requests on the page. I am by no means a JavaScript or MVC expert, and am trying to work out whether my code is just inefficient or the approach itself is wrong.
Essentially, each Tabulator table has an AJAX function that hits an endpoint at the MVC layer, which routes to a WebApi2 endpoint to retrieve the latest data; finally, one of Tabulator's loading functions loads the data into the table. The data layer is SQL using Entity Framework. I have tried several implementations built around this process, but below is the approach I am currently using.
// Function for waiting in between calls
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

// Function for getting data from the endpoint
async function fetch() {
    $.ajax({
        type: "GET",
        url: "www.mydata.com", // placeholder URL
    }).done(async function (data) {
        // Tabulator function for replacing existing data with new
        table.replaceData(data);
        await sleep(10000);
        fetch();
    }).fail(async function () {
        await sleep(10000);
        fetch();
    });
}

// Call the fetch function the first time; it calls itself afterwards
fetch();
Each table in the view runs a version of the code snippet above. I added the sleep functions hoping to introduce some delay between requests. It is also worth pointing out that there is increased congestion at the ASP.NET MVC layer: it takes longer to service requests made by the user to create a new entity, due to all the other requests flying back and forth.
To spread out the requests to the server, you could change the way your tables are configured.
I'm assuming that at the moment you are using the ajaxURL property to set the table's URL, which causes the table to make the request on load.
var table = new Tabulator("#example-table", {
    ajaxURL:"http://www.getmydata.com/now", //ajax URL
});
This would result in all six tables making the request at the same time.
There are a couple of different approaches you could take:
Delay Requests
You could instead leave this field out of your table definition (causing the table to be empty on load), and then load the data later using the setData function. You could wrap this in a setTimeout function:
//wait one second before loading data
setTimeout(function(){
    table.setData("http://www.getmydata.com/now");
}, 1000);
If you set each table with a different timeout, they will load in a staggered fashion.
Bundle Requests
Instead of letting each table retrieve its own data, you could retrieve it all yourself in a single AJAX request to the server, and then hand it out to the tables using the setData function.
Although if you are loading significant amounts of data, this single request could itself cause delays.
Progressive Loading
If you are transferring large amounts of data, this will certainly slow down the requests.
In these cases, the ajaxProgressiveLoad functionality of Tabulator lets you paginate your data and send it to the table in smaller chunks, one after the other, which won't tie up your server continuously for so long:
var table = new Tabulator("#example-table", {
    ajaxURL:"http://www.getmydata.com/now", //ajax URL
    ajaxProgressiveLoad:"load", //sequentially load all data into the table
});
More details on this option can be found in the Progressive Ajax Loading documentation.
I have a Web API providing the backend for an AngularJS web application. The backend API needs to track the state of user activities. (Example: it needs to note which content ID a user last retrieved from the API.)
Most access to the API is authenticated via username/password. For these instances, it works fine for me to store the user state in our database.
However, we do need to allow "guest" access to the service. For guests, the state does need to be tracked, but it should not be persisted long-term (session-level tracking, in effect). I'd rather not generate "pseudo-users" in our user table just to store state for guests, which doesn't need to be kept for any significant period of time.
My plan is to generate a random value and store it in the client as a cookie (for guests only; we use bearer authentication for authenticated users). I would then store whatever state is necessary in an in-memory object, such as a Dictionary, using the random value as the key. I could then expire items off the dictionary periodically. It is perfectly acceptable for this data to be lost if the Web API is ever relaunched, and it would even be acceptable for the dictionary to be reset, say, every day at a certain time.
What I don't know how to do in Web API is create the dictionary so that it persists across Web API calls. I basically need a singleton dictionary that maintains its contents for as long as the server is running the Web API (barring a scheduled clearing or programmatic flush).
I had the idea of dumping the Dictionary to disk every time an API call is made and reading it back in when it's needed, but this does not allow for multiple simultaneous in-flight requests. The only other method I can think of right now is to add another database table (guest_state or something) that replicates the users table, and then set up some manual process to regularly clean out the data in the guest table.
Summary: what I need is
a way to store some data persistently in a Web API backend without having to go off to a database
preferably store this data in a Dictionary object so I can use randomly-generated session IDs as the key, and an object to store the state
the data is OK to be cleared after a set period of time or on a regular basis (not too frequently; say a minimum of six hours' persistence)
I figured out a solution using the Singleton pattern:
public static class Services
{
    private static Dictionary<string, string> cache;
    private static object cacheLock = new object();

    public static Dictionary<string, string> AppCache
    {
        get
        {
            // Lazily create the dictionary; the lock guards first-time creation.
            // (Note: Dictionary itself is not thread-safe for concurrent writes;
            // under real load a ConcurrentDictionary would be the safer choice.)
            lock (cacheLock)
            {
                if (cache == null)
                {
                    cache = new Dictionary<string, string>();
                }
                return cache;
            }
        }
    }
}
public class testController : ApiController
{
    [HttpGet]
    public HttpResponseMessage persist()
    {
        HttpResponseMessage hrm = Request.CreateResponse();
        hrm.StatusCode = HttpStatusCode.OK;

        // Add a new entry, then echo the whole cache back as plain text.
        Services.AppCache.Add(Guid.NewGuid().ToString(), DateTime.Now.ToString());
        string resp = "";
        foreach (string s in Services.AppCache.Keys)
        {
            resp += String.Format("{0}\t{1}\n", s, Services.AppCache[s]);
        }
        resp += String.Format("{0} records.", Services.AppCache.Keys.Count);
        hrm.Content = new StringContent(resp, System.Text.Encoding.ASCII, "text/plain");
        return hrm;
    }
}
It seems the Services.AppCache object successfully holds onto data until either the idle timeout expires or the application pool recycles. Luckily I can control all of that in IIS, so I moved my app to its own app pool and set up the idle timeout and recycling as appropriate, based on when I'm OK with the data being flushed.
Sadly, if you don't have control over IIS (or can't ask the admin to change the settings for you), this may not work if the default expirations are too soon for you. At that point, using something like a LocalDB file or even a flat JSON file might be more useful.
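If you'd rather not depend on IIS settings for expiry, a variant of the same idea using System.Runtime.Caching.MemoryCache gives you per-entry expiration (a sketch; the class name and the six-hour window are illustrative):

using System.Runtime.Caching;

public static class GuestState
{
    // MemoryCache is thread-safe, unlike a bare Dictionary.
    private static readonly MemoryCache cache = MemoryCache.Default;

    public static void Set(string sessionId, string state)
    {
        // The entry silently disappears six hours after it is written.
        cache.Set(sessionId, state, DateTimeOffset.Now.AddHours(6));
    }

    public static string Get(string sessionId)
    {
        return cache.Get(sessionId) as string;
    }
}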
When the user makes a selection and clicks a button, I call:
public ActionResult Storage(String data)
{
    Session["Stuff"] = data;
    return null;
}
Then, I redirect them to another page where the data is accessed by
#Session["Stuff"]
So far, I'm happy. What I do next is that, upon a click on a button on the new page, I call:
public ActionResult Pdfy()
{
    Client client = new Client();
    byte[] pdf = client.GetPdf("http://localhost:1234/Controller/SecondPage");
    client.Close();
    return File(pdf, "application/pdf", "File.pdf");
}
Please note that the PDF generation itself works perfectly well. The problem is that when I access the second page a second time (it's being seen by the user and looks great both in the original and on reload), it turns out that Session["Stuff"] is suddenly null!
Have I started a new session with the recall?
How do I persistently retain the data stored in Session["Stuff"]?
If you're simply storing string data (as would be indicated by your method signature) in an MVC application, don't.
It's far easier to pass the data as a query parameter to each method that needs it. That is easier to manage and doesn't rely on session stickiness.
To generate the appropriate links, you can pass data to your views and use Html.ActionLink to generate your links with the appropriate parameter data.
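For example, in the view that links to the second page (controller, action, and property names here are illustrative):

@Html.ActionLink(
    "Show second page",        // link text
    "SecondPage",              // action that needs the data
    "Home",                    // controller
    new { data = Model.Data }, // route values carry the data in the query string
    null)                      // html attributes

The receiving action then declares public ActionResult SecondPage(string data) and reads the value from its parameter instead of from Session.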
Here are several reasons why the session variable could return null:
null is passed into Storage
Some other code sets Session["Stuff"] to null
The session times out
Something calls Session.Clear() (or Session.Abandon())
The underlying AppPool is restarted on the server
Your web server is farmed and session state is not distributed properly
The first two can be discovered by debugging.
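For those first two, a quick way to see what is actually happening is a trace in the action that writes the value (a sketch):

public ActionResult Storage(String data)
{
    // Log what actually arrives; reason 1 shows up here as "<null>".
    System.Diagnostics.Debug.WriteLine("Storage received: " + (data ?? "<null>"));
    Session["Stuff"] = data;
    return null;
}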
This is the current code in the ASP.NET MVC 2 (RTM) System.Web.Mvc.AuthorizeAttribute class:
public virtual void OnAuthorization(AuthorizationContext filterContext)
{
    if (filterContext == null)
    {
        throw new ArgumentNullException("filterContext");
    }

    if (this.AuthorizeCore(filterContext.HttpContext))
    {
        HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
        cache.SetProxyMaxAge(new TimeSpan(0L));
        cache.AddValidationCallback(
            new HttpCacheValidateHandler(this.CacheValidateHandler), null);
    }
    else
    {
        filterContext.Result = new HttpUnauthorizedResult();
    }
}
So if I'm 'authorized', it does some caching stuff; otherwise it returns a 401 Unauthorized response.
Question: what do those three caching lines do?
cheers :)
This code exists to allow you to put both [OutputCache] and [Authorize] together on an action without running the risk of having a response that was generated for an authorized user cached and served to a user that is not authorized.
Here's the source code comment from AuthorizeAttribute.cs:
Since we're performing authorization at the action level, the authorization code runs after the output caching module. In the worst case this could allow an authorized user to cause the page to be cached, then an unauthorized user would later be served the cached page. We work around this by telling proxies not to cache the sensitive page, then we hook our custom authorization code into the caching mechanism so that we have the final say on whether a page should be served from the cache.
So just what is this attribute doing? It first disables proxy caching of this response, as proxies can't make the proper determination of which users are or are not authorized to view it. And if a proxy serves the response to an unauthorized user, this is a Very Bad Thing.
Now what about AddValidationCallback? In ASP.NET, the output caching module hooks events that run before the HTTP handler. Since MVC is really just a special HTTP handler, this means that if the output caching module detects that this response has already been cached, the module will just serve the response directly from cache without going through the MVC pipeline at all. This is also potentially a Very Bad Thing if the output cache serves the response to an unauthorized user.
Now take a closer look at CacheValidateHandler:
private void CacheValidateHandler(HttpContext context, object data, ref HttpValidationStatus validationStatus) {
    validationStatus = OnCacheAuthorization(new HttpContextWrapper(context));
}

// This method must be thread-safe since it is called by the caching module.
protected virtual HttpValidationStatus OnCacheAuthorization(HttpContextBase httpContext) {
    if (httpContext == null) {
        throw new ArgumentNullException("httpContext");
    }

    bool isAuthorized = AuthorizeCore(httpContext);
    return (isAuthorized) ? HttpValidationStatus.Valid : HttpValidationStatus.IgnoreThisRequest;
}
This effectively just associates the AuthorizeCore method with the cached response. When the output cache module detects a match, it will re-run the AuthorizeCore method to make sure that the current user really is allowed to see the cached response. If AuthorizeCore returns true, it's treated as a cache hit (HttpValidationStatus.Valid), and the response is served from cache without going through the MVC pipeline. If AuthorizeCore returns false, it's treated as a cache miss (HttpValidationStatus.IgnoreThisRequest), and the MVC pipeline runs as usual to generate the response.
As an aside: since a delegate is formed over AuthorizeCore (capturing the particular AuthorizeAttribute instance) and saved in a static cache, all types subclassing AuthorizeAttribute must be thread-safe.
The call to AuthorizeCore validates whether the request is authorized.
If it is, it registers a validation callback via AddValidationCallback in order to test whether the cached output is still valid according to the cache policy. If so, the cached output is sent to the client.
Regarding the three lines for caching:
Well, first of all you should understand that an output cache must be correct, or as correct as possible. To gauge its "correctness", the system tests whether it meets certain conditions (e.g. it has not been modified).
That is the work done in those three lines.