Is it possible to share HttpRuntime.Cache between multiple web servers? - c#

We have a web application that is storing all the site data in HttpRuntime.Cache.
We now need to deploy the application across 2 load balanced web servers.
This being the case, each web server will have its own cache, which is not ideal: if a user requests data from webserver1 it will be cached there, but their next request might go to webserver2, where the data cached by the previous request won't be available.
Is it possible to use a shared-cache provider to share the HttpRuntime.Cache between the two web servers or to replicate the cache between them, so that the same cache will be available on both web servers? If so, what can I do to solve this problem?

No, you can't share the built-in ASP.NET cache, but you could use something like memcached or AppFabric instead.

Nope, it's not possible. You have to use a so-called distributed cache, like Microsoft AppFabric Caching or the very popular open source product memcached.
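Whichever product you pick, the usage pattern on each web server is the same cache-aside lookup. A minimal sketch, assuming a hypothetical ICache wrapper over your chosen provider (memcached, AppFabric, etc.); Product and LoadFromDatabase are placeholders too:

using System;

public class Product { public int Id; public string Name; }

public interface ICache
{
    object Get(string key);
    void Set(string key, object value, TimeSpan expiry);
}

public class ProductRepository
{
    private readonly ICache _cache;
    public ProductRepository(ICache cache) { _cache = cache; }

    public Product GetProduct(int id)
    {
        string key = "product:" + id;
        // Check the shared cache first; both web servers see the same entry.
        var cached = _cache.Get(key) as Product;
        if (cached != null)
            return cached;

        var product = LoadFromDatabase(id); // fall back to the database
        _cache.Set(key, product, TimeSpan.FromMinutes(10));
        return product;
    }

    private Product LoadFromDatabase(int id)
    {
        return new Product { Id = id }; // placeholder for the real query
    }
}

Because the cache lives in its own tier, it no longer matters which web server handles the request.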

It sounds from your question that you've got user data in the cache? In that case I'd be with Aliostad and say don't go there!
HttpRuntime cache should be used for static but regularly used items that come from the database; the main purpose should be preventing database hits that would otherwise occur on every request regardless of user. So things like options in a combobox or certain configuration settings.
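For that kind of static lookup data the usual pattern is check-then-insert against HttpRuntime.Cache; a minimal sketch (the key name and LoadOptionsFromDb are placeholders):

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class LookupCache
{
    public static List<string> GetComboOptions()
    {
        const string key = "combo-options";
        var options = HttpRuntime.Cache[key] as List<string>;
        if (options == null)
        {
            options = LoadOptionsFromDb(); // one db hit, then served from memory
            HttpRuntime.Cache.Insert(key, options, null,
                DateTime.UtcNow.AddMinutes(30),  // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return options;
    }

    private static List<string> LoadOptionsFromDb()
    {
        return new List<string>(); // placeholder for the real query
    }
}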
If you do genuinely need caching for user data, then as above: memcached, AppFabric or NVelocity.
There are layers of caching suitable for different needs, and having only 2 web servers suggests that you don't yet require the distributed caching frameworks above.
What is the server load, and what is the limiting factor: CPU, RAM, network bandwidth? On your DB or on your web servers? Each of these points to a different caching strategy.

Don't go there. Normally, the cache being a static object, it only lives in the AppDomain. Manually keeping these in sync is a world of pain, and I strongly advise against it.
You can use a number of caching solutions that sit in front of your server for that kind of purpose.

Related

How to hold temporary object list in each .net core API request?

In my .NET Core API, a single request executes more than 200 modules and makes around 500 database calls; many of the SQL queries are run several times in a loop with the same parameters. My application is deployed on AWS EKS.
How can I hold a temporary object list for the duration of a .NET Core API request, so that we can reduce the database calls? It would also reduce the response time.
I could do this using a static class, but that can impact the load on an AWS EKS container.
So, without using session state, is there a mechanism or best practice for holding the data and reusing it, instead of running the same SQL query again?
There are a few strategies; you're talking about caching responses.
You can use IMemoryCache and keep the cache within your application. The "downside" is that if you have multiple instances running at the same time, they will all have different caches, because memory is, well, memory. So if you have one instance and the data is not really that big, that's fine.
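A minimal sketch of that approach, assuming IMemoryCache is registered with services.AddMemoryCache() (Product, the key format and LoadFromDbAsync are placeholders):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class Product { public int Id { get; set; } }

public class ProductService
{
    private readonly IMemoryCache _cache;
    public ProductService(IMemoryCache cache) { _cache = cache; }

    public Task<List<Product>> GetProductsAsync(string category)
    {
        // GetOrCreateAsync runs the factory only on a cache miss;
        // note this cache is per instance, not shared across pods.
        return _cache.GetOrCreateAsync("products:" + category, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return LoadFromDbAsync(category);
        });
    }

    private Task<List<Product>> LoadFromDbAsync(string category)
    {
        return Task.FromResult(new List<Product>()); // placeholder for the real query
    }
}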
Another approach is to use something like Redis: store keys with their respective responses, and try to get the value from there before calling your actual database or source of data.
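A sketch of that, using the IDistributedCache abstraction with the Redis provider (registered with services.AddStackExchangeRedisCache(...)); the type names and key format are placeholders:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class Product { public int Id { get; set; } }

public class CachedProductReader
{
    private readonly IDistributedCache _cache;
    public CachedProductReader(IDistributedCache cache) { _cache = cache; }

    public async Task<Product> GetAsync(int id)
    {
        string key = "product:" + id;
        // Check Redis first; every pod in the cluster shares this cache.
        string json = await _cache.GetStringAsync(key);
        if (json != null)
            return JsonSerializer.Deserialize<Product>(json);

        var product = await LoadFromDbAsync(id);
        await _cache.SetStringAsync(key, JsonSerializer.Serialize(product),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
        return product;
    }

    private Task<Product> LoadFromDbAsync(int id)
    {
        return Task.FromResult(new Product { Id = id }); // placeholder for the real query
    }
}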
Another approach is to use an edge cache in front of your hosting, something like Cloudflare, to cache specific endpoints with specific parameters where possible. (E-commerce sites usually use that, for example: caching a list of products by filters, or caching a product page.)
Personally, on my private projects, I use a mix of memory and Redis: memory for application configuration and things that are important for the app itself, while client data almost always goes to Redis, with a few endpoints running under no-cache policies.

How to properly share an object in my controller between methods with ASP .NET MVC?

I'm working with ASP.NET and I want to load a big object (specific to each user) once in my controller and then use it in my view.
I thought about a static property but I found some problems with it.
See : Is it a bad idea to use a large static variable?
I'm not familiar with this language and I have no idea how to properly share the same object between the different methods for each user. Could you tell me how to do that? Do you know if a singleton could be a solution?
A singleton won't help you here if this big object is going to be different for every user.
Without knowing all the details, I'd say perhaps your best option is to store it in the Session object:
HttpContext.Current.Session["bigObject"] = bigObject;
Depending on the size and traffic, this can have performance problems, so I'd suggest you read up on the pros and cons of using Session.
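Reading it back requires a cast, and you should handle the entry having expired; a rough sketch (inside an MVC controller you can use the Session property directly; BigObject and LoadBigObjectFor are placeholders):

using System.Web.Mvc;

public class BigObject { }

public class HomeController : Controller
{
    public ActionResult Index()
    {
        var bigObject = Session["bigObject"] as BigObject;
        if (bigObject == null)
        {
            bigObject = LoadBigObjectFor(User.Identity.Name); // rebuild if expired
            Session["bigObject"] = bigObject;
        }
        return View(bigObject);
    }

    private BigObject LoadBigObjectFor(string userName)
    {
        return new BigObject(); // placeholder for the expensive load
    }
}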
If you only need something for the next, subsequent request, then use TempData: a bucket where you can hold data that is needed just for the following request. That is, anything you put into TempData is discarded after the next request completes.
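For example (BigObject is a placeholder type; TempData typically carries data across a redirect):

using System.Web.Mvc;

public class BigObject { }

public class WizardController : Controller
{
    public ActionResult Step1()
    {
        TempData["bigObject"] = new BigObject(); // kept for the next request only
        return RedirectToAction("Step2");
    }

    public ActionResult Step2()
    {
        var bigObject = TempData["bigObject"] as BigObject; // discarded after this request
        return View(bigObject);
    }
}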
If you want to persist information specific to a user, then go for Session. With session you have a timeout, so after a certain amount of time the data stored in the session will be lost; you can configure this value to be much bigger. Apart from that, when you move to a web farm of servers, maintaining session becomes a problem: you will probably need to manage session state in SQL Server or some other out-of-process store.
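Both the timeout and the out-of-process store are configured in web.config; a sketch, assuming the session database has been provisioned with the aspnet_regsql.exe tool (the server name is a placeholder):

<configuration>
  <system.web>
    <!-- timeout is in minutes; SQLServer mode shares session across a web farm -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI"
                  timeout="60" />
  </system.web>
</configuration>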
Alternatively, you can use the runtime Cache object in ASP.NET MVC to keep all the common data. Cached data can be accessed quickly, and you get other benefits such as cache expiration. You can share this Cache object across users, or maintain different cache entries for different users; that is purely dependent on your logic. In the case of web farms, you have distributed cache servers like Redis, the Azure Cache service, etc., which you can use.

handling large datasets in web api & odata

I have been working with ASP.NET Web API over recent weeks with great success. It has really assisted me with producing an interface for mobile clients to program against over HTTP.
I reached a point where I need some assistance.
I have a new endpoint which will scan a database and could return 100K results. I am using OData to filter the data and return a paginated set of it.
As this could happen for multiple requests, I am concerned with performance. Returning 100K records from the database every time is not ideal. So I have some ideas.
The first one is to cache the 100K results and let OData do its magic on them every time. I am working with the AppFabric distributed cache, as it's a load-balanced environment. However, caching such an amount of data in AppFabric could lead to memory complications, so I think I am best avoiding this.
The next option is to forget about the magic of OData and pass the filters I use down to the database, returning only the required data each time. In other words, hit the db every time.
I could look at using a caching handler like the one outlined in this article to cache in the HTTP cache -> http://byterot.blogspot.ie/2012/06/aspnet-web-api-caching-handler.html The drawback of this is that if the data gets updated via another system, which it may, the cached data is not expired.
Any other tips as to how I may handle this scenario, large amount of data, filtered with odata in conjunction with web api?
This is a question that's likely to result in a wide variety of answers. That said, let me put on my pre-MSFT hat and give you my two cents.
A lot of architecture questions are best answered with the consultant's answer, "It depends." The answer depends in your case on a few things specifically. Some developers have a problem with caching layers because there are additional things to think about. An ACID-compliant database buys you a lot of insurance that you have at least a very finite amount of eventual consistency.
If it were me making this decision, I would be considering a few things:
How many rows am I returning on a regular basis?
Are they the same rows over and over?
How big is that in memory? (100k is really not that many rows; you're right about not wanting those 100k rows to hit the disk every time, but it's probably not a problem to keep them all in memory; SQL Server would probably do this for you anyway.)
What am I willing to deal with re: eventual consistency? Do I want some other software to deal with it? (What frequently scares people about caches are things like ensuring that invalidation and insertion get done properly and consistently from different applications/different places in the application.)
Given the information you've already provided (tiered architecture, willingness to try a distributed cache) I think you should pursue a caching layer. There are lots of good caches out there. AppFabric worked fine for us before I worked at Microsoft, but I've also dealt with a variety of other caching layers as well.
Assuming you use Entity Framework, the best option would be to return EF's IQueryable directly. This way the magic of OData works directly against your database: $top and $skip are mapped straight into your SQL query.
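A sketch of that approach in a Web API controller (the attribute is [EnableQuery] in current OData packages, [Queryable] in the original ones, and its namespace varies by package version; the EF types here are placeholders):

using System.Data.Entity;
using System.Linq;
using System.Web.Http;
using System.Web.Http.OData; // System.Web.OData in the OData v4 package

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public class ProductsController : ApiController
{
    private readonly AppDbContext _db = new AppDbContext();

    // PageSize caps how many rows one request can pull back.
    [EnableQuery(PageSize = 100)]
    public IQueryable<Product> Get()
    {
        // No ToList() here: $top/$skip/$filter get translated into the SQL
        // query, so the 100K rows are never all materialized.
        return _db.Products;
    }
}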
The best way is to use a distributed cache, which you are already doing. But the cache provider you are using, i.e. AppFabric, has some limitations; by limitations I mean feature limitations. Check out NCache, which is a mature and feature-rich third-party distributed cache provider.
If you want to understand the differences between NCache and AppFabric, check the YouTube link below, FYI:
http://www.youtube.com/watch?v=3CPi1QlskrU
The caching that I have pointed out in the blog http://byterot.blogspot.ie/2012/06/aspnet-web-api-caching-handler.html applies to HTTP caching, also known as output caching. The data itself is not actually cached on the server but on the client or on mid-stream cache servers, so it is not suitable for what you have in mind.

How can I cache private data in a webfarm for ASP MVC

I am making a member-based web app in ASP.NET MVC 3, and I am trying to plan ahead. At first our user base will not be huge, but as with any software, the potential for a sudden volume spike is always a possibility.
Thinking ahead to this scenario, I know that the database is the bottleneck area in most web apps. We are using MSSQL 2008 R2. We will have dedicated servers with several client databases; each client has their own database, so if one server begins to bottleneck we can scale vertically, or move some of the databases to a new server and begin filling it up.
To access the databases we primarily use LINQ to SQL, and we are currently refactoring some of our code to make use of the IQueryable mechanisms to lazy load content. But each page contains quite a bit of content from various parts of the database.
We also have a few large databases that are used for widgets in the program that rarely change but have millions of rows. The goal with those is to somehow sync them to the primary source and distribute them across several machines and then load balance those servers.
With this layout should I even worry about caching, or will the built-in caching mechanisms in MSSQL be sufficient?
If so, where should I begin? I have looked briefly at AppFabric, but it looks as though it is for Azure only?
Resources:
How to cache data in a MVC application
http://stephenwalther.com/blog/archive/2008/08/28/asp-net-mvc-tip-39-use-the-velocity-distributed-cache.aspx
http://stephenwalther.com/blog/archive/2008/08/29/asp-net-mvc-tip-40-don-t-cache-pages-that-require-authentication.aspx
Lazy loading is a performance killer. It's better to load the entire object graph with one join than to lazy load the other properties. This is especially the case with a list of objects: if you iterate, you'll end up lazy loading for each item in the list. Furthermore, every call to the db has overhead. Fewer calls = better performance.
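For illustration, here is the difference with Entity Framework (the model is made up; LINQ to SQL has the same knob via DataLoadOptions.LoadWith):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer { public int Id { get; set; } }
public class OrderLine { public int Id { get; set; } }
public class Order
{
    public int Id { get; set; }
    public virtual Customer Customer { get; set; } // virtual => lazy-loadable
    public virtual ICollection<OrderLine> Lines { get; set; }
}
public class ShopContext : DbContext { public DbSet<Order> Orders { get; set; } }

public static class LoadingDemo
{
    public static void Run()
    {
        using (var db = new ShopContext())
        {
            // Lazy: one query for the orders, then one more per order the
            // first time o.Customer is touched (the N+1 problem).
            foreach (var o in db.Orders.ToList())
            {
                var c = o.Customer; // extra query on every iteration
            }

            // Eager: a single joined query loads the whole graph up front.
            var orders = db.Orders
                           .Include(o => o.Customer)
                           .Include(o => o.Lines)
                           .ToList();
        }
    }
}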
SO was a top 1000 website before it needed two database servers. I think you'll be ok.
If your revenue model says "each client will have its own database" then your scaling issues should be really easy to solve. It sounds like you already have a plan to scale up with more servers as your client base increases. What's the problem?
Caching on the web tier is usually the first scaling fix you'll have to worry about. You probably don't need to do a fresh db call with each page request.
Overall this sounds like a lot of premature optimization. Your traffic hasn't reached a point where you need to be worried about scaling. Make these kinds of decisions at the last possible moment.
The database cache is different from most caches: it can of course load hot data into memory and re-use query plans, but that isn't really a cache as such.
AppFabric is definitely not just Azure; after all, if it was, you wouldn't be able to install it (and use it) locally :) but in truth there is little between AppFabric, Redis and memcached (the latter lacks persistence, of course).
But I think you should initially look at using the inbuilt ASP.NET caching: both data caching via HttpContext.Cache, and caching of entire responses (or, in MVC 3, partials). Obviously you should have a broad idea of what data is used heavily by lots of requests and is safe to re-use: cache that!
Just make sure you treat all cached data as immutable (if you need to update the cache, re-add a modified value; don't modify the existing objects). The reason: it won't work the same if you start needing to use distributed caching, as that uses serialization, and any changes you make won't be seen by the next request.
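A sketch of that re-add pattern (SiteSettings and the cache key are placeholders):

using System;
using System.Web;
using System.Web.Caching;

public class SiteSettings
{
    public string Theme { get; set; }
}

public static class SettingsCache
{
    public static void UpdateTheme(string theme)
    {
        // Wrong: mutating the cached instance in place, e.g.
        // ((SiteSettings)HttpContext.Current.Cache["settings"]).Theme = theme;
        // Other requests share that reference, and a distributed cache
        // would never see the change.

        // Right: build a modified copy and re-add it under the same key.
        var updated = new SiteSettings { Theme = theme };
        HttpContext.Current.Cache.Insert("settings", updated, null,
            DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
    }
}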

Why use isolated storage in an ASP.NET application?

I need to store user preferences on a per page basis in my application. For example, several pages use a custom grid pager control that needs to keep its current page size between postbacks. Most of the settings don't need to persist once the user leaves the page, but in some situations they do need to be restored. Note: Session is disabled in this application and will not be used.
I did some reading on isolated storage and understand that it can be used to store these user settings. Obviously cookies have been around a long time and are a proven approach to this scenario, but what about isolated storage? Is it going to work for all browsers and in all environments? Are permissions a problem? Does it require configuring anything on the end-user's side? Just how widely used is it? Why should one use isolated storage in an application for the given example?
Thanks!
Obviously cookies have been around a long time and are a proven approach to this scenario, but what about isolated storage? Is it going to work for all browsers and in all environments?
Ah - .NET isolated storage is SERVER SIDE. Like a database. It is meant as a small way to store small amounts of data (for ONE user, not all users; think viewstate) on the side where the .NET application runs (in ASP.NET's case, the server).
As such it is totally irrelevant to your question.
Put the data in a database. I know of VERY few usages of isolated storage in ASP.NET applications; it creates a TON of long-term problems. It is not meant for server-side apps.
You can always use hidden form field variables on a per-page basis, as a way to keep track of that page's state.
I prefer this over a session-state strategy when dealing with the scenario of a user having, say, 2 Firefox browser instances open to the same page. No need to deal with session-state issues in that scenario.
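For the page-size example from the question, a rough WebForms sketch (the control ID and BindGrid helper are made up):

<%-- In the .aspx page: a hidden field that round-trips with each postback --%>
<asp:HiddenField ID="PageSizeField" runat="server" Value="25" />

// In the code-behind:
protected void Page_Load(object sender, EventArgs e)
{
    int pageSize;
    if (!int.TryParse(PageSizeField.Value, out pageSize))
        pageSize = 25; // fall back to the default
    BindGrid(pageSize);
}

protected void OnPageSizeSelected(string newSize)
{
    PageSizeField.Value = newSize; // posted back with the form on the next request
}

private void BindGrid(int pageSize)
{
    // placeholder: rebind the grid with the chosen page size
}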
