ASP.NET session variables to store all global data - C#

I have inherited a project from a developer who was rather fond of session variables. He has used them to store all sorts of global stuff - datatables, datasets, locations of files, connection strings etc. I am a little worried that this may not be very scalable and we do have the possibility of a lot more users in the immediate future.
Am I right to be concerned, and if so why?
Is there an easy way to see how much memory this is all using on the live server at the moment?
What would be the best approach for re-factoring this to use a better solution?

Yes, I would say that you do have some cause for concern. Overuse of session can cause a lot of performance issues. Ideally, session should only be used for information that is specific to the user. Obviously there are exceptions to this rule, but keep that in mind when you're refactoring.
As for the refactoring itself, I would look into caching any large objects that are not user-specific, and removing anything that doesn't need to be in session. Don't be afraid to make a few trips to the database to retrieve information when you need it. Go with the option that puts the least overall strain on the server. The trick is keeping it balanced and distributing the weight as evenly as possible across the various layers of the application.
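A rough sketch of what that refactoring can look like, using ASP.NET's application-wide cache (HttpRuntime.Cache) instead of Session for data that every user sees the same way; the "Countries" key and the loader delegate are illustrative placeholders, not anything from the original code:

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class LookupData
{
    // Before: HttpContext.Current.Session["Countries"] held one copy per user.
    // After: one shared copy for the whole application, dropped an hour after last use.
    public static DataTable GetCountries(Func<DataTable> loadFromDb)
    {
        var table = HttpRuntime.Cache["Countries"] as DataTable;
        if (table == null)
        {
            table = loadFromDb();
            HttpRuntime.Cache.Insert("Countries", table, null,
                Cache.NoAbsoluteExpiration, TimeSpan.FromHours(1));
        }
        return table;
    }
}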

It was probably due to poor design, and yes you should be concerned if you plan on getting heavier traffic or scaling the site.
Connection strings should be stored in web.config. It seems like you would have to do some redesigning of the data layer and of how the pages pass data to each other to steer away from storing datatables and datasets in Session. For example, instead of storing a whole dataset in Session, store, or pass via the URL, something small (like an ID) that can be used to re-query the database.
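A hedged example of that approach (the "MyDb" connection string name and the Orders query are made up for illustration): the connection string comes from web.config via ConfigurationManager, and the page re-queries by the ID it was given instead of pulling a cached DataSet out of Session.

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static class OrderData
{
    public static DataTable GetOrder(int orderId)
    {
        // The connection string lives in <connectionStrings> in web.config, not in Session.
        string connectionString =
            ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

        var table = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT * FROM Orders WHERE OrderId = @id", connection))
        {
            command.Parameters.AddWithValue("@id", orderId);
            using (var adapter = new SqlDataAdapter(command))
            {
                adapter.Fill(table); // a small, targeted query instead of a session-cached DataSet
            }
        }
        return table;
    }
}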

Sessions always hurt scalability. However, once sessions are being used, the impact of a little bit more data in a session isn't that bad.
Still, it has to be stored somewhere, has to be retrieved from somewhere, so it's going to have an impact. It's going to really hurt if you have to move to a web-farm to deal with being very successful, since that's harder to do well in a scalable manner. I'd start by taking anything that should be global in the true sense (shared between all sessions) and move it into a truly globally-accessible location.
Then, anything that depended upon the previous request, I'd have sent along by that request itself rather than stashed in session.
Doing both of those would reduce the amount they were used for immensely (perhaps enough to turn off sessions and get the massive scalability boost that gives).

Depending on the IIS version, using Session to store state can have an impact on scaling. The later versions of IIS are better.
However, the main problem I have run into is that sessions expire and then your data is lost; you can provide your own Session_OnEnd handler, in which it is possible to persist or regenerate session data.
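If you go down that road, a minimal Global.asax.cs sketch might look like the following; note that Session_End only fires when the sessionState mode is "InProc", and the key and persistence step here are hypothetical:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Session_End(object sender, EventArgs e)
    {
        // No HttpContext or Request is available here, only the expiring session.
        object cart = Session["ShoppingCart"]; // hypothetical key
        if (cart != null)
        {
            // Persist whatever is about to be lost so it can be restored on the next visit,
            // e.g. write it to the database keyed by a user id also held in session.
        }
    }
}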

Overall yes, you should be concerned about this.
Session is a "per user" type of storage that is held in memory. Looking at the memory usage of the ASP.NET worker process will give you an idea of overall memory usage, but you might need third-party tools if you want to dig deeper into what is actually in it. In addition, session gets really "fun" when you start load balancing, etc.
ConnectionStrings and other information that is not "per user" should really not be handled in a "per user" storage location.
As for creating a solution for this though, a lot is going to depend on the data itself, as you might need to find multiple other opportunities/locations to get/store the info.

You are right in feeling concerned about this.
Connection strings should be stored in Web.config and always read from there. The Web.config file is cached, so storing things there and then also in Session is redundant and unnecessary. The same can be said for locations of files: you can probably create key/value pairs in the appSettings section of your web.config to store this information.
As far as storing datasets, datatables, etc.: I would only store this information in Session if getting it from the database is really expensive, and provided the data is not too big. A lot of people tend to do this kind of thing without realizing that their queries are very fast and that database connections are pooled.
If getting the data from the database does take long, the first thing I would try to remedy is the speed of my queries. Am I missing indexes? What does the execution plan of my queries show? Am I doing table scans, etc., etc.?
One scenario where I currently store information in Session (or Cache) is when I have to call an external web service that takes more than 2 seconds on average to retrieve what I need. Once I get this data I don't need to fetch it again on every page hit, so I cache it.
Obviously an application that stores pretty much everything it can on Session is going to have scalability issues because memory is a limited resource.
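A small sketch of that web-service caching pattern, assuming .NET 4's System.Runtime.Caching (MemoryCache) and a stand-in delegate for the slow external call; the key prefix and 10-minute lifetime are arbitrary choices:

using System;
using System.Runtime.Caching;

public class ExternalServiceCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public string GetQuote(string symbol, Func<string, string> callSlowWebService)
    {
        string key = "quote:" + symbol;
        var cached = Cache.Get(key) as string;
        if (cached != null)
            return cached;

        string fresh = callSlowWebService(symbol); // the expensive ~2 second call
        Cache.Set(key, fresh, DateTimeOffset.UtcNow.AddMinutes(10)); // keep for 10 minutes
        return fresh;
    }
}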

If memory is the issue, why not change the session mode to SQL Server? You can store session data in SQL Server with very few code changes.
How to store session data in SQL Server:
http://msdn.microsoft.com/en-us/library/ms178586.aspx
The catch is that the classes stored in SQL Server must be serializable; you can use Json.NET to help with that.
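A quick sketch of what the serialization requirement means in practice: with mode="SQLServer" (or "StateServer"), anything placed in Session must be serializable, for example by marking it [Serializable], otherwise ASP.NET throws when it tries to persist the session at the end of the request. The class and key below are illustrative only.

using System;
using System.Web;

[Serializable]
public class UserPreferences
{
    public string Theme { get; set; }
    public int PageSize { get; set; }
}

public static class SessionExample
{
    public static void Save(HttpSessionStateBase session)
    {
        // Safe in all session modes because UserPreferences is marked [Serializable].
        session["prefs"] = new UserPreferences { Theme = "dark", PageSize = 25 };
    }
}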

Related

Caching big data

I have an application that monitors various systems in real time. I get different reports with different fields depending on the monitored application. We gather data in 3-minute intervals, and each 3-minute interval can be 120 MB as raw JSON and 2-3 MB as zipped or gzipped JSON. We zip and then cache the data to disk to avoid database requests, reading those caches back from disk, unzipping them and loading the JSON into the application. We hold these caches for anywhere from 3 days to 30 days depending on the report type.
For years we have used disk caching: zipping the 3-minute interval data and then saving it to disk. This has led me to use a lot of locks and mutexes.
I know I'm not the only one with this kind of problem. My cache is big. My question is: is there a better way to save this data and retrieve it? Memory caching is not a solution for me because 30 days of data can't fit in memory and I am not able to add memory to the server for this application. I need something else, something better than disk and without the use of locks.
P.S. : Application is also multi-threaded.
I would consider a NoSQL storage engine, Redis in particular. Redis is a fast, in-memory key-value store with persistence, which should be a good fit for this kind of scenario. You can then defer most of the lock/consistency hassle to it.
A problem with Redis arises if you are really bound to a Windows environment. There is an "unofficial" port of Redis, done by Microsoft itself, but I admit I would not be extremely confident using it in production.
As for a C# client/library, there is Booksleeve. This site (SO) uses it :) so I bet it is pretty stable!
Of course you will need to tailor Redis to your needs. Redis does offer persistence, and the persistence is configurable (see http://redis.io/topics/persistence). Also, it offers expiration of objects (http://redis.io/commands/expire), very handy for a cache-like mechanism, and the ability to build more complex, atomic commands starting from simpler ones.
I would use Redis to handle the in-memory cache, keeping all the (primary) keys in memory, with data both on disk and in memory. The in-memory data is associated with a volatile (expiring) key. The primary key points to the in-memory key and to a file name; if the key it points at has expired, you can re-load the data from the file and access it again (there is a sketch of this after the list below).
This is a complex solution, but it has a few advantages:
it should be very fast
it offloads some of the locks/etc. burden to Redis
it should be easy to migrate from your solution to this one
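Here is a rough C# sketch of that primary-key-plus-expiry idea, assuming the StackExchange.Redis client (Booksleeve's successor) rather than Booksleeve itself; the key layout and the 3-day retention are only illustrative:

using System;
using StackExchange.Redis;

class ReportCache
{
    private readonly IDatabase _db;

    public ReportCache(string host = "localhost")
    {
        _db = ConnectionMultiplexer.Connect(host).GetDatabase();
    }

    public void StoreInterval(string reportId, DateTime intervalStart, string gzipPath)
    {
        // Key -> path of the gzipped JSON file, with a TTL matching the retention window.
        string key = "report:" + reportId + ":" + intervalStart.ToString("yyyyMMddHHmm");
        _db.StringSet(key, gzipPath, expiry: TimeSpan.FromDays(3));
    }

    public string TryGetIntervalPath(string reportId, DateTime intervalStart)
    {
        string key = "report:" + reportId + ":" + intervalStart.ToString("yyyyMMddHHmm");
        RedisValue path = _db.StringGet(key);
        return path.HasValue ? (string)path : null; // null -> rebuild the interval and store it again
    }
}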
Alternatively, Redis also offers a virtual memory solution (http://oldblog.antirez.com/post/redis-virtual-memory-story.html), but I do not know how stable it is, nor have I ever tried it.
Another alternative is to explore other NoSQL solutions; since you mentioned JSON data, I would look at MongoDB.
Finally, a crazy idea... are you on a 64-bit machine?
Have you considered "letting the OS handle it", with a really big page file and page-file-backed memory-mapped files (or a standard file too)? Mind you, it might be a very BAD idea...! But it is something you could try out or research.

How many SQL queries per HTTP request is optimal?

I know the answer to this question for the most part is "It Depends", however I wanted to see if anyone had some pointers.
We execute several queries on each request in ASP.NET MVC. On each request we need to get user rights information plus various data for the views we are displaying. How many is too many? I know I should be conscious of the number of queries I am executing. I would assume that if they are small, well-optimized queries, half a dozen should be okay. Am I right?
What do you think?
Premature optimization is the root of all evil :)
First create your application; if it is sluggish you will have to determine the cause and optimize that part. Sure, reducing the number of queries will save you time, but so will optimizing the queries you do have to run.
You could spend a whole day shaving 50% off a query that only took 2 milliseconds to begin with, or spend 2 hours removing some INNER JOINs that made another query take 10 seconds. Analyse what's wrong before you start optimising.
The optimal amount would be zero.
Given that this is most likely not achievable, the only reasonable thing to say about it is: "As few as possible".
Simplify your site design until it's as simple as possible while still meeting your client's requirements.
Cache information that can be cached.
Pre-load information into the cache outside the request, where you can.
Ask only for the information that you need in that request.
If you need to make a lot of independent queries for a single request, parallelise the loading as much as possible.
What you're left with is the 'optimal' amount for that site.
If that's too slow, you need to review the above again.
User rights information may be able to be cached, as may other common information you display everywhere.
You can probably get away with caching more than the requirements necessitate. For instance - you can probably cache 'live' information such as product stock levels, and the user's shopping cart. Use SQL Change Notifications to allow you to expire and repopulate the cache in the background.
As few as possible.
Use caching for lookups. Also store some light-weight data (such as permissions) in the session.
Q: Do you have a performance problem related to database queries?
Yes? A: Fewer than you have now.
No? A: The exact same number you have now.
If it ain't broke, don't fix it.
While refactoring and optimizing to save a few milliseconds is a fun and intellectually rewarding way for programmers to spend time, it is often a waste of time.
Also, changing your code to combine database requests could come at the cost of simplicity and maintainability in your code. That is, while it may be technically possible to combine several queries into one, that could require removing the conceptual isolation of business objects in your code, which is bad.
You can make as many queries as you want, until your site gets too slow.
As many as necessary, but no more.
In other words, the performance bottlenecks will not come from the number of queries, but what you do in the queries and how you deal with the data (e.g. caching a huge yet static resultset might help).
Along with all the other recommendations to make fewer trips, it also depends on how much data is retrieved on each round trip. If it is just a few bytes, then you can probably afford to be chatty without hurting performance. However, if each trip returns hundreds of KB, performance will suffer much sooner.
You have answered your own question: "it depends".
Trying to pin down an optimal number of queries per HTTP request isn't really a meaningful exercise. If your SQL Server runs on really good hardware, you can execute a good number of queries in little time and still have a low turnaround time for the HTTP request. So basically, "it depends", as you rightly said.
As the comments above indicate, some caching is likely appropriate for your situation. And like your question suggests, the real answer is "it depends." Generally, the fewer the queries, the better since each query has a cost associated with it. You should examine your data model and your application's requirements to determine what is appropriate.
For example, if a user's rights are likely to be static during the user's session, it makes sense to cache the rights data so fewer queries are required. If aspects of the data displayed in your View are also static for a user's session, these could also be cached.
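For example (a hedged sketch; UserRights and the loader are hypothetical names, not from the question), per-user rights can be loaded once per session rather than once per request:

using System.Web;

public class UserRights
{
    public bool CanEdit { get; set; }
    public bool CanDelete { get; set; }
}

public static class RightsCache
{
    public static UserRights GetRights(HttpSessionStateBase session, int userId)
    {
        var rights = session["UserRights"] as UserRights;
        if (rights == null)
        {
            rights = LoadRightsFromDb(userId); // one query per session, not per request
            session["UserRights"] = rights;
        }
        return rights;
    }

    private static UserRights LoadRightsFromDb(int userId)
    {
        // ... query the database for this user's permissions here ...
        return new UserRights();
    }
}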

Caching architecture for search results in an ASP.NET application

What is a good design for caching the results of an expensive search in an ASP.NET system?
Any ideas would be welcomed ... particularly those that don't require inventing a complex infrastructure of our own.
Here are some general requirements related to the problem:
Each search can return anywhere from zero to several hundred result records
Each search is relatively expensive and time-consuming to execute (5-15 seconds at the database)
Results must be paginated before being displayed at the client to avoid information overload for the user
Users expect to be able to sort, filter, and search within the results returned
Users expect to be able to quickly switch between pages in the search results
Users expect to be able to select multiple items (via checkbox) on any number of pages
Users expect relatively snappy performance once a search has finished
I see some possible options for where and how to implement caching:
1. Cache on the server (in session or App cache), use postbacks or Ajax panels to facilitate efficient pagination, sorting, filtering, and searching.
PROS: Easy to implement, decent support from ASP.NET infrastructure
CONS: Very chatty, memory intensive on server, data may be cached longer than necessary; prohibits load balancing practices
2. Cache at the server (as above) but using serializable structures that are moved out of memory after some period of time to reduce memory pressure on the server
PROS: Efficient use of server memory; ability to scale out using load balancing;
CONS: Limited support from .NET infrastructure; potentially fragile when data structures change; places additional load on the database; significantly more complicated
3. Cache on the client (using JSON or XML serialization), use client-side Javascript to paginate, sort, filter, and select results.
PROS: User experience can approach "rich client" levels; most browsers can handle JSON/XML natively - decent libraries exist for manipulation (e.g. jQuery)
CONS: Initial request may take a long time to download; significant memory footprint on client machines; will require hand-crafted Javascript at some level to implement
4. Cache on the client using a compressed/encoded representation of the data - call back into server to decode when switching pages, sorting, filtering, and searching.
PROS: Minimized memory impact on server; allows state to live as long as client needs it; slightly improved memory usage on client over JSON/XML
CONS: Large data sets moving back and forth between client/server; slower performance (due to network I/O) as compared with pure client-side caching using JSON/XML; much more complicated to implement - limited support from .NET/browser
5. Some alternative caching scheme I haven't considered...
For #1, have you considered using a state server (even SQL server) or a shared cache mechanism? There are plenty of good ones to choose from, and Velocity is getting very mature - will probably RTM soon. A cache invalidation scheme that is based on whether the user creates a new search, hits any other page besides search pagination, and finally a standard timeout (20 minutes) should be pretty successful at weeding your cache down to a minimal size.
References:
SharedCache (FOSS)
NCache ($995/CPU)
StateServer (~$1200/server)
StateMirror ("Enterprise pricing")
Velocity (Free?)
If you are able to wait until March 2010, .NET 4.0 comes with the new System.Runtime.Caching namespace (ObjectCache/MemoryCache), which promises pluggable implementations (disk, memory, SQL Server/Velocity as mentioned).
There's a good slideshow of the technology here. However it is a little bit of "roll your own", or rather a lot of it in fact. But there will probably be plenty of closed and open source providers written for the provider model once the framework is released.
For the points you state, a few questions crop up:
What is contained in the search results? Just string data or masses of metadata associated with each result?
How big is the set you're searching?
How much memory would you use storing the entire set in RAM, or at least a cache of the most popular 10 to 100 search terms? Being smart and caching related searches after the first search might be another idea.
5-15 seconds for a result is a long time to wait for a search so I'm assuming it's something akin to an expedia.com search where multiple sources are being queried and lots of information returned.
From my limited experience, the biggest problem with the client-side only caching approach is Internet Explorer 6 or 7. Server only and HTML is my preference with the entire result set in the cache for paging, expiring it after some sensible time period. But you might've tried this already and seen the server's memory getting eaten.
Raising an idea under the "alternative" caching scheme. This doesn't answer your question with a given cache architecture, but rather goes back to your original requirements of your search application.
Even if/when you implement your own cache, its effectiveness can be less than optimal, especially as your search index grows in size. Cache hit rates will decrease as your index grows. At a certain inflection point, your search may actually slow down due to resources dedicated to both searching and caching.
Most search sub-systems implement their own internal caching architecture as a means of efficiency in operation. Solr, an open-source search system built on Lucene, maintains its own internal cache to provide for speedy operation. There are other search systems that would work for you, and they take similar strategies to results caching.
I would recommend you consider a separate search architecture if your search index warrants it, as caching in a free-text keyword search basis is a complex operation to effectively implement.
Since you say any ideas are welcome:
We have been using the Enterprise Library Caching Application Block fairly successfully for caching result sets from LINQ queries.
http://msdn.microsoft.com/en-us/library/cc467894.aspx
It supports custom cache expiration, so should support most of your needs (with a little bit of custom code) there. It also has quite a few backing stores including encrypted backing stores if privacy of searches is important.
It's pretty fully featured.
My recommendation is a combination of #1 and #3:
Cache the query results on the server.
Make the results available as both a full page and as a JSON view.
Cache each page retrieved dynamically at the client, but send a request each time the page changes.
Use ETags to do client cache invalidation.
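A rough sketch of the server-side half of that recommendation: the full result set is cached under a key derived from the search criteria, and each page request slices the cached list. SearchResult and the runExpensiveSearch delegate are placeholders, and the sliding 20-minute window is an arbitrary choice:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;

public class SearchResultCache
{
    public IList<SearchResult> GetPage(string criteria, int pageIndex, int pageSize,
                                       Func<string, IList<SearchResult>> runExpensiveSearch)
    {
        string cacheKey = "search:" + criteria.ToLowerInvariant();

        var results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
        if (results == null)
        {
            results = runExpensiveSearch(criteria); // the 5-15 second hit, paid once
            HttpRuntime.Cache.Insert(cacheKey, results, null,
                Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(20));
        }

        return results.Skip(pageIndex * pageSize).Take(pageSize).ToList();
    }
}

public class SearchResult { /* the columns shown in the grid */ }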
Have a look at SharedCache- it makes 1/2 pretty easy and works fine in a load balanced system. Free, open source, and we've been using it for about a year with no issues.
While pondering your options, consider that no user wants to page through data. We force that on them as an artifact of trying to build applications on top of browsers in HTML, which inherently do not scale well. We have invented all sorts of hackery to fake application state on top of this, but it is essentially a broken model.
So, please consider implementing this as an actual rich client in Silverlight or Flash. You will not beat that user experience, and it is simple to cache data much larger than is practical in a regular web page. Depending on the expected user behavior, your overall bandwidth could be optimized because the round trips to the server will get only a tight data set instead of any ASP.NET overhead.

ASP.NET MVC Caching scenario

I'm still yet to find a decent solution to my scenario. Basically I have an ASP.NET MVC website which has a fair bit of database access to make the views (2-3 queries per view) and I would like to take advantage of caching to improve performance.
The problem is that the views contain data that can change irregularly, like it might be the same for 2 days or the data could change several times in an hour.
The queries are quite simple (select... from where...) and not huge joins, each one returns on average 20-30 rows of data (with about 10 columns).
The queries are quite simple at the site's current stage, but over time the owner will be adding more data and the visitor numbers will increase. They are large at the moment, and I would be looking at caching as traffic will mostly be coming from Google AdWords etc. and fast-loading pages will be a benefit (apparently).
The site will be hosted on a Microsoft SQL Server 2005 database (But can upgrade to 2008 if required).
Do I either:
Set the caching to the minimum time an item doesn't change for (e.g. cache for, say, 3 minutes) and tell the owner that any changes will take up to 3 minutes to appear?
Find a way to force the cache to clear and reprocess on changes (E.g. if the owner adds an item in the administration panel it clears the relevant caches)
Forget caching all together
Or is there another option that would better suit this scenario?
If you are using Sql Server, there's also another option to consider:
Use the SqlCacheDependency class to have your cache invalidated when the underlying data is updated. Obviously this achieves a similar outcome to option 2.
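A minimal sketch of that approach; it assumes the database and the Products table have been enabled for notifications with aspnet_regsql and that a "MyDb" entry exists under <sqlCacheDependency> in web.config (all names here are illustrative):

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static DataTable GetProducts(Func<DataTable> loadFromDb)
    {
        var table = HttpRuntime.Cache["products"] as DataTable;
        if (table == null)
        {
            table = loadFromDb();
            // The cache entry is evicted automatically when the Products table changes.
            var dependency = new SqlCacheDependency("MyDb", "Products");
            HttpRuntime.Cache.Insert("products", table, dependency);
        }
        return table;
    }
}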
I might actually have to agree with Agileguy though - your query descriptions seem pretty simplistic. Thinking forward and keeping caching in mind while you design is a good idea, but have you proven that you actually need it now? Option 3 seems a heck of a lot better than option 1, assuming you aren't actually dealing with significant performance problems right now.
Premature optimization is the root of all evil ;)
That said, if you are going to Cache I'd use a solution based around option 2.
You have less opportunity for "dirty" data in that manner.
Kindness,
Dan
The 2nd option is the best. It shouldn't be so hard if the same app edits and caches the data; it can be trickier if there is more than one app involved.
If you can't go that way, the 1st might be acceptable too. With some tweaks (e.g. I would try to refresh the cache silently on another thread when it hits its timeout) it might work well enough, provided the data is allowed to be a bit stale.
Never drop caching if you can avoid it. Everyone knows the "premature optimization..." verse, but caching is one of those things that can increase the scalability/performance of an application dramatically.
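For what option 2 might look like in code (a hedged sketch; the cache key, loader and lifetime are hypothetical): the views read through a get-or-add helper, and the administration panel calls Invalidate after every edit so the next request re-queries fresh data.

using System;
using System.Web;
using System.Web.Caching;

public static class ViewDataCache
{
    private const string Key = "front-page-items"; // hypothetical key

    public static T GetOrAdd<T>(Func<T> loadFromDb, TimeSpan maxAge) where T : class
    {
        var cached = HttpRuntime.Cache[Key] as T;
        if (cached != null)
            return cached;

        T fresh = loadFromDb();
        HttpRuntime.Cache.Insert(Key, fresh, null,
            DateTime.UtcNow.Add(maxAge), Cache.NoSlidingExpiration);
        return fresh;
    }

    // Called from the administration panel whenever the owner edits data (option 2).
    public static void Invalidate()
    {
        HttpRuntime.Cache.Remove(Key);
    }
}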

How many DataTable objects should I use in my C# app?

I'm an experienced programmer in a legacy (yet object-oriented) development tool and am making the switch to C#/.NET. I'm writing a small single-user app using SQL Server CE 3.5. I've read the conceptual DataSet and related documentation and my code works.
Now I want to make sure that I'm doing it "right", get some feedback from experienced .Net/SQL Server coders, the kind you don't get from reading the doc.
I've noticed that I have code like this in a few places:
var myTableDataTable = new MyDataSet.MyTableDataTable();
myTableTableAdapter.Fill(myTableDataTable);
... // other code
In a single-user app, would you typically just do this once when the app starts, instantiate a DataTable object for each table, and then store a reference to it so you only ever use that single object, which is already filled with data? That way you would only ever read the data from the db once instead of potentially multiple times. Or is the overhead of this so small that it just doesn't matter (plus it could be counterproductive with large tables)?
For CE, it's probably a non-issue. If you were pushing this app to thousands of users and they were all hitting a centralized DB, you might want to spend some time on optimization. In a single-user instance DB like CE, unless you've got data that says you need to optimize, I wouldn't spend any time worrying about it. Premature optimization, etc.
The way to decide comes down to 2 main things:
1. Is the data going to be accessed constantly?
2. Is there a lot of data?
If you are constantly using the data in the tables, then load them on first use.
If you only occasionally use the data, fill the table when you need it and then discard it.
For example, if you have 10 gui screens and only use myTableDataTable on 1 of them, read it in only on that screen.
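One possible middle ground, reusing the typed-dataset names from the question (the MyTableTableAdapter type name assumes the usual generated TableAdapters namespace): fill the table the first time it is asked for, then keep the reference around.

public class MyTableProvider
{
    private MyDataSet.MyTableDataTable _table;
    private readonly MyDataSetTableAdapters.MyTableTableAdapter _adapter =
        new MyDataSetTableAdapters.MyTableTableAdapter();

    public MyDataSet.MyTableDataTable Table
    {
        get
        {
            if (_table == null) // first access: hit the database once
            {
                _table = new MyDataSet.MyTableDataTable();
                _adapter.Fill(_table);
            }
            return _table; // later accesses reuse the filled table
        }
    }

    public void Refresh() // call this if the underlying data may have changed
    {
        _table = null;
    }
}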
The choice really doesn't depend on C# itself. It comes down to a balance between:
How often do you use the data in your code?
Does the data ever change (and do you care if it does)?
What's the relative (time) cost of getting the data again, compared to everything else your code does?
How much value do you put on performance, versus developer effort/time (for this particular application)?
As a general rule: for production applications, where the data doesn't change often, I would probably create the DataTable once and then hold onto the reference as you mention. I would also consider putting the data in a typed collection/list/dictionary, instead of the generic DataTable class, if nothing else because it's easier to let the compiler catch my typing mistakes.
For a simple utility you run for yourself that "starts, does its thing and ends", it's probably not worth the effort.
You are asking about Windows CE. In that particular case, I would most likely do the query only once and hold onto the results. Mobile OSs have extra constraints on battery and storage that desktop software doesn't have. Basically, a mobile OS makes bullet #4 much more important.
Every time you add another retrieval call to SQL, you call into external libraries more often, which means you are probably running longer, allocating and releasing memory more often (which adds fragmentation), and possibly causing the database to be re-read from flash memory. It's most likely a lot better to hold onto the data once you have it, assuming that you can (see bullet #2).
It's easier to figure out the answer to this question when you think about datasets as being a "session" of data. You fill the datasets; you work with them; and then you put the data back or discard it when you're done. So you need to ask questions like this:
How current does the data need to be? Do you always need to have the very very latest, or will the database not change that frequently?
What are you using the data for? If you're just using it for reports, then you can easily fill a dataset, run your report, then throw the dataset away, and next time just make a new one. That'll give you more current data anyway.
Just how much data are we talking about? You've said you're working with a relatively small dataset, so there's not a major memory impact if you load it all in memory and hold it there forever.
Since you say it's a single-user app without a lot of data, I think you're safe loading everything in at the beginning, using it in your datasets, and then updating on close.
The main thing you need to be concerned with in this scenario is: What if the app exits abnormally, due to a crash, power outage, etc.? Will the user lose all his work? But as it happens, datasets are extremely easy to serialize, so you can fairly easily implement a "save every so often" procedure to serialize the dataset contents to disk so the user won't lose a lot of work.
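A simple "save every so often" sketch along those lines; the file path and timer interval are arbitrary, locking around concurrent access is omitted, and WriteXml/ReadXml with the schema flag is all it takes to round-trip the dataset:

using System;
using System.Data;
using System.Timers;

public class DataSetAutoSaver : IDisposable
{
    private readonly DataSet _dataSet;
    private readonly string _path;
    private readonly Timer _timer;

    public DataSetAutoSaver(DataSet dataSet, string path, TimeSpan interval)
    {
        _dataSet = dataSet;
        _path = path;
        _timer = new Timer(interval.TotalMilliseconds);
        _timer.Elapsed += (sender, e) => Save();
        _timer.Start();
    }

    public void Save()
    {
        // Writing the schema too lets ReadXml rebuild the tables on the next run.
        _dataSet.WriteXml(_path, XmlWriteMode.WriteSchema);
    }

    public void Load()
    {
        _dataSet.ReadXml(_path, XmlReadMode.ReadSchema);
    }

    public void Dispose()
    {
        _timer.Stop();
        _timer.Dispose();
    }
}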
