View count for blog posts - ASP.NET Web API 2, C#

A few days ago I created a small project: my personal blog. It is based on ASP.NET Web API (back end) and Angular (front end).
My Post entity has a ViewCount field.
How should I count the number of post views, so that a page refresh (F5) does not increase the counter?
Is there a ready-made piece of code, or do you have any implementation tips?
Thanks to everyone who responds.

You'll need to create something on your own, as your question is too broad; but generally speaking, you'll need to update your Post entity in your action each time it's hit. For example:
post.ViewCount++;                              // bump the counter on the tracked entity
db.Entry(post).State = EntityState.Modified;   // mark the entity as dirty
db.SaveChanges();                              // persist the change
However, there are a number of things to take into consideration:
You'll need to plan for concurrency issues (i.e., multiple requests attempting to update the view count at the same time). To do that, you'll need to catch and respond to DbUpdateConcurrencyException when saving:
try
{
db.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
// handle it
}
There are various strategies for handling concurrency; Microsoft details your options. However, handling the exception once may not be enough, as the next save could also throw a concurrency exception, and so on. I'd recommend employing something like Polly, which gives you far more powerful exception-handling abilities, including retrying a number of times or forever.
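A minimal sketch of that, assuming EF6 and Polly's synchronous Retry policy (the reload-then-retry strategy here is just one of the options Microsoft describes; post and db are from the snippet above):

using System.Data.Entity.Infrastructure;
using System.Linq;
using Polly;

// Retry the save up to three times; on each conflict, reload the row
// so the next attempt increments the latest persisted value.
var retry = Policy
    .Handle<DbUpdateConcurrencyException>()
    .Retry(3, (exception, attempt) =>
    {
        var entry = ((DbUpdateConcurrencyException)exception).Entries.Single();
        entry.Reload(); // discard our stale values, pull the current ones
    });

retry.Execute(() =>
{
    post.ViewCount++; // re-applied on every attempt, against fresh values after a reload
    db.SaveChanges();
});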
You'll need to weed out duplicate requests (from an F5 refresh, for example). To do that, you'll likely want to set something in the user's session, such as an access time, and then only count the view if some determined amount of time has passed since the user last accessed the page, or if the key doesn't exist in their session (first view). Bear session timeouts in mind, though. For example, if you only want to count a view per user every hour, but your session times out after 20 minutes, that's not going to work; in that scenario, you'd want to store the access time somewhere more persistent, like a database table. If you do use the session, you should also use a SQL Server or Redis backing for your session state rather than In Proc or State Server, since the former options are much more reliable than the latter.
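As a rough sketch of the session approach (assuming session state is enabled for your API, which it is not by default in Web API; the key format and the one-hour window are illustrative):

// Count a view only if this session hasn't seen the post within the window.
private static readonly TimeSpan ViewWindow = TimeSpan.FromHours(1);

private bool ShouldCountView(HttpSessionStateBase session, int postId)
{
    string key = "viewed-post-" + postId;
    var lastViewed = session[key] as DateTime?;
    if (lastViewed.HasValue && DateTime.UtcNow - lastViewed.Value < ViewWindow)
        return false; // seen recently (e.g. an F5 refresh), so don't count it

    session[key] = DateTime.UtcNow;
    return true;
}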
You'll need to take bots into account. This could be any automated viewing of the page, benign or malicious. At the very least, you'll want to account for spiders like GoogleBot, so you don't increment the view count every time your site gets indexed. However, determining whether a request originates from a bot is a whole problem in itself; entire databases of bot UA strings are maintained just to track what's out in the wild. You can exclude at least the vast majority of automated traffic, but your view count will only be as accurate as your bot filtering.
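A crude user-agent filter might look like this (the token list is illustrative; real bot detection needs a maintained database and still won't catch everything):

using System.Linq;

// Skip counting when the UA string contains a well-known crawler token.
private static readonly string[] BotTokens = { "bot", "crawler", "spider", "slurp" };

private static bool LooksLikeBot(string userAgent)
{
    if (string.IsNullOrEmpty(userAgent))
        return true; // a missing UA is almost never a real browser
    string ua = userAgent.ToLowerInvariant();
    return BotTokens.Any(token => ua.Contains(token));
}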
Based on all that, unless you have a really good reason to track the view count yourself, I'd say don't worry about it. For example, if you're just trying to report internally on usage statistics, employing something like Google Analytics will be a far easier and more manageable solution. Even if you do need the view count for something in your application, it still may be better to install third-party analytics software locally. Look for a solution with an API, or at least some way to get at the data programmatically, and then you can simply pull the view count from that.

Related

How to cache an object without user interruption?

This is easier to explain if I first explain my order of events:
User requests a webpage that requires x database objects
The server checks the cache to see if the objects are cached and, if they are not, queries the database again
The page is sent to the user
My issue is that when the user requests a webpage and the cache has expired, it takes a very long time for the cache to update. The reason is that the data being cached includes data fetched from other locations, so web requests have to be made to update the cache. Because of those web requests, updating the cache can take quite a while, leaving the user's page to sit and load for upwards of ten seconds or more.
My question is, how would I go about reducing or completely removing these edge cases where the user's webpage takes forever to load while the cache is updating?
The first solution I came up with was to see if I could persist the MemoryCache past its expiration time, or at the very least check an item's expiration time, so that I could fetch the old object, return that to the user, and then kick off a cache update on another thread for the next user. However, I found that MemoryCache removes items entirely upon expiration (which makes sense) and that there is no way to avoid this. I looked into using CacheItemPriority.NeverRemove, but there is no way to check the expiration time (for some weird reason).
So the second solution I came up with was to create my own cache, but I don't know how I would go about doing that. The object I am storing is a list of objects, so I would prefer to avoid a wrapper object around them (but if that's what I have to do, I'll be willing to do it). I would like this cache to be abstract, of course, so it can handle any type of item, and using a wrapper object for lists would not allow me to do that.
So what I'm looking for in a custom cache is:
Ability to check expiration date
Items are not removed upon expiration so that I can manually update them
Yet over the past couple of hours of searching online, I have found nothing that describes a cache even remotely close to being able to do something like this (at least, not one provided with .NET Core or available as a NuGet package). I have also not found a guide or any examples that would help me understand how to create a custom cache like this.
How would I go about making this custom cache? Or is a cache even what I'm looking for here?
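For what it's worth, a wrapper with roughly the two properties listed above is not much code. This is only a sketch (the names are illustrative and the refresh bookkeeping is deliberately simplistic): entries are never evicted, "expiration" is just a timestamp check, and a stale value is served immediately while a background refresh runs.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class StaleCache<TKey, TValue>
{
    private class Entry { public TValue Value; public DateTime ExpiresAt; }

    private readonly ConcurrentDictionary<TKey, Entry> _entries =
        new ConcurrentDictionary<TKey, Entry>();
    private readonly TimeSpan _ttl;

    public StaleCache(TimeSpan ttl) { _ttl = ttl; }

    public TValue GetOrRefresh(TKey key, Func<TValue> fetch)
    {
        Entry entry;
        if (!_entries.TryGetValue(key, out entry))
        {
            // First request: the caller waits for the fetch exactly once.
            entry = new Entry { Value = fetch(), ExpiresAt = DateTime.UtcNow + _ttl };
            _entries[key] = entry;
        }
        else if (entry.ExpiresAt < DateTime.UtcNow)
        {
            entry.ExpiresAt = DateTime.UtcNow + _ttl; // crude guard against duplicate refreshes
            Task.Run(() =>
            {
                // The slow web requests happen off the request thread.
                var fresh = fetch();
                _entries[key] = new Entry { Value = fresh, ExpiresAt = DateTime.UtcNow + _ttl };
            });
        }
        return entry.Value; // possibly stale, but returned immediately
    }
}

Because the cache is generic, a list of objects can be stored directly as the TValue without any visible wrapper around it.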

Handling frequent database writes triggered by asp.net page

I need to store some data in a SQL Server database every time someone opens or refreshes a page of a website made in asp.net.
Should I try to buffer the inserts, writing them to the DB all together every X time, or is it acceptable to write them one by one?
I know I should provide some figures for how many views I expect, but the person who is supposed to tell me has no idea... Here I'm just asking whether there's any kind of best practice for handling frequent writes to a DB from an ASP.NET site. It's not a problem (logic-wise) if the insertion of the information is delayed.
Personally (though I don't think this is merely opinion), I would start off doing what is simplest and seems most natural, without worrying about optimizations.
So if the server side page render event (probably not the actual event name) seems like a natural place to insert some records I would do just that.
If you're doing this on multiple pages then you might want to centralize the inserts using some sort of filter that all requests pass through (probably not the right term for asp.net either, but you get the idea).
Later on, if it turns out that doing this introduces an unacceptable amount of latency, you can introduce some asynchronous way to update the database, perhaps a message queue or some of the C# async constructs.
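If buffering does become necessary, one asynchronous shape for it is an in-memory producer/consumer queue flushed in batches by a background task. This is only a sketch; PageView, the batch size, and SaveBatch are illustrative:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class PageView { /* page, timestamp, user, ... */ }

public class PageViewLogger
{
    private readonly BlockingCollection<PageView> _queue = new BlockingCollection<PageView>();

    public PageViewLogger()
    {
        // A single long-running consumer drains the queue in the background.
        Task.Factory.StartNew(FlushLoop, TaskCreationOptions.LongRunning);
    }

    // Called from the page/request; returns immediately.
    public void Log(PageView view)
    {
        _queue.Add(view);
    }

    private void FlushLoop()
    {
        var batch = new List<PageView>();
        foreach (var view in _queue.GetConsumingEnumerable())
        {
            batch.Add(view);
            // Flush when the batch is full or the queue has gone quiet.
            if (batch.Count >= 100 || _queue.Count == 0)
            {
                SaveBatch(batch); // one database round-trip for the whole batch
                batch.Clear();
            }
        }
    }

    private void SaveBatch(List<PageView> batch)
    {
        // e.g. SqlBulkCopy or a multi-row INSERT
    }
}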

Architectural question

As a result of a previous post (Architecture: simple CQS) I've been thinking how I could build a simple system that is flexible enough to be extended later.
In other words: I don't see the need for a full-blown CQRS now, but I want it to be easy to evolve to it later, if needed.
So I was thinking to separate commanding from querying, but both based on the same database.
The query part would be easy: a WCF Data Service based on views, so that it's easy to query for data. Nothing special there.
The command part is more difficult, and here's an idea: commands are of course executed asynchronously, so they don't return a result. But my ASP.NET MVC site's controllers often need feedback from a command (for example, whether the registration of a member succeeded or not). So when the controller sends a command, it also generates a transaction ID (a GUID) that is passed along with the command properties. The command service receives the command, puts it into a transactions table in the database with state 'processing', and executes it (using DDD principles). After execution, the transactions table is updated so that the state becomes 'completed' or 'failed', along with more detailed information such as the primary key that was generated.
Meanwhile, the site uses the QueryService to poll for the state of this transaction until it receives 'completed' or 'failed', and then it can continue its work based on this result. Once a poll returns 'completed' or 'failed', the entry is deleted.
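In code, the handshake I have in mind might look roughly like this (only a sketch; the command/query service interfaces, RegisterMemberCommand, and the table shape are illustrative, and a real UI would poll via AJAX rather than block):

using System;
using System.Threading;

public enum TransactionState { Processing, Completed, Failed }

public class CommandTransaction
{
    public Guid TransactionId { get; set; } // generated by the controller
    public TransactionState State { get; set; }
    public string ResultKey { get; set; }   // e.g. the primary key that was generated
}

// Controller side: fire the command, then poll until it settles.
Guid txId = Guid.NewGuid();
commandService.Send(new RegisterMemberCommand { TransactionId = txId /* , ... */ });

CommandTransaction tx;
do
{
    Thread.Sleep(200);
    tx = queryService.GetTransaction(txId);
} while (tx.State == TransactionState.Processing);

queryService.DeleteTransaction(txId); // remove the entry once the result is consumed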
A side effect is that I don't need GUIDs as keys for my entities, which is a good thing for performance and size.
In most cases this polling mechanism is probably not needed, but is possible if needed. And the interfaces are designed with CQS in mind, so open for the future.
Do you think of any flaws in this approach? Other ideas or suggestions?
Thanks!
Lud
I think you are very close to a full CQRS system with your approach.
I have a site where I did something similar to what you are describing. My site, braincredits.com, is architected using CQRS, and all commands are async in nature. As a result, when I create an entry, there is really no feedback to the user other than that the command was successfully submitted for processing (not that it was processed).
But I have a user score on the site (a count of their "credits") that should change as the user submits more items. But I don't want the user to keep hitting F5 to refresh the browser. So I am doing what you are proposing -- I have an AJAX call that fires off every second or two to see if the user's credit count has changed. If it has, the new amount is brought back and the UI is updated (with a little bit of animation to catch the user's attention -- but not too flashy).
What you're talking about is eventual consistency -- that the state of the application that the user is seeing will eventually be consistent with the system data (the system of record). That concept is pretty key to CQRS, and, in my opinion, makes a lot of sense. As soon as you retrieve data in a system (whether it's a CQRS-based one or not), the data is old. But if you assume that and assume that the client will eventually be consistent, then your approach makes sense and you can also design your UI to account for that AND take advantage of that.
As far as suggestions go, I would watch how much polling you do and how much data you're sending back and forth. Don't go overboard with polling, which it sounds like you're not doing. But target what should be updated on a regular basis on your site and I think you'll be good.
The WCF Data Service layer for the query side is a good idea - just make sure it's only read-enabled (which I'm sure you've done).
Other than that, it sounds like you're off to a good start.
I hope this helps. Good luck!

Prevent duplicate editing / Locking DB records while editing - single backend server

Situation: multiple front-ends (e.g. Silverlight, ASP) sharing a single back-end server (WCF RIA or other web service).
I am looking for a standard to prevent multiple people from editing the same form. I understand that this is not an easy topic, but requirements are requirements.
Previously I checked the DB's last-modified date against the submitted data and gave a warning or error if the data had been modified since it was loaded. The initial system simply overrode the data without any warning. The problem is that I have a new requirement to prevent both of these situations. There will be many UIs, so a locking system might be a challenge, and there is obviously no guarantee that the client won't close the window/browser in the middle of an edit.
I would appreciate any help.
If I'm correct, it seems what you are talking about is a form of check-out/edit/check-in workflow: you want that while one user is editing a record, no other user can even begin to edit the same record.
This is a form of pessimistic concurrency. Many web and data access frameworks have support for (the related) optimistic concurrency - that is, they will tell you that someone else already changed the record when you tried to save. Optimistic has no notion of locking, really - it makes sure that no other user saved between the time you fetched and the time you save.
What you want is not an easy requirement over the web, since the server really has no way to enforce the check-in when a user aborts an edit (say, by closing the browser). I'm not aware of any frameworks that handle this in general.
Basically, what you need is to hold checkout information on the server. When editing, a user's process would need to request a checkout, and the server would grant or deny it based on what is being checked out. The server would also have to record that the resource is checked out. When a user saves, the server releases the lock and allows a new checkout when requested. The problem comes when a user aborts the edit: if it's through the UI, no problem... just tell the server to release the lock.
But if it is through closing the browser, powering off the machine, etc then you have an orphaned lock. Most people solve this one of two ways:
1. A timeout. The lock will eventually be released. The upside here is that it is fairly easy and reliable. The downsides are that the record stays locked for a while when it's not really being edited, and you must make your timeout long enough that a user who takes a really, really long time to save doesn't get an error because the lock timed out (and have to start over).
2. A heartbeat. The user has a periodic ping back to the server to say "yep, still editing". This is basically the timeout option from #1, but with a really short timeout that can be refreshed on demand. The upside is that you can make it arbitrarily short. The downside is increased complexity and network usage.
Check-in/checkout tokens are really not that hard to implement if you already have a transacted persistent store (like a DB); the hard part is integrating it into your user experience.
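As a sketch of such a checkout store, with the timeout from option 1 and the heartbeat from option 2 just being a periodic refresh of the lease (in-memory here for brevity; a DB table works the same way and closes the small races this version has):

using System;
using System.Collections.Concurrent;

public class CheckoutStore
{
    private class Lease { public string User; public DateTime ExpiresAt; }

    private readonly ConcurrentDictionary<string, Lease> _leases =
        new ConcurrentDictionary<string, Lease>();
    private readonly TimeSpan _timeout;

    public CheckoutStore(TimeSpan timeout) { _timeout = timeout; }

    // Returns true if the caller now holds the record.
    public bool TryCheckout(string recordId, string user)
    {
        DateTime now = DateTime.UtcNow;
        Lease lease = _leases.AddOrUpdate(recordId,
            _ => new Lease { User = user, ExpiresAt = now + _timeout },
            (_, existing) =>
                existing.User == user || existing.ExpiresAt < now
                    ? new Lease { User = user, ExpiresAt = now + _timeout } // re-entrant or expired: take it
                    : existing);                                            // still held by someone else
        return lease.User == user;
    }

    // Heartbeat: refresh the lease while the user is still editing.
    public bool KeepAlive(string recordId, string user)
    {
        return TryCheckout(recordId, user);
    }

    public void Release(string recordId, string user)
    {
        Lease lease;
        if (_leases.TryGetValue(recordId, out lease) && lease.User == user)
            _leases.TryRemove(recordId, out lease); // tiny race window; acceptable for a sketch
    }
}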

NHibernate session management?

Firstly, let me give a brief description of the scenario. I'm writing a simple game where pretty much all of the work is done on the server side, with a thin client for players to access it. A player logs in or creates an account and can then interact with the game by moving around a grid. When they enter a cell, they should be informed of other players in that cell, and similarly, other players in that cell will be informed of that player entering it. There are lots of other interactions and actions that can take place, but it's not worth going into detail on them, as it's just more of the same. When a player logs out and back in, or if the server goes down and comes back up, all of the game state should persist, although if the server crashes, it doesn't matter if I lose ten minutes or so of changes.
I've decided to use NHibernate and a SQLite database, so I've been reading up a lot on NHibernate, following tutorials and writing some sample applications, and am thoroughly confused as to how I should go about this!
The question I have is: what's the best way to manage my sessions? Just from the small amount that I do understand, all these possibilities jump out at me:
Have a single session that's always opened that all clients use
Have a single session for each client that connects and periodically flush it
Open a session every time I have to use any of the persisted entities and close it as soon as the update, insert, delete or query is complete
Have a session for each client, but keep it disconnected and only reconnect it when I need to use it
Same as above, but keep it connected and only disconnect it after a certain period of inactivity
Keep the entities detached and only attach them every 10 minutes, say, to commit the changes
What kind of strategy should I use to get decent performance given that there could be many updates, inserts, deletes and queries per second from possibly hundreds of clients all at once, and they all have to be consistent with each other?
Another smaller question: how should I use transactions in an efficient manner? Is it fine for every single change to be in its own transaction, or is that going to perform badly when I have hundreds of clients all trying to alter cells in the grid? Should I try to figure out how to bulk together similar updates and place them within a single transaction, or is that going to be too complicated? Do I even need transactions for most of it?
I would use a session per request to the server, and one transaction per session. I wouldn't optimize for performance before the app is mature.
Answer to your solutions:
Have a single session that's always opened that all clients use: You will have performance issues here because the session is not thread safe and you will have to lock all calls to the session.
Have a single session for each client that connects and periodically flush it: You will have performance issues here because all data used by the client will be cached. You will also see problems with stale data from the cache.
Open a session every time I have to use any of the persisted entities and close it as soon as the update, insert, delete or query is complete: You won't have any performance problems here. A disadvantage is possible concurrency or data-corruption problems, because related SQL statements are not executed in the same transaction.
Have a session for each client, but keep it disconnected and only reconnect it when I need to use it: NHibernate already has built-in connection management, and it is already very optimized.
Same as above, but keep it connected and only disconnect it after a certain period of inactivity: Will cause problems because the number of SQL connections is limited, which will also limit the number of users of your application.
Keep the entities detached and only attach them every 10 minutes, say, to commit the changes: Will cause problems because of stale data in the detached entities. You will have to track changes yourself, which means you will end up with a piece of code that looks like the session itself.
It would be useless to go into more detail now, because I would just be repeating the manuals/tutorials/books. When you use a session per request, you probably won't have problems in 99% of the application you describe (and maybe none at all). The session is a lightweight, non-thread-safe class that is meant to live for a very short time. When you want to know exactly how session/connection/caching/transaction management works, I recommend reading a manual first, and then asking some more detailed questions about the subjects that remain unclear.
Read the 'ISessionFactory' section on this page of the NHibernate documentation. ISessions are meant to be single-threaded (i.e., not thread-safe), which probably means that you shouldn't be sharing one across users. The ISessionFactory should be created once by your application, and ISessions should be created for each unit of work. Remember that creating an ISession does not necessarily open a database connection; that depends on how your SessionFactory's connection pooling strategy is configured.
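The resulting unit of work is short-lived and cheap. A sketch (Player, playerId, and newCellId are illustrative; sessionFactory is the application-wide ISessionFactory):

using NHibernate;

// One ISession and one ITransaction per request/unit of work.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    var player = session.Get<Player>(playerId);
    player.CellId = newCellId;
    tx.Commit(); // flush and commit in one place; Dispose rolls back on failure
}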
You may also want to look at Hibernate's Documentation on Session and Transaction.
I would aim to keep everything in memory, and either journal changes or take periodic offline snapshots.
Have a read through NHibernate Best Practices with ASP.NET; there are some very good tips in there for a start. As mentioned already, be very careful with an ISession, as it is NOT thread-safe, so just keep that in mind.
If you require something a little more complex then take a look into the NHibernate.Burrow contrib project. It states something like "the real power Burrow provides is that a Burrow conversation can span over multiple http requests".
