I need to store some data in a SQL Server database every time someone opens or refreshes a page of a website made in ASP.NET.
Should I try to buffer the inserts, writing them to the DB all together every X amount of time, or is it acceptable to write them one by one?
I know I should provide some data about how many views I expect, but the person who is supposed to tell me this has no idea... Here I'm just asking if there's any kind of best practice for handling frequent writes to a DB from an ASP.NET site. It's not a problem (logic-wise) if the insertion of the information is delayed.
Personally, but I don't think this is merely opinion, I would start off doing what was simplest and seemed most natural, without worrying about optimizations.
So if the server side page render event (probably not the actual event name) seems like a natural place to insert some records I would do just that.
If you're doing this on multiple pages then you might want to centralize the inserts using some sort of filter that all requests pass through (probably not the right term for asp.net either, but you get the idea).
Later on, if it turns out that doing this introduces an unacceptable amount of latency, you can introduce some asynchronous way to update the database, perhaps a message queue or some of the C# async constructs.
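For illustration, a deferred/batched insert could be sketched roughly like this. Everything here is an assumption for the sake of the example: the PageView type, the Repository.InsertPageViews helper and the 30-second flush interval are not part of the original question.

// requires: using System; using System.Collections.Concurrent;
//           using System.Collections.Generic; using System.Threading;
public class PageView
{
    public string Url { get; set; }
    public string User { get; set; }
    public DateTime ViewedAt { get; set; }
}

public static class PageViewLogger
{
    private static readonly ConcurrentQueue<PageView> Pending = new ConcurrentQueue<PageView>();

    // Flush the buffer on a background timer every 30 seconds (arbitrary interval).
    private static readonly Timer FlushTimer =
        new Timer(_ => Flush(), null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));

    public static void LogPageView(string url, string user)
    {
        Pending.Enqueue(new PageView { Url = url, User = user, ViewedAt = DateTime.UtcNow });
    }

    private static void Flush()
    {
        var batch = new List<PageView>();
        PageView item;
        while (Pending.TryDequeue(out item))
            batch.Add(item);

        if (batch.Count == 0)
            return;

        // Hypothetical helper: insert the whole batch in one round trip,
        // e.g. with SqlBulkCopy or a table-valued parameter.
        Repository.InsertPageViews(batch);
    }
}

The page render event would then just call PageViewLogger.LogPageView(...) and return immediately, at the cost of possibly losing a partial batch if the worker process recycles.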
A few days ago I started a little project: my personal blog. It is based on ASP.NET Web API (back end) and Angular (front end).
My Post entity has a ViewCount field.
How do I calculate the number of post views, so that the counter does not increase on a refresh (F5)?
Is there a ready-made piece of code, or any implementation tips?
Thanks to everyone who responds.
You'll need to create something on your own as your question is too broad, but generally speaking, you'll need to update your Post entity in your action each time it's hit. For example:
post.ViewCount++;
db.Entry(post).State = EntityState.Modified;
db.SaveChanges();
However, there's a number of things to take into consideration:
You'll need to plan for concurrency issues (i.e., multiple simultaneous requests attempting to update the view count at the same time). To do that, you'll need to catch and respond to DbUpdateConcurrencyException when saving.
try
{
db.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
// handle it
}
There are various strategies for how to handle concurrency; Microsoft details your options. However, simply handling it once may not be enough, as the next save could also cause a concurrency exception, and so on. I'd recommend employing something like Polly, which gives you far more powerful exception handling abilities, including retrying a number of times or forever.
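As a hedged sketch only, a Polly retry wrapped around the increment might look something like this. It assumes the Post entity has a concurrency token (e.g. a rowversion column) so DbUpdateConcurrencyException can actually occur, and the retry count of 3 is arbitrary.

// requires: using Polly; using System.Linq;
//           using System.Data.Entity.Infrastructure; // DbUpdateConcurrencyException (EF6)
Policy
    .Handle<DbUpdateConcurrencyException>()
    .Retry(3, (exception, attempt) =>
    {
        // Refresh the entity from the database so the next attempt
        // increments the latest stored value.
        var entry = ((DbUpdateConcurrencyException)exception).Entries.Single();
        entry.Reload();
    })
    .Execute(() =>
    {
        post.ViewCount++;
        db.SaveChanges();
    });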
You'll need to weed out duplicate requests (from an F5 refresh, for example). To do that, you'll likely want to set something in the user's session, such as an access time, and then only count the view if some predetermined amount of time has passed since the user last accessed the page, or if the key doesn't exist in their session yet (first view). Bear in mind session timeouts with this, though. For example, if you only want to count one view per user per hour, but your session times out after 20 minutes, that's not going to work. In that scenario, you'd want to store the access time somewhere more persistent, like a database table. If you do use the session, you should also use a SQL Server or Redis backing for your session state, rather than In Proc or State Server, since the former options are much more reliable than the latter.
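For example, the session-based check could be sketched roughly as below. The session key name and the one-hour window are assumptions, and in a Web API back end you'd need session state enabled (or a persistent store) for this to work at all.

// Only count a view if this session hasn't viewed the post within the last hour.
var sessionKey = "PostViewed_" + post.Id;
var lastViewed = Session[sessionKey] as DateTime?;

if (lastViewed == null || DateTime.UtcNow - lastViewed.Value > TimeSpan.FromHours(1))
{
    post.ViewCount++;
    db.Entry(post).State = EntityState.Modified;
    db.SaveChanges();

    Session[sessionKey] = DateTime.UtcNow;
}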
You'll need to take bots into account. This could be any automated viewing of the page, benign or malicious. At the very least, you'll want to account for spiders like GoogleBot, so you don't increment the view count every time your site gets indexed. However, trying to determine whether a request originates from a bot is a whole problem in itself. Entire databases of bot UA strings are maintained explicitly to track what's out in the wild. It's achievable to exclude at least the vast majority of automated traffic, but your view count will basically only be as accurate as your bot filtering.
Based on all that, unless you have a really good reason to track the view count yourself, I'd say don't worry about it. For example, if you're just trying to report internally on usage statistics, employing something like Google Analytics will be a far easier and more manageable solution. Even if you do need the view count for something in your application, it still may be better to install third-party analytics software locally. Look for a solution with an API or at least some way to get at the data programmatically, and then you can simply pull the view count from that.
I have a number of images that are stored as VARBINARY(MAX) (using FileStream) in a database. I'm looking to retrieve about 10 images or so at a time.
The prescribed, most common way using ASP.NET is to use an HTTP handler and hit the database for each individual image. Seems fine, but it is a bit slow at times.
Is it best to download all images for a given page at the same time in one big data chunk? Or should I try to grab each individually? Best practice?
Probably best to do them individually on a domain that doesn't have cookies set, or make sure your handler will work with multiple simultaneous requests. That way you can stream multiple results from the DB at the same time, and stream multiple images from your webserver as it gets them.
Well, I think many people will have different opinions and reasons about what the best practice is for them, but in reality it all depends on the hardware, the software, the data structure, and whether the data is normalized.
In general, SQL Server prefers set-based operations, meaning loops are generally slower. But loops are safer with respect to IOPS-related issues and tend to cause fewer locks.
I am not sure which object mapper or built-in SQL library you are using (I have a feeling you may be using LINQ on top of a SQL class you built), but it also depends on the library, and I would definitely recommend Dapper.
I think reading them all at once would be faster, and here is why:
- If it is as you say and you hit the database for each image, then every request adds the delay of reconnecting to the database, so latency accumulates. With a single connection, the data retrieval is straightforward and the connection is already open, without requiring further session authentication.
I would recommend downloading them all at once and showing the end user a loading screen while that happens. Also, for retrieving data I believe this link is very helpful: https://technet.microsoft.com/en-us/library/dd425070(v=sql.100).aspx
Depending on your server's edition and features, you may also be able to take advantage of additional capabilities.
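As a rough sketch of the single-round-trip idea with Dapper: the dbo.Images table and its columns are placeholders rather than your actual schema, and imageIds/connectionString stand in for the ids of the page's images and your connection string.

// requires: using Dapper; using System.Data.SqlClient; using System.Linq;
public class ImageRow
{
    public int Id { get; set; }
    public byte[] ImageData { get; set; }
}

// Fetch all images for the page in one query; Dapper expands the
// IN @Ids clause from the list of ids.
using (var connection = new SqlConnection(connectionString))
{
    const string sql = "SELECT Id, ImageData FROM dbo.Images WHERE Id IN @Ids";

    var images = connection
        .Query<ImageRow>(sql, new { Ids = imageIds })
        .ToDictionary(row => row.Id, row => row.ImageData);
}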
In my client-server architecture I have a few API functions whose usage needs to be limited.
The server is written in C# (.NET) and runs on IIS.
Until now I didn't need to perform any synchronization. The code was written in such a way that even if a client sent the same request multiple times (e.g. a create request), one call would end with success and all the others with an error (because of the server code and DB structure).
What is the best way to perform such limitations? For example, I want no more than one call of the API method foo() per user per minute.
I thought about some SynchronizationTable which would have just one column, unique_text, and before computing the foo() call I'd write something like foo{userId}{date}{HH:mm} to this table. If the insert succeeds, I know that there wasn't a foo call from that user in the current minute.
I think there is a much better way, probably in server code, without using the DB for that. Of course, there could be thousands of users calling foo.
To clarify what I need: I think it could be some light DictionaryMutex.
For example:
private static DictionaryMutex FooLock = new DictionaryMutex();
FooLock.lock(User.GUID);
try
{
...
}
finally
{
FooLock.unlock(User.GUID);
}
EDIT:
A solution in which one user cannot call foo twice at the same time would also be sufficient for me. By "at the same time" I mean that the server starts handling the second call before returning the result of the first call.
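To make the idea concrete, a DictionaryMutex along those lines could be sketched with a ConcurrentDictionary of per-user semaphores. This is purely illustrative of what I mean, not production code.

// requires: using System; using System.Collections.Concurrent; using System.Threading;
public class DictionaryMutex
{
    private readonly ConcurrentDictionary<Guid, SemaphoreSlim> _locks =
        new ConcurrentDictionary<Guid, SemaphoreSlim>();

    public void Lock(Guid key)
    {
        // One semaphore per user; only one foo() call per user at a time.
        _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1)).Wait();
    }

    public void Unlock(Guid key)
    {
        SemaphoreSlim semaphore;
        if (_locks.TryGetValue(key, out semaphore))
            semaphore.Release();
    }
}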
Note that keeping this state in memory in an IIS worker process opens up the possibility of losing all this data at any instant. Worker processes can restart for any number of reasons.
Also, you probably want to have two web servers for high availability. Keeping the state inside of worker processes makes the application no longer clustering-ready. This is often a no-go.
Web apps really should be stateless. Many reasons for that. If you can help it, don't manage your own data structures like suggested in the question and comments.
Depending on how big the call volume is, I'd consider these options:
SQL Server. Your queries are extremely simple and easy to optimize for. Expect thousands of such queries per second per CPU core. This can bear a lot of load. You can use SQL Server Express for free.
A specialized store like Redis. Stack Overflow is using Redis as a persistent, clustering-enabled cache. A good idea.
A distributed cache, like Microsoft Velocity. Or others.
This storage problem is rather easy because it fits a key/value store model well. And the data is nearly worthless, so you don't even need to back it up.
I think you're overestimating how costly this rate limitation will be. Your web-service is probably doing a lot more costly things than a single UPDATE by primary key to a simple table.
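For example, with SQL Server the whole check can be a single atomic UPDATE keyed by user. This sketch assumes a per-user row already exists in a table like dbo.ApiCallLimits; the table and column names, userId and connectionString are all made up for illustration.

// requires: using System.Data.SqlClient;
const string sql = @"
    UPDATE dbo.ApiCallLimits
    SET LastFooCall = GETUTCDATE()
    WHERE UserId = @UserId
      AND (LastFooCall IS NULL OR LastFooCall < DATEADD(MINUTE, -1, GETUTCDATE()));";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(sql, connection))
{
    command.Parameters.AddWithValue("@UserId", userId);
    connection.Open();

    // One affected row means the call is allowed; zero means the user
    // already called foo() within the last minute.
    bool allowed = command.ExecuteNonQuery() == 1;
}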
As a result of a previous post (Architecture: simple CQS) I've been thinking how I could build a simple system that is flexible enough to be extended later.
In other words: I don't see the need for a full-blown CQRS now, but I want it to be easy to evolve to it later, if needed.
So I was thinking to separate commanding from querying, but both based on the same database.
The query part would be easy: a WCF Data Service based on views, so that it's easy to query for data. Nothing special there.
The command part is more difficult, and here's an idea: commands are of course executed in an asynchronous way, so they don't return a result. But my ASP.NET MVC site's controllers often need feedback from a command (for example, whether the registration of a member succeeded or not). So when the controller sends a command, it also generates a transaction ID (a GUID) that is passed along with the command properties. The command service receives this command, puts it into a transactions table in the database with state 'processing', and executes it (using DDD principles). After execution, the transactions table is updated so that the state becomes 'completed' or 'failed', along with more detailed information such as the primary key that was generated.
Meanwhile the site is using the QueryService to poll for the state of this transaction, until it receives 'completed' or 'failed', and then it can continue its work based on this result. If the transactions table is polled and the result was 'completed' or 'failed', the entry is deleted.
A side effect is that I don't need GUIDs as keys for my entities, which is a good thing for performance and size.
In most cases this polling mechanism is probably not needed, but is possible if needed. And the interfaces are designed with CQS in mind, so open for the future.
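To make the idea concrete, the controller side could look roughly like this. The service interfaces, command type and status values are just hypothetical names for illustration, and in practice the polling would more likely happen from the browser than by blocking the request like this.

public ActionResult Register(RegisterModel model)
{
    var transactionId = Guid.NewGuid();
    _commandService.Send(new RegisterMemberCommand(transactionId, model.Email, model.Name));

    // Poll the query side until the transaction is reported completed or failed.
    for (var attempt = 0; attempt < 20; attempt++)
    {
        var state = _queryService.GetTransactionState(transactionId);

        if (state == TransactionState.Completed)
            return RedirectToAction("Welcome");
        if (state == TransactionState.Failed)
            return View("RegistrationFailed", model);

        Thread.Sleep(250);
    }

    return View("StillProcessing", model);
}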
Do you think of any flaws in this approach? Other ideas or suggestions?
Thanks!
Lud
I think you are very close to a full CQRS system with your approach.
I have a site where I did something similar to what you are describing. My site, braincredits.com, is architected using CQRS, and all commands are async in nature. So, as a result, when I create an entry, there is really no feedback to the user other than that the command was successfully submitted for processing (not that it has been processed).
But I have a user score on the site (a count of their "credits") that should change as the user submits more items. But I don't want the user to keep hitting F5 to refresh the browser. So I am doing what you are proposing -- I have an AJAX call that fires off every second or two to see if the user's credit count has changed. If it has, the new amount is brought back and the UI is updated (with a little bit of animation to catch the user's attention -- but not too flashy).
What you're talking about is eventual consistency -- that the state of the application that the user is seeing will eventually be consistent with the system data (the system of record). That concept is pretty key to CQRS, and, in my opinion, makes a lot of sense. As soon as you retrieve data in a system (whether it's a CQRS-based one or not), the data is old. But if you assume that and assume that the client will eventually be consistent, then your approach makes sense and you can also design your UI to account for that AND take advantage of that.
As far as suggestions, I would watch how much polling you do and how much data you're sending back and forth. Don't go overboard with polling, which it sounds like you're not doing. But target what should be updated on a regular basis on your site and I think you'll be good.
The WCF Data Service layer for the query side is a good idea - just make sure it's only read-enabled (which I'm sure you've done).
Other than that, it sounds like you're off to a good start.
I hope this helps. Good luck!
I have a requirement to monitor database rows continuously to check for changes (updates). If there are changes or updates from other sources, an event should be fired in my application (I am using WCF). Is there any way to listen to the database rows continuously for changes?
I may have a larger number of events to monitor different rows in the same table. Is there any problem in terms of performance? I am using a C# web service to monitor the SQL Server back end.
You could use an AFTER UPDATE trigger on the respective tables to add an item to a SQL Server Service Broker queue. Then have the queued notifications sent to your web service.
Another poster mentioned SqlDependency, which I also thought of mentioning, but the MSDN documentation is a little strange in that it provides a Windows client example yet also offers this advice:
SqlDependency was designed to be used in ASP.NET or middle-tier services where there is a relatively small number of servers having dependencies active against the database. It was not designed for use in client applications, where hundreds or thousands of client computers would have SqlDependency objects set up for a single database server. (Ref.)
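For reference, a SqlDependency subscription looks roughly like the sketch below. The query, table and HandleChange handler are placeholders, and notification queries come with restrictions (explicit column list, two-part table names, no SELECT *).

// requires: using System.Data.SqlClient;
SqlDependency.Start(connectionString);

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Id, Status FROM dbo.Orders", connection))
{
    var dependency = new SqlDependency(command);
    dependency.OnChange += (sender, e) =>
    {
        // Fires once when the result set changes; to keep listening you
        // re-run the query with a fresh SqlDependency.
        HandleChange(e.Info); // hypothetical handler
    };

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* consume the initial result set */ }
    }
}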
I had a very similar requirement some time ago, and I solved it using a CLR SP to push the data into a message queue.
To ease deployment, I created a CLR SP with a tiny function called SendMessage that just pushed a message onto a Message Queue, and tied it to my tables using an AFTER INSERT trigger (a normal trigger, not a CLR trigger).
Performance was my main concern in this case, but I have stress tested it and it greatly exceeded my expectations. And compared to SQL Server Service Broker, it's a very easy-to-deploy solution. The code in the CLR SP is really trivial as well.
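A CLR SP along those lines might look roughly like this. This is an illustrative sketch rather than the original code: the queue path is a placeholder, and the assembly would need UNSAFE permission to use System.Messaging from inside SQL Server.

// requires references to System.Messaging and Microsoft.SqlServer.Server
using System.Data.SqlTypes;
using System.Messaging;
using Microsoft.SqlServer.Server;

public class MessageProcedures
{
    [SqlProcedure]
    public static void SendMessage(SqlString queuePath, SqlString body)
    {
        // Push the message onto an MSMQ queue, e.g. ".\private$\table-changes".
        using (var queue = new MessageQueue(queuePath.Value))
        {
            queue.Send(body.Value);
        }
    }
}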
Monitoring "continuously" could mean every few hours, minutes, seconds or even milliseconds. This solution might not work for millisecond updates: but if you only have to "monitor" a table a few times a minute you could simply have an external process check a table for updates. (If there is a DateTime column present.) You could then process the changed or newly added rows and perform whatever notification you need to. So you wouldn't be listening for changes, you'd be checking for them. One benefit of doing the checking in this manner would be that you wouldn't risk as much of a performance hit if a lot of rows were updated during a given quantum of time since you'd bulk them together (as opposed to responding to each and every change individually.)
I pondered the idea of a CLR function or something of the sort that calls the service after successfully inserting/updating/deleting data from the tables. Is that even good in this situation?
Probably it's not a good idea, but I guess it's still better than getting into table trigger hell.
I assume your problem is you want to do something after every data modification, let's say, recalculate some value or whatever. Letting the database be responsible for this is not a good idea because it can have severe impacts on performance.
You mentioned you want to detect inserts, updates and deletes on different tables. Doing it the way you are leaning towards would require you to set up three triggers/CLR functions per table and have them post an event to your WCF service (is that even supported in the subset of .NET available inside SQL Server?). The WCF service then takes the appropriate actions based on the events received.
A better solution for the problem would be moving the responsibility for detecting data modification from your database to your application. This can actually be implemented very easily and efficiently.
Each table has a primary key (int, GUID or whatever) and a timestamp column, indicating when the entry was last updated. This is a setup you'll see very often in optimistic concurrency scenarios, so it may not even be necessary to update your schema definitions. Though, if you need to add this column and can't offload updating the timestamp to the application using the database, you just need to write a single update trigger per table, updating the timestamp after each update.
To detect modifications, your WCF service/monitoring application builds up a local dictionary (preferably a hashtable) with primary key/timestamp pairs at a given time interval. Using a covering index in the database, this operation should be really fast. The next step is to compare the two dictionaries, and voilà, there you go.
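A sketch of that snapshot-and-compare step is below; the table name, column names and the OnInserted/OnUpdated/OnDeleted callbacks are all placeholders.

// requires: using System; using System.Collections.Generic; using System.Data.SqlClient;
private static Dictionary<int, DateTime> LoadSnapshot(SqlConnection connection)
{
    var snapshot = new Dictionary<int, DateTime>();
    using (var command = new SqlCommand("SELECT Id, LastUpdated FROM dbo.Orders", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            snapshot[reader.GetInt32(0)] = reader.GetDateTime(1);
    }
    return snapshot;
}

// Compare two snapshots: new keys are inserts, missing keys are deletes,
// and differing timestamps are updates.
private static void Compare(Dictionary<int, DateTime> previous, Dictionary<int, DateTime> current)
{
    foreach (var pair in current)
    {
        DateTime old;
        if (!previous.TryGetValue(pair.Key, out old))
            OnInserted(pair.Key);      // hypothetical callback
        else if (old != pair.Value)
            OnUpdated(pair.Key);       // hypothetical callback
    }

    foreach (var key in previous.Keys)
        if (!current.ContainsKey(key))
            OnDeleted(key);            // hypothetical callback
}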
There are some caveats to this approach, though. One of them is the number of records per table, another one is the update frequency (if it gets too low, the approach becomes ineffective), and yet another point is whether you need access to the data as it was before the modification/insertion.
Hope this helps.
Why don't you use SQL Server Notification Services? I think that's exactly the thing you are looking for. Go through the Notification Services documentation and see if it fits your requirements.
I think there's some great ideas here; from the scalability perspective I'd say that externalizing the check (e.g. Paul Sasik's answer) is probably the best one so far (+1 to him).
If, for some reason, you don't want to externalize the check, then another option would be to use the HttpCache to store a watcher and a callback.
In short, when you put the record you want to watch into the DB, you also add it to the cache (using the .Add method), set a SqlCacheDependency on it, and register a callback to whatever logic you want to run when the dependency is invalidated and the item is ejected from the cache.
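A rough sketch of that idea follows. The query, cache key and OnRecordChanged callback are placeholders, recordId stands in for the key of the watched row, and it assumes SQL Server query notifications are available and SqlDependency.Start has been called for the connection string.

// requires: using System.Data.SqlClient; using System.Web; using System.Web.Caching;
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT Id, Status FROM dbo.Orders WHERE Id = @Id", connection))
{
    command.Parameters.AddWithValue("@Id", recordId);
    var dependency = new SqlCacheDependency(command);

    // The command has to be executed for the notification to be registered.
    connection.Open();
    using (command.ExecuteReader()) { }

    HttpRuntime.Cache.Add(
        "watch-order-" + recordId,          // cache key
        recordId,                           // the value to watch
        dependency,                         // invalidated when the row changes
        Cache.NoAbsoluteExpiration,
        Cache.NoSlidingExpiration,
        CacheItemPriority.Normal,
        (key, value, reason) =>
        {
            if (reason == CacheItemRemovedReason.DependencyChanged)
                OnRecordChanged((int)value); // hypothetical callback logic
        });
}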