I have a Windows service that polls a database. I am using EF6 and LINQ for my queries, updates, etc.
The polling needs to happen as often as possible, probably every 2 seconds or so.
My gut tells me to have one connection and keep it open while my service is running; however, something else tells me to open and close the connection every time. I suspect the latter will slow things down (will it really slow it down that much?).
What are the best practices for polling a database from a Windows service? Should I really be polling my database this often?
I think you should dispose of the context frequently and create a new one every time you poll the database.
The main reason is that unless you disable object tracking (really only suitable for read-only operations), the context gets bigger and bigger over time, with each successive polling operation loading more data into the context's cache. Besides the memory growth this causes, SaveChanges() gets slower, because the context has to scan every object attached to it for changes.
If the connection is lost for any reason, you'll also have a hard time associating a new connection with an existing context. As for performance: in my experience, recreating the context won't slow anything down; EF context objects are quick to construct after the first one, because the model is cached on first load.
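As a rough sketch of what that pattern looks like (MyDbContext, Orders and Processed are placeholder names, not from the question):

    // Called by the service's timer every couple of seconds.
    private void PollDatabase()
    {
        // A fresh, short-lived context per poll; construction is cheap
        // once the model has been cached on first use.
        using (var context = new MyDbContext())
        {
            var pending = context.Orders
                                 .Where(o => !o.Processed)
                                 .ToList();

            foreach (var order in pending)
            {
                order.Processed = true; // tracked only by this context
            }

            context.SaveChanges();
        } // the context and everything it tracked are released here
    }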
I wouldn't worry about polling every 2 seconds. That seems totally reasonable to me.
As an aside, if you're using SQL Server, you can use SqlDependency to fire an event when data changes, but polling is the most reliable option.
http://msdn.microsoft.com/en-us/library/62xk7953(v=vs.110).aspx
Alternatively, if you're dead set against polling, you could look at using a Message Broker system like RabbitMQ and updating your apps to use it, but be prepared to lose a couple of weeks implementing the infrastructure.
Related
I have a unique (or so I think) problem: we have an ASP.NET web app using MVC principles. The project will be at most single-threaded (our business requires a single point of control). We are using Entity Framework to connect to the database.
Problem:
We want to query our database less frequently than every page load.
I have considered putting our database connection in a singleton, but I am worried about connecting too infrequently: will a query still work if the connection was opened a significant time ago? How would you recommend connecting to the database?
How would you recommend connecting to the database?
Do NOT use a shared connection. Connections are not thread-safe, and are pooled by .NET, so creating one generally isn't an expensive operation.
The best practice is to create a command and connection for every database request. If you are using Entity Framework, then this will be taken care of for you.
If you want to cache results using the built-in Session or Cache properties, then that's fine, but don't cache disposable resources like connections, EF contexts, etc.
If at some point you find you have a measurable performance problem directly related to creating connections or contexts, then you can try and deal with that, but don't try to optimize something that might not even be a problem.
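For plain ADO.NET, the per-request pattern is a few lines (the query is made up for illustration); note that disposing the connection merely returns it to the pool rather than physically closing it:

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Products", connection))
    {
        connection.Open(); // cheap: usually just rents a pooled connection
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // consume the row
            }
        }
    } // Dispose returns the connection to the pool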
If you want to get data without connecting to the database, you need to cache it, either in memory, in a file, or in whatever means of storage you want, but you need to keep it in front of the DB somehow. There is no other way known to me.
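A minimal sketch of that kind of cache using MemoryCache from System.Runtime.Caching (the Product type, cache key and 30-second lifetime are illustrative assumptions):

    public List<Product> GetProducts()
    {
        var cache = MemoryCache.Default;
        var products = cache.Get("products") as List<Product>;
        if (products == null)
        {
            products = LoadProductsFromDatabase(); // your EF query; runs only on a cache miss
            cache.Set("products", products, DateTimeOffset.Now.AddSeconds(30));
        }
        return products;
    }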
If by connecting you mean building a completely new SqlConnection to your DB, then you can either rely on connection pooling (EF is smart enough to keep your connections alive for some minutes even after you finish your work) or create connections and keep them alive inside your application yourself by not closing them immediately (i.e. keeping track of them in some structure).
But you should definitely consider if this is REALLY what you want. The way EF does it internally is most of the time exactly what you want.
Some further reading:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I need an ORM that is suitable for a stateful application. I'm going to keep entities between requests in a low-latency, real-time game server with persistent client connections. There is only one server instance connected to the database, so no data can be changed from "outside" and the server can rely on its cache.
When a user logs in to the server, their whole profile is loaded into server memory. Several higher-level services are also created for each user to operate on profile data and provide functionality. They can also have internal fields (state) to store temporary data. When a user wants to change their signature, they ask the corresponding service to do so. The service tracks how frequently the user changes their signature and allows it only once per ten minutes (for example); such a short interval is not tracked in the DB, as it is temporary state. The change itself should be stored to the DB by executing only one query: UPDATE users SET signature = ... WHERE user_id = .... When the user logs off, the profile is unloaded from server memory after minutes/hours of inactivity. The DB here is only storage. This is what I call stateful.
Some entities are considered "static data" and are loaded only once at application start. Those can be referenced from other "dynamic" entities. Loading a "dynamic" entity should not require reloading the referenced "static data" entity.
Update/Insert/Delete should set/insert/delete only the changed properties/entities, even for "detached" entities.
Write operations should not have to load data from the database (perform a SELECT) first in order to detect changes. (State can be tracked in a dynamically generated subclass.) I have the state locally; there is no reason to load anything. I want to keep tracking changes even outside the connection scope and "upload" the changes when I choose.
While performing operations, the references of persisted objects should not change.
DBConnection-per-user is not going to work. The expected number of concurrent users is in the thousands.
Entities from "static data" can be assigned to "dynamic" entity properties (which represent foreign keys), and Update should handle this correctly.
Right now I'm using NHibernate even though it's designed for stateless applications. It supports reattaching to a session, but that looks like a very uncommon usage, requires me to rely on undocumented behavior, and doesn't solve everything.
I'm not sure about Entity Framework - can I use it that way? Or can you suggest another ORM?
If the server recreates (or, worse, reloads) user objects each time a user hits a button, it will eat CPU very fast. CPU scales vertically at great expense and to little effect; by contrast, if you run out of RAM you can just go and buy more, like horizontal scaling but easier to code for. If you think another approach should be used here, I'm ready to discuss it.
Yes, you can use EF for this kind of application. Keep in mind that under heavy load you will get occasional database errors, and it's typically faster to recover from errors when your application tracks the changes rather than EF. By the way, you can use NHibernate the same way too.
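To illustrate the single-query UPDATE requirement from the question, EF6 can flush one changed property of a detached entity without a preliminary SELECT; a sketch (GameDbContext, User and Signature stand in for the real model):

    using (var context = new GameDbContext())
    {
        context.Users.Attach(user); // attach the detached entity; no query issued
        context.Entry(user).Property(u => u.Signature).IsModified = true;
        context.SaveChanges();
        // Emits roughly: UPDATE users SET signature = ... WHERE user_id = ...
    }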
I have used hibernate in a stateful desktop application with extremely long sessions: the session starts when the application launches, and remains open for as long as the application is running. I had no problems with that. I make absolutely no use of attaching, detaching, reattaching, etc. I know it is not standard practice, but that does not mean it is not doable, or that there are any pitfalls. (Edit: but of course read the discussion below for possible pitfalls suggested by others.)
I have even implemented my own change-notification mechanism on top of that (a separate thread polling the DB directly, bypassing Hibernate), so it is even possible to have external agents modify the database while Hibernate is running, and to have your application take notice of those changes.
If you have lots and lots of stuff already working with hibernate, it would probably not be a good idea to abandon what you already have and rewrite it unless you are sure that hibernate absolutely won't do what you want to accomplish.
I want to use SqlDependency in my project, but the table that I want a dependency on is used by several programs for very important purposes, so they have to be able to insert into this table while the SqlDependency is in action. Is that possible?
I've read this question but didn't find my answer.
To answer your question, SqlDependency will not 'lock' the table, but may increase lock contention in high-write environments as it uses the same mechanism as indexed views to detect changes to underlying data.
However, it should be a good fit unless:
The frequency of changes is likely to be high. To define 'high', you really need to test in your own ecosystem, but a suggested guideline is that if your data changes many times per second, it's probably not a good fit for you: the response time of SqlDependency is not guaranteed, and the callback mechanism is not designed to reliably handle many concurrent changes where you need to be notified of every one. In addition, SqlDependency can increase blocking/contention on the underlying table, because the index used to keep track of changes can become a bottleneck under a high frequency of writes.
You are intending to build the SqlDependency into a client application (e.g. desktop app) which accesses the database directly, and of which there will be many instances. In this case, the sheer volume of listeners, queues and messages could impact database performance and is just inefficient. In this case you need to put some middleware in between your database and your app before thinking about SqlDependency.
You need to be reliably notified of every single change. The mechanism underlying SqlDependency within SQL Server will generate a notification for every change, but the .NET side of things is not inherently designed to handle them in a multi-threaded way: if a notification arrives while the SqlDependency's worker thread is already handling another notification, it will be missed. In this case, you may be able to use SqlNotificationRequest instead.
You need to be notified immediately of the change (i.e. guaranteed sub-second). SqlDependency is not designed to be low-latency; it's designed for a cache-invalidation scenario.
If SqlDependency is not a good fit, have a look at the Planning for Notifications and underlying Query Notifications pages on MSDN for more guidance and suggestions of alternatives. Otherwise see below for a bit more detail on how to assess performance based on the underlying technologies at play.
SqlDependency largely relies upon two key SQL Server technologies: query notifications (based on indexed views), and service broker. It effectively hooks into the mechanism that updates an indexed view whenever the underlying data changes. It adds a message to a queue for each change, and service broker handles the messaging and notifications. In cases where the write frequency is very high, SQL Server will work hard to handle the writes, keep its 'indexed view' up-to-date, as well as queueing and serving up the many resulting messages. If you need near-instant notification, this may still be the best approach, otherwise have a look at either polling, or using an After Update trigger which perhaps uses Service Broker as suggested on MSDN.
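For reference, the basic SqlDependency pattern looks roughly like this (the query and table are placeholders; the database needs Service Broker enabled, and the query must obey the notification restrictions, e.g. two-part table names and an explicit column list):

    // Once per application, before creating any dependencies:
    SqlDependency.Start(connectionString);

    void Subscribe()
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // A notification fires only once; re-subscribe, then re-read the data.
                Subscribe();
            };

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // Consuming the results registers the subscription.
            }
        }
    }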
I have a requirement to monitor database rows continuously to check for changes (updates). If there are changes or updates from other sources, an event should be fired in my application (I am using WCF). Is there any way to listen to the database rows continuously for changes?
I may end up with a number of events monitoring different rows in the same table. Is there a performance concern? I am using a C# web service to monitor the SQL Server back end.
You could use an AFTER UPDATE trigger on the respective tables to add an item to a SQL Server Service Broker queue. Then have the queued notifications sent to your web service.
Another poster mentioned SqlDependency, which I also thought of mentioning, but the MSDN documentation is a little strange in that it provides a Windows client example yet also offers this advice:
SqlDependency was designed to be used in ASP.NET or middle-tier services where there is a relatively small number of servers having dependencies active against the database. It was not designed for use in client applications, where hundreds or thousands of client computers would have SqlDependency objects set up for a single database server.
I had a very similar requirement some time ago, and I solved it using a CLR SP to push the data into a message queue.
To ease deployment, I created a CLR SP with a tiny little function called SendMessage that just pushed a message into a message queue, and tied it to my tables using an AFTER INSERT trigger (a normal trigger, not a CLR trigger).
Performance was my main concern in this case, but I have stress tested it and it greatly exceeded my expectations. And compared to SQL Server Service Broker, it's a very easy-to-deploy solution. The code in the CLR SP is really trivial as well.
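A hedged sketch of that kind of CLR SP (the queue path is hypothetical, and because System.Messaging reaches outside the in-process sandbox, the assembly has to be deployed with UNSAFE permission):

    using System.Messaging;
    using Microsoft.SqlServer.Server;

    public static class Notifications
    {
        [SqlProcedure]
        public static void SendMessage(string body)
        {
            using (var queue = new MessageQueue(@".\private$\ChangeNotifications"))
            {
                queue.Send(body);
            }
        }
    }

The AFTER INSERT trigger then just does an EXEC of the registered procedure with whatever payload you need.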
Monitoring "continuously" could mean every few hours, minutes, seconds or even milliseconds. This solution might not work for millisecond updates: but if you only have to "monitor" a table a few times a minute you could simply have an external process check a table for updates. (If there is a DateTime column present.) You could then process the changed or newly added rows and perform whatever notification you need to. So you wouldn't be listening for changes, you'd be checking for them. One benefit of doing the checking in this manner would be that you wouldn't risk as much of a performance hit if a lot of rows were updated during a given quantum of time since you'd bulk them together (as opposed to responding to each and every change individually.)
I pondered the idea of a CLR function or something of the sort that calls the service after successfully inserting/updating/deleting data from the tables. Is that even good in this situation?
Probably it's not a good idea, but I guess it's still better than getting into table trigger hell.
I assume your problem is you want to do something after every data modification, let's say, recalculate some value or whatever. Letting the database be responsible for this is not a good idea because it can have severe impacts on performance.
You mentioned you want to detect inserts, updates and deletes on different tables. Doing it the way you are leaning towards would require you to set up three triggers/CLR functions per table and have them post an event to your WCF service (is that even supported in the subset of .NET available inside SQL Server?). The WCF service then takes the appropriate actions based on the events received.
A better solution for the problem would be moving the responsibility for detecting data modification from your database to your application. This can actually be implemented very easily and efficiently.
Each table has a primary key (int, GUID or whatever) and a timestamp column, indicating when the entry was last updated. This is a setup you'll see very often in optimistic concurrency scenarios, so it may not even be necessary to update your schema definitions. Though, if you need to add this column and can't offload updating the timestamp to the application using the database, you just need to write a single update trigger per table, updating the timestamp after each update.
To detect modifications, your WCF service/monitoring application builds up a local dictionary (preferably a hashtable) with primary key/timestamp pairs at a given time interval. Using a covering index in the database, this operation should be really fast. The next step is to compare both dictionaries and voilà, there you go.
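A minimal sketch of that snapshot/compare step (table and column names are assumptions, and OnInserted/OnUpdated are whatever callbacks you need):

    // With a covering index on (Id, LastUpdated) this is an index-only scan.
    Dictionary<int, DateTime> TakeSnapshot(SqlConnection connection)
    {
        var snapshot = new Dictionary<int, DateTime>();
        using (var command = new SqlCommand("SELECT Id, LastUpdated FROM dbo.Items", connection))
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                snapshot[reader.GetInt32(0)] = reader.GetDateTime(1);
        }
        return snapshot;
    }

    void Compare(Dictionary<int, DateTime> previous, Dictionary<int, DateTime> current)
    {
        foreach (var pair in current)
        {
            DateTime previousStamp;
            if (!previous.TryGetValue(pair.Key, out previousStamp))
                OnInserted(pair.Key);            // new row
            else if (previousStamp != pair.Value)
                OnUpdated(pair.Key);             // modified row
        }
        // Keys in `previous` that are missing from `current` were deleted.
    }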
There are some caveats to this approach, though. One is the number of records per table, another is the update frequency (if it gets too low, this becomes ineffective), and yet another pain point is whether you need access to the data as it was before the modification/insertion.
Hope this helps.
Why don't you use SQL Server Notification Services? I think that's exactly the thing you are looking for. Go through the documentation of Notification Services and see if it fits your requirement.
I think there's some great ideas here; from the scalability perspective I'd say that externalizing the check (e.g. Paul Sasik's answer) is probably the best one so far (+1 to him).
If, for some reason, you don't want to externalize the check, then another option would be to use the HttpCache to store a watcher and a callback.
In short, when you put the record you want to watch into the DB, you also add it to the cache (using the .Add method) with a SqlCacheDependency on it, plus a callback to whatever logic you want to run when the dependency fires and the item is ejected from the cache.
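A hedged sketch of that idea (the query, key and types are illustrative; SqlDependency.Start must already have been called for the connection string, and the usual query-notification restrictions apply):

    void WatchOrder(SqlConnection connection, int orderId) // connection assumed open
    {
        var command = new SqlCommand(
            "SELECT Status FROM dbo.Orders WHERE OrderId = @id", connection);
        command.Parameters.AddWithValue("@id", orderId);

        var dependency = new SqlCacheDependency(command);
        object status = command.ExecuteScalar(); // executing the command registers the notification

        HttpRuntime.Cache.Add(
            "order-" + orderId,
            status,
            dependency,
            Cache.NoAbsoluteExpiration,
            Cache.NoSlidingExpiration,
            CacheItemPriority.Normal,
            (key, value, reason) =>
            {
                if (reason == CacheItemRemovedReason.DependencyChanged)
                {
                    // The row changed and the item was ejected; run your logic here.
                }
            });
    }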
Firstly, let me give a brief description of the scenario. I'm writing a simple game where pretty much all of the work is done on the server side, with a thin client for players to access it. A player logs in or creates an account and can then interact with the game by moving around a grid. When they enter a cell, they should be informed of other players in that cell and, similarly, other players in that cell will be informed of that player entering it. There are lots of other interactions and actions that can take place, but it's not worth going into detail on them as it's just more of the same. When a player logs out and back in, or if the server goes down and comes back up, all of the game state should persist, although if the server crashes, it doesn't matter if I lose 10 minutes or so of changes.
I've decided to use NHibernate and a SQLite database, so I've been reading up a lot on NHibernate, following tutorials and writing some sample applications, and am thoroughly confused as to how I should go about this!
The question I have is: what's the best way to manage my sessions? Just from the small amount that I do understand, all these possibilities jump out at me:
Have a single session that's always opened that all clients use
Have a single session for each client that connects and periodically flush it
Open a session every time I have to use any of the persisted entities and close it as soon as the update, insert, delete or query is complete
Have a session for each client, but keep it disconnected and only reconnect it when I need to use it
Same as above, but keep it connected and only disconnect it after a certain period of inactivity
Keep the entities detached and only attach them every 10 minutes, say, to commit the changes
What kind of strategy should I use to get decent performance given that there could be many updates, inserts, deletes and queries per second from possibly hundreds of clients all at once, and they all have to be consistent with each other?
Another smaller question: how should I use transactions in an efficient manner? Is it fine for every single change to be in its own transaction, or is that going to perform badly when I have hundreds of clients all trying to alter cells in the grid? Should I try to figure out how to bulk together similar updates and place them within a single transaction, or is that going to be too complicated? Do I even need transactions for most of it?
I would use a session per request to the server, and one transaction per session. I wouldn't optimize for performance before the app is mature.
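In sketch form (sessionFactory is your single application-wide ISessionFactory; Player is a placeholder entity):

    // One session and one transaction per client request.
    using (var session = sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        var player = session.Get<Player>(playerId);
        player.Cell = newCell; // tracked by this session
        transaction.Commit();  // flushes the change and commits atomically
    } // the session is cheap to open and is simply discarded afterwards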
Answers to your proposed solutions:
Have a single session that's always opened that all clients use: You will have performance issues here because the session is not thread safe and you will have to lock all calls to the session.
Have a single session for each client that connects and periodically flush it: You will have performance issues here because all data used by the client will be cached. You will also see problems with stale data from the cache.
Open a session every time I have to use any of the persisted entities and close it as soon as the update, insert, delete or query is complete: You won't have any performance problems here. A disadvantage is possible concurrency or data-corruption problems, because related SQL statements are not executed in the same transaction.
Have a session for each client, but keep it disconnected and only reconnect it when I need to use it: NHibernate already has built-in connection management, and it is already very well optimized.
Same as above, but keep it connected and only disconnect it after a certain period of inactivity: Will cause problems because the number of SQL connections is limited, and this will also limit the number of users of your application.
Keep the entities detached and only attach them every 10 minutes, say, to commit the changes: Will cause problems because of stale data in the detached entities. You will have to track changes yourself, which makes you end up with a piece of code that looks like the session itself.
It would be useless to go into more detail now, because I would just be repeating the manuals/tutorials/book. When you use a session per request, you probably won't have problems in 99% of the application you describe (and maybe none at all). The session is a lightweight, non-thread-safe class that is meant to live for a very short time. When you want to know exactly how the session/connection/caching/transaction management works, I recommend reading a manual first and then asking more detailed questions about the unclear subjects.
Read the 'ISessionFactory' section on this page of the NHibernate documentation. ISessions are meant to be single-threaded (i.e., not thread-safe), which probably means that you shouldn't be sharing one across users. The ISessionFactory should be created once by your application, and ISessions should be created for each unit of work. Remember that creating an ISession does not necessarily result in opening a database connection; that depends on how your session factory's connection pooling strategy is configured.
You may also want to look at Hibernate's Documentation on Session and Transaction.
I would aim to keep everything in memory, and either journal changes or take periodic offline snapshots.
Have a read through NHibernate Best Practices with ASP.NET; there are some very good tips in there for a start. As mentioned already, be very careful with an ISession, as it is NOT thread-safe, so just keep that in mind.
If you require something a little more complex, then take a look at the NHibernate.Burrow contrib project. It states something like "the real power Burrow provides is that a Burrow conversation can span over multiple http requests".