NHibernate session management? - C#

Firstly, let me give a brief description of the scenario. I'm writing a simple game where pretty much all of the work is done on the server side with a thin client for players to access it. A player logs in or creates an account and can then interact with the game by moving around a grid. When they enter a cell, they should be informed of other players in that cell and similarly, other players in that cell will be informed of that player entering it. There are lots of other interactions and actions that can take place but it's not worth going in to detail on them as it's just more of the same. When a player logs out then back in or if the server goes down and comes back up, all of the game state should persist, although if the server crashes, it doesn't matter if I lose 10 minutes or so of changes.
I've decided to use NHibernate and a SQLite database, so I've been reading up a lot on NHibernate, following tutorials and writing some sample applications, and am thoroughly confused as to how I should go about this!
The question I have is: what's the best way to manage my sessions? Just from the small amount that I do understand, all these possibilities jump out at me:
Have a single session that's always open that all clients use
Have a single session for each client that connects and periodically flush it
Open a session every time I have to use any of the persisted entities and close it as soon as the update, insert, delete or query is complete
Have a session for each client, but keep it disconnected and only reconnect it when I need to use it
Same as above, but keep it connected and only disconnect it after a certain period of inactivity
Keep the entities detached and only attach them every 10 minutes, say, to commit the changes
What kind of strategy should I use to get decent performance given that there could be many updates, inserts, deletes and queries per second from possibly hundreds of clients all at once, and they all have to be consistent with each other?
Another smaller question: how should I use transactions in an efficient manner? Is it fine for every single change to be in its own transaction, or is that going to perform badly when I have hundreds of clients all trying to alter cells in the grid? Should I try to figure out how to bulk together similar updates and place them within a single transaction, or is that going to be too complicated? Do I even need transactions for most of it?

I would use a session per request to the server, and one transaction per session. I wouldn't optimize for performance before the app is mature.
Answer to your solutions:
Have a single session that's always open that all clients use: You will have performance issues here because the session is not thread-safe, so you would have to lock every call to the session.
Have a single session for each client that connects and periodically flush it: You will have performance issues here because all data used by the client will be cached. You will also see problems with stale data from the cache.
Open a session every time I have to use any of the persisted entities and close it as soon as the update, insert, delete or query is complete: You won't have performance problems here. A disadvantage is the possibility of concurrency or data-corruption problems, because related SQL statements are not executed in the same transaction.
Have a session for each client, but keep it disconnected and only reconnect it when I need to use it: NHibernate already has built-in connection management, and it is already very well optimized.
Same as above, but keep it connected and only disconnect it after a certain period of inactivity: This will cause problems because the number of SQL connections is limited, which will in turn limit the number of users of your application.
Keep the entities detached and only attach them every 10 minutes, say, to commit the changes: This will cause problems because of stale data in the detached entities. You would have to track changes yourself, and you would end up with a piece of code that looks like the session itself.
It would be useless to go into more detail now, because I would just be repeating the manuals/tutorials/book. When you use a session per request, you probably won't have problems in 99% of the application you describe (and maybe none at all). The session is a lightweight, non-thread-safe class that is meant to live for a very short time. When you want to know exactly how session/connection/caching/transaction management works, I recommend reading the manual first, and then asking more detailed questions about whatever remains unclear.
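As a rough illustration of session-per-request with one transaction per session, a handler on the game server might look like the sketch below. The entity classes and method names are hypothetical; only the NHibernate calls are real API.

```csharp
using NHibernate;

// Hypothetical entities for illustration only.
public class Cell { public virtual int Id { get; set; } }
public class Player
{
    public virtual int Id { get; set; }
    public virtual Cell CurrentCell { get; set; }
}

public static class MoveHandler
{
    // One short-lived session and one transaction per server request.
    public static void HandleMove(ISessionFactory factory, int playerId, int cellId)
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            Player player = session.Get<Player>(playerId);
            player.CurrentCell = session.Get<Cell>(cellId);
            tx.Commit(); // flushes the session; rolled back on dispose if not committed
        }   // the session (and any connection it borrowed) is released here
    }
}
```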

Read the 'ISessionFactory' section on this page of the NHibernate documentation. ISessions are meant to be single-threaded (i.e., not thread-safe), which probably means that you shouldn't share one across users. The ISessionFactory should be created once by your application, and ISessions should be created for each unit of work. Remember that creating an ISession does not necessarily open a database connection; that depends on how your session factory's connection pooling strategy is configured.
You may also want to look at Hibernate's Documentation on Session and Transaction.
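To make the "factory once, session per unit of work" point concrete, a common pattern is a singleton factory built at startup; this sketch assumes the usual hibernate.cfg.xml configuration file:

```csharp
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateHelper
{
    // Expensive to build: create exactly once per application.
    private static readonly ISessionFactory Factory =
        new Configuration().Configure().BuildSessionFactory();

    // Cheap to create: open one per unit of work and dispose it promptly.
    public static ISession OpenSession()
    {
        return Factory.OpenSession();
    }
}
```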

I would aim to keep everything in memory, and either journal changes or take periodic offline snapshots.

Have a read through NHibernate Best Practices with ASP.NET; there are some very good tips in there to get you started. As mentioned already, be very careful with an ISession as it is NOT thread-safe, so keep that in mind.
If you require something a little more complex, then take a look at the NHibernate.Burrow contrib project. It states something like "the real power Burrow provides is that a Burrow conversation can span over multiple http requests".

Related

ORM for stateful application. Does EF fit? Or any?

I need an ORM that is suitable for a stateful application. I'm going to keep entities between requests in a low-latency, real-time game server with persistent client connections. There is only one server instance connected to the database, so no data can be changed from "outside" and the server can rely on its cache.
When a user logs in to the server remotely, his whole profile is loaded into server memory. Several higher-level services are also created for each user to operate on the profile data and provide functionality. They can also have internal fields (state) to store temporary data. When the user wants to change his signature, he asks the corresponding service to do so. The service tracks how frequently the user changes his signature and allows it only once per ten minutes (for example); such a short interval is not tracked in the db, it is temporary state. The change should be stored to the db by executing only one query: UPDATE users SET signature = ... WHERE user_id = .... When the user logs off, the profile is unloaded from server memory after minutes/hours of inactivity. The db here is only storage. This is what I call stateful.
Some entities are considered "static data" and are loaded only once at application start. Those can be referenced from other "dynamic" entities. Loading a "dynamic" entity should not require reloading the referenced "static data" entity.
Update/Insert/Delete should set/insert/delete only the changed properties/entities, even with a "detached" entity.
Write operations should not load data from the database (perform a select) each time just to detect changes. (State can be tracked in a dynamically generated inheritor.) I have the state locally; there is no point in loading anything. I want to continue tracking changes even outside the connection scope and "upload" the changes when I want.
While performing operations, the references of persisted objects should not change.
DBConnection-per-user is not going to work; the expected load is thousands of concurrent users.
Entities from the "static data" can be assigned to "dynamic" entity properties (which represent foreign keys) and Update should handle this correctly.
Now I'm using NHibernate even though it's designed for stateless applications. It supports reattaching entities to a session, but that looks like a very uncommon usage, requires me to rely on undocumented behavior, and doesn't solve everything.
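For reference, the reattaching mentioned above is usually done with ISession.Update (or ISession.Merge / ISession.Lock); a minimal sketch, with a hypothetical UserProfile entity:

```csharp
using NHibernate;

// Hypothetical entity kept in server memory between requests.
public class UserProfile
{
    public virtual int Id { get; set; }
    public virtual string Signature { get; set; }
}

public static class ProfileSaver
{
    public static void SaveSignature(ISessionFactory factory, UserProfile profile)
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // Reassociate the detached instance with this session and
            // schedule an UPDATE for it.
            session.Update(profile);
            tx.Commit();
        }
    }
}
```

Note that without its own change tracking, NHibernate will update all mapped columns here unless dynamic-update is enabled in the mapping, which is part of the complaint above.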
I'm not sure about Entity Framework - can I use it that way? Or can you suggest another ORM?
If the server recreates (or, especially, reloads) user objects each time the user hits a button, it will eat CPU very fast. CPU scales vertically, which is expensive and has limited effect. By contrast, if you run out of RAM you can just go and buy more; it's like horizontal scaling, but easier to code for. If you think another approach should be used here, I'm ready to discuss it.
Yes, you can use EF for this kind of application. Keep in mind that under heavy load you will get occasional database errors, and it's typically faster to recover from errors when your application tracks the changes itself rather than EF. By the way, you can use NHibernate this way too.
I have used Hibernate in a stateful desktop application with extremely long sessions: the session starts when the application launches and remains open for as long as the application is running. I had no problems with that. I make absolutely no use of attaching, detaching, reattaching, etc. I know it is not standard practice, but that does not mean it is not doable, or that there are necessarily any pitfalls. (Edit: but of course read the discussion below for possible pitfalls suggested by others.)
I have even implemented my own change-notification mechanism on top of that (a separate thread polling the DB directly, bypassing Hibernate), so it is even possible to have external agents modify the database while Hibernate is running and to have your application take notice of those changes.
If you have lots and lots of stuff already working with hibernate, it would probably not be a good idea to abandon what you already have and rewrite it unless you are sure that hibernate absolutely won't do what you want to accomplish.

Blocking editing fields when already opened by another user [duplicate]

I have a SQL Server 2008 database and an asp.net frontend.
I would like to implement a lock while a user is editing a record, but I'm unsure which is the best approach.
My idea is to have an isLocked column for the records that gets set to true when a user pulls that record, meaning all other users have read-only access until the first user finishes editing.
However, what if the session times out and the user never saves/updates the record? The record will remain with isLocked = true, meaning others cannot edit it, right?
How can I implement some sort of session timeout and have isLocked automatically set back to false when the session times out (or after a predefined period)?
Should this be implemented on the asp.net side or the SQL side?
Don't do it at all. Use optimistic concurrency instead.
Pessimistic locking is possible, but not from .NET applications. .NET app farms are not technically capable of maintaining the long-lived session needed to keep a lock (obtained via sp_getapplock or, worse, obtained by real data locks), because .NET app farms:
load balance requests across instances
do not keep a request stack between HTTP calls
recycle the app domain
Before you say 'I don't have a farm, it's only one IIS server', I will point out that you may only have one IIS server now, but if you rely on that you will never be able to scale out, and you still have the problem of app-domain recycling.
Simulating locking via app-specific updates (e.g. an 'is_locked' field) is deeply flawed in real use, for the reasons you already started to see, and many more. When push comes to shove, this is the only approach that can be made to work, but I have never heard anyone say 'Gee, I'm really happy we implemented pessimistic locking with data writes!'. Nobody, ever.
App-layer locking is also not workable, for exactly the same reasons .NET farms cannot use back-end locking (load balancing, lack of context between calls, app-domain recycling). Writing a distributed locking app-protocol is just not going to work; that road is paved with bodies.
Just don't do it. Optimistic concurrency is sooooo much better in every regard.
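As a concrete illustration, optimistic concurrency is often implemented with a SQL Server rowversion column: read the version together with the record, then make the UPDATE conditional on it. A sketch in plain ADO.NET; the table and column names here are made up:

```csharp
using System.Data.SqlClient;

public static class OptimisticSave
{
    // Returns true if the save won; false means another user saved first
    // and the caller should reload and show a conflict message.
    public static bool TrySave(string connectionString, int id,
                               string newValue, byte[] originalRowVer)
    {
        const string sql =
            @"UPDATE Records SET Value = @value
              WHERE Id = @id AND RowVer = @rowVer";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@value", newValue);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@rowVer", originalRowVer);
            conn.Open();
            // 0 rows affected: the row changed (or vanished) since we read it.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}
```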

How to rollback transaction at later stage?

I have a data entry ASP.NET application. During one complete data entry, many transactions occur. I would like to keep track of all those transactions so that if the user wants to abandon the data entry, all the transactions I have kept a record of can be rolled back.
SQL Server 2008, .NET Framework 4.0, and I am using C#.
This is always a tough lesson to learn for people that are new to web development. But here it is:
Each round trip web request is a separate, stand-alone thread of execution
That means, simply put, that each time you submit a page request (click a button, navigate to a new page, even refresh a page), it can run on a different thread than the previous one. What's more, even if you do get the same thread twice, several other web requests may have been processed by that thread in the time between your two requests.
This makes it effectively impossible to span simple transactions across more than one web request.
Here's another concept that you should keep in mind:
Transactions are intended for batch operations, not interactive operations.
What this means is that transactions are meant to be short-lived, and to encompass several operations executing sequentially (or simultaneously) in which all operations are atomic, and intended to either all complete, or all fail. Transactions are not typically designed to be long-lived (meaning waiting for a user to decide on various actions interactively).
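To make "short-lived and batch-oriented" concrete: a local transaction typically wraps a few sequential statements and commits or rolls back within a single request, as in this sketch (the tables and SQL are made up):

```csharp
using System.Data.SqlClient;

public static class OrderSaver
{
    // All statements succeed together or fail together, inside one request.
    public static void SaveOrder(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    new SqlCommand("INSERT INTO Orders (Id) VALUES (1)", conn, tx)
                        .ExecuteNonQuery();
                    new SqlCommand("INSERT INTO OrderLines (OrderId, Qty) VALUES (1, 5)", conn, tx)
                        .ExecuteNonQuery();
                    tx.Commit();
                }
                catch
                {
                    tx.Rollback(); // nothing partial is left behind
                    throw;
                }
            }
        }
    }
}
```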
Web apps are not desktop apps. They don't function like them. You have to change your thinking when you do web apps. And the biggest lesson to learn, each request is a stand-alone unit of execution.
Now, above, I said "simple transactions", also known as lightweight or local transactions. There's also what's known as a Distributed Transaction, and to use those requires a Distributed Transaction Coordinator. MSDTC is pretty commonly used. However, DT's perform much more slowly than LWT's. Also, they require that the infrastructure be setup to use a DTC.
It's possible to span a transaction over web requests using a DTC. This is done by "enlisting" in a Distributed Transaction and then somehow sharing the transaction identifier between requests. But it is a lot of work to set up and deal with, and it involves a lot of error-prone situations. It's not something you want to do if you have other options.
In general, you're better off adding the data to a temporary table or tables, and then when the final save is done, transfer that data to the permanent tables. Another option is to maintain some state (such as using ViewState or Session) to keep track of the changes.
One popular way of doing this is to perform operations client-side using JavaScript and then submitting all the changes to the server when you are done. This is difficult to implement if you need to navigate to different pages, however.
From your question, it appears that the transactions are already complete by the time the user exercises the option to roll them back. In that case, I doubt the DBMS's transaction-rollback semantics would be available, so I would provide such semantics at the application layer as follows:
Any atomic operation that can be performed on the database should be encapsulated in a Command object. Each command will implement the undo method that would revert the action performed by its execute method.
Each transaction would contain a list of the commands that were run as part of it. The transaction is persisted as-is for further operations in the future.
The user would be provided with a way to view the transactions that can potentially be rolled back. When the user selects a transaction to roll back, the list of commands corresponding to that transaction is retrieved and the undo method is called on each of those command objects.
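A minimal sketch of this command approach; the interface and class names are my own, and the SQL is only indicated in comments:

```csharp
using System.Collections.Generic;

// Each atomic DB operation knows how to revert itself.
public interface IUndoableCommand
{
    void Execute();
    void Undo();
}

public class UpdateFieldCommand : IUndoableCommand
{
    private readonly string _oldValue, _newValue;

    public UpdateFieldCommand(string oldValue, string newValue)
    {
        _oldValue = oldValue;
        _newValue = newValue;
    }

    public void Execute() { /* UPDATE ... SET field = _newValue ... */ }
    public void Undo()    { /* UPDATE ... SET field = _oldValue ... */ }
}

// An application-level "transaction": the command list would be persisted
// so it can be rolled back in a later request.
public class AppTransaction
{
    private readonly List<IUndoableCommand> _commands = new List<IUndoableCommand>();

    public void Run(IUndoableCommand command)
    {
        command.Execute();
        _commands.Add(command);
    }

    public void Rollback()
    {
        // Undo in reverse order so dependent changes unwind correctly.
        for (int i = _commands.Count - 1; i >= 0; i--)
            _commands[i].Undo();
        _commands.Clear();
    }
}
```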
HTH.
You can also store the records in a temporary table and move them to your original table 'at a later stage'.
If you are just managing transactions during a single save operation, use TransactionScope. But it doesn't sound like that is the case.
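For the single-save case, TransactionScope usage looks roughly like this (connection string and SQL are placeholders):

```csharp
using System.Data.SqlClient;
using System.Transactions;

public static class SingleSave
{
    public static void SaveAll(string connectionString)
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open(); // enlists in the ambient transaction
            new SqlCommand("INSERT INTO Parent (Id) VALUES (1)", conn).ExecuteNonQuery();
            new SqlCommand("INSERT INTO Child (Id, ParentId) VALUES (1, 1)", conn).ExecuteNonQuery();
            scope.Complete(); // without this, Dispose rolls everything back
        }
    }
}
```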
If the user may wish to abandon n number of previous save operations, it suggests that an item may exist in draft form. There might be one working draft or many. Subsequently, there must be a way to promote a draft to a final version, either implicitly or explicitly. Think of how an email program saves a draft. It doesn't actually send your message, you may abandon it at any time, and you may recall it at a later time. When you send the message, you have "committed the transaction".
You might also add a user interface to rollback to a specific version.
This will be a fair amount of work, but if you are willing to save and manage multiple copies of the same item it can be accomplished.
You may save a copy of the same data in the same schema, using a status flag to indicate that it is a draft, or you might store the data in an intermediate format in separate table(s). I would prefer the first approach, as it allows the same structures to be used.

What's the best way to manage concurrency in a database access application?

A while ago, I wrote an application used by multiple users to handle trades creation.
I haven't done development for some time now, and I can't remember how I managed the concurrency between the users. Thus, I'm seeking some advice in terms of design.
The original application had the following characteristics:
One heavy client per user.
A single database.
Access to the database for each user to insert/update/delete trades.
A grid in the application reflecting the trades table; the grid is updated each time someone changes a deal.
I am using WPF.
Here's what I'm wondering:
Am I correct in thinking that I shouldn't care about the connection to the database for each application? Considering that there is a singleton in each, I would expect one connection per client with no issue.
How can I prevent problems from concurrent access? I guess I should lock when modifying the data, but I don't remember how.
How do I set up the grid to automatically update whenever my database is updated (by another user, for example)?
Thank you in advance for your help!
Consider leveraging connection pooling to reduce the number of connections. See: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
Lock as late as possible and release as soon as possible to maximize concurrency. You can use TransactionScope (see: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx and http://blogs.msdn.com/b/dbrowne/archive/2010/05/21/using-new-transactionscope-considered-harmful.aspx) if you have multiple DB actions that need to go together to maintain consistency, or just handle them in a DB stored procedure. Keep your queries simple. Follow these tips to understand how locking works and how to reduce resource contention and deadlocks: http://www.devx.com/gethelpon/10MinuteSolution/16488
I am not sure about other databases, but for SQL Server you can use SqlDependency; see http://msdn.microsoft.com/en-us/library/a52dhwx7(v=vs.80).aspx
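A minimal SqlDependency sketch (SQL Server only; Service Broker must be enabled on the database, and the query has to follow the notification rules, e.g. two-part table names and no SELECT *). The Trades table is hypothetical:

```csharp
using System.Data.SqlClient;

public static class TradeWatcher
{
    public static void Watch(string connectionString)
    {
        SqlDependency.Start(connectionString); // once per connection string

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Price FROM dbo.Trades", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once per subscription: re-subscribe here and
                // refresh the grid on the UI thread.
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader()) // registers the subscription
            {
                while (reader.Read()) { /* populate the grid */ }
            }
        }
    }
}
```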
Concurrency is usually granted by the DBMS using locks. Locks are a type of semaphore that grants exclusive access to a certain resource and allows other accesses to be restricted or queued (only restricted in the case where you use uncommitted reads).
The number of connections itself does not pose a problem as long as you stay well below the max_connections setting of your DBMS. Otherwise, you might have trouble connecting to it for maintenance purposes or for shutting it down.
DBMSes usually use a concept of either table locks (MyISAM) or row locks (InnoDB, most other DBMSes). The type of lock determines the volume of the lock. Table locks can be very fast but are usually considered inferior to row level locks.
Row-level locks occur inside a transaction (implicit or explicit). When you manually start a transaction, you begin your transaction scope. Until you manually close the transaction scope, all changes you make will be attributed to this exact transaction. The changes you make will also obey the ACID paradigm.
Transaction scope and how to use it is a topic far too long for this platform; if you want, I can post some links with more information on the topic.
For the automatic updates, most databases support some kind of trigger mechanism: code that runs on specific actions on the database (for instance the creation of a new record or the change of a record). You could put your code inside such a trigger. However, you should only inform a receiving application of the changes, not really "do" the changes from the trigger, even if the language makes it possible. Remember that the action which triggered the code is suspended until your trigger code finishes, so a lean trigger is best, if one is needed at all.

Prevent duplicate editing / Locking DB records while editing - single backend server

Situation: multiple front-ends (e.g. Silverlight, ASP) sharing a single back-end server (WCF RIA or other web service).
I am looking for a standard way to prevent multiple people from editing the same form. I understand that this is not an easy topic, but requirements are requirements.
Previously, I checked the DB last-modified date against the submitted data and gave a warning or error if the data had been modified since it was loaded. The initial system simply overrode the data without any warning. The problem is that I have a new requirement to prevent both of these situations. There will be many UIs, so a locking system might be a challenge, and there is obviously no guarantee that the client will not close the window/browser in the middle of an edit.
I would appreciate any help.
If I understand correctly, what you are talking about is a form of check-out/edit/check-in workflow: while one user is editing a record, no other user can even begin to edit the same record.
This is a form of pessimistic concurrency. Many web and data access frameworks have support for (the related) optimistic concurrency - that is, they will tell you that someone else already changed the record when you tried to save. Optimistic has no notion of locking, really - it makes sure that no other user saved between the time you fetched and the time you save.
What you want is not an easy requirement over the web, since the server really has no way to enforce the check-in when a user aborts an edit (say, by closing the browser). I'm not aware of any frameworks that handle this in general.
Basically what you need is to hold checkout information on the server. A user process when editing would need to request a checkout, and the server would grant/deny this based on what they are checking out. The server would also have to hold the information that the resource is checked out. When a user saves the server releases the lock and allows a new checkout when requested. The problem comes when a user aborts the edit - if it's through the UI, no problem... just tell the server to release the lock.
But if it is through closing the browser, powering off the machine, etc., then you have an orphaned lock. Most people solve this in one of two ways:
1. A timeout. The lock will eventually be released. The upside here is that it is fairly easy and reliable. The downsides are that the record stays locked for a while when it's not really being edited, and you must make the timeout long enough that a user who takes a really, really long time to save doesn't get an error because the lock timed out (and has to start over).
2. A heartbeat. The user has a periodic ping back to the server to say "yep, still editing". This is basically the timeout option from #1, but with a really short timeout that can be refreshed on demand. The upside is that you can make it arbitrarily short. The downside is increased complexity and network usage.
Check-in/check-out tokens are really not that hard to implement if you already have a transacted persistent store (like a DB); the hard part is integrating them into your user experience.
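One way to implement the timeout option (#1) on the server is a checkout row with an expiry, claimed atomically. Everything here, the Checkouts table and its columns, is a hypothetical sketch; it assumes a checkout row is pre-seeded per record:

```csharp
using System;
using System.Data.SqlClient;

public static class CheckoutService
{
    // Returns true if this user now holds (or renewed) the edit lock.
    public static bool TryCheckOut(string connectionString, int recordId, string userId)
    {
        const string sql = @"
            UPDATE Checkouts
            SET LockedBy = @user, ExpiresAt = @expires
            WHERE RecordId = @id
              AND (LockedBy = @user OR ExpiresAt < SYSUTCDATETIME());
            SELECT @@ROWCOUNT;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", recordId);
            cmd.Parameters.AddWithValue("@user", userId);
            cmd.Parameters.AddWithValue("@expires", DateTime.UtcNow.AddMinutes(5));
            conn.Open();
            // 0 rows updated means someone else holds an unexpired lock.
            return (int)cmd.ExecuteScalar() == 1;
        }
    }
}
```

A heartbeat (option #2) is then just a periodic call to the same method to push ExpiresAt further out.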
