C# Database Application Concurrency

I wrote a multi-user app in C# some time ago using SQL Server 2005 Express as the back-end.
I have an Orders collection. To use this class you instantiate it and call the Load(CustomerCode) method to populate the collection with the specified customer's orders.
My question:
How do I enforce concurrency, so that only one user can request an Orders collection for a specific customer? When the user is done with the object (when the object is set to null),
I will need to make it available again.

You need to implement the Pessimistic Offline Lock pattern.
Essentially you have a table that you put records in that represent a "lock" on records in other tables. When you want to edit a record, you check to see if there's a lock in the lock table first, and react accordingly in your domain logic/UI.
It doesn't have to be a database, it could be an in-memory cache. When I say "table" in my example, I mean a logical table, not necessarily a database one.
Pessimistic Offline Lock prevents conflicts by avoiding them altogether. It forces a business transaction to acquire a lock on a piece of data before it starts to use it, so that, most of the time, once you begin a business transaction you can be pretty sure you'll complete it without being bounced by concurrency control.
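A minimal sketch of what that can look like with plain ADO.NET, assuming a hypothetical OrderLocks(CustomerCode, LockedBy, LockedAt) table (all names here are illustrative, and a UNIQUE constraint on CustomerCode should back up the existence check so two users cannot insert a lock at the same time):

using System;
using System.Data.SqlClient;

public static class OrderLockManager
{
    // Tries to acquire the lock on a customer's orders; returns false
    // if another user already holds it.
    public static bool TryAcquireLock(string connectionString, string customerCode, string userId)
    {
        const string sql = @"
            IF NOT EXISTS (SELECT 1 FROM OrderLocks WHERE CustomerCode = @code)
            BEGIN
                INSERT INTO OrderLocks (CustomerCode, LockedBy, LockedAt)
                VALUES (@code, @user, GETUTCDATE());
                SELECT 1;
            END
            ELSE
                SELECT 0;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@code", customerCode);
            cmd.Parameters.AddWithValue("@user", userId);
            conn.Open();
            return (int)cmd.ExecuteScalar() == 1;
        }
    }

    // Releases the lock when the caller is done with the Orders collection.
    public static void ReleaseLock(string connectionString, string customerCode, string userId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "DELETE FROM OrderLocks WHERE CustomerCode = @code AND LockedBy = @user", conn))
        {
            cmd.Parameters.AddWithValue("@code", customerCode);
            cmd.Parameters.AddWithValue("@user", userId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Call TryAcquireLock before Load(CustomerCode) and ReleaseLock when the user is done; don't rely on the object being set to null, since finalization timing is unpredictable.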


SQL Server: do transactions lock tables for other users?

Does a transaction lock my table when I'm running multiple queries?
For example: if another user tries to send data at the same time as my transaction is running, what will happen?
Also, how can I avoid this while still being sure that all the data has been inserted into the database successfully?
BEGIN TRAN;
INSERT INTO Customers (Name) VALUES ('name1');
UPDATE CustomerTrans
SET CustomerName = 'name2';
COMMIT;
You have to implement transactions smartly. Below are some performance-related points:
Locking, optimistic vs. pessimistic: with pessimistic locking the data is locked up front while you work on it (row locks that can escalate to the whole table); with optimistic locking nothing is held up front, and a conflict is only detected when the update is saved.
Isolation level, READ COMMITTED vs. READ UNCOMMITTED: if a table is locked and your business scenario allows it, you can do a dirty read using WITH (NOLOCK).
Use a WHERE clause in updates and index the columns it filters on. For any heavy query, check the query plan.
Keep the transaction timeout short, so that if the table is locked the command throws an error quickly and you can retry in the catch block.
These are a few things you can do.
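A rough sketch of that last point with plain ADO.NET (the statements and the retry count are illustrative):

using System;
using System.Data.SqlClient;
using System.Threading;

// Illustrative retry loop: a short command timeout makes a blocked command
// fail fast, and the catch block retries a few times before giving up.
static void SaveWithRetry(string connectionString)
{
    for (int attempt = 1; attempt <= 3; attempt++)
    {
        try
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tran = conn.BeginTransaction())
                using (var cmd = new SqlCommand(
                    "INSERT INTO Customers (Name) VALUES (@name); " +
                    "UPDATE CustomerTrans SET CustomerName = @name;",
                    conn, tran))
                {
                    cmd.CommandTimeout = 5;   // seconds; fail fast if blocked
                    cmd.Parameters.AddWithValue("@name", "name1");
                    cmd.ExecuteNonQuery();
                    tran.Commit();            // both statements succeed or neither does
                }
            }
            return;                           // success
        }
        catch (SqlException)
        {
            if (attempt == 3) throw;          // give up after the last attempt
            Thread.Sleep(200 * attempt);      // back off briefly, then retry
        }
    }
}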
You cannot prevent multiple users from loading data into the database, and it is neither feasible nor sensible to lock a table every time a single user requests it. Actually, you do not have to worry much about this, because the DB itself provides mechanisms to avoid such issues. I would recommend reading up on the ACID properties:
Atomicity
Consistency
Isolation
Durability
What may happen is that a read gets blocked or comes back stale: you cannot read rows another user is inserting until that user commits, and conversely, even when you have finished inserting data, other users will not see your changes as long as you have not committed.
Also note that DDL operations such as CREATE and DROP are auto-committed in many database systems, whereas DML operations such as INSERT, UPDATE and DELETE are not committed until you commit explicitly (outside of auto-commit mode).
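You can see this for yourself with two connections (a sketch; the table is the one from the question):

using System;
using System.Data.SqlClient;

// Under the default READ COMMITTED level, an uncommitted insert on one
// connection is invisible to another connection until it is committed.
static void Demo(string connectionString)
{
    using (var writer = new SqlConnection(connectionString))
    using (var reader = new SqlConnection(connectionString))
    {
        writer.Open();
        reader.Open();

        var tran = writer.BeginTransaction();
        new SqlCommand("INSERT INTO Customers (Name) VALUES ('pending')",
                       writer, tran).ExecuteNonQuery();

        // READPAST skips rows locked by the writer, so the count excludes
        // 'pending'; without the hint this SELECT would block until the
        // writer commits or rolls back.
        var count = new SqlCommand(
            "SELECT COUNT(*) FROM Customers WITH (READPAST)", reader).ExecuteScalar();
        Console.WriteLine(count);

        tran.Commit();   // only now does the row become visible to others
    }
}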

Avoiding concurrent access to data in MSSQL

We are developing a C# application that used to work as a single-instance application. Now we need to change it to be a multi-user application, meaning the GUI front-end will run on multiple workstations while accessing a single MS SQL Server 2008 R2 data store.
Part of the work this application manages is queue-based, meaning there is a pool of workitems (the list of workitems is in a single SQL table) from which each user can "take" the next available workitem. What I want to accomplish is the following:
once a workitem is "taken" by a user, no other user should have access to it in any way (including reading) until the first user has finished working,
handle timeouts (user goes home for the weekend while workitem is taken) and frozen clients (reset button is pressed on the station while workitem is taken).
I know this is a rather general question (much rather a research), so I'm not expecting a detailed solution, but useful links, best practices and/or some literature to read on the subject. Any help is really appreciated since I'm completely lost where to start.
I've seen this done with a transactional resource lock table or column. For example, you assign the record to someone (by setting a user ID or some other mechanism) and you simultaneously record a timestamp of when that resource was locked. When accessing the data, whether querying it or trying to update it, you first check this lock table/column to make sure the record is available; if not, you don't make the changes.
This also supports timeouts. If the timestamp is too old, the lock is released: you can either treat a lock with an old timestamp as automatically released, or write a scheduled service that checks for expired locks and unlocks them. I'd prefer the second way, as it keeps the availability check cheap (simple boolean logic for whether a row exists, or whether a field value is not null). But I've seen it done both ways.
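A sketch of taking the next workitem this way, assuming hypothetical LockedBy/LockedAt columns on the workitem table; doing the claim as a single UPDATE makes it atomic, so two stations cannot grab the same item:

using System;
using System.Data.SqlClient;

// Claims the next free workitem in one atomic statement. Locks older than
// 30 minutes are treated as expired, which covers frozen clients and users
// who went home for the weekend.
static int? TakeNextWorkItem(string connectionString, string userId)
{
    const string sql = @"
        UPDATE TOP (1) WorkItems WITH (ROWLOCK, READPAST, UPDLOCK)
        SET LockedBy = @user, LockedAt = GETUTCDATE()
        OUTPUT inserted.WorkItemId
        WHERE LockedBy IS NULL
           OR LockedAt < DATEADD(MINUTE, -30, GETUTCDATE());";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@user", userId);
        conn.Open();
        object id = cmd.ExecuteScalar();   // null when no workitem is free
        return id == null || id == DBNull.Value ? (int?)null : (int)id;
    }
}

Every read of the workitem list should then filter on the same lock columns, so a taken item is invisible to other users.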

How to roll back a transaction at a later stage?

I have a data entry ASP.NET application. During one complete data entry, many transactions occur. I would like to keep track of all those transactions so that, if the user wants to abandon the data entry, all the transactions I have been keeping a record of can be rolled back.
SQL Server 2008, .NET Framework 4.0, and C#.
This is always a tough lesson to learn for people that are new to web development. But here it is:
Each round trip web request is a separate, stand-alone thread of execution
That means, simply put, each time you submit a page request (click a button, navigate to a new page, even refresh a page) then it can run on a different thread than the previous one. What's more, even if you do get the same thread twice, several other web requests may have been processed by the thread in the time between your two requests.
This makes it effectively impossible to span simple transactions across more than one web request.
Here's another concept that you should keep in mind:
Transactions are intended for batch operations, not interactive operations.
What this means is that transactions are meant to be short-lived, and to encompass several operations executing sequentially (or simultaneously) in which all operations are atomic, and intended to either all complete, or all fail. Transactions are not typically designed to be long-lived (meaning waiting for a user to decide on various actions interactively).
Web apps are not desktop apps. They don't function like them. You have to change your thinking when you do web apps. And the biggest lesson to learn, each request is a stand-alone unit of execution.
Now, above, I said "simple transactions", also known as lightweight or local transactions. There's also what's known as a Distributed Transaction, and to use those requires a Distributed Transaction Coordinator. MSDTC is pretty commonly used. However, DTs perform much more slowly than LWTs. Also, they require that the infrastructure be set up to use a DTC.
It's possible to span a transaction over web requests using a DTC. This is done by "enlisting" in a Distributed Transaction, and then somehow sharing this transaction identifier between requests. But this is a lot of work to set up and deal with, and it has a lot of error-prone situations. It's not something you want to do if you have other options.
In general, you're better off adding the data to a temporary table or tables, and then when the final save is done, transfer that data to the permanent tables. Another option is to maintain some state (such as using ViewState or Session) to keep track of the changes.
One popular way of doing this is to perform operations client-side using JavaScript and then submitting all the changes to the server when you are done. This is difficult to implement if you need to navigate to different pages, however.
From your question, it appears that the transactions are complete by the time the user exercises the option to roll them back. In such cases, I doubt the DBMS's transaction rollback semantics would be available. So, I would provide such semantics at the application layer as follows:
Any atomic operation that can be performed on the database should be encapsulated in a Command object. Each command implements an undo method that reverts the action performed by its execute method.
Each transaction would contain a list of commands that were run as part of it. The transaction is persisted as-is for further operations in the future.
The user would be provided with a way to view the transactions that can potentially be rolled back. When the user selects a transaction to roll back, the list of commands corresponding to that transaction is retrieved and the undo method is called on each command object, in reverse order.
HTH.
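An outline of what those pieces might look like (purely illustrative):

using System.Collections.Generic;

public interface IUndoableCommand
{
    void Execute();   // performs the atomic database operation
    void Undo();      // reverts whatever Execute did
}

public class AppTransaction
{
    private readonly List<IUndoableCommand> _commands = new List<IUndoableCommand>();

    public void Run(IUndoableCommand command)
    {
        command.Execute();
        _commands.Add(command);   // remember it so it can be rolled back later
    }

    // Application-level rollback: undo the commands in reverse order.
    public void Rollback()
    {
        for (int i = _commands.Count - 1; i >= 0; i--)
            _commands[i].Undo();
        _commands.Clear();
    }
}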
You can also store the records in a temporary table and move them to your original table "at a later stage".
If you are just managing transactions during a single save operation, use TransactionScope. But it doesn't sound like that is the case.
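For reference, the single-save case with TransactionScope looks roughly like this (a sketch; requires a reference to System.Transactions, and the statements are illustrative):

using System.Data.SqlClient;
using System.Transactions;

static void SaveOrder(string connectionString)
{
    // Everything inside the scope commits together when Complete() is
    // called, or rolls back if the scope is disposed without it (e.g. on
    // an exception).
    using (var scope = new TransactionScope())
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();   // enlists in the ambient transaction
        new SqlCommand("INSERT INTO Orders (CustomerId) VALUES (42)", conn)
            .ExecuteNonQuery();
        new SqlCommand("UPDATE Customers SET OrderCount = OrderCount + 1 WHERE Id = 42", conn)
            .ExecuteNonQuery();
        scope.Complete();   // without this, everything rolls back on Dispose
    }
}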
If the user may wish to abandon any number of previous save operations, it suggests that an item may exist in draft form. There might be one working draft or many. Consequently, there must be a way to promote a draft to a final version, either implicitly or explicitly. Think of how an email program saves a draft: it doesn't actually send your message, you may abandon it at any time, and you may recall it at a later time. When you send the message, you have "committed the transaction".
You might also add a user interface to rollback to a specific version.
This will be a fair amount of work, but if you are willing to save and manage multiple copies of the same item it can be accomplished.
You may save a copy of the same data in the same schema, using a status flag to indicate that it is a draft, or you might store the data in an intermediate format in separate table(s). I would prefer the first approach in that it allows the same structures to be used.
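With the status flag, promoting a draft is then a single statement, something like (illustrative names):

using System.Data.SqlClient;

// Drafts are saved freely with IsDraft = 1; the final save flips the flag.
static void PromoteDraft(string connectionString, int itemId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "UPDATE Items SET IsDraft = 0 WHERE Id = @id AND IsDraft = 1", conn))
    {
        cmd.Parameters.AddWithValue("@id", itemId);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}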

Long-running Entity Framework transaction

When a user opens an edit form for some entity, I would like to LOCK this entity and let her make her changes. During editing she needs to be sure that nobody else performs any edit operations on it.
How can I lock an entity in Entity Framework (C#) 4+, with a MS SQL Server 2008 database?
Thank you so much in advance!
Bad idea, especially if you have many concurrent users. You will be killing scalability if you lock the rows in the database.
It is better to detect whether others have made edits and if so, inform the user and let them decide what to do.
The timestamp/rowversion data type is a good choice for a column that tells you whether any changes were made to a row's data.
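If you use Code First (EF 4.1+), marking a byte[] property with [Timestamp] maps it to a SQL Server rowversion column and makes EF use it in concurrency checks (entity and property names are illustrative):

using System.ComponentModel.DataAnnotations;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }

    [Timestamp]                       // mapped to a rowversion column
    public byte[] RowVersion { get; set; }
}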
There are two ways to handle these situations:
Optimistic concurrency, where you allow concurrent edits and inserts and catch an exception if something violates the concurrency rules. Optimistic concurrency is enforced by unique constraints guarding inserts of the same items and by timestamp/rowversion columns guarding concurrent updates to the same item. If somebody else updates the row while the current user is making changes, the application throws an OptimisticConcurrencyException during saving, and you have to let the user either overwrite the other changes or reload the newly stored data.
Pessimistic concurrency, where the record is locked for the duration of the operation executed by one client, preventing other clients from updating the same record. Pessimistic concurrency is usually enforced by custom columns added to your tables, like LockedBy, LockedAt, etc. Once these columns are filled, nobody else can select the record for editing. LockedAt can help you implement automatic expiration of issued locks. Long-running "EF transactions" are not long-running database transactions.
Your initial description points to the second scenario, which makes sense in some applications.
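For completeness, the optimistic path from the first point might look roughly like this with an EF 4 ObjectContext (a sketch only):

using System.Data;           // OptimisticConcurrencyException
using System.Data.Objects;   // ObjectContext, RefreshMode

static void Save(ObjectContext context, object entity)
{
    try
    {
        context.SaveChanges();
    }
    catch (OptimisticConcurrencyException)
    {
        // Option A: the current user's changes win over the other user's.
        context.Refresh(RefreshMode.ClientWins, entity);
        context.SaveChanges();

        // Option B (instead of A): discard local changes and reload the
        // stored data, then let the user start over:
        // context.Refresh(RefreshMode.StoreWins, entity);
    }
}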

What's the best way to manage concurrency in a database access application?

A while ago, I wrote an application used by multiple users to handle trades creation.
I haven't done development for some time now, and I can't remember how I managed the concurrency between the users. Thus, I'm seeking some advice in terms of design.
The original application had the following characteristics:
One heavy client per user.
A single database.
Access to the database for each user to insert/update/delete trades.
A grid in the application reflecting the trades table, updated each time someone changes a deal.
I am using WPF.
Here's what I'm wondering:
Am I correct in thinking that I shouldn't care about the connection to the database for each application? Considering that there is a singleton in each, I would expect one connection per client with no issue.
How can I go about preventing concurrent access to the data? I guess I should lock when modifying it, but I don't remember how.
How do I set up the grid to automatically update whenever my database is updated (by another user, for example)?
Thank you in advance for your help!
Consider leveraging connection pooling to reduce the number of connections. See: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
Lock as late as possible and release as soon as possible to maximize concurrency. You can use TransactionScope (see: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx and http://blogs.msdn.com/b/dbrowne/archive/2010/05/21/using-new-transactionscope-considered-harmful.aspx) if you have multiple DB actions that need to go together to preserve consistency, or just handle them in a DB stored procedure. Keep your queries simple. Follow these tips to understand how locking works and how to reduce resource contention and deadlocks: http://www.devx.com/gethelpon/10MinuteSolution/16488
I am not sure about other databases, but for SQL Server you can use SqlDependency; see http://msdn.microsoft.com/en-us/library/a52dhwx7(v=vs.80).aspx
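A minimal SqlDependency sketch (the query is illustrative; Service Broker must be enabled on the database, and the query has to follow the notification rules, e.g. two-part table names and an explicit column list):

using System.Data.SqlClient;

static void WatchTrades(string connectionString)
{
    SqlDependency.Start(connectionString);   // once per application

    var conn = new SqlConnection(connectionString);
    var cmd = new SqlCommand("SELECT TradeId, Amount FROM dbo.Trades", conn);

    var dependency = new SqlDependency(cmd);
    dependency.OnChange += (sender, e) =>
    {
        // Fires once when the result set changes: requery (which also
        // re-subscribes, since notifications are one-shot) and refresh
        // the grid on the UI thread.
    };

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* fill the grid initially */ }
    }
}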
Concurrency is usually granted by the DBMS using locks. Locks are a kind of semaphore that grants exclusive access to a certain resource while other accesses are restricted or queued (only restricted in the case where you use uncommitted reads).
The number of connections itself does not pose a problem as long as you stay well below the max_connections setting of your DBMS. Otherwise, you might have trouble connecting to it for maintenance purposes or for shutting it down.
DBMSes usually use a concept of either table locks (MyISAM) or row locks (InnoDB, most other DBMSes). The type of lock determines the granularity of the lock. Table locks can be very fast but are usually considered inferior to row-level locks.
Row-level locks occur inside a transaction (implicit or explicit). When you manually start a transaction, you begin your transaction scope. Until you close the transaction scope, all changes you make are attributed to this exact transaction. The changes you make also obey the ACID paradigm.
Transaction scope and how to use it is a topic far too long for this platform; if you want, I can post some links that carry more information on it.
For the automatic updates, most databases support some kind of trigger mechanism, which is code that is run on specific actions on the database (for instance the creation of a new record or the change of a record). You could put your code inside such a trigger. However, you should only inform a receiving application of the changes, not actually "do" the changes from the trigger, even if the language might make that possible. Remember that the action which triggered the code is suspended until your trigger code finishes. This means that a lean trigger is best, if one is needed at all.
