Sharing transactions between connections - C#

I have a FileShare crawler (getting permissions and dropping them somewhere for later Audit). Currently it is starting multiple threads to crawl the same folder (to speed up the process).
In C#, each SqlConnection object has its own SqlTransaction, initiated by the SqlConnection.BeginTransaction() call.
Here is the pseudo code of the current solution:
Get list of folders
For each folder get list of sub-folders
For each sub folder start a thread to collect file shares
Each thread will save collected data to database
Run Audit reports on the database
The problem arises when one of the sub-folder threads fails: we end up with a partial folder scan that cannot be detected easily. The main reason is that each thread runs on a separate connection.
I would like each folder to be committed in a single transaction rather than ending up with an incomplete scan (the current situation when some threads fail). No transaction concept is implemented yet, but I am evaluating the options.
Based on the comments on this answer, a producer/consumer queue would be an option, but unfortunately memory is a limiting factor (due to the number of started threads). If the producer/consumer buffer is spilled to disk to overcome the RAM limit, execution time will go up (disk I/O being far slower than memory I/O). I guess I am stuck with a memory/time trade-off. Any other suggestions?

It is possible to share the same transaction on multiple connections with SQL Server using the obsolete bind transaction feature. I have never used it and I wouldn't base new development on it. It also seems unnecessary here.
Can't you just have all the producers use the same connection and transaction? Put a lock around it. This obviously bottlenecks the process but it might still be fast enough.
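A minimal sketch of that approach, assuming a hypothetical FileShares table: every crawler thread funnels its INSERTs through one shared connection and transaction, serialized by a lock.

using System;
using System.Data.SqlClient;

class SharedConnectionWriter : IDisposable
{
    private readonly object _sync = new object();
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public SharedConnectionWriter(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Called from many crawler threads; the lock serializes access.
    public void SaveShareInfo(string path, string permission)
    {
        lock (_sync)
        {
            using (var cmd = _connection.CreateCommand())
            {
                cmd.Transaction = _transaction; // commands must be enlisted explicitly
                cmd.CommandText = "INSERT INTO FileShares (Path, Permission) VALUES (@path, @perm)";
                cmd.Parameters.AddWithValue("@path", path);
                cmd.Parameters.AddWithValue("@perm", permission);
                cmd.ExecuteNonQuery();
            }
        }
    }

    public void Commit() { lock (_sync) { _transaction.Commit(); } }
    public void Rollback() { lock (_sync) { _transaction.Rollback(); } }

    public void Dispose()
    {
        _transaction.Dispose();
        _connection.Dispose();
    }
}

If any thread fails, call Rollback and the whole scan disappears, which is exactly the all-or-nothing behavior asked for.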
You say you execute INSERT statements. For bulk inserts you can use the SqlBulkCopy class, which is very much faster. Batch up the rows and only execute a bulk insert when you have well over 1000 rows buffered.
I don't even see the need for producer/consumer here. It would indeed benefit performance by pipelining production with consumption, but it also introduces far more complex threading. If you want to go this route, you should probably give an IEnumerable<SqlDataRecord> to the SqlBulkCopy class to stream all produced rows directly into it without intermediate buffering.
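A hedged sketch of the batching idea, buffering rows in a DataTable and flushing them with SqlBulkCopy once the buffer is large enough (table and column names are assumptions, and the threshold is arbitrary):

using System.Data;
using System.Data.SqlClient;

class BulkWriter
{
    private const int FlushThreshold = 5000; // flush once we have this many rows
    private readonly DataTable _buffer = new DataTable();
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public BulkWriter(SqlConnection connection, SqlTransaction transaction)
    {
        _connection = connection;
        _transaction = transaction;
        _buffer.Columns.Add("Path", typeof(string));
        _buffer.Columns.Add("Permission", typeof(string));
    }

    public void Add(string path, string permission)
    {
        _buffer.Rows.Add(path, permission);
        if (_buffer.Rows.Count >= FlushThreshold)
            Flush();
    }

    public void Flush()
    {
        if (_buffer.Rows.Count == 0) return;
        using (var bulk = new SqlBulkCopy(_connection, SqlBulkCopyOptions.Default, _transaction))
        {
            bulk.DestinationTableName = "FileShares";
            bulk.WriteToServer(_buffer); // one round trip for the whole batch
        }
        _buffer.Clear();
    }
}

WriteToServer participates in the transaction passed to the constructor, so batched bulk inserts combine cleanly with the all-or-nothing commit.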

Related

How many concurrent statements does SqlConnection support

How many concurrent statements does C# SqlConnection support?
Let's say I am working on Windows service running 10 threads. All threads use the same SqlConnection object but different SqlCommand object and perform operations like select, insert, update and delete on either different tables or same table but different data. Will it work? Will a single SqlConnection object be able to handle 10 simultaneous statements?
You can technically have multiple "in-flight" statements, but only one actually executing.
A single SqlConnection maps to a single Connection and Session in SQL Server. In SQL Server, a session can only have a single request active at a time. If you enable MultipleActiveResultSets you can start a new query before the previous one is finished, but the statements are interleaved, never run in parallel.
MARS enables the interleaved execution of multiple requests within a single connection. That is, it allows a batch to run, and within its execution, it allows other requests to execute. Note, however, that MARS is defined in terms of interleaving, not in terms of parallel execution.
And
execution can only be switched at well defined points.
https://learn.microsoft.com/en-us/sql/relational-databases/native-client/features/using-multiple-active-result-sets-mars?view=sql-server-ver15
So you can't even guarantee that another statement will run whenever one becomes blocked. If you want to run statements in parallel, you need to use multiple SqlConnections.
Note also that a single query might use a parallel execution plan, and have multiple tasks running in parallel.
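A minimal sketch of the parallel alternative: give each task its own SqlConnection (connection pooling makes the repeated opens cheap). The table and connection string are placeholders.

using System.Data.SqlClient;
using System.Threading.Tasks;

static class ParallelUpdates
{
    public static async Task RunAsync(string connectionString)
    {
        var tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            int workerId = i; // capture a stable copy for the closure
            tasks[i] = Task.Run(async () =>
            {
                // One connection per task: each gets its own session on the server.
                using (var conn = new SqlConnection(connectionString))
                {
                    await conn.OpenAsync();
                    using (var cmd = new SqlCommand("UPDATE Work SET Done = 1 WHERE WorkerId = @id", conn))
                    {
                        cmd.Parameters.AddWithValue("@id", workerId);
                        await cmd.ExecuteNonQueryAsync();
                    }
                }
            });
        }
        await Task.WhenAll(tasks);
    }
}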
David Browne gave you the answer to the question you asked, but there might be something else you need to know:
Let's say I am working on Windows service running 10 threads. All threads use the same SqlConnection object but different SqlCommand object and perform operations like select, insert, update and delete on either different tables or same table but different data.
This design just seems wrong on several fronts:
You keep a disposable resource around and open. My rule for disposable stuff is: "Create. Use. Dispose. All in the same piece of code, ideally using a using block." Keeping disposable stuff around, or even sharing it between threads, is just not worth the danger of forgetting to close it.
There is no performance advantage: SqlConnection uses internal connection pooling without any side effects. And even if there were a relevant speed advantage, it would not be worth the dangers.
You are using multithreading with database access. Multithreading is one way to implement multitasking, but not one you should use until you need it. Multithreading is only useful for CPU-bound work; otherwise you should generally be using async/await or similar approaches (see the sketch after this list). DB operations are either disk or network bound.
There is one exception to this rule, and that is if your application is a server. Servers are the rare example of something being pleasingly parallel, so having a large thread pool to process incoming requests in parallel is very common. It is rather rare that you write one of those, however; mostly you just run your code in an existing server infrastructure that deals with that.
If you do have heavy CPU work, chances are you are retrieving too much. It is a very common beginner's mistake to retrieve a lot and then do the filtering in C# code. Do not do that. Do as much filtering and processing as possible in the query. You will not be able to beat the speed of the DB server, and at best you tie up your network pointlessly.
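A hedged sketch combining the two points above: create, use, and dispose the connection in one place, and prefer async/await over extra threads for I/O-bound database work (the query and connection string are placeholders):

using System.Data.SqlClient;
using System.Threading.Tasks;

static class OrderQueries
{
    public static async Task<int> CountOrdersAsync(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))   // Create.
        {
            await conn.OpenAsync();                              // Use.
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
            {
                return (int)await cmd.ExecuteScalarAsync();
            }
        }                                                        // Dispose.
    }
}

Connection pooling means the using block does not pay a real connect cost each time; the underlying connection simply goes back to the pool on Dispose.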

nHibernate locking - does LockMode.UPGRADE block a subsequent LockMode.NONE read on the same row?

We have a card order process which can take a few seconds to run. Data in this process is loaded using LockMode.UPGRADE in NHibernate.
A second (webhook) process which runs with LockMode.NONE is occasionally triggered before the first order process completes, creating a race condition, and appears to be using the original row data. It is not blocked until the first completes, so it is getting old data.
Our database is not running with NOWAIT or any of the snapshot isolation settings (READ COMMITTED SNAPSHOT, etc.).
My question is: can LockMode.NONE somehow ignore the UPGRADE lock and read the old data (a cache, perhaps)?
Thanks.
Upgrade locks are for preventing another transaction from acquiring the same data for upgrade too.
A transaction running in read committed mode without additional locks will still be able to read the data. You need an exclusive lock to block read-committed reads (provided they are not using snapshot isolation). Read more on Technet.
As far as I know, NHibernate does not provide a way to issue exclusive locks on entity reads. (You will get them by actually updating the entities in the transaction.) You may work around this by using CreateSQLQuery to issue your locks directly in the DB. Or, in your webhook process, acquire an upgrade lock too.
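A sketch of the CreateSQLQuery workaround for SQL Server; the table name and lock hints are assumptions, and the query must run inside the same NHibernate transaction that processes the entity:

using NHibernate;

static class OrderLocks
{
    public static void LockOrderRow(ISession session, int orderId)
    {
        // XLOCK takes an exclusive lock, so read-committed readers block too;
        // ROWLOCK keeps the lock narrow. (These hints are SQL Server specific.)
        session.CreateSQLQuery("SELECT OrderId FROM Orders WITH (XLOCK, ROWLOCK) WHERE OrderId = :id")
            .SetParameter("id", orderId)
            .UniqueResult();
    }
}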
But this path will surely lead to deadlocks or lock contention.
More generally, avoiding explicit locking by adapting code patterns is preferable. I have just written an example here; see the "no explicit lock" solution.
As for your webhook process, is there really any difference between these two situations?
1. It obtains its data, which did not have any update lock on it, but before it processes it, the data gets updated by the order process.
2. It obtains its data, including some rows which had an update lock on them.
Generally you will still have the same trouble to solve, and avoiding situation 2 will not resolve situation 1. Here too, it is the way the process works with its data that needs to be adapted to support concurrency. (Hint: using long-running transactions to solve 1 will surely lead to lock contention too.)

What's the best way to manage concurrency in a database access application?

A while ago, I wrote an application used by multiple users to handle trades creation.
I haven't done development for some time now, and I can't remember how I managed the concurrency between the users. Thus, I'm seeking some advice in terms of design.
The original application had the following characteristics:
One heavy client per user.
A single database.
Access to the database for each user to insert/update/delete trades.
A grid in the application reflecting the trades table; the grid is updated each time someone changes a deal.
I am using WPF.
Here's what I'm wondering:
Am I correct in thinking that I shouldn't care about the connection to the database for each application? Considering that there is a singleton in each, I would expect one connection per client with no issue.
How can I go about preventing concurrent accesses from conflicting? I guess I should lock when modifying the data, but I don't remember how.
How do I set up the grid to automatically update whenever my database is updated (by another user, for example)?
Thank you in advance for your help!
Consider leveraging connection pooling to reduce the number of connections. See: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
Lock as late as possible and release as soon as possible to maximize concurrency. You can use TransactionScope (see: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx and http://blogs.msdn.com/b/dbrowne/archive/2010/05/21/using-new-transactionscope-considered-harmful.aspx) if you have multiple DB actions that need to go together to maintain consistency, or just handle them in a DB stored procedure. Keep your queries simple. Follow these tips to understand how locking works and how to reduce resource contention and deadlocks: http://www.devx.com/gethelpon/10MinuteSolution/16488
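A minimal TransactionScope sketch along those lines (statements and connection string are placeholders); note the explicit isolation level, which the linked dbrowne article recommends instead of the Serializable default:

using System.Data.SqlClient;
using System.Transactions;

static class TradeSaver
{
    public static void SaveTrade(string connectionString)
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted, // avoid the Serializable default
            Timeout = TransactionManager.DefaultTimeout
        };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open(); // enlists in the ambient transaction
            using (var insert = new SqlCommand("INSERT INTO Trades (Symbol) VALUES ('ABC')", conn))
                insert.ExecuteNonQuery();
            using (var update = new SqlCommand("UPDATE Positions SET Qty = Qty + 1 WHERE Symbol = 'ABC'", conn))
                update.ExecuteNonQuery();
            scope.Complete(); // commit; leaving the scope without this rolls back
        }
    }
}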
I am not sure about other DBs, but for SQL Server you can use SqlDependency; see http://msdn.microsoft.com/en-us/library/a52dhwx7(v=vs.80).aspx
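A minimal SqlDependency sketch, assuming a hypothetical dbo.Trades table; the query must follow the Query Notifications rules (two-part table names, explicit column list, no SELECT *):

using System.Data.SqlClient;

static class TradeWatcher
{
    public static void Watch(string connectionString)
    {
        SqlDependency.Start(connectionString); // once per app domain

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT TradeId, Price FROM dbo.Trades", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Re-query and refresh the grid here. Notifications fire once,
                // so a real handler would subscribe again after each change.
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* populate the grid */ }
            }
        }
    }
}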
Concurrency is usually managed by the DBMS using locks. Locks are a type of semaphore that grants exclusive access to a certain resource and allows other accesses to be restricted or queued (only restricted if you use uncommitted reads).
The number of connections itself does not pose a problem while you are not reaching heights where you might touch on the max_connections setting of your DBMS. Otherwise, you might get a problem connecting to it for maintenance purposes or for shutting it down.
DBMSes usually use either table locks (MyISAM) or row locks (InnoDB, most other DBMSes). The type of lock determines the granularity of the lock. Table locks can be very fast but are usually considered inferior to row-level locks.
Row-level locks occur inside a transaction (implicit or explicit). When manually starting a transaction, you begin your transaction scope. Until you close the transaction scope, all changes you make will be attributed to this exact transaction. The changes you make will also obey the ACID paradigm.
Transaction scope and how to use it is a topic far too long for this platform, if you want, I can post some links that carry more information on this topic.
For the automatic updates, most databases support some kind of trigger mechanism: code that runs on specific actions in the database (for instance, the creation of a new record or the change of a record). You could put your code inside such a trigger. However, you should only inform a receiving application of the changes, not actually "do" the changes from the trigger, even if the language makes it possible. Remember that the action which fired the trigger is suspended until your trigger code finishes. This means that a lean trigger is best, if one is needed at all.

SQLite/Firebird: Does either of them support multiple concurrent write access?

Question: I currently store ASP.NET application data in XML files.
Now the problem is that I have asynchronous operations, which means I ran into the problem of simultaneous write access to an XML file...
Now, I'm considering moving to an embedded database to solve the issue.
I'm currently considering SQLite and embeddable Firebird.
I'm not sure, however, whether SQLite or Firebird can handle multiple concurrent write accesses.
And I certainly don't want the same problem again.
Anybody know?
SQLite is certainly better known, but which one is better - SQLite or Firebird? I tend to say Firebird, but I don't really know.
No MS Access or MS SQL Express recommendations please, I'm a sane person.
I will choose Firebird for many reasons, and for this too:
Although it is transactional, SQLite does not support concurrent transactions, so if your embedded application needs two or more connections, they must be serialized. An embedded Firebird database is simple to upgrade to a fully shared database - just change the shared library.
Maybe you can also check this.
SQLite can be configured to gracefully handle simultaneous writes in most situations. What happens is that when one thread or process begins a write to the db, the file is locked. When a second write is attempted and encounters the lock, it backs off for a short period before attempting the write again, until it succeeds or times out. The timeout is configurable, but otherwise all this happens without the application code having to do anything special except enabling the option, like this:
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
All this works very well and without any difficulty, except in two circumstances:
If an application does a great many writes, say a thousand inserts, all in one transaction, then the database will be locked up for a significant period and can cause problems for any other application attempting to write. The solution is to break up such large writes into separate transactions, so other applications can get access to the database.
If the database is shared by different processes running on different machines, sharing a network-mounted disk. Many operating systems have bugs in network-mounted disks that make file locking unreliable. There is no answer to this. If you need to share a db on a network-mounted disk, you need another database engine such as MySQL.
I do not have any experience with Firebird. I have used SQLITE in situations like this for many applications over several years.
Have you looked into Berkeley DB with the SQLite API for SQL support?
It sounds like SQLite will be a good fit. We use SQLite in a number of production apps; it supports (in fact, prefers) transactions, which go a long way toward handling concurrency.
transactional sqlite? in C#
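For reference, a hedged C# sketch using the Microsoft.Data.Sqlite driver (an assumption; System.Data.SQLite is similar): wrapping a batch of writes in one transaction makes SQLite take the write lock once per batch instead of once per statement.

using Microsoft.Data.Sqlite;

static class BatchSaver
{
    public static void SaveBatch(string[] values)
    {
        using (var conn = new SqliteConnection("Data Source=app.db"))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                foreach (var value in values)
                {
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandText = "INSERT INTO Items (Value) VALUES ($v)";
                        cmd.Parameters.AddWithValue("$v", value);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit(); // one write-lock acquisition for the whole batch
            }
        }
    }
}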
I would add #3 to the list from ravenspoint above: if you have a large call center or order-processing center, say, where dozens of people might be hitting the SAVE button at the same time, even if each is updating or inserting just one record, you can run into problems using the busy-timeout approach.
For scenario #3, a true SQL engine that can serialize is ideal; less ideal but serviceable is a DBMS that can do byte-range record locking of a shared file. But be aware that even a byte-range record lock will be inadequate for a large number of concurrent writes when new records are appended to the end of the file like a caboose on the end of a freight train, so that multiple processes are trying at the same time to set a lock on the same byte range. On the other hand, a byte-range record locking scheme coupled with a hashed-key sparse-file approach (e.g. the old Revelation/OpenInsight database for LANs) will be far superior to ISAM for this scenario.

Will multiple connections to a database from multiple threads of an application increase insert query performance?

Here I am dealing with a database containing tens of millions of records. I have an application which connects to the database, gets all the data from a single column in a table, does some operation on it, and updates it (for SQL Server - using cursors).
For millions of records it is taking a very, very long time to update. So I want to make it faster by
using multiple threads with an independent connection for each thread.
or
by using a single connection throughout all the threads to fire the update queries.
Which one is faster? Or if you have any other ideas, please explain.
I need a solution which is independent of database type, but even if you only know specific solutions for particular databases, please reply.
The speedup you're trying to achieve won't work. To the contrary, it will slow down the overall processing, as the database now also has to keep multiple connections/sessions/transactions in sync.
Stick with as few connections/transactions as possible for repetitive and comparable operations.
If it takes too long for your taste, try to analyze whether the queries can be optimized somehow. Also have a look at database-specific extensions (i.e. bulk operations) suitable for your problem.
It all depends on the database, and the hardware it is running on.
It helps if the database can make use of concurrent processing and avoids contention on shared resources (e.g. page-based locks span multiple records, record-based locks do not). Shared resources in this case include the hardware: a single-core box will not be able to execute multiple CPU-intensive activities (e.g. parsing SQL) truly in parallel.
Network latency is something you might alleviate with concurrent inserts, even if the database itself is not able to exploit concurrency.
As with any question of performance, there is no substitute for testing in your specific scenario.
If possible, try to use a stored procedure to do all the processing and update the records.
