Make MySQL connection thread-safe - C#

I want to read from / write to a DB from multiple threads.
After some research, I remembered the ACID rules. Do I need to call myTrans = myConnection.BeginTransaction(); every time I want to read/write from inside a thread, in order to keep the transaction safe from dirty reads/writes (followed by myTrans.Commit();)? In plain SQL I would use SET TRANSACTION ISOLATION LEVEL SERIALIZABLE to secure it.
How do I do that in C#?
Thanks in advance

You only need to call BeginTransaction() if you need multiple statements included in the same transaction. It is not normally necessary for ACID guarantees on single statements, because each individual statement (each call to ExecuteReader()/ExecuteScalar()/ExecuteNonQuery()/Fill()) gives you an implicit transaction.
Even across multiple statements, my tendency is to put the statements into the same long SQL string (or stored procedure) and include any needed transaction instructions as part of the SQL.
In terms of thread safety, the best thing to do is use a separate, brand-new connection object for each transaction, and wrap it in a using block. Connections are not thread-safe, so the way to protect them is to give each thread (or each transaction within a thread) its own connection that it doesn't have to share.
Even within a thread, it's better NOT to re-use the same connection. There is a feature called connection pooling, where the connection object you see in the C# code is a lightweight wrapper for a much heavier actual connection that is shared from a pool. Trying to re-use the same connection object throughout a thread or application optimizes for the light thing at the expense of the heavy thing.
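To make the original question concrete: the isolation level can be passed straight to BeginTransaction(). Here is a minimal sketch, assuming the MySQL Connector/NET package (MySql.Data) and a hypothetical accounts table:

using System.Data;
using MySql.Data.MySqlClient;

// Hypothetical helper: each call opens its own pooled connection, runs one
// serializable transaction, and disposes everything when done.
static void UpdateBalance(string connectionString, int accountId, decimal delta)
{
    using (var connection = new MySqlConnection(connectionString))
    {
        connection.Open();
        // The C# equivalent of SET TRANSACTION ISOLATION LEVEL SERIALIZABLE:
        using (var transaction = connection.BeginTransaction(IsolationLevel.Serializable))
        using (var command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText =
                "UPDATE accounts SET balance = balance + @delta WHERE id = @id";
            command.Parameters.AddWithValue("@delta", delta);
            command.Parameters.AddWithValue("@id", accountId);
            command.ExecuteNonQuery();

            transaction.Commit(); // if an exception escapes first, Dispose rolls back
        }
    }
}

Because of connection pooling, opening a new MySqlConnection per operation like this is cheap; the heavy network connection underneath is reused automatically.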

Related

How many concurrent statements does C# SqlConnection support?

Let's say I am working on a Windows service running 10 threads. All threads use the same SqlConnection object but different SqlCommand objects, and perform operations like select, insert, update and delete on either different tables, or the same table but different data. Will it work? Will a single SqlConnection object be able to handle 10 simultaneous statements?
How many concurrent statements does C# SqlConnection support?
You can technically have multiple "in-flight" statements, but only one actually executing.
A single SqlConnection maps to a single Connection and Session in SQL Server. In SQL Server a session can only have a single request active at a time. If you enable MultipleActiveResultSets you can start a new query before the previous one is finished, but the statements are interleaved, never run in parallel.
MARS enables the interleaved execution of multiple requests within a single connection. That is, it allows a batch to run, and within its execution, it allows other requests to execute. Note, however, that MARS is defined in terms of interleaving, not in terms of parallel execution.
And:
execution can only be switched at well defined points.
https://learn.microsoft.com/en-us/sql/relational-databases/native-client/features/using-multiple-active-result-sets-mars?view=sql-server-ver15
So you can't even guarantee that another statement will run whenever one becomes blocked. If you want to run statements in parallel, you need to use multiple SqlConnections.
Note also that a single query might use a parallel execution plan, and have multiple tasks running in parallel.
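If parallel execution is what you actually need, the usual shape is one SqlConnection per concurrent query. A rough sketch (the table names are assumed to come from a trusted list, never from user input):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;

// One SqlConnection per query: the counts genuinely run in parallel,
// which a single shared connection (even with MARS) cannot do.
static async Task<int[]> CountRowsAsync(string connectionString, IEnumerable<string> tables)
{
    var tasks = new List<Task<int>>();
    foreach (string table in tables)   // table names assumed trusted, not user input
        tasks.Add(CountOneAsync(connectionString, table));
    return await Task.WhenAll(tasks);
}

static async Task<int> CountOneAsync(string connectionString, string table)
{
    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync();
        using (var command = new SqlCommand("SELECT COUNT(*) FROM " + table, connection))
            return (int)await command.ExecuteScalarAsync();
    }
}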
David Browne gave you the answer to the question you asked, but there might be something else you need to know:
Let's say I am working on a Windows service running 10 threads. All threads use the same SqlConnection object but different SqlCommand objects, and perform operations like select, insert, update and delete on either different tables, or the same table but different data.
This design just seems wrong on several fronts:
You keep a disposable resource around and open. My rule for disposable stuff is: "Create. Use. Dispose. All in the same piece of code, ideally using a using block." Keeping disposable stuff around, or even sharing it between threads, is just not worth the danger of forgetting to close it.
There is no performance advantage: SqlConnection uses internal connection pooling without any side effects. And even if there were a relevant speed advantage, it would not be worth the dangers.
You are using multithreading with database access. Multithreading is one way to implement multitasking, but not one you should use until you need it. Multithreading is only useful for CPU-bound work. Otherwise you should generally be using async/await or similar approaches (see the sketch after this list). DB operations are either disk or network bound.
There is one exception to this rule, and that is if your application is a server. Servers are the rare example of something being pleasingly parallel, so having a large thread pool to process incoming requests in parallel is very common. It is rather rare that you write one of those, however. Mostly you just run your code in an existing server infrastructure that deals with that.
If you do have heavy CPU work, chances are you are retrieving too much. It is a very common beginner's mistake to retrieve a lot, then do filtering in C# code. Do not do that. Do as much filtering and processing as possible in the query. You will not be able to beat the speed of the DB server, and at best you tie up your network pointlessly.
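As a rough illustration of the async/await point above (the Orders table and Amount column are made-up names for the example):

using System.Data.SqlClient;
using System.Threading.Tasks;

// I/O-bound work: no extra threads are needed; the calling thread is freed
// while SQL Server does the filtering and aggregation.
static async Task<decimal> GetOrderTotalAsync(string connectionString, int customerId)
{
    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync();
        // Filter and aggregate in the query, not in C#.
        // Orders/Amount are hypothetical; COALESCE covers the no-rows case.
        using (var command = new SqlCommand(
            "SELECT COALESCE(SUM(Amount), 0) FROM Orders WHERE CustomerId = @id", connection))
        {
            command.Parameters.AddWithValue("@id", customerId);
            return (decimal)await command.ExecuteScalarAsync();
        }
    }
}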

Web API one SQL connection to all users

I have a SQL Server database with a 200-concurrent-user limitation. I want to keep the first connection created by any user open, and use it for all other users through my C# Web API. Is that possible?
SqlConnection is not intended to be used concurrently, so to do what you want would mean synchronizing all access, especially if there are transactions involved, or anything involving temporary tables that live longer than a single command. It can be done, but it isn't a good idea.
Note that SqlConnection is disposable, and when disposed: the underlying connection (that you never see) usually goes back to a pool. If you use consecutively (not concurrently) 200 SqlConnection instances, you might have only used a single underlying connection.
If you must put a hard limit on your concurrent connections, you'll have to create your own pool (which might be a pool of one), with your own synchronization code while you lease and release connections. But: it won't be trivial.
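For illustration, here's a rough sketch of that leasing idea using SemaphoreSlim as the slot counter; real code would also need timeouts, cancellation, and error handling. Note too that the built-in pool can already cap connections via the Max Pool Size connection-string keyword, which may be all you need:

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

// Rough sketch: caps concurrent connections at a hard limit.
sealed class ConnectionGate
{
    private readonly SemaphoreSlim _slots;
    private readonly string _connectionString;

    public ConnectionGate(string connectionString, int maxConcurrent)
    {
        _connectionString = connectionString;
        _slots = new SemaphoreSlim(maxConcurrent, maxConcurrent);
    }

    public async Task<T> RunAsync<T>(Func<SqlConnection, Task<T>> work)
    {
        await _slots.WaitAsync();        // lease a slot; waits if all are taken
        try
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                await connection.OpenAsync();
                return await work(connection);
            }
        }
        finally
        {
            _slots.Release();            // hand the slot to the next caller
        }
    }
}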

How do I minimize or inform users of database connection lag / failure?

I'm maintaining an ASP/C# program that uses MS SQL Server 2008 R2 for its database requirements.
On normal and perfect days, everything works fine as it is. But we don't live in a perfect world.
An Application (for Leave, Sick Leave, Overtime, Undertime, etc.) Approval process requires up to ten separate connections to the database. The program connects to the database, passes around some relevant parameters, and uses stored procedures to do the job. Ten times.
Now, due to the structure of the entire thing, which I cannot change, a dip in the connection (or heck, if I put a debug point in VS2005 and let it hang there long enough) leaves the Application Approval process incomplete. The tables are often just joined together, so a data mismatch (missing data here, a primary key that failed to update there) would mean an entire row becomes useless.
Now, I know that there is nothing I can do to prevent this - this is a connection issue, after all.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
Thanks.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
As we discussed in the comments, transactions will address many of your concerns.
A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:
To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.
Source
Transactions in .Net
As you might expect, the database is integral to providing transaction support for database-related operations. However, creating transactions from your business tier is quite easy and allows you to use a single transaction across multiple database calls.
Quoting from my answer here:
I see several reasons to control transactions from the business tier:
Communication across data store boundaries. Transactions don't have to be against a RDBMS; they can be against a variety of entities.
The ability to rollback/commit transactions based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about transaction count.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures cumbersome when jumping to rollback/commit.
Transactions are most easily declared using the TransactionScope (reference) abstraction which does the hard work for you.
using( var ts = new TransactionScope() )
{
    // do some work here that may or may not succeed

    // if this line is reached, the transaction will commit. If an exception is
    // thrown before this line is reached, the transaction will be rolled back.
    ts.Complete();
}
Since you are just starting out with transactions, I'd suggest testing out a transaction from your .Net code.
Call a stored procedure that performs an INSERT.
After the INSERT, purposely have the procedure generate an error of any kind.
You can validate your implementation by seeing that the INSERT was rolled back automatically.
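A minimal sketch of such a test, assuming a hypothetical stored procedure dbo.InsertAndFail that performs an INSERT and then raises an error:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

// Connections opened inside the scope enlist in it automatically. Because the
// procedure raises an error, ts.Complete() is never reached and the INSERT
// is rolled back when the scope is disposed.
static void TestRollback(string connectionString)
{
    try
    {
        using (var ts = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("dbo.InsertAndFail", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.ExecuteNonQuery(); // the error raised in the proc throws here
            }
            ts.Complete(); // never reached
        }
    }
    catch (SqlException)
    {
        // Expected; now query the table to confirm the INSERT is gone.
    }
}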
Transactions in the Database
Of course, you can also declare transactions inside a stored procedure (or any sort of TSQL statement). See here for more information.
If you use the same SqlConnection, or another connection type that implements IDbConnection, you can do something similar to a TransactionScope, but without the need to create the security risk that is a TransactionScope.
In VB:
Using scope As IDbTransaction = mySqlCommand.Connection.BeginTransaction()
    If blnEverythingGoesWell Then
        scope.Commit()
    Else
        scope.Rollback()
    End If
End Using
If you don't call Commit, the default is to roll back the transaction when it is disposed.
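For completeness, a C# sketch of the same pattern (command is assumed to be an IDbCommand on an already-open connection, and everythingGoesWell stands in for your own success check):

using System.Data;

static void RunInTransaction(IDbCommand command, bool everythingGoesWell)
{
    using (IDbTransaction scope = command.Connection.BeginTransaction())
    {
        command.Transaction = scope;   // the command must reference the transaction

        // ... execute the command(s) here ...

        if (everythingGoesWell)
            scope.Commit();
        else
            scope.Rollback();
    }   // disposing without Commit rolls back
}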

Does TransactionScope makes a cross process lock on the database in LinQ queries?

I have several applications running on one machine, along with an MS SQL server on the same machine.
The applications are of various types: WPF, WCF service, MVC app, and so on.
All of them access a single database, which is located on the SQL server.
Access is done through simple LINQ to SQL class calls.
In each database contact I make some queries, some checks, and some DB writes.
My question is:
Can I be sure that calls inside those transaction scopes are not running at the same time (that they are thread and process safe) simply by using a TransactionScope instance?
Using a transaction scope will obviously make a particular connection transactional. The use of transaction scopes in itself doesn't stop two different processes on a machine doing the same thing at once. It does ensure that all actions performed are either committed or rolled back. The view of data each process sees depends on the isolation level, which by default is serializable; that can easily lead to deadlocks. A more practical isolation level is read committed, preferably with snapshot isolation, as this further reduces deadlocks and wait times.
If you want to ensure only one instance of the application is doing something, you can use a mutex, or a database lock that all the different processes will attempt to acquire and, if necessary, wait for.
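If the serializable default is the concern, the isolation level can be set explicitly through the standard TransactionOptions overload; for example:

using System.Transactions;

var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted, // instead of the Serializable default
    Timeout = TransactionManager.DefaultTimeout
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Connections opened in here (including by LINQ to SQL) enlist in this transaction.
    scope.Complete();
}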

Multiple isolation levels needed for a TransactionScope?

I am running into situations in my application where I need to use table lock hints or set the transaction isolation level to something other than the default Read Committed, in order to resolve deadlock issues. I am using a service oriented architecture, with each service call operating as an atomic operation, and Linq To Sql is serving as a lightweight DAL. Each service call calls my Business Layer and declares a new transaction like this:
using (var scope = new TransactionScope())
{
    // Get datacontext
    // do Business Logic stuff, including database operations via Linq To Sql

    // Save transaction
    scope.Complete();
}
The problem is sometimes I have complicated business logic that requires many database operations. Some reads, some writes, some reads for updating, etc, all within the same service call, and thus the same transaction.
I have read about the inability of Linq To Sql to add table lock hints to your linq query, with the suggested solution of using TransactionScope isolation levels instead. That's great and all, but in my situation, where each Transaction is for the purpose of an atomic service call, I don't see where this would work. For example, if I need to read one table without locking and dirty reads may be OK, and turn around and do another read for the purpose of updating, and do an update. I don't want to set Read Uncommitted for the entire transaction, only one particular read, so what do I do?
Is there an extension I can implement that will allow me to add table lock hints, without using views or stored procedures, and without resorting to datacontext.ExecuteQuery("my raw sql string here")?
I think the best answer here is to use multiple transactions: batch the reads that can tolerate dirty data in one transaction, and the updates that require read committed in another. If any information needs to cross batches, set up a temporary in-memory cache for that data. A sketch of this idea follows.
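Sketched out, that batching might look like this (ReportData, LoadReportData and ApplyUpdates are hypothetical stand-ins for your own types and queries):

using System.Transactions;

static TransactionScope NewScope(IsolationLevel level)
{
    return new TransactionScope(
        TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = level });
}

// Batch 1: the reads that tolerate dirty data.
ReportData cached;                                   // hypothetical DTO
using (var scope = NewScope(IsolationLevel.ReadUncommitted))
{
    cached = LoadReportData();                       // hypothetical read-only queries
    scope.Complete();
}

// Batch 2: the reads-for-update and the writes, at the stricter level.
using (var scope = NewScope(IsolationLevel.ReadCommitted))
{
    ApplyUpdates(cached);                            // hypothetical update logic
    scope.Complete();
}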
