Working With ODP.NET Asynchronously - C#

Hey,
My system needs to execute several large SQL statements (against an Oracle DB) asynchronously, using the same connection.
What's the best practice here?
1. Open a single connection and execute every SQL statement on a different thread (is that thread-safe?)
2. Create a new connection and "open + close" it for every SQL statement
Thanks,
Hec

We've been calling Oracle SQL statements on multiple threads, and this is probably best, if your DB can handle the load and won't be the bottleneck anyway. HOWEVER, I think you need to create the connection on the thread that will be issuing the SQL command. You can (and probably should) also use connection pooling so your connections will be reused, rather than being re-established (and Oracle seems to be fine with re-using these from one thread to another).
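A minimal sketch of that per-thread approach, assuming the Oracle.ManagedDataAccess.Client driver and a placeholder connection string: each task opens its own OracleConnection (served cheaply from the pool) instead of sharing one open instance across threads.

```csharp
using System.Threading.Tasks;
using Oracle.ManagedDataAccess.Client; // ODP.NET managed driver

class ParallelSqlRunner
{
    // Placeholder connection string; Pooling=true is the default anyway.
    const string ConnStr =
        "User Id=scott;Password=tiger;Data Source=orcl;Pooling=true";

    public static Task RunAllAsync(params string[] statements)
    {
        // One pooled connection per statement; never share an open
        // OracleConnection between threads.
        var tasks = new Task[statements.Length];
        for (int i = 0; i < statements.Length; i++)
        {
            string sql = statements[i];
            tasks[i] = Task.Run(async () =>
            {
                using (var conn = new OracleConnection(ConnStr))
                using (var cmd = new OracleCommand(sql, conn))
                {
                    // OpenAsync requires a driver version with async
                    // support; on older ODP.NET builds, use Open().
                    await conn.OpenAsync();
                    await cmd.ExecuteNonQueryAsync();
                }
            });
        }
        return Task.WhenAll(tasks);
    }
}
```

Because each using block disposes its connection, the physical connection goes back to the pool and the next Open call reuses it, regardless of which thread asks.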

Related

How to manage SqlConnection in C# for high frequency transaction?

I have an application that connects to a SQL Server database with high frequency. Inside this service there are many scheduled tasks that run every second, and each run executes some query.
I don't understand which solution is better in this situation:
1. Opening a single SqlConnection, keeping it open while the application is running, and executing every query on that connection
2. Opening a new connection each time I want to execute a query, and closing it after the query completes (is this suitable for so many scheduled tasks running every second?)
I tried the second solution, but is there any better choice?
How do ORMs like EF manage connections?
As you can see, I have many services. I can't change the interval, and the interval is important to me, but the code makes very many calls and I'm looking for a better way to manage connections to the database. I'm also creating each connection in a using statement.
Is there any better solution?
You should use the SQL connection pooling feature for that.
It automatically manages in the background whether a connection needs to be opened or an existing one can be reused.
Documentation: https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling?source=recommendations
Example copied from that page
using (SqlConnection connection = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=Northwind"))
{
    connection.Open();
    // Pool A is created.
}

using (SqlConnection connection = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=pubs"))
{
    connection.Open();
    // Pool B is created because the connection strings differ.
}

using (SqlConnection connection = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=Northwind"))
{
    connection.Open();
    // The connection string matches pool A.
}
Because each connection is opened and disposed inside a using block, Dispose returns it to the pool and the next Open call reuses it, so the overhead of physically opening and closing connections disappears.
But after your last edit you seem to have other problems in your current architecture. As the other poster recommends, you can try the WITH (NOLOCK) table hint in your SQL statements. It allows dirty reads, but maybe that's acceptable for your application.
Alternatively, if all your services use the same SELECT statement, maybe a stored procedure or a caching mechanism could help.
I assume that you are already opening/closing your SQL connections in either a "using" statement or explicitly in your code ( try/catch/finally ). If so you are already making use of connection pooling as it is enabled in ADO.Net by default ("By default, connection pooling is enabled in ADO.NET").
Therefore I don't think your problem is so much a connection/resource problem as a database concurrency issue. I assume it to be one of two things:
Your code is making so many calls to the SQL server that it is exhausting all the available connections and nobody else can get one
Your code is locking tables in SQL that is causing other code/applications to timeout
If it is case 1, try to redesign your code to be "less chatty" to the database. Instead of making several inserts/updates per second, perhaps buffer the changes and make a single batched insert/update every 3-5 seconds (if that's possible, obviously). Or maybe your SQL statements take longer than one second to execute while you are calling them every second, causing a backlog?
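That buffering idea can be sketched with nothing beyond the BCL. The names are made up, and the flush action is injected so the batching logic stays independent of the actual SQL; in the callback you would issue one parameterized batch INSERT/UPDATE instead of many per-row calls.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Collects individual changes and hands them to a flush callback in
// batches every few seconds, instead of one DB round trip per change.
class BatchBuffer<T> : IDisposable
{
    private readonly ConcurrentQueue<T> _pending = new ConcurrentQueue<T>();
    private readonly Action<IReadOnlyList<T>> _flush;
    private readonly Timer _timer;

    public BatchBuffer(Action<IReadOnlyList<T>> flush, TimeSpan interval)
    {
        _flush = flush;
        _timer = new Timer(_ => Flush(), null, interval, interval);
    }

    public void Add(T item) => _pending.Enqueue(item);

    public void Flush()
    {
        var batch = new List<T>();
        while (_pending.TryDequeue(out T item)) batch.Add(item);
        if (batch.Count > 0) _flush(batch); // e.g. one batched INSERT here
    }

    public void Dispose()
    {
        _timer.Dispose();
        Flush(); // don't lose changes still sitting in the queue
    }
}
```

The scheduled tasks keep their one-second interval and just call Add; only the timer touches the database.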
If it is case 2, try to redesign the SQL tables so that the "reading" applications are not affected by the "writing" application. Normally this involves a service that periodically writes aggregated data to a read-only table for viewing, or at the very least adding a WITH (NOLOCK) hint to the SELECT clauses to allow dirty reads (i.e. it won't lock the table to read, but may return a slightly out-of-date dataset, i.e. eventual consistency).
Good luck

Web API one SQL connection to all users

I have a SQL Server database with a limit of 200 concurrent users. I want to keep the first connection created by any user open and share it with all other users through my C# Web API. Is that possible?
SqlConnection is not intended to be used concurrently, so to do what you want would mean synchronizing all access, especially if there are transactions involved, or anything involving temporary tables that live longer than a single command. It can be done, but it isn't a good idea.
Note that SqlConnection is disposable, and when disposed, the underlying connection (which you never see) usually goes back to a pool. If you use 200 SqlConnection instances consecutively (not concurrently), you might have used only a single underlying connection.
If you must put a hard limit on your concurrent connections, you'll have to create your own pool (which might be a pool of one), with your own synchronization code while you lease and release connections. But: it won't be trivial.
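A sketch of such a hard cap, using only the BCL: a SemaphoreSlim gates how many callers may hold a connection at once, while the actual pooling is still left to ADO.NET underneath. The factory is generic here so the gating logic can be shown without a database; in real code it would produce (pooled) SqlConnections.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Caps concurrent leases at maxConcurrent. All access is funneled
// through UseAsync, which leases a slot, runs the work, then releases.
class ConnectionGate<T> where T : IDisposable
{
    private readonly SemaphoreSlim _slots;
    private readonly Func<T> _factory;

    public ConnectionGate(int maxConcurrent, Func<T> factory)
    {
        _slots = new SemaphoreSlim(maxConcurrent, maxConcurrent);
        _factory = factory;
    }

    public async Task<TResult> UseAsync<TResult>(Func<T, Task<TResult>> work)
    {
        await _slots.WaitAsync();          // lease a slot (may queue)
        try
        {
            using (T conn = _factory())
                return await work(conn);   // conn disposed -> back to pool
        }
        finally
        {
            _slots.Release();              // release the slot
        }
    }
}
```

With maxConcurrent set to 1 this degrades to the fully serialized "pool of one" mentioned above; note that transactions and temporary tables spanning multiple UseAsync calls still won't work, since each call may get a different underlying connection.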

Transaction escalates when using TransactionScopeAsyncFlowOption

I have a small Web API server written in C# using async/await, on .NET 4.5.2.
Everything is working fine except that I use TransactionScope for some calls and the underlying transaction is escalated to a distributed one. Since I use async/await for my DB calls, I use TransactionScopeAsyncFlowOption. The SQL Server is version 2008 R2, so it should be able to handle multiple calls without escalating the transaction. All calls are made to the same database with the same connection string.
All SQL connections are created in using statements and I'm not nesting any of them. Each call to the database is awaited before another is made, so there should never be two connections active at the same time in one transaction, unless I have misunderstood how async/await works. I'm using Dapper, if that might affect things.
Am I missing something obvious or do I need to rewrite my code to use the same connection for all operation in the transaction?
I feel really stupid; I missed that pooling was disabled in the connection string. I removed Pooling=false and the transaction no longer escalates to a distributed one.
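So the fix comes down to two things: leave Pooling enabled (the default) and construct the scope with TransactionScopeAsyncFlowOption.Enabled so the ambient transaction flows across awaits. A minimal sketch, with no DB work, just demonstrating that the ambient transaction survives an await:

```csharp
using System;
using System.Threading.Tasks;
using System.Transactions;

class ScopeDemo
{
    public static async Task<bool> AmbientSurvivesAwaitAsync()
    {
        using (var scope = new TransactionScope(
            TransactionScopeAsyncFlowOption.Enabled))
        {
            await Task.Yield(); // continuation may run on another thread

            // With async flow enabled, the ambient transaction is
            // still visible here; without it, this would be null
            // (or throw at scope disposal).
            bool stillAmbient = Transaction.Current != null;

            scope.Complete(); // real DB work would be committed here
            return stillAmbient;
        }
    }
}
```

With pooling on, the awaited calls inside such a scope reuse the same underlying connection, so the transaction can stay local instead of escalating to MSDTC.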

C# - TSQL Parallel transactions on same connection instance

I am developing a C# ORM with a PHP Laravel-like syntax.
When the ORM starts, it connects to the database and can perform any query (it also supports two different connections, for reading and for writing), and it reconnects to the DB only if the connection is lost or missing.
As this is a web framework ORM, how can I handle concurrent transactions on the same connection? Are they supported?
I saw that I can manually assign the transaction object to the SqlCommand, but can I create parallel SqlTransactions?
Example:
There is a URL for a REST action that causes a transaction to be opened, some actions performed, and the transaction committed (e.g. placing an order for a shopping cart). What if multiple users (so different WebOperationContexts) call that URL? Is it possible to open and work with multiple "parallel" transactions and then commit them?
How do other ORMs handle this case? Do they use multiple connections?
Thanks for any support!
Mattia
SQL Server does not support parallel transactions on the same connection.
Normally, there is no need for that. Just open connections as you need them. This is cheap thanks to pooling.
A common model is to open a connection and a transaction right after one another, then commit and dispose everything at the end.
That way concurrent HTTP requests do not interact at all which is good.

A new sql connection for each query?

I'm writing a server application that communicates with a local SQL Server.
Each client will need to read or write data to the database.
Would it be better to have a thread-safe class that enqueues and executes the SQL commands on a single SQL connection? Or should I open a new connection for each command? Does it matter much for performance?
If you have a batch of statements that have to be executed after each other, you should use the same SqlConnection.
As soon as you no longer need the SqlConnection, and you do not know when you will need a connection again, you should close it.
So, if you have to execute 2 insert statements and one update statement after each other, for instance, you should use the same SqlConnection.
The most important advantage here is that you can put those statements in a transaction if necessary. Transactions cannot be shared across connections.
When you're finished working with the DB, you can close the connection. By default, connection pooling is used, and the connection will be returned to the pool, so that it can be reused the next time you need a connection to the DB.
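A sketch of that batch-on-one-connection pattern, assuming SQL Server via Microsoft.Data.SqlClient and placeholder table/statement names: the two inserts and the update share one connection and one transaction, so they commit or roll back together.

```csharp
using System.Threading.Tasks;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on .NET Framework

class BatchExample
{
    public static async Task RunBatchAsync(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            await conn.OpenAsync();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                foreach (string sql in new[]
                {
                    "INSERT INTO Orders (Id) VALUES (1)",      // placeholder SQL
                    "INSERT INTO Orders (Id) VALUES (2)",
                    "UPDATE Orders SET Done = 1 WHERE Id = 1"
                })
                {
                    // Every command must be enlisted in the transaction.
                    using (var cmd = new SqlCommand(sql, conn, tx))
                        await cmd.ExecuteNonQueryAsync();
                }
                tx.Commit(); // all three statements succeed or none do
            }
        } // Dispose returns the underlying connection to the pool
    }
}
```

If the batch throws before Commit, disposing the transaction rolls everything back; either way, disposing the connection hands it back to the pool for the next caller.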
Connection lifetime should be short, but you should not use a separate connection for each DbCommand.
If you are using any flavor of ADO.NET, connection pooling will automatically be used (at least with SQL Server) unless you explicitly disable it, so there's no reason to do anything special about that.
Be sure to remember to Close your connections after each use - this simply returns the connection to the connection pool.
Usually you should create a new connection for each command and take advantage of the in-built connection pooling.
