I am developing a C# ORM with a Laravel-like syntax (Laravel as in the PHP framework).
When the ORM starts, it connects to the database and runs every query over that connection (it also supports two separate connections, one for reading and one for writing), reconnecting only if the connection is lost or missing.
As this is a web-framework ORM, how can I handle concurrent transactions on the same connection? Are they supported?
I saw that I can manually assign the transaction object to a SqlCommand, but can I create parallel SqlTransactions?
Example:
There is a URL for a REST action that opens a transaction, performs some actions, and then commits the transaction (e.g. placing an order for a shopping cart). What if multiple users (so different WebOperationContexts) call that URL? Is it possible to open and work with multiple "parallel" transactions and then commit them?
How do other ORMs handle this case? Do they use multiple connections?
Thanks for any support!
Mattia
SQL Server does not support parallel transactions on the same connection.
Normally, there is no need for that. Just open connections as you need them. This is cheap thanks to pooling.
A common model is to open the connection and the transaction one right after the other, then commit and dispose both at the end.
That way, concurrent HTTP requests do not interact at all, which is good.
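As a minimal sketch of the per-request pattern described above (the connection string, table, and column names are illustrative, not from the question):

```csharp
using System.Data.SqlClient;

public static class OrderService
{
    // Each request gets its own SqlConnection and SqlTransaction; nothing is shared.
    public static void PlaceOrder(string connectionString, int cartId)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();  // cheap: the physical connection comes from the pool
            using (var transaction = connection.BeginTransaction())
            {
                var command = connection.CreateCommand();
                command.Transaction = transaction;  // commands must be enlisted explicitly
                command.CommandText = "UPDATE Carts SET Status = 'Ordered' WHERE Id = @id";
                command.Parameters.AddWithValue("@id", cartId);
                command.ExecuteNonQuery();

                transaction.Commit();
            }   // Dispose rolls the transaction back if Commit was never reached
        }       // Dispose returns the physical connection to the pool
    }
}
```

Because every request builds this trio from scratch, two users hitting the same URL simply get two independent pooled connections, each with its own transaction.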
Related
I have a SQL Server database with a 200-concurrent-user limit. I want to keep the first connection created by any user open and share it with all other users through my C# Web API. Is that possible?
SqlConnection is not intended to be used concurrently, so to do what you want would mean synchronizing all access, especially if there are transactions involved, or anything involving temporary tables that live longer than a single command. It can be done, but it isn't a good idea.
Note that SqlConnection is disposable, and when disposed, the underlying connection (which you never see) usually goes back to a pool. If you use 200 SqlConnection instances consecutively (not concurrently), you might have used only a single underlying connection.
If you must put a hard limit on your concurrent connections, you'll have to create your own pool (which might be a pool of one), with your own synchronization code while you lease and release connections. But: it won't be trivial.
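One way to sketch such a hard cap, under the assumption that you really do need it: gate connection creation behind a SemaphoreSlim. This is not a production-grade pool, just an outline of the lease/release shape mentioned above (the limit of 200 matches the scenario in the question):

```csharp
using System.Data.SqlClient;
using System.Threading;

public sealed class ConnectionGate
{
    private readonly SemaphoreSlim _slots;
    private readonly string _connectionString;

    public ConnectionGate(string connectionString, int maxConcurrent = 200)
    {
        _connectionString = connectionString;
        _slots = new SemaphoreSlim(maxConcurrent, maxConcurrent);
    }

    // Lease: block until a slot is free, then open a (pooled) connection.
    public SqlConnection Lease()
    {
        _slots.Wait();
        try
        {
            var connection = new SqlConnection(_connectionString);
            connection.Open();
            return connection;
        }
        catch
        {
            _slots.Release();   // don't leak the slot if Open() fails
            throw;
        }
    }

    // Release: dispose the connection (it goes back to the ADO.NET pool) and free the slot.
    public void Release(SqlConnection connection)
    {
        connection.Dispose();
        _slots.Release();
    }
}
```

Every caller must go through Lease/Release (ideally via a wrapper that implements IDisposable), which is exactly the synchronization burden the answer warns about.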
I have a unique (or so I think) problem: we have an ASP.NET web app using MVC principles. The project will be at most single-threaded (our business requires a single point of control). We are using Entity Framework to connect to the database.
Problem:
We want to query our database less frequently than every page load.
I have considered putting our database connection in a singleton but am worried about connecting too infrequently: will a query still work if the connection was opened a significant time ago? How would you recommend connecting to the database?
How would you recommend connecting to the database?
Do NOT use a shared connection. Connections are not thread-safe, and are pooled by .NET, so creating one generally isn't an expensive operation.
The best practice is to create a command and connection for every database request. If you are using Entity Framework, then this will be taken care of for you.
If you want to cache results using the built-in Session or Cache properties, then that's fine, but don't cache disposable resources like connections, EF contexts, etc.
If at some point you find you have a measurable performance problem directly related to creating connections or contexts, then you can try and deal with that, but don't try to optimize something that might not even be a problem.
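A sketch of caching query *results* rather than connections, per the advice above. It uses MemoryCache from System.Runtime.Caching (available in .NET Framework); the cache key, timeout, and query delegate are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class ProductCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // loadFromDatabase would typically be an EF query run inside a short-lived context.
    public static List<string> GetProductNames(Func<List<string>> loadFromDatabase)
    {
        var cached = Cache.Get("product-names") as List<string>;
        if (cached != null)
            return cached;                      // served without touching the DB

        var fresh = loadFromDatabase();
        Cache.Set("product-names", fresh,
                  DateTimeOffset.Now.AddMinutes(5));  // hit the DB at most every 5 minutes
        return fresh;
    }
}
```

The disposable resources (context, connection) still live only for the duration of `loadFromDatabase`; only the plain data is kept around.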
If you want to get data without connecting to the database, you need to cache it, either in memory, in a file, or in whatever means of storage you want, but you need to keep it in front of the DB somehow. There is no other way known to me.
If by connecting you mean building a completely new SqlConnection to your DB, then you can either rely on connection pooling (the underlying provider keeps physical connections alive for some minutes even after you finish your business) or create connections and keep them alive inside your application by not closing them immediately (i.e. tracking them in some structure of your own).
But you should definitely consider if this is REALLY what you want. The way EF does it internally is most of the time exactly what you want.
Some further reading:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I have a small Web API server written in C# using async/await. The .NET version is 4.5.2.
Everything is working fine except that I use TransactionScope for some calls, and the underlying transaction is escalated to a distributed one. Since I use async/await for my DB calls, I use TransactionScopeAsyncFlowOption. The SQL Server is version 2008 R2, so it should be able to handle multiple calls without escalating the transaction. All calls are made to the same database with the same connection string.
All SQL connections are created in using statements, and I'm not nesting any of them. Each call to the database is awaited before another is made, so there should never be two connections active at the same time in one transaction, unless I have misunderstood how async/await works. I'm using Dapper, if that might impact things.
Am I missing something obvious or do I need to rewrite my code to use the same connection for all operation in the transaction?
I feel really stupid: I missed that pooling was disabled in the connection string. After removing Pooling=false, the transaction no longer escalates to a distributed one.
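To illustrate why the fix works, a sketch of the scenario (connection-string values and queries are illustrative): with Pooling=false, each SqlConnection inside the scope is a brand-new physical connection, so the scope sees multiple connections and escalates to MSDTC. With pooling on (the default), sequential awaited calls get the same enlisted physical connection back from the pool:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Transactions;

public static class TwoQueries
{
    public static async Task RunBothAsync()
    {
        // Escalates: "Server=.;Database=Shop;Integrated Security=true;Pooling=false"
        // Does not:  pooling left at its default (enabled)
        var connectionString = "Server=.;Database=Shop;Integrated Security=true";

        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            using (var first = new SqlConnection(connectionString))
            {
                await first.OpenAsync();   // auto-enlists in the ambient transaction
                // ... awaited Dapper/ADO.NET call ...
            }   // back to the pool, still enlisted in this transaction

            using (var second = new SqlConnection(connectionString))
            {
                await second.OpenAsync();  // same physical connection, same local transaction
                // ... next awaited call ...
            }

            scope.Complete();
        }
    }
}
```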
I am stuck using two DB connections with Entity Framework contexts under a single transaction.
I am trying to use two DB contexts under one transaction scope, and I get "MSDTC not available". I read that it's not an EF problem; it's MSDTC, which does not allow two connections.
Is there any answer for this problem?
This happens because the framework thinks that you are trying to have a transaction span multiple databases. This is called a distributed transaction.
To use distributed transactions, you need a transaction coordinator. In your case, the coordinator is the Microsoft Distributed Transaction Coordinator (MSDTC), which runs as a Windows Service on your server. You will need to make sure that this service is running.
Starting the service should solve your immediate issue.
Two-phase commit
From a purely theoretical point of view, distributed transactions are an impossibility* - that is, disparate systems cannot coordinate their actions in such a way that they can be absolutely certain that they either all commit or all roll back.
However, using a transaction coordinator, you get pretty darn close (and 'close enough' for any conceivable purpose). When using a distributed transaction, each party in the transaction will try to make the required changes and report back to the coordinator whether all went well or not. If all parties report success, the coordinator will tell all parties to commit. However, if one or more parties report a failure, the coordinator will tell all parties to roll back their changes. This is the "Two-phase commit protocol".
Watch out
It obviously takes time for the coordinator to communicate with the different parties of the transaction. Thus, using distributed transactions can hamper performance. Moreover, you may experience blocking and deadlocking among your transactions, and MSDTC obviously complicates your infrastructure.
Thus, before you turn on the Distributed Transaction Coordinator service and forge ahead with your project, you should first take a long, hard look at your architecture and convince yourself that you really need to use multiple contexts.
If you do need multiple contexts, you should investigate whether you can prevent transactions from being escalated to distributed transactions.
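One way to prevent escalation in EF6, sketched under the assumption that both contexts target the same database: share a single connection and a single local transaction between them. ShopContext and LogContext are illustrative names for DbContext subclasses that expose the `DbContext(DbConnection, bool contextOwnsConnection)` constructor:

```csharp
using System.Data.SqlClient;

public static class CrossContextSave
{
    public static void SaveAcrossContexts(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // contextOwnsConnection: false — neither context disposes the shared connection
                using (var shop = new ShopContext(connection, contextOwnsConnection: false))
                using (var log = new LogContext(connection, contextOwnsConnection: false))
                {
                    shop.Database.UseTransaction(transaction);
                    log.Database.UseTransaction(transaction);

                    // ... modify entities in both contexts ...
                    shop.SaveChanges();
                    log.SaveChanges();
                }
                transaction.Commit();   // both contexts' changes commit atomically, no MSDTC
            }
        }
    }
}
```

Since only one physical connection is ever involved, the transaction stays a lightweight local one.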
Further reading
You may want to read:
MSDN: "Managing Connections and Transactions" (specifically on EF)
Blog: "Avoid unwanted escalation to distributed transactions" (a bit dated, though)
* See, for example: Reasoning About Knowledge
You should run the MSDTC (Distributed Transaction Coordinator) system service.
I am not sure whether this has been asked before (I googled it).
I have written a web service that will be hosted with a SQLite database.
Many clients will perform CRUD operations on it. I chose SQLite for simplicity.
Now that I have written most of my methods, it occurred to me that SQLite has no separate DBMS server process (I suppose), so there may be conflicts and data-inconsistency issues if two or more client applications write through my service.
Does SQLite support managing operations from multiple connections, or do I have to switch to SQL Server 2008?
SQLite supports multiple connections in the sense that it won't blow up or cause data corruption. It is not, however, designed to handle a high load of concurrent operations as efficiently as MS SQL Server. So what it boils down to is how many "many clients" is. If you are talking about tens of simultaneous requests, you will be fine with SQLite. If you are talking about hundreds of simultaneous requests, you will probably need to migrate to MS SQL Server. Note that for two requests to be simultaneous, the two clients must press the 'Submit' button within roughly the same few-millisecond window, so it takes hundreds of simultaneously connected clients to produce dozens of simultaneous requests.
The short answer is yes; take a look at this SQLite FAQ entry. The longer answer is a bit more complicated: would you want to use SQLite in an architecture that is meant to handle heavy transaction loads? Probably not. If you do want to move in that direction, I would suggest starting with SQL Server Express. If you later need to upgrade to a full-blown SQL Server, it won't be an issue at all.
SQLite FAQ excerpt:
(5) Can multiple applications or multiple instances of the same application access a single database file at the same time?
Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however.
SQLite uses reader/writer locks to control access to the database. [...]
Yes, SQLite supports concurrency and locking.
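A sketch of making concurrent writers cooperate in SQLite from C#: enable WAL mode (readers no longer block the single writer) and set a busy timeout so a second writer waits for the lock instead of failing immediately with "database is locked". This assumes the Microsoft.Data.Sqlite NuGet package; the file name is illustrative:

```csharp
using Microsoft.Data.Sqlite;

public static class SqliteSetup
{
    public static void ConfigureForConcurrency()
    {
        using (var connection = new SqliteConnection("Data Source=app.db"))
        {
            connection.Open();

            var pragma = connection.CreateCommand();
            // WAL journal mode + a 5-second busy timeout for contended writes
            pragma.CommandText = "PRAGMA journal_mode=WAL; PRAGMA busy_timeout=5000;";
            pragma.ExecuteNonQuery();
            // From here on, SQLite's own reader/writer locking handles the
            // "many clients" case; only one write proceeds at a time.
        }
    }
}
```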