EF + multiple transactions, multiple contexts - C#

I have been trying everything to make this setup work, but have been unable to so far:
Entity Framework
xUnit (testing library)
2 DbContexts (2 different databases, 2 connection strings)
Situation: run an integration test with an AutoRollBack feature (AutoRollBack mainly wraps the test code in a parent transaction which is rolled back at the end of the test).
The Test looks like:
[AutoRollBack]
Test() {
    Operation1 against DB1
    Operation2 against DB2
}
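For reference, the mechanism behind an AutoRollBack-style attribute is usually a TransactionScope that is disposed without calling Complete(). A minimal runnable sketch (the OutcomeProbe class and method names are illustrative, not the actual library code):

```csharp
using System;
using System.Transactions;

// A volatile dummy resource that records the transaction's outcome, so the
// rollback-on-dispose behavior can be observed without touching a database.
public class OutcomeProbe : IEnlistmentNotification
{
    public string Outcome = "Unknown";

    public void Prepare(PreparingEnlistment e) => e.Prepared();
    public void Commit(Enlistment e) { Outcome = "Committed"; e.Done(); }
    public void Rollback(Enlistment e) { Outcome = "Rolled back"; e.Done(); }
    public void InDoubt(Enlistment e) => e.Done();
}

public class Program
{
    public static string RunWithoutComplete()
    {
        var probe = new OutcomeProbe();
        using (var scope = new TransactionScope())
        {
            Transaction.Current.EnlistVolatile(probe, EnlistmentOptions.None);
            // ... the test's operations against DB1 and DB2 would go here ...
        }   // disposed without scope.Complete() => everything is rolled back
        return probe.Outcome;
    }

    public static void Main() => Console.WriteLine(RunWithoutComplete());
}
```

Note that as soon as a second connection (the DB2 context) opens inside this single ambient transaction, System.Transactions promotes it to a distributed transaction, which is why MSDTC gets involved at all.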
I enabled MSDTC on both SQL Servers and used the DTCPing tool to confirm that communication between them is OK.
I enabled Distributed Transactions in the inbound and outbound firewall rules on both servers.
I added Distributed Transaction to the Allowed Programs list in the firewall on both servers.
Both servers can ping each other by NetBIOS name.
But the 2nd operation in the test always fails with "The underlying provider failed on Open":
The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02B)
I am looking for another way of debugging the problem. Is there a way to get logs of some sort, for example?

Related

Which transaction manager will be used in WCF?

I am going through how transactions work in a WCF service, but I am seeking some more clarification. I am not sure which transaction manager WCF will use in the following scenarios:
If the WCF service performs an insert into a table in one SQL Server database and a delete from a table in another SQL Server database (on the same or a different server).
If the same WCF service performs an insert into a table in a SQL Server database and a delete from a table in an Oracle database.
If a WCF service calls 2 different WCF services, each performing an operation on the same SQL Server database.
Kindly help me understand these situations.
I think you're giving WCF more credit than it's due. WCF can do some amazing stuff, but there's nothing magical about it. It provides a set of interfaces for web services and allows you to provide an intermediary access layer for your data.
So let's tackle your scenarios:
If the WCF service performs an insert into a table in one SQL Server database and a delete from a table in another SQL Server database (on the same or a different server).
We've got two RDBMS in use here, so you're going to have two transaction managers. The first transaction manager is in the RDBMS for the insert, and the second transaction manager is for the delete.
If the same WCF service performs an insert into a table in a SQL Server database and a delete from a table in an Oracle database.
Again, we've got two RDBMS in use here, so you're going to have two transaction managers. The first transaction manager is in the RDBMS for the insert, and the second transaction manager is for the delete.
Note that we don't need to care about which type of RDBMS it is, we just track the number that are involved.
If a WCF service calls 2 different WCF services, each performing an operation on the same SQL Server database.
This one is a little trickier because we don't know what the 2 WCF services are doing, and there is some inadvisable voodoo magic that could be done to coordinate transactions across the 2 services. I'm going to assume you're smarter than that and didn't mean that case.
So in this case, we have 1 RDBMS performing 2 separate transactions. We'll have 1 transaction manager from the 1 RDBMS, but the operations will complete under different transactions.
To wrap that up - to know how many transaction managers are involved, you need to look at the number of RDBMS that are being used. And to know how many transactions will be required, you need to look at the number of operations performed.
Notice that the use of WCF has no bearing on your concern about the managers. WCF just happens to be a tool that provides an additional way of accessing the data through a service. WCF is cool, but it's not magic.
Additional note
You asked in a comment:
my concern is that in all of these conditions, which transaction manager will it use: a) the LTM, b) the KTM, or c) the DTC?
For MS SQL Server transactions, it will be either the LTM or the DTC that handles the transaction. Per this MSDN blog entry, it's not necessarily something you need to worry about until performance becomes a significant issue; you should avoid premature optimization in favor of getting things working first.
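If you do want to observe which coordinator currently owns a transaction, one way is to inspect the ambient transaction's DistributedIdentifier: it stays Guid.Empty while the lightweight LTM is in charge and becomes non-zero once the transaction is promoted to the DTC (for example, when a second durable resource manager enlists). A small illustrative check:

```csharp
using System;
using System.Transactions;

public class Program
{
    // Reports which coordinator owns the ambient transaction at this point.
    // With no durable resources enlisted, the transaction is never promoted,
    // so this returns the LTM case.
    public static string CoordinatorState()
    {
        using (var scope = new TransactionScope())
        {
            var id = Transaction.Current.TransactionInformation.DistributedIdentifier;
            scope.Complete();
            return id == Guid.Empty ? "LTM (not promoted)" : "DTC (promoted)";
        }
    }

    public static void Main() => Console.WriteLine(CoordinatorState());
}
```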
And based upon this description of the KTM, it's very unclear how you think you'd be using the KTM in any of the cases you asked about.
The Kernel Transaction Manager (KTM) enables the development of applications that use transactions. The transaction engine itself is within the kernel, but transactions can be developed for kernel- or user-mode transactions, and within a single host or among distributed hosts.
Also note that Oracle DB has a separate transaction manager for its RDBMS that is different than the MS SQL Server transaction manager(s).

DTC issues using Oracle and .NET 4 - RM_COMMIT_DELIVERY_FAILED_DUE_TO_CONNECTION_DOWN

First a little intro to our setup:
WCF based app with EF 4 context injected using Unity (no singleton)
Oracle running on a separate physical machine
NServiceBus handling messages that access Oracle through the same context as above
The problem we are experiencing, only on our UAT environment, is that we cannot send multiple messages without receiving distributed transaction locks on DTC. The DTC trace tells us this:
1. TRANSACTION_COMMITTED
2. RM_ISSUED_COMMIT
3. RM_ISSUED_COMMIT
4. RM_ACKNOWLEDGED_COMMIT
5. RM_COMMIT_DELIVERY_FAILED_DUE_TO_CONNECTION_DOWN
Any bright ideas?
It seems the problem lay in our client app's WCF configuration.
Deep down in our framework we were setting TransactionFlow = true, which tries to set up a transaction scope starting from the client. When we ran our request and fired off an NServiceBus message, we lost the link with our client and could not commit the transaction.
So TransactionFlow = false in app.config saved us.
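For reference, that switch lives on the client binding; a hypothetical app.config fragment (the binding name is illustrative):

```xml
<!-- Client-side binding with transaction flow disabled, so the client no
     longer tries to start and propagate a distributed transaction. -->
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="MyTcpBinding" transactionFlow="false" />
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```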

With sql service broker what happens if the target application crashes?

How will the target application get the messages sent to it while it was unresponsive, stopped, or restarting? Will they be sent again automatically when it comes back online?
How would you implement this with EF and C#? Where are the tutorials!
Service Broker sends from SQL Server to SQL Server. The protocol used is fully resilient to crashes, messages stay in the sender's sys.transmission_queue until acknowledged by the target, and the target only acknowledges them after committing them into the destination service queue. SQL Server also handles everything related to transient failures: unresponsive destination, network partitioning, servicing/patching outages. All this is handled by SQL Server itself, as it guarantees Exactly Once In Order delivery.
Now, what happens if your application crashes, i.e. while processing a RECEIVE statement, is very simple: you interact with Service Broker through T-SQL, in a database transaction context. If the application crashes, the normal behavior of ACID database transactions kicks in: since the transaction did not commit, it will be rolled back, and the application will have a chance to process the message again after restart.
So, from your application point of view, you only interact with a database, queues and tables and all, within a database transaction context. Your questions are the same as 'what happens to an INSERT if the application crashes?'
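To make that concrete, a hedged sketch of receiving one message from C# (queue name, columns, and connection string are all illustrative; this is not the only way to structure it):

```csharp
using System.Data.SqlClient;

public static class BrokerReceiver
{
    // The RECEIVE runs inside an ordinary database transaction. If the process
    // crashes before tx.Commit(), the message returns to TargetQueue
    // automatically and will be delivered again after restart.
    public static void ProcessOneMessage(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            using (var cmd = new SqlCommand(
                "WAITFOR (RECEIVE TOP (1) message_type_name, message_body " +
                "FROM TargetQueue), TIMEOUT 5000;", conn, tx))
            {
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // ... handle the message, write results to application tables ...
                    }
                }
                tx.Commit(); // only now is the message permanently removed
            }
        }
    }
}
```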

Switch between databases, use two databases simultaneously

I have a web site which uses one SQL database, but the hosting company is very slow sometimes and I get database timeouts, login failures, and similar errors. Can I implement my code to use two databases simultaneously? I have stored procedures, and the data is updated periodically.
EDIT:
Simply: when dbDefault is down and inaccessible, I need to use dbSecondary so the web app keeps running. And these two databases must always be the same.
EDIT:
Some errors:
A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Cannot open database "db" requested by the login. The login failed. Login failed for user 'root'.
Load balancing and/or fail-over clustering database servers typically involves a lot of work.
You will need to make sure ALL data is merge replicated between the two database servers. Hosting providers rarely provide this option unless you have a dedicated server.
Allowing for merge replication might involve redesigning parts of your database; which may not be feasible.
Unless you are willing to invest a lot of time and money, you are much better off just switching hosting providers to one that has better db support. Considering there are literally thousands upon thousands of such companies out there this is an easy fix.
UPDATE
Almost all of the errors you identified in your edit are generally attributable to failing to properly dispose of connections, commands, and readers. You might want to go through your code to make sure you are accessing SQL Server correctly. Every connection, command, and reader should be wrapped in a using block in order to make sure they are properly released back to the connection pool.
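As a concrete instance of that pattern (connection string, query, and schema are placeholders):

```csharp
using System.Data.SqlClient;

public static class DataAccess
{
    // Every IDisposable is wrapped in `using`, so the connection is released
    // back to the pool even when an exception is thrown mid-read.
    public static void ReadCustomers(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var id = reader.GetInt32(0);
                    var name = reader.GetString(1);
                    // ... use the row ...
                }
            }
        }
    }
}
```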
If you provide a data access code sample (new question please) we can help you rewrite it.
Not really.
Data consistency and integrity:
How do you decide what data or what call to make at what time?
What happens on write?
Firewalls, remote server etc:
If you use another hosting company, how will you connect?
Misconception:
Two databases on one server = absolutely no advantage.
The server is probably overloaded, and a 2nd database will make it worse.
A database timeout could, of course, be code related, and it may not help to have 2 databases with the same poor code or design.
Not a nice answer, but if your host is providing poor service then your options are limited.
First of all, find the reason for the timeout. If it is in your code, then rectify the code by optimizing queries, etc.
I think what you need is a failover server, where you can switch if one server is down.
Alternatively
You can maintain two connection strings in web.config and switch to the other server if one is down.
In both methods, you need to devise a strategy to sync the servers.
If both your databases are in sync (which is an obvious requirement for what you are trying to do), the best solution is to rely on a load balancer. If you can't, I guess your goal is to run the query against both databases at the same time and return the first result; otherwise you will have to wait for the timeout before running the request against the second server.
So what you need is an asynchronous SQL command, right?
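A hedged sketch of that "first result wins" idea with asynchronous commands (server names and connection strings are placeholders):

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class RacingQuery
{
    // Run the same scalar query against both servers and take whichever
    // answers first.
    public static async Task<int> QueryFirstAvailableAsync(string sql)
    {
        var primary   = RunAsync("Server=dbDefault;Database=App;Integrated Security=True", sql);
        var secondary = RunAsync("Server=dbSecondary;Database=App;Integrated Security=True", sql);

        var winner = await Task.WhenAny(primary, secondary);
        return await winner; // first server to respond wins
    }

    private static async Task<int> RunAsync(string connectionString, string sql)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            await conn.OpenAsync();
            return (int)await cmd.ExecuteScalarAsync();
        }
    }
}
```

Note that Task.WhenAny returns the first task to complete even if it completed by faulting, so production code would fall back to the surviving task when the winner threw.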

MSDTC attempts to enlist client machine in a distributed transaction

We're seeing the following intermittent warning logged by MSDTC:
A caller has attempted to propagate a transaction to a remote system, but MSDTC network DTC access is currently disabled on machine 'X'. Please review the MS DTC configuration settings.
However, MSDTC is disabled on machine X by design - it's a client machine, and has no business being enlisted in the transaction!
Our setup:
Several Windows service endpoints hosting WCF services over TCP
A single SQL Server 2005 instance beneath
LINQ to SQL
A remote client that receives event callbacks over WCF/TCP
The issue is tricky to reproduce - usually following restart of services. We suspect a callback to the client machine is occurring within the context of a transaction.
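If a callback inside an ambient transaction is indeed the cause, one way to test the theory is to suppress the ambient transaction around the callback (the callback proxy and payload here are hypothetical):

```csharp
using System.Transactions;

public static class Notifier
{
    // Hypothetical callback contract standing in for the real WCF callback channel.
    public interface IClientCallback { void NotifyClient(string eventData); }

    // Fire the event callback outside any ambient transaction, so
    // System.Transactions never tries to propagate it to the client machine.
    public static void NotifyOutsideTransaction(IClientCallback callbackChannel, string eventData)
    {
        using (new TransactionScope(TransactionScopeOption.Suppress))
        {
            callbackChannel.NotifyClient(eventData);
        }
    }
}
```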
Just wondering if anyone has seen similar issues?
Ken
