I have been reading about how transactions work in WCF services, but I am seeking some more clarification. I am not sure which transaction manager WCF will use in the following scenarios:
If the WCF service performs an insert into a table in one SQL Server database and a delete from a table in another SQL Server database (on the same or a different server).
If the same WCF service performs an insert into a table in one SQL Server database and a delete from a table in an Oracle database.
If a WCF service calls two different WCF services that perform operations on the same SQL Server database.
Kindly help me gain some understanding of these situations.
I think you're giving WCF more credit than it's due. WCF can do some amazing stuff, but there's nothing magical about it. It provides a set of interfaces for web services and allows you to provide an intermediary access layer for your data.
So let's tackle your scenarios:
If the WCF service performs an insert into a table in one SQL Server database and a delete from a table in another SQL Server database (on the same or a different server).
We've got two RDBMSs in use here, so you're going to have two transaction managers. The first transaction manager is in the RDBMS handling the insert, and the second is in the RDBMS handling the delete.
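To make that concrete, here is a minimal sketch (the connection strings, table names, and transfer logic are hypothetical) of a single TransactionScope spanning two SQL Server databases. Once the second connection enlists, System.Transactions escalates the transaction to MSDTC, which then coordinates the two database-side managers:

using System;
using System.Data.SqlClient;
using System.Transactions;

class TwoDatabaseTransfer
{
    // Hypothetical connection strings for illustration only.
    const string Db1 = "Server=serverA;Database=Db1;Integrated Security=true";
    const string Db2 = "Server=serverB;Database=Db2;Integrated Security=true";

    static void Main()
    {
        // One scope covering both connections: both operations commit
        // together or roll back together.
        using (var scope = new TransactionScope())
        {
            using (var conn = new SqlConnection(Db1))
            using (var cmd = new SqlCommand("INSERT INTO Orders (Id) VALUES (1)", conn))
            {
                conn.Open();            // enlists in the ambient transaction
                cmd.ExecuteNonQuery();
            }

            using (var conn = new SqlConnection(Db2))
            using (var cmd = new SqlCommand("DELETE FROM Staging WHERE Id = 1", conn))
            {
                conn.Open();            // second resource manager -> promotion to MSDTC
                cmd.ExecuteNonQuery();
            }

            scope.Complete();
        }
    }
}

Swapping the second SqlConnection for an Oracle provider connection gives the same shape for the Oracle scenario below; the pattern doesn't care which RDBMS sits behind each connection.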
If the same WCF service performs an insert into a table in one SQL Server database and a delete from a table in an Oracle database.
Again, we've got two RDBMSs in use here, so you're going to have two transaction managers. The first transaction manager is in the RDBMS handling the insert, and the second is in the RDBMS handling the delete.
Note that we don't need to care which type of RDBMS is involved; we just count how many there are.
If a WCF service calls two different WCF services that perform operations on the same SQL Server database.
This one is a little trickier because we don't know what the two WCF services are doing, and there is some inadvisable voodoo magic that could be done to coordinate transactions across the two services. I'm going to assume you're smarter than that and didn't mean that case.
So in this case, we have one RDBMS performing two separate transactions. We'll have one transaction manager from the one RDBMS, but the operations will complete under different transactions.
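As a sketch of that case (the service methods, connection string, and SQL are hypothetical), each downstream operation opens its own TransactionScope, so the single transaction manager sees two independent transactions, and one can commit while the other rolls back:

using System.Data.SqlClient;
using System.Transactions;

class DownstreamServices
{
    // Hypothetical connection string shared by both services.
    const string Db = "Server=.;Database=AppDb;Integrated Security=true";

    public void ServiceAInsert()
    {
        using (var scope = new TransactionScope())   // transaction #1
        using (var conn = new SqlConnection(Db))
        using (var cmd = new SqlCommand("INSERT INTO Orders (Id) VALUES (1)", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
            scope.Complete();   // commits independently of ServiceBDelete
        }
    }

    public void ServiceBDelete()
    {
        using (var scope = new TransactionScope())   // transaction #2
        using (var conn = new SqlConnection(Db))
        using (var cmd = new SqlCommand("DELETE FROM Staging WHERE Id = 1", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
            scope.Complete();
        }
    }
}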
To wrap that up: to know how many transaction managers are involved, look at the number of RDBMSs being used. And to know how many transactions will be required, look at the number of operations performed.
Notice that the use of WCF has no bearing on your concern about the managers. WCF just happens to be a tool that provides an additional way of accessing the data through a service. WCF is cool, but it's not magic.
Additional note
You asked in a comment:
My concern is: in all of these conditions, which transaction manager will it use? a) the LTM, b) the KTM, or c) the DTC?
For MS SQL Server transactions, it will be either the LTM or the DTC that handles the transaction. Per this MSDN blog entry, it's not necessarily something you need to worry about until performance becomes a significant issue, and you should avoid premature optimization in favor of getting things working first.
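If you do want to see which manager currently owns a transaction, one way (a sketch; the connection string is hypothetical) is to check Transaction.Current.TransactionInformation.DistributedIdentifier, which stays Guid.Empty while the LTM is in charge and becomes a real GUID once the transaction is promoted to the DTC:

using System;
using System.Data.SqlClient;
using System.Transactions;

class EscalationProbe
{
    // Hypothetical connection string for illustration.
    const string Db = "Server=.;Database=AppDb;Integrated Security=true";

    static void Main()
    {
        using (var scope = new TransactionScope())
        {
            using (var conn = new SqlConnection(Db))
            {
                conn.Open();
                // One connection: the lightweight LTM owns the transaction,
                // so the distributed id prints as all zeros.
                Console.WriteLine(Transaction.Current.TransactionInformation.DistributedIdentifier);
            }

            using (var conn = new SqlConnection(Db))
            {
                conn.Open();
                // A second enlistment may promote the transaction to the DTC
                // (exactly when depends on the SQL Server version and whether
                // the connections overlap); after promotion the id is non-zero.
                Console.WriteLine(Transaction.Current.TransactionInformation.DistributedIdentifier);
            }

            scope.Complete();
        }
    }
}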
And based upon this description of the KTM, it's unclear how the KTM would come into play in any of the cases you asked about.
The Kernel Transaction Manager (KTM) enables the development of applications that use transactions. The transaction engine itself is within the kernel, but transactions can be developed for kernel- or user-mode transactions, and within a single host or among distributed hosts.
Also note that Oracle DB has a separate transaction manager for its RDBMS that is different from the MS SQL Server transaction manager(s).
Related
I have a staging table that works like a queue; data keeps arriving in this table.
Can I write a Windows service that runs continuously, reads data from the queue table, and applies some business logic to the records? For this approach, could you please share some code, links, etc.?
Or should I consider SQL Server Service Broker?
Please suggest.
If you use a table as a queue, then use a table as a queue. I recommend you read Using tables as Queues.
I do not recommend using Service Broker unless you need activation. Service Broker is designed for distributed applications and comes with significant overhead compared with a simple queue table (conversations, services, contracts, etc.).
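If you go the polling-service route, a minimal sketch of the destructive-read pattern from that article might look like this (the table, columns, batch size, and poll interval are hypothetical). DELETE ... OUTPUT removes and returns rows in one statement, and the READPAST hint lets concurrent readers skip rows another reader has locked:

using System;
using System.Data.SqlClient;
using System.Threading;

class QueuePoller
{
    // Hypothetical connection string, table, and columns.
    const string Db = "Server=.;Database=StageDb;Integrated Security=true";
    const string Dequeue =
        "DELETE TOP (10) FROM dbo.StageQueue WITH (ROWLOCK, READPAST) " +
        "OUTPUT deleted.Id, deleted.Payload;";

    static void Main()
    {
        while (true)   // the Windows service's worker loop
        {
            int rows = 0;
            using (var conn = new SqlConnection(Db))
            using (var cmd = new SqlCommand(Dequeue, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        rows++;
                        // Apply the business logic to reader["Payload"] here.
                    }
                }
            }

            if (rows == 0)
                Thread.Sleep(TimeSpan.FromSeconds(5));   // back off when idle
        }
    }
}

In real code you would wrap the dequeue and the business logic in a transaction so that a failure after the DELETE doesn't lose rows.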
I've got a situation where I need to expose some services to an outside entity. All the services do is:
1. take arguments from the service caller
2. query the database
3. delete the queried data from the DB
4. return the queried data to the caller
I'm trying to decide whether I need to build a server application that accesses the database, or just write database procedures. Functionally, both approaches satisfy me. What I'm concerned about is security, and I don't have much experience administering a PostgreSQL database.
If I expose database procedures, how much administration can I do? Can I limit the number of queries (procedure calls) a user can issue? Can I limit the time between two queries (procedure calls) and the amount of memory a user can use?
The services would return approximately 5 MB of data and would be called a few times per hour. Even though the service user is trusted and the connection between the user and the server would be VPN'd, I need some kind of query rate limiting, just to be safe.
I have a server and 'x' number of clients.
When the server is listening for an inbound connection, it will create a client handler instance (a class that manages the client communication) which will be spun off in a separate thread.
Depending on the command the client sends to the server, the server may need to access a SQL database to store information about that client.
The client handler instance will 'handle' this request. The only problem is that if multiple client handlers want to access the SQL database to do the exact same thing, there is potential for read/write issues.
I was thinking about exposing a static method on the server, calling it from the client handler instances, and then locking the function that accesses the SQL database (for either reads or writes).
Is this a good approach or are there better approaches?
Thanks.
Well, you DO know that SQL Server has locks and a ton of internal mechanisms to serialize access? That this is part of the ACID guarantees that relational databases have provided since the 1970s? The locking mechanism in SQL Server is very fine-grained, and you are basically trying to solve a problem that was solved decades ago. It sounds like you need to read a book about the basics of SQL.
Under normal circumstances (a standard connection string), SQL Server serializes conflicting access itself; the default isolation level is READ COMMITTED, and it can be tuned all the way up to SERIALIZABLE if you need it. I really suggest learning some SQL fundamentals.
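For example, rather than a static lock, each handler can simply open its own pooled connection and let the database serialize conflicting writes (a sketch; the connection string, table, and columns are hypothetical):

using System.Data.SqlClient;

class ClientHandler
{
    // Hypothetical connection string for illustration.
    const string Db = "Server=.;Database=Clients;Integrated Security=true";

    // Each handler thread opens its own pooled connection; SQL Server's
    // row-level locks serialize conflicting writes, so no static lock or
    // shared method is needed.
    public void SaveClientInfo(int clientId, string info)
    {
        using (var conn = new SqlConnection(Db))
        using (var cmd = new SqlCommand(
            "UPDATE dbo.ClientInfo SET Info = @info WHERE ClientId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", clientId);
            cmd.Parameters.AddWithValue("@info", info);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}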
We have an application with approximately 60,000 client machines accessing it. Previously we had a distributed model, but we are moving to SaaS by creating a BO layer and having calls come up into it over the WAN. We use LINQ to Entities to access the database from the BO layer. Our multi-tenant model is federated so that 'enterprises' comprising multiple stores are on distinct SQL Servers (usually about 200 'enterprises' per server).
Each BO server is a dual-processor, 8-core machine with HT (32 logical processors). IIS is set up with 32 max worker processes.
The BO layer is working pretty well: each call pulls the connection string associated with that enterprise and then talks to the correct database. The problem is that with about a quarter of our clients live and about 15 BO servers, I have noticed that we have 3,000+ open connections to each database server, and the number is growing.
Any idea why it is growing like this? What am I supposed to set, and where, to make it reuse connections (connection pooling appears to be on) so it doesn't flood each DB server like this? Any other suggestions?
It could be purely an architecture thing.
How many database servers do you have in total? And is the problem that the workload is heavy on certain database servers but not others?
If that's the case, then it is probably worth considering how to partition different enterprises across different database servers, or to further partition the data on the heavily loaded servers. Another technique is to vertically partition the tables for different enterprises into different databases, provided there are no joins across the vertically partitioned tables.
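One thing worth checking regardless: ADO.NET keeps a separate pool per distinct connection string per worker process, so roughly 200 enterprise connection strings times 32 worker processes per BO server multiplies quickly even with pooling on. A sketch of the pooling knobs on the connection string (the server/database names and the values are illustrative, not recommendations):

using System.Data.SqlClient;

class PooledConnectionStrings
{
    static string BuildEnterpriseConnectionString()
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "enterpriseSqlServer",   // hypothetical server
            InitialCatalog = "Enterprise042",     // hypothetical database
            IntegratedSecurity = true,
            Pooling = true,
            MinPoolSize = 0,         // let idle pools shrink to nothing
            MaxPoolSize = 20,        // default is 100; caps per-pool growth
            LoadBalanceTimeout = 30  // connections older than this (seconds) are destroyed on return to the pool
        };
        return builder.ConnectionString;
    }
}

Also make sure every connection (or LINQ to Entities context) is disposed promptly, since a connection only returns to the pool when it is closed.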