What does MSSQL do when the connection is lost after an SP call? - c#

I have software operating via WLAN, mounted on a moving device.
At the moment, transactions are opened and closed in code, with the business logic happening in between.
Now I'm suffering from lost connections and transactions that stay open on the database. (MSSQL 2012)
My solution was to move all transactions/logic into a stored procedure (SP).
So the client only calls the SP and transactions are handled inside it.
My question here is:
What happens to an SP when the connection is lost? Does it run to the end?

This is covered in the documentation Controlling Transactions (Database Engine), specifically in the Errors During Transaction Processing section:
If an error prevents the successful completion of a transaction, SQL
Server automatically rolls back the transaction and frees all
resources held by the transaction. If the client's network connection
to an instance of the Database Engine is broken, any outstanding
transactions for the connection are rolled back when the network
notifies the instance of the break. If the client application fails or
if the client computer goes down or is restarted, this also breaks the
connection, and the instance of the Database Engine rolls back any
outstanding connections when the network notifies it of the break. If
the client logs off the application, any outstanding transactions are
rolled back.
I've emphasised the relevant section - the part about the client's network connection being broken.
So, moving the transactions to the SP won't stop the transaction being rolled back if your connection drops. I would suggest finding out why your connection is unstable and fixing that. Otherwise you'll need to work out a way to run the query locally on the instance (perhaps using SQL Agent).
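If the work has to survive a client disconnect, it must run on a server-side connection. A minimal sketch of the SQL Agent route (the job name is hypothetical; its single step would execute your stored procedure):

```sql
-- Starts the Agent job and returns immediately; the job runs the SP
-- on the server's own connection, so a dropped WLAN link cannot roll
-- the transaction back. The client gets no result set this way.
EXEC msdb.dbo.sp_start_job @job_name = N'RunBusinessLogic';
```

Note that sp_start_job is asynchronous, so the client would need another way (e.g. polling a status table) to learn whether the work finished.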

Related

Controlling SqlConnection Timeouts

I have a C# application that updates process values every second, writing them to the MSSQL database and an OPC server. If I lose the connection to the server (which hosts both the OPC server and the database), I start storing the process values offline until the connection comes back, and then update the OPC server. However, I am running into MSSQL Server timeout issues that freeze the client machine while the connection is lost, and therefore process values are lost instead of being stored to the OPC server. I tried setting the SqlConnection and SqlCommand timeouts to 5 seconds, but without any luck; I think I am timing out on the TCP connection to SQL Server.
How would you solve this issue? I even used asynchronous calls for my database functions, but without real improvement.
The link below helped me a bit, but it's still not great:
http://improve.dk/controlling-sqlconnection-timeouts/
Any help is appreciated.
Thanks
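One direction to try, as a sketch rather than a tested fix (the server, database, and query here are placeholders): OpenAsync accepts a CancellationToken, which, combined with a short Connect Timeout in the connection string, caps how long a dead-network open can block the caller.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

class OfflineAwareWriter
{
    // "Connect Timeout" caps the TCP/login handshake;
    // CommandTimeout caps query execution once connected.
    const string ConnStr =
        "Server=myServer;Database=Process;Integrated Security=true;" +
        "Connect Timeout=5;";

    public static async Task<bool> TryWriteAsync(string sql)
    {
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
        using (var conn = new SqlConnection(ConnStr))
        {
            try
            {
                await conn.OpenAsync(cts.Token);  // honours the token, unlike Open()
                using (var cmd = new SqlCommand(sql, conn) { CommandTimeout = 5 })
                    await cmd.ExecuteNonQueryAsync(cts.Token);
                return true;
            }
            catch (Exception)  // SqlException or TaskCanceledException
            {
                return false;  // caller buffers the value offline and retries later
            }
        }
    }
}
```

Because the await never blocks the UI thread and the token bounds the open attempt, the client should keep sampling values while the link is down instead of freezing.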

With sql service broker what happens if the target application crashes?

How will the target application get the messages sent to it while it was unresponsive, stopped, or restarting? Will they be sent again automatically when it comes back online?
How would you implement this with EF and C#? Where are the tutorials!
Service Broker sends from SQL Server to SQL Server. The protocol used is fully resilient to crashes, messages stay in the sender's sys.transmission_queue until acknowledged by the target, and the target only acknowledges them after committing them into the destination service queue. SQL Server also handles everything related to transient failures: unresponsive destination, network partitioning, servicing/patching outages. All this is handled by SQL Server itself, as it guarantees Exactly Once In Order delivery.
Now, what happens if your application crashes, e.g. while processing a RECEIVE statement, is very simple: you interact with Service Broker through T-SQL, in a database transaction context. If the application crashes, the normal behavior of ACID database transactions kicks in: since the transaction did not commit, it will be rolled back, and the application will have a chance to process the message again after restart.
So, from your application point of view, you only interact with a database, queues and tables and all, within a database transaction context. Your questions are the same as 'what happens to an INSERT if the application crashes?'
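A sketch of the receive loop body described above (the queue name is illustrative): everything between BEGIN TRANSACTION and COMMIT is atomic, so a crash mid-processing puts the message back on the queue.

```sql
BEGIN TRANSACTION;

WAITFOR (
    RECEIVE TOP (1)
        conversation_handle,
        message_type_name,
        message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;   -- wait up to 5 seconds for a message

-- ... process the message here; a crash before COMMIT rolls back
-- and the message becomes available again after restart ...

COMMIT TRANSACTION;
```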

Why should I use connection pooling?

In my C# application I connect to a MySQL database and run 10,000 queries. If I keep a connection to my database, these queries take roughly 14 seconds. However, if I rely on the connection pooling my queries take around 15 seconds. (I have run this test multiple times.)
// Connection pooling.
using (var connection = CreateConnection())
{
    connection.ConnectionString = ConnectionString;
    connection.Open();
    // ... execute the query ...
}
Most samples on the net use the 'connect and close' construction above. However, it seems connection pooling is slower than keeping the connection open. So the question is...
Q: Why should I use connection pooling?
It's a big, debatable topic, and many blogs out there explain why we use a pool.
It will not slow things down. A lot of time is spent on connecting to the DB server: the handshake and establishing communication between client and DB server.
So in a multi-request paradigm, where many requests are entertained by the server, it would be expensive to establish a connection for each client and make the others wait. The pool helps by giving us a pre-prepared connection; we use it and give it back, and the pool readies it for the next request.
But in a single-threaded environment it is the other way around: a pool would be a very heavy resource for a single-threaded environment.
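The "pre-prepared connection" idea can be shown with a toy generic pool (nothing provider-specific, just the concept): creation is paid once, and returned items are handed out again instead of being rebuilt.

```csharp
using System;
using System.Collections.Concurrent;

// Toy pool: creating an item is assumed expensive (for real connections,
// the TCP handshake and login), so finished items go back on a shelf
// instead of being destroyed. Illustrative only, not production code.
class Pool<T>
{
    readonly ConcurrentBag<T> _idle = new ConcurrentBag<T>();
    readonly Func<T> _create;
    public int Created { get; private set; }   // how many times we paid full cost

    public Pool(Func<T> create) { _create = create; }

    public T Rent()
    {
        if (_idle.TryTake(out T item)) return item;  // reuse: no creation cost
        Created++;
        return _create();                            // first use: pay full cost
    }

    public void Return(T item) { _idle.Add(item); }
}
```

Renting, returning, and renting again creates only one underlying object; a real ADO.NET pool does the same with physical connections behind Open()/Close().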
Q: Why should I use connection pooling?
Usually so that you can use more than one connection at a time. This is clearly important for web applications - you wouldn't want one user query to have to wait for another user's query to finish.
If you're writing a thick client application which talks straight to the database and you know you'll only ever have one query executing at a time, it's less important - but it's still global state, and that tends to be something you should avoid. You're doing several independent things - why would you want to constrain them to use the same connection?
Connection pooling is great for scalability - if you have 100 threads/clients/end-users, each of which need to talk to the database, you don't want them all to have a dedicated connection open to the database (connections are expensive resources), but rather to share connections (via pooling).
The using mini-pattern is also great for ensuring the connection is closed in a timely fashion which will end any transactions on the connection and thus ensure any locks taken by the transactions are released. This can be a great help for performance, and for minimising the potential for deadlocks.
If all your application does is run the 10,000 queries and then close again without any user interaction then it's fine to use one single connection.
However it's generally not a good idea to keep a database connection open while your application is just sitting there waiting for user input. This is where connection pooling is appropriate.
Pseudo code ...
<open connection>
<fetch data>
<close connection>
<user interaction with data ...>
<open connection>
<save updated data>
<close connection>
Depending on the language / database used, the second connection will be generated from the connection pool.
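In C# with ADO.NET, the pseudo code above might look like this (table and column names are purely illustrative); with Pooling=true, which is the default, each using block returns the physical connection to the pool rather than tearing it down:

```csharp
using System.Data;
using System.Data.SqlClient;

class OrderRepository
{
    // Pooling=true is the default; shown explicitly for clarity.
    const string ConnStr =
        "Server=myServer;Database=Shop;Integrated Security=true;Pooling=true";

    public DataTable FetchData()
    {
        using (var conn = new SqlConnection(ConnStr))   // <open connection> (from pool)
        using (var cmd = new SqlCommand("SELECT Id, Status FROM Orders", conn))
        {
            conn.Open();
            var table = new DataTable();                // <fetch data>
            table.Load(cmd.ExecuteReader());
            return table;
        }                                               // <close connection> (back to pool)
    }

    public void SaveData(int id, string status)
    {
        using (var conn = new SqlConnection(ConnStr))   // <open connection> again
        using (var cmd = new SqlCommand(
                   "UPDATE Orders SET Status = @s WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@s", status);
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            cmd.ExecuteNonQuery();                      // <save updated data>
        }
    }
}
```

During the user-interaction phase between the two calls, no connection is held open, yet the second Open() is cheap because the pooled physical connection is reused.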

Connection pool management

I'm developing a high-load web service that should respond as fast as possible. The service should keep a bunch of connections to various databases for better performance, so I'm considering a connection pool. There may be connection problems to the DB, because we access it remotely through a VPN. As I have said, the service should retain connections as long as possible.
What is the connection pool management algorithm?
I have a connection string:
Code:
User Id=inet;Password=somePassw0rd;Data Source=TEST11;Min Pool Size=5;Max Pool Size=15;Pooling=True
Then I simply open and close the connection in my code. That's it.
At the moment everything is OK: there are five sessions on the DB side. Then I kill a session to simulate connection problems. In some cases the connection is restored by the pool manager, and in some cases it isn't.
If I kill all five connections, they are never restored.
How can I configure the pooling manager? Is there a setting for the interval between DB connection checks?
I have used Validate Connection=true and it seems to work fine for me, but reconnecting to the DB takes some effort when it's needed, so it would be more efficient to already have a good connection.
The component I used is devArt dotConnect for Oracle.
Thanks in advance!
I'm not sure exactly what you're looking for, but this may be useful: pools are automatically cleared if a connection is idle for some time or closed by the server. However, you can force pool clearance using OracleConnection's ClearPool or ClearAllPools methods (these methods exist on most ADO.NET providers, although they're not a requirement).
Note that if you're using Oracle 11g, DotConnect also supports Oracle's Database Resident Connection Pooling (DRCP) which is presumably the best way to do pooling, since it's provided by Oracle itself (I don't have any experience on this though).
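As a sketch of forcing pool clearance after the server kills sessions (shown with SqlClient's static methods; most ADO.NET providers, including dotConnect's OracleConnection, expose equivalently named ones):

```csharp
using System.Data.SqlClient;

class ResilientRunner
{
    // On a failed open/use, flush the pool so stale (killed) physical
    // connections are discarded and the next Open() dials fresh ones.
    public static bool TryQuery(string connStr, string sql)
    {
        try
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
                return true;
            }
        }
        catch (SqlException)
        {
            SqlConnection.ClearAllPools();  // or ClearPool(...) for one pool
            return false;                   // caller may retry immediately
        }
    }
}
```

This turns "a pooled connection was dead" into a one-retry problem instead of the pool handing out the same dead connection repeatedly.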

I pull the plug on the client workstation, what happens to a long running database process? [duplicate]

Possible Duplicate:
SQL Server and connection loss in the middle of a transaction
I have a .NET 3.5 client app that kicks off a long-running (5-10 minute) stored proc on MS SQL Server 2005. The stored proc starts with BEGIN TRAN and ends with COMMIT TRAN.
If I pull the plug on the workstation, what happens to the stored procedure, does it finish running? Does it finish running under all the circumstances? Or will the loss of connectivity with the workstation cause the database to abort the stored proc?
EDIT: The workstation and the SQL Server are on different boxes.
The loss of the workstation's power won't necessarily cause the SP to abort, but it could very well cause the transaction to roll back.
I say "could" because it does depend on exactly when the client loses its power. If a network connection is lost into a 'black hole' like this, the server won't be immediately notified that any disconnect happened at all; it has to rely on TCP eventually telling it that the connection is dead simply because the other side has not responded to anything in X time.
This is different from disconnecting the client application explicitly and 'normally'; in such a case, the client explicitly closes the connection, if applicable, and so SQL will know right away that the client is gone.
Since the stored procedure runs on the server, if the BEGIN/END TRANSACTION are part of that stored procedure the procedure should run to completion (barring any errors). The client will never receive any results, of course, since the connection was lost.
Somewhat similar; SQL Server and connection loss in the middle of a transaction
Be aware that connections aren't always shut down immediately, so unexpected behaviour must be anticipated.
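For completeness, the shape of the procedure being described, as a sketch (names illustrative). SET XACT_ABORT ON is worth adding so that a runtime error, or the attention event raised when the broken connection is finally detected, rolls the transaction back instead of leaving it open holding locks:

```sql
-- Illustrative only; the real procedure body is the 5-10 minutes of work.
CREATE PROCEDURE dbo.LongRunningWork
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;   -- errors/attention abort and roll back the transaction

    BEGIN TRANSACTION;

    -- ... long-running work ...

    COMMIT TRANSACTION;
END;
```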
