Transfer SQL to MySQL C# Monitoring Program

I currently have a working program that monitors a few SQL Server tables and transfers data to MySQL tables. Essentially, I have a loop that checks every 30 seconds. My main concern is that I currently need to close and reopen the connection on every pass through the loop. I do this because I was getting errors about multiple transactions; when I close a connection, I also need to dispose of it. I thought that disposing of the transactions alone would have solved the problem, but I was still getting errors about multiple transactions.
This all seems to be working fine, but I was wondering whether there is a better way to do this without closing the connection.

I am not sure about your errors, but it sounds as though you may need to increase the number of connections allowed to the remote machine. Have a look here: http://msdn.microsoft.com/en-us/library/system.net.configuration.connectionmanagementelement.maxconnection.aspx
You could also try using only one connection to run the multiple SQL statements.
If that doesn't help, please post your code so it can be checked.

Were you committing your transactions in your loop (transaction.Commit())? That could have been the issue; it's hard to say with no code. There's no need to worry about opening and closing connections anyway, since ADO.NET uses connection pooling behind the scenes. You only actually 'open' a connection the first time; after that it is kept open in the pool to be used again. As others have said, though, post some code!
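For illustration, here is a minimal sketch of how the loop could be scoped so that each pass commits and disposes its own transaction. It assumes SqlClient for the SQL Server side and MySQL Connector/NET for the MySQL side; the connection strings and the TransferPendingRows helper are placeholders, not your actual code:

using System;
using System.Data.SqlClient;
using System.Threading;
using MySql.Data.MySqlClient;

class TableMonitor
{
    static void Main()
    {
        while (true)
        {
            // Scoping the connections and the transaction to a single iteration
            // guarantees everything is committed and disposed before the next pass,
            // so no transaction can linger and trigger "multiple transactions" errors.
            using (var source = new SqlConnection("Data Source=.;Initial Catalog=SourceDb;Integrated Security=True"))
            using (var target = new MySqlConnection("Server=localhost;Database=targetdb;Uid=user;Pwd=pass;"))
            {
                source.Open();
                target.Open();

                using (MySqlTransaction tx = target.BeginTransaction())
                {
                    TransferPendingRows(source, target, tx); // hypothetical helper
                    tx.Commit();                             // commit on every pass
                }
            } // Dispose() returns both connections to their pools

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void TransferPendingRows(SqlConnection source, MySqlConnection target, MySqlTransaction tx)
    {
        // read the monitored rows from SQL Server and insert them into MySQL inside tx
    }
}

Because of the pooling mentioned above, the per-iteration Open()/Dispose() calls are cheap: the physical connections stay alive in the pool between passes.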

Related

Problems with connection pool using Simple.Data

I'm facing a very difficult problem with a web application that I'm implementing with a group of developers. We are using Simple.Data to connect to an Oracle database, but after several connections, or when we have a lot of users, the connection pool fills up and the application stops working. The problem is that Simple.Data opens a connection to run its transactions but never closes it, so the application eventually cannot run any more transactions. The Simple.Data documentation says that although you don't have to close the connection in code, the library does it itself, but that does not appear to be true in our case.
We already tried changing the number of available connections from 100 to 50 per user, but the problem continues. Another solution we implemented was to open a shared connection, but that didn't work either. The question is: is there a way, in code, to close the connections in Simple.Data?
var db=Database.Open();
return db.Table.FindById(Id:2);
In that sample code, you can see that I open the connection, but there is no method to close it. If someone can help me with this problem I'll be grateful. Thank you.
Info:
We are using the NancyFx framework, C#, and an Oracle 11g database.
Old post, but in case anyone is still wondering about it:
As the docs on the page (http://simplefx.org/simpledata/docs/pages/Start/OpeningAConnection.html) say in the last line,
Simple.Data is quite aggressive in closing connections and holds no open connections to a data store by default, so you can keep the Database object returned from the Open*() methods hanging around without worrying.

Is there any way to resume a (long) transaction after the underlying MySQL connection has been lost?

I have a long-running transaction performing a lot of delete queries on a database; the issue is that the MySQL connection (to a server on the same machine) gets dropped for no apparent reason every now and then.
Currently, my retry logic will detect the disconnection, reconnect, and restart the whole transaction from the beginning, which may never succeed if the connection's "dropping frequency" is too high.
Is it possible at all to reopen a lost connection to continue the transaction?
I am using MySQL Connector for .NET.
What you are asking for is not possible with a transaction. A transaction exists to make sure that either every action performed on the database is completed or none of them are.
If the connection drops too frequently and you have no control over fixing that, then you should either run simple queries without a transaction or, better, reduce the number of actions in each transaction and send a batch of smaller transactions instead of a single big one, as sketched below.
Also add some data-validation checks to make sure everything is right with the entries.
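As a rough sketch of that batching idea using MySQL Connector/NET (the table name, WHERE clause, and batch size are made up for the example):

using MySql.Data.MySqlClient;

static void DeleteInBatches(string connectionString)
{
    const int batchSize = 1000;
    int affected;

    do
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (MySqlTransaction tx = conn.BeginTransaction())
            using (var cmd = new MySqlCommand(
                "DELETE FROM history WHERE archived = 1 LIMIT " + batchSize, conn, tx))
            {
                affected = cmd.ExecuteNonQuery();
                tx.Commit(); // each small batch commits on its own
            }
        }
        // If the connection drops here, only the current small batch is lost;
        // the retry logic can reconnect and simply continue with the next batch.
    } while (affected > 0);
}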
Theoretically you can do exactly what you need with XA transactions... but the limitations of MySQL are rather drastic and, to be honest, make XA transactions on MySQL a joke: both RESUME and JOIN on start, and ending with SUSPEND, do not work (and have not since 2006, when the feature was first released). So to answer your question: no! No chance with MySQL, forget it. Try increasing timeouts (on both the client and the server), tuning memory pools, optimizing the queries, etc. ... MySQL won't help you here.

Multiple Trips to the Db

I have a group of CheckBoxLists and Repeaters (about 8 controls that need to be loaded from my db), and for each control I have a method in my data access layer to select the information and return it to the control.
But there's a page on which I need all 8 of those controls loaded at the same time, so each method is a separate trip to the db, and I understand that's what hurts performance. Could I instead have a new method that creates and opens the connection, then call the multiple methods that access the db and load the info, and close the connection at the end?
Any idea whether those 8 connections are okay for performance? What do you think about this idea, and how could it be applied in a practical way?
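To make the idea concrete, here is one possible shape for it (a sketch only; the class, method and table names are hypothetical, not taken from my project):

using System.Data.SqlClient;

class PageDataLoader
{
    public void LoadAll(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Each loader receives the already-open connection instead of creating its own.
            BindCategories(conn);   // e.g. a CheckBoxList
            BindProducts(conn);     // e.g. a Repeater
            // ... the remaining six controls, loaded the same way ...
        } // the single connection is closed / returned to the pool here
    }

    private void BindCategories(SqlConnection conn)
    {
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Categories", conn))
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            // bind the reader to the CheckBoxList ...
        }
    }

    private void BindProducts(SqlConnection conn)
    {
        using (var cmd = new SqlCommand("SELECT Id, Title FROM Products", conn))
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            // bind the reader to the Repeater ...
        }
    }
}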
Unless your app is going to be on a high-traffic website, I wouldn't worry about it until it becomes an issue. It's relatively simple to go back and fix it later should problems arise, but this sounds like a case of premature optimization, to be honest.
If you're using the native SqlClient to access your database with the exact same connection string, all of those calls will share a connection pool. By default, connection pooling is enabled in ADO.NET; unless you explicitly disable it, the pooler optimizes the connections as they are opened and closed in your application.
So based on your question, if you do:
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    // all your data calls
}
or 7 separate calls, each opening and closing the connection (or wrapping it in "using") as @Tim Coker mentioned, any difference in performance will be minimal.
Edit: There are some dated articles on MSDN that do say "Open a connection as late as possible and close it as soon as possible", so you could do a rapid series of method calls that each open and quickly close the connection, but again, they will all be sharing the same pool anyway.
If the data is read-only, you could cache it.
Then, each trip to the database will only be made once.
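For example, a read-only lookup list could be loaded lazily once and then served from memory; the class name, connection string and query below are illustrative only:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

static class CategoryCache
{
    private static readonly Lazy<List<string>> _categories =
        new Lazy<List<string>>(LoadCategories, isThreadSafe: true);

    public static IReadOnlyList<string> Categories
    {
        get { return _categories.Value; } // first access hits the db, later ones do not
    }

    private static List<string> LoadCategories()
    {
        var result = new List<string>();
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=Shop;Integrated Security=True"))
        using (var cmd = new SqlCommand("SELECT Name FROM Categories", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    result.Add(reader.GetString(0));
            }
        }
        return result;
    }
}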
There is very little overhead when you open and close a connection, so you should not worry about that.

SQL connection best practices

Currently there is a discussion about the pros and cons of a single SQL connection architecture.
To elaborate, what we are discussing is opening a SQL connection when the application starts and closing it when the application exits (or on error), never creating another connection at all and using just that one to talk to the DB.
We are wondering what the community thinks.
Close the connection as soon as you no longer need it for an indeterminate amount of time. By doing so, the connection returns to the connection pool (if connection pooling is enabled) and can be (re)used by someone else.
(Connections are expensive resources, and are sometimes limited).
If you keep hold on a connection for the entire lifetime of an application, and you have multiple users for that application (thus multiple instances of the app, and multiple connections), and if your DB server is limited to have only x number of concurrent connections, then you could have a problem ....
See also the best practices for ADO.NET.
Follow this simple rule... Open connection as late as possible and close it as soon as possible.
I think it's a bad idea, for several reasons.
1. If you have 10,000 users using your application, that's 10,000 connections held open constantly.
2. If you have to restart your SQL Server, all those 10,000 connections are invalidated and your application will suddenly - assuming you've included reconnect logic - be making 10,000 near-simultaneous reconnect requests.
To expand on point 1, you should close connections as soon as you can because otherwise you're using up a finite resource for, potentially, an infinite period of time. If you had SQL Server configured to allow a maximum of 10,001 simultaneous connections, then you could only have 10,001 users running your application at any one time. If you open and close connections on demand, your application will scale much further, since the likelihood of all the active users hitting the database at exactly the same moment is, realistically, low.
Under the covers, ADO.NET uses connection pooling to manage the connections to the database. I would suggest leaving it up to the connection pool to take care of your connection needs. Keeping a connection open for the duration of your application is a bad idea.
I use a helpdesk system called Richmond Systems that uses one connection for the life of the application, and as a laptop user, it is a royal pain in the behind. Even when I carry my laptop around open, the jumps between wireless access points are enough to drop the DB connection. The software then complains about the DB connection, gets into an error state and won't close. It has to be killed manually from Task Manager.
In short, DON'T HOLD OPEN A DATABASE CONNECTION FOR LONGER THAN NECESSARY.
But on the flip side, I'd be cautious about opening and closing connections too often. This is a lot cheaper with connection pooling than without, but even with pooling, the pool manager may decide to grow or shrink the pool, turning it back into an expensive operation.
My general rule is to open a connection when the user initiates some action, do the work, then close the connection before waiting for the next user input. For any given "Update" button click or whatever, I'll generally have only one connection. But you definitely do not want to keep connections open while waiting for user input if you can at all help it, for all the reasons others have mentioned. You could literally wait for days before the user presses another key or touches another button; what if he leaves his computer on and goes on vacation? Tying up a resource for unpredictable amounts of time like that is bad news. In most cases, the elapsed time waiting for user input will far exceed the time spent doing actual work.
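In code, that rule looks roughly like this (a sketch; the form class, table and field names are invented):

using System.Data.SqlClient;

class CustomerForm
{
    private readonly string _connectionString =
        "Data Source=.;Initial Catalog=Crm;Integrated Security=True"; // placeholder

    // Called when the user clicks "Update": open, do the work, close,
    // then go back to waiting for input with no connection held.
    public void OnUpdateClicked(int customerId, string newName)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Customers SET Name = @name WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@name", newName);
            cmd.Parameters.AddWithValue("@id", customerId);

            conn.Open();
            cmd.ExecuteNonQuery();
        } // connection returned to the pool here, before we wait for the next action
    }
}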

Transaction commit executes successfully but does not take effect

I've encountered a strange problem in SQL Server.
I have a Pocket PC application which connects to a web service, which in turn connects to a database and inserts lots of data. The web service opens a transaction for each Pocket PC that connects to it. Every day at 12 P.M., 15 to 20 people with different Pocket PCs connect to the web service simultaneously and finish the transfer successfully.
But after that, one open transaction remains (visible in Activity Monitor) holding about 4,000 exclusive locks. After a few hours the locks vanish (probably something times out) and some of the transferred data is deleted. Is there a way I can prevent these locks from happening? Or recognize them programmatically and wait for an unlock?
Thanks a lot.
You could run sp_lock and check to see if there are any exclusive locks held on tables you're interested in. That will tell you the SPID of the offending connection, and you can use sp_who or sp_who2 to find more information about that SPID.
Alternatively, the Activity Monitor in Management Studio will give you graphical versions of this information, and will also allow you to kill any offending processes (the kill command will allow you to do the same in a query editor).
You can use SQL Server Profiler to monitor the statements that are occurring, including the beginning and end of transactions. There are also some tools from Microsoft Support which are great, since they run Profiler and blocking scripts. I'm looking to see if I can find these and will update if I do.
If you have an open transaction you should be able to see this in the activity monitor, so you can check if there are any open transactions before you restart the server.
Edit
It sounds like this problem happens at roughly the same time every day, so you will want to turn Profiler on before the problem starts.
I suspect you are doing something wrong in the code. Do you have command timeouts set to a value large enough for the commands to do their work, or is an error possibly skipping the COMMIT?
You can inspect what transactions are open by running:
DBCC OPENTRAN
The timeout on your select indicates that the transaction is still open, with a lock on at least part of the table.
How are you doing transactions over web services? How and where in your code are you committing the transaction?
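Whatever the real code looks like, the commit and rollback should be structured so that an exception can never leave the transaction (and its locks) hanging. A hedged sketch of that shape follows; the method name, table and timeout value are assumptions, not the poster's code:

using System;
using System.Data.SqlClient;

static void SaveBatch(string connectionString, string[] rows)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            try
            {
                foreach (string row in rows)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Transfers (Payload) VALUES (@p)", conn, tx))
                    {
                        cmd.CommandTimeout = 120; // give long inserts enough time
                        cmd.Parameters.AddWithValue("@p", row);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();   // reached only if every insert succeeded
            }
            catch
            {
                tx.Rollback(); // releases the exclusive locks immediately
                throw;         // let the web service report the failure
            }
        }
    }
}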
After a lot of testing, I found out that a deadlock was happening, but I couldn't find the reason, as I'm just inserting many records into a few independent tables.
These links helped a bit, but brought no luck:
http://support.microsoft.com/kb/323630
http://support.microsoft.com/kb/162361
I even broke my transactions into smaller ones, but I still got the deadlock. I finally removed the transactions and changed the code so that it no longer deletes the rows from the source database, and the deadlocks stopped.
As a lesson, I now know that if you have more than one large transaction executing against the same database at the same time, you are very likely to have problems in SQL Server; I don't know about Oracle.
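For anyone hitting the same thing who wants to keep the transactions, a common mitigation (not something from the original post) is to catch SQL Server's deadlock error 1205 and retry the unit of work a few times:

using System;
using System.Data.SqlClient;

static void ExecuteWithDeadlockRetry(Action work, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            work();
            return;
        }
        catch (SqlException ex)
        {
            // 1205 = "chosen as deadlock victim"; rethrow anything else or when out of attempts
            if (ex.Number != 1205 || attempt >= maxAttempts)
                throw;
            System.Threading.Thread.Sleep(200 * attempt); // brief back-off before retrying
        }
    }
}

Each retried attempt re-runs the whole unit of work (for example, one batch of inserts), so keeping those units small lines up with breaking the big transaction into smaller ones.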
