I've encountered a strange problem in SQL Server.
I have a Pocket PC application which connects to a web service, which in turn connects to a database and inserts lots of data. The web service opens a transaction for each Pocket PC that connects to it. Every day at 12 P.M., 15 to 20 people with different Pocket PCs connect to the web service simultaneously and finish the transfer successfully.
But after that, one open transaction remains (visible in Activity Monitor), associated with 4000 exclusive locks. After a few hours they vanish (probably something times out) and some of the transferred data is deleted. Is there a way I can prevent these locks from happening? Or recognize them programmatically and wait for an unlock?
Thanks a lot.
You could run sp_lock and check to see if there are any exclusive locks held on tables you're interested in. That will tell you the SPID of the offending connection, and you can use sp_who or sp_who2 to find more information about that SPID.
Alternatively, the Activity Monitor in Management Studio will give you graphical versions of this information, and will also allow you to kill any offending processes (the kill command will allow you to do the same in a query editor).
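If you also want to recognize these locks programmatically and wait for an unlock, as the question asks, you can poll the same information from your own code. Below is a minimal C# sketch, assuming SQL Server 2005 or later (sys.dm_tran_locks exposes what sp_lock reports, and reading it requires VIEW SERVER STATE permission); the connection string and the 60-second polling window are made up for illustration:
using System;
using System.Data.SqlClient;
using System.Threading;

class LockWatcher
{
    static void Main()
    {
        // Hypothetical connection string; adjust for your server.
        const string connStr = "Server=.;Database=MyDb;Integrated Security=true";

        // Count exclusive (X) locks held on data resources.
        const string sql =
            @"SELECT COUNT(*)
              FROM sys.dm_tran_locks
              WHERE request_mode = 'X'
                AND resource_type IN ('OBJECT', 'KEY', 'RID', 'PAGE')";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            for (int attempt = 0; attempt < 60; attempt++)
            {
                int xLocks = (int)cmd.ExecuteScalar();
                if (xLocks == 0) return;   // no exclusive locks left; safe to proceed
                Thread.Sleep(1000);        // still locked; wait a second and retry
            }
            throw new TimeoutException("Exclusive locks did not clear in time.");
        }
    }
}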
You can use SQL Server Profiler to monitor the statements that are occurring, including the begin and end of transactions. There are also some tools from Microsoft Support which are great, since they run Profiler and blocking scripts together. I'm looking to see if I can find these and will update if I do.
If you have an open transaction you should be able to see this in the activity monitor, so you can check if there are any open transactions before you restart the server.
Edit
It sounds like this problem happens at roughly the same time every day, so you will want to turn Profiler on before the problem happens.
I suspect you are doing something wrong in code. Do you have command timeouts set to a value large enough for the commands to do their work, or is an error possibly skipping a COMMIT?
You can inspect what transactions are open by running:
DBCC OPENTRAN
The timeout on your SELECT indicates that the transaction is still open, with a lock on at least part of the table.
How are you doing transactions over web services? How and where in your code are you committing the transaction?
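If an exception can skip the COMMIT, the transaction stays open and keeps its locks until something times out, which matches the symptoms described. Here is a hedged ADO.NET sketch of a pattern that guarantees the transaction ends one way or the other; the table, columns and parameters are made up for illustration:
using System.Data.SqlClient;

static class Transfer
{
    // The using blocks dispose the transaction and connection even when an
    // exception skips the Commit call, so the transaction is rolled back
    // instead of lingering with its locks.
    public static void InsertReading(string connStr, int deviceId, int value)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO dbo.Readings (DeviceId, Value) VALUES (@d, @v)",
                        conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@d", deviceId);
                        cmd.Parameters.AddWithValue("@v", value);
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit();   // reached only if every statement succeeded
                }
                catch
                {
                    tx.Rollback(); // explicit; disposing would also roll back
                    throw;
                }
            }
        }
    }
}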
Doing lots of tests, I found out a deadlock is happening. But I couldn't find the reason, since I'm just inserting many records into a few independent tables.
These links helped a bit, but brought no luck:
http://support.microsoft.com/kb/323630
http://support.microsoft.com/kb/162361
I even broke my transactions into smaller ones, but I still got the deadlock. I finally removed the transactions and changed the code to not delete the records from the source database, and the deadlocks stopped.
As a lesson, I now know that if you have several large transactions executing on the same database at the same time, you are likely to have problems in SQL Server; I don't know whether the same holds for Oracle.
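One thing that did help while I was testing: SQL Server reports the deadlock victim to the client as error 1205, so you can catch it and retry the operation instead of failing the whole transfer. A rough C# sketch (the retry count and backoff are arbitrary choices, not anything prescribed):
using System;
using System.Data.SqlClient;
using System.Threading;

static class DeadlockRetry
{
    // Retry an operation when SQL Server picks it as the deadlock victim
    // (SqlException.Number == 1205); other errors propagate immediately.
    public static void Run(Action operation, int maxRetries = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt <= maxRetries)
            {
                Thread.Sleep(200 * attempt); // brief backoff before retrying
            }
        }
    }
}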
Related
I am developing an app using C# and MySQL (stored procedures). After running the app for a certain time it shows 'Too many connections'. I then used commands like SHOW STATUS WHERE variable_name = 'Threads_connected'; and SHOW PROCESSLIST; to debug the problem. It seems that each time I run any action in my app, MySQL creates a new thread, and the thread is marked as Sleep; moreover, the thread does not close in time. I found one solution, i.e. setting the MySQL server variables as below.
interactive_timeout=180
wait_timeout=180
Does this solution have any impact on the app, given that it automatically kills connections? What happens if fetching data from the database takes a bit longer?
I am expecting huge traffic, about 1000 connections at a time. So what should the max connection number in MySQL be? Will that degrade MySQL's performance?
[Note: There is no problem in my app itself, as I have closed every MySQL connection.]
Thanks in advance.
I hope the article below will help you answer your second point:
http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
In cases where an application doesn't close connections properly, wait_timeout is an important parameter to tune: discarding unused or idle connections minimizes the number of active connections to your MySQL server, and this ultimately helps you avoid the 'Too many connections' error.
Threads_running is a valuable metric to monitor because it doesn't count sleeping threads: it shows the number of queries actively being processed, while the Threads_connected status variable counts all connected threads, including idle connections.
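On the application side, the usual cause of piled-up Sleep threads is connections that are opened but never returned to the pool. Here is a small sketch using MySQL Connector/NET (the connection string is hypothetical); the using blocks close the connection deterministically even if the query throws:
using MySql.Data.MySqlClient;

static class Db
{
    // The using blocks return the connection to the pool even on error,
    // so idle "Sleep" threads don't accumulate on the server.
    public static long CountConnectedThreads(string connStr)
    {
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand(
            "SHOW STATUS WHERE Variable_name = 'Threads_connected'", conn))
        {
            conn.Open();
            using (MySqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                return long.Parse(reader.GetString(1)); // column 1 holds the value
            }
        }
    }
}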
Background - I have a website & a windows scheduled job which are a part of an MSI and get installed on the same server. The website is used by the end-user to create some rules and the job is scheduled to run on a daily basis to create flat files for the rules created by end-user. The actual scenarios are way more complex than explained above.
Problem (with the website) - The website works fine most of the time, but sometimes it just won't load the rule creation page, and the exception being logged is 'query timeout or SQL server not responding'.
Problem (with the job) - The job behaves just like the website and sometimes fails with the same exception: 'query timeout or SQL server not responding'.
What I've tried -
I've added 'Connection Timeout' to the SQL connection string - but it doesn't seem to help with the logging, which would otherwise tell me whether it was a SQL connection timeout or a query timeout.
I've also run the stored procedures which are called by the website & job - and ALL the stored procedures complete well within the business defined timeout of 3600 seconds. The stored procedures actually complete in under a minute.
I've also run SQL Profiler - but the traces didn't help me either; though I could see a lot of transactions, I couldn't identify anything wrong with the server.
What I seek - Are there any other reasons which could cause this? Is there something which I could look for?
Technology - SQL Server 2008 R2, ASP.Net, C#.Net
Restrictions - The code details can't be revealed due to client confidentiality, though I'm open to questions - which I'd try to answer keeping client confidentiality in mind.
Note - There is already a query timeout (3600s) & Connection Timeout (30s) defined in the application config file.
So, I tried a few things here and there and was able to figure out the root cause -
The SQL stored procedure was joining 2 tables from 2 different databases, one of which had a varying number of records that were being updated/inserted by a different (3rd party) job. Since the 3rd party job and my job did not run at the same time, no issue came up due to table locks, but the sheer volume of records caused my job to time out whenever the effective timeout was not enough.
But, as I said, I'd given the business-standard command timeout of 3600 seconds - somehow Enterprise Library was overriding my custom timeout with its own default command timeout of 30s, and hence the C# code would throw an exception even before the stored procedure had finished executing.
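For anyone hitting the same thing: the effective limit is the client-side command timeout, not how long the stored procedure needs. A sketch in plain ADO.NET, shown purely as an illustration (this is not the Enterprise Library API, and the procedure name is made up):
using System.Data;
using System.Data.SqlClient;

static class LongRunningCall
{
    // CommandTimeout defaults to 30 seconds in ADO.NET; a stored procedure
    // that runs longer than that throws a timeout exception on the client
    // even though the server-side work would finish fine.
    public static void Execute(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("dbo.MyLongProc", conn)) // hypothetical proc
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 3600; // match the business-defined 3600 s limit
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}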
What I did - This may be of help for some of us -
I removed the reference of Enterprise Library from the project
Cleaned up my solution and checked into SVN.
Then cleaned up SVN as well.
I didn't build the application after removing Enterprise Library reference - obviously it wouldn't build due to reference errors.
After that, I took a clean checkout and added Enterprise Library again.
Now it seems to work even with varying number of records.
Just had the same problem yesterday. I had a huge query taking 18 sec in SQL Server but timing out in C# even after 200 sec. I rebooted my computer, disconnected the DB and even disconnected the server... nothing changed.
After reading some threads, I noticed a common theme about indexes. So I removed some indexes from my database, put some back, and voilà! Back to normal.
Here's what I think could have happened. While I was running some tests, I probably still had some zombie connections left, and my colleague was creating some tables in the DB at the same time and linking them to tables used in my stored procedure. Even though the newly created tables had nothing to do with the stored procedure, having them linked to the other ones seems to have messed up the indexes. Why did only the C# side fail? My guess is that there is a memory cache in SQL Server that is not accessible when connecting from somewhere other than SQL Server directly.
N.B. In my case, just altering the stored procedure didn't have any effect at all, even though it was a common "solution" in some threads.
Hope this helps if someone has the same problem. If anyone can find a better solution/explanation, please share!!!
Cheers,
I had a similar problem with MS SQL and did not find any particular reason for this unstable behavior. My solution was to have the database statistics updated with
sp_updatestats
every hour.
You can use WITH RECOMPILE in your stored procedure definition to avoid the 'query timeout or SQL server not responding' error: it forces SQL Server to build a fresh execution plan on every call instead of reusing a cached plan that may be poor for the current parameters.
Here's the Microsoft article:
http://technet.microsoft.com/en-us/library/ms190439.aspx
Also see this for reference:
SQL Server: Effects of using 'WITH RECOMPILE' in proc definition?
Sample Code:
CREATE PROCEDURE [dbo].[sp_mystoredproc] (@param1 varchar(20), @param2 int)
WITH RECOMPILE
AS
... proc code ...
I have a long-running transaction performing a lot of delete queries on a database; the issue is that the MySQL connection (to the server on the same machine) is dropped for no apparent reason every now and then.
Currently, my retry logic will detect the disconnection, reconnect, and restart the whole transaction from the beginning, which may never succeed if the connection's "dropping frequency" is too high.
Is it possible at all to reopen a lost connection to continue the transaction?
I am using MySQL Connector for .NET.
What you are asking is not possible for a transaction. A transaction is there to make sure that either every action performed on the database completes, or none of them do.
If your connection-dropping frequency is too high and you have no way to fix it, then you should either run simple queries without a transaction or, better, put fewer actions in each transaction and send a batch of small transactions instead of a single big one.
Also add some data-validation checks to make sure everything is right with the entries.
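As a sketch of the batching idea (the table name, retention window and batch size are all made up), each short DELETE commits on its own, so a dropped connection only costs you the current batch instead of the whole job:
using MySql.Data.MySqlClient;

static class BatchedDelete
{
    // Delete in small autocommitted batches; MySQL's DELETE ... LIMIT keeps
    // each round trip short, and the loop simply stops when nothing is left.
    public static void Purge(string connStr)
    {
        while (true)
        {
            using (var conn = new MySqlConnection(connStr))
            using (var cmd = new MySqlCommand(
                "DELETE FROM old_events WHERE created < NOW() - INTERVAL 30 DAY LIMIT 1000",
                conn))
            {
                conn.Open();
                if (cmd.ExecuteNonQuery() == 0)
                    return; // nothing left to delete
            }
        }
    }
}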
Theoretically you could do exactly what you need with XA transactions... but MySQL's limitations are rather drastic and, to be honest, make XA transactions on MySQL a joke: both resume & join on start, and end with suspend, have not worked since 2006, when this was first released. So, to answer your question: no! No chance with MySQL, forget it. Try increasing timeouts (both on client and server), memory pools, optimizing the queries, etc. ... MySQL won't help you here.
I currently have a working program that is monitoring a few SQL tables and transferring data to MySQL tables. Essentially, I have a loop that checks every 30 seconds. My main concern is that I currently need to close and open the connection every time I loop. The reason is that I was getting errors about multiple transactions. When I close my connections, I also need to dispose them. I thought that disposing the transactions would have solved this problem, but I was still getting errors about multiple transactions.
This all seems to be working fine but I was wondering if there was a better way to do this without closing the connection.
I am not sure about your errors, but it seems that you have to increase the number of connections to the remote computer. Have a look here: http://msdn.microsoft.com/en-us/library/system.net.configuration.connectionmanagementelement.maxconnection.aspx
You can also try to use only one connection to run multiple SQL statements.
If that doesn't help, please provide your code so we can check it...
Were you committing your transactions in your loop? transaction.Commit() ... that could have been the issue... hard to say with no code. There is no need to worry about opening and closing connections anyway, since ADO.NET uses connection pooling behind the scenes: you only actually 'open' a connection the first time; after that it is kept open in the pool to be used again. As others have said, though, post some code!
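To make both points concrete, here is a hedged sketch of such a 30-second loop (the SQL statement is a placeholder): the transaction is committed and disposed inside every iteration, and reopening the connection each time is cheap because ADO.NET hands it back from the pool:
using System;
using System.Data.SqlClient;
using System.Threading;

static class PollingLoop
{
    // Per-iteration using blocks dispose the transaction and return the
    // connection to the pool, so no transaction ever outlives one pass.
    public static void Run(string connStr)
    {
        while (true)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (SqlTransaction tx = conn.BeginTransaction())
                using (var cmd = new SqlCommand(
                    "UPDATE dbo.SyncState SET LastRun = GETDATE()", conn, tx)) // placeholder SQL
                {
                    cmd.ExecuteNonQuery();
                    tx.Commit(); // commit inside the loop: the usual missing piece
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}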
I am experiencing the exact same issue as a user reports on eggheadcafe, but don't know what steps to take after reading the following answer:
Two problems you should chase down:
1. Why is the website leaking resources to the finalizers? That is bad.
2. What is the Oracle code waiting on? Work with Oracle's support on it.
This is the issue:
I have an intermittent problem with a web site hosted on IIS6 (w2k3 sp2). It appears to occur randomly to users when they click on a hyperlink within a page. The request is sent to the web server but a response is never returned. If the user tries to navigate to another hyperlink they are not able to (i.e. the web site appears to hang for that user). Other users of the website at the time are not affected by this hang, and if the user with the problem opens a new HTTP session (closing IE and opening the web site again) they no longer experience the hang.
I've placed a debugger (IISState) on the w3wp process with the following output. Entries with "Thread is waiting for a lock to be released. Looking for lock owner." look like they might be causing the issue. Can anyone tell what lock the process is waiting on?
Thanks
http://www.eggheadcafe.com/software/aspnet/33799697/session-hangs.aspx
In my case my .Net C# MVC application runs against a MySQL database for data and a MS SQL database for .Net membership.
I hope someone with more knowledge of IIS can help resolve this problem.
It sounds like you have a race condition in your database calls resulting in a deadlock at the database level. You may want to look at the settings you have in your application pool for database connections. Likely you will need to put some checks in somewhere or redefine procedures in order to reduce the likelihood of the race:
http://msdn.microsoft.com/en-us/library/ms178104.aspx
I would explain the experienced hang by session serialization. Not the part about saving/loading it from some source, but the fact that ASP.NET does not allow the same session to execute two parallel pages simultaneously, unless they execute with a read-only session. The latter is done either in the page directive (<%@ Page EnableSessionState="ReadOnly" %>) or in web.config, by setting EnableSessionState="ReadOnly".
Your problem still exists though; this won't change the fact that the first thread hangs. I would verify that your database connections are disposed correctly. However, you never mention any Oracle database in your question (only MySQL and SQL Server), so why are you using the Oracle drivers at all? (This seems like a valid place to start debugging.)
However, as stated by David Wang in his answer in your linked question, part two of your problem is a lock that's never released. You'll need support from Oracle (or their source code) to debug this further.
An IIS hang is not something surprising. IISState is out of date; you may use Debug Diag instead:
http://support.microsoft.com/kb/919791 (if CPU usage is high)
http://support.microsoft.com/kb/919792 (otherwise)
The hang dumps should tell you what is the root cause.
Microsoft support can help analyze the dumps, if you are not familiar with the tricks. http://support.microsoft.com