ALTER TABLE NOCHECK CONSTRAINT times out randomly - C#

I have a C# application that performs an ETL process. For a self-referencing table, the application runs "ALTER TABLE [tableName] NOCHECK CONSTRAINT [constraintName]", which turns off the FK constraint check(s) on that table. Once all the data is loaded, the constraint(s) are enabled again.
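For reference, a minimal sketch of the disable/re-enable pattern (the table and constraint names here are placeholders):

-- Disable FK checking before the load (placeholder names).
ALTER TABLE [dbo].[FactTable] NOCHECK CONSTRAINT [FK_FactTable_DateDim];

-- ... bulk load runs here ...

-- Re-enable and re-validate once the load completes; WITH CHECK makes
-- SQL Server validate existing rows so the constraint is trusted again.
ALTER TABLE [dbo].[FactTable] WITH CHECK CHECK CONSTRAINT [FK_FactTable_DateDim];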
The database timeout is set to 3 minutes; however, the above SQL command fails because it times out after only 30 seconds.
What could be the cause of this timeout?
Are there database system tables I should check for abnormality?
Other information:
I checked the app; it only has one active thread doing the ETL, so I don't think the application itself is locking any database resource. In addition, the database runs on the same machine as the application.
Even after the application closes all its database connections, the command times out again the next time the ETL process runs. If I run the SQL manually in SQL Server Management Studio, it has no problem at all.
Thanks
UPDATE - The application turns off a number of constraints. It turns out the timeout only happens for one particular constraint, which references the Date Dimension table.
UPDATE - It looks like there is some odd abnormality in the testing database I was working with. I tried the same ETL process against another data warehouse and it has had no problem so far. Other developers on the team also haven't encountered this issue. The application runs every midnight, so I will keep it running overnight and hopefully reproduce the same issue on other databases. So far, no luck figuring out what is going on.

Altering a table requires an exclusive lock on the table. If another process is reading from or writing to the table in question, the schema change can't take place until that process releases its lock.
When the ALTER TABLE runs long, run sp_who2 from a different connection and see whether any session is blocking your ETL connection. You can then look at the input buffer for that session to determine what it's doing.
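For example, a quick diagnostic sketch to run from a second connection while the statement is hanging (the SPID 65 is a placeholder for whatever shows up in the BlkBy column):

-- Look at the BlkBy column for the row belonging to your ETL session.
EXEC sp_who2;

-- Show the last statement sent by the blocking session (65 is a placeholder SPID).
DBCC INPUTBUFFER (65);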

Related

MS Sync Framework occasionally misses an object to sync/incomplete syncs [duplicate]

SQL Server Application - Resetting database to original state

Background
I need to write about 120 integration tests in C# for a C#/SQL Server application. Before any test runs, the database will already exist; it is set up by a large number of scripts (about 20 minutes of running time). When I run my tests, a handful of tables are modified by CRUD operations: for example, rows are added in 10-11 tables, rows are updated in 15-16 tables, and rows are deleted in 4-5 tables.
Problem
After every test is run, the database needs to be reset to its original state. How can I achieve that?
Bad Solution
After every test run, re-run the database creation scripts (20 minutes of running time). Since there will be around 120 tests, this comes to about 40 hours, which is not an option. Secondly, there is a process that keeps several connections open against this database, so the database cannot be dropped and re-created.
Good Solution?
I would like to know if there is any other way of solving this problem. Another problem is that, for each of those tests, I don't even know which tables will be updated, so if I were to revert the database to its original state manually by writing queries, I would have to go and check which tables were changed anyway.
You should take a look at the database snapshot feature in SQL Server. Reverting to a snapshot is potentially a lot faster than restoring a backup or recreating the database.
Managing a test database
In a testing environment, it can be useful when repeatedly running a test protocol for the database to contain identical data at the start of each round of testing. Before running the first round, an application developer or tester can create a database snapshot on the test database. After each test run, the database can be quickly returned to its prior state by reverting the database snapshot.
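A minimal T-SQL sketch of the create/revert cycle (the database name, logical file name, and snapshot path are placeholders):

-- Create a snapshot before the test run. NAME must match the logical
-- name of the source database's data file.
CREATE DATABASE TestDb_Snapshot
ON (NAME = TestDb, FILENAME = 'C:\Snapshots\TestDb_Snapshot.ss')
AS SNAPSHOT OF TestDb;

-- After each test, revert the database to the snapshot's state.
-- This requires exclusive access, and only this one snapshot may exist on the database.
RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_Snapshot';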

SQLite loses its data

I created a .NET C# application that uses a SQLite 3.0 database.
In the software, data is inserted roughly every minute, every day.
But very rarely (about once a month), it loses data as if the rows had never been written.
All my tables have identity columns.
To track the loss, I write the insertion time of each new row in every table.
When the loss occurred, I observed that about an hour's worth of rows was missing, but the identity values were not skipped and continued without gaps.
I checked my transactions and they look fine.
There is one connection, created and opened when the program starts.
That connection is used for the whole runtime without being closed and reopened. At runtime, there are many DB actions such as insert, update, delete, and select.
Can it be a reason for the loss?
Should I open and close a connection for every DB action?
Since your identity values aren't being skipped, it is more likely that you are not writing to the database in the first place than that data is being lost afterwards.
I've never had a problem leaving a connection open for long periods, though (locally, of course). I would scrutinize what's writing to the database.

How to make sure synchronization using the Microsoft Sync Framework was successful?

I am using the Microsoft Sync Framework to synchronize a table on two Microsoft SQL Servers. I have created a test application which generates one row per second in the table on the remote server. The application making use of the Sync Framework runs on the local server. The test application created about 52000 entries in the database over one night. The syncing application executed a call to the SyncOrchestrator.Synchronize method every 15 seconds.
When I checked the outcome of the synchronization by executing a count statement on the synchronized table and the remote table, the result was that 295 rows were missing in the synchronized table. I used the tablediff utility to determine the Ids of the missing rows and then queried the tracking table with those Ids. On the remote database, there is an entry for every single missing Id in the tracking table, whereas on the local database the Ids of the missing rows are nowhere to be found in the tracking table.
When I restart the synchronization application, the missing entries don't get updated either. I thought the Sync Framework took care of these inconsistencies automatically, but unfortunately I seem to be wrong.
Is there any built-in method I can use to verify that the synchronization process has taken place successfully? Is there another way of verifying data consistency?
There's a known issue with change enumeration when DML statements and synchronization run concurrently. Have a look at this hotfix to see if it helps.
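As a manual cross-check outside the Sync Framework itself, you can also compare the two copies directly. A hedged T-SQL sketch, assuming a linked server named [RemoteServer], a remote database RemoteDb, a local database LocalDb, and a synced table dbo.MyTable with an Id key (all placeholder names):

-- Ids present on the remote server but missing locally.
SELECT Id
FROM [RemoteServer].[RemoteDb].dbo.MyTable
EXCEPT
SELECT Id
FROM [LocalDb].dbo.MyTable;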

Transaction commit executes successfully but the changes don't persist

I've encountered a strange problem in SQL Server.
I have a Pocket PC application which connects to a web service, which in turn connects to a database and inserts lots of data. The web service opens a transaction for each Pocket PC that connects to it. Every day at 12 P.M., 15 to 20 people with different Pocket PCs connect to the web service simultaneously and finish the transfer successfully.
But after that, one open transaction remains (visible in Activity Monitor), holding about 4000 exclusive locks. After a few hours, they vanish (probably something times out) and some of the transferred data is deleted. Is there a way I can prevent these locks from happening? Or recognize them programmatically and wait for them to be released?
Thanks a lot.
You could run sp_lock and check whether there are any exclusive locks held on the tables you're interested in. That will tell you the SPID of the offending connection, and you can use sp_who or sp_who2 to find more information about that SPID.
Alternatively, the Activity Monitor in Management Studio will give you a graphical version of this information and will also allow you to kill any offending process (the KILL command will let you do the same from a query editor).
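A short T-SQL sketch of that workflow (the SPID 53 is a placeholder for whatever session turns out to be holding the locks):

-- List current locks; the Mode column shows 'X' for exclusive locks.
EXEC sp_lock;

-- The DMV equivalent, filtered to exclusive lock requests.
SELECT request_session_id, resource_type, resource_database_id, request_mode
FROM sys.dm_tran_locks
WHERE request_mode = 'X';

-- Inspect the offending session, then kill it as a last resort
-- (KILL rolls back that session's open transaction).
EXEC sp_who2 53;
KILL 53;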
You can use SQL Server Profiler to monitor the statements that are occurring, including the beginning and end of transactions. There are also some tools from Microsoft Support which are great, since they run Profiler and blocking scripts together. I'm looking to see if I can find these and will update if I do.
If you have an open transaction, you should be able to see it in the Activity Monitor, so you can check whether there are any open transactions before you restart the server.
Edit
It sounds like this problem happens at roughly the same time every day, so you will want to turn the trace on before the problem happens.
I suspect something is wrong in the code: are the command timeouts set to a large enough value for the work to complete, or is an error possibly skipping a COMMIT?
You can inspect what transactions are open by running:
DBCC OPENTRAN
The timeout on your select indicates that the transaction is still open, with a lock on at least part of the table.
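A slightly fuller sketch of the same check (the database name is a placeholder):

-- Report the oldest active transaction in the current database...
DBCC OPENTRAN;

-- ...or in an explicitly named database.
DBCC OPENTRAN ('MyDatabase');

-- The output includes the SPID and start time of the oldest open
-- transaction; sp_who2 on that SPID shows which connection owns it.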
How are you doing transactions over web services? How and where in your code are you committing the transaction?
After doing lots of tests, I found out that a deadlock was happening, but I couldn't find the reason, since I'm just inserting many records into a few independent tables.
These links helped a bit, but to no avail:
http://support.microsoft.com/kb/323630
http://support.microsoft.com/kb/162361
I even broke my transactions into smaller ones, but I still got the deadlock. I finally removed the transactions and changed the code to not delete the rows from the source database, and I didn't get the deadlocks anymore.
As a lesson, I now know that if you have several large transactions executing against the same database at the same time, you are likely to run into problems in SQL Server; I don't know about Oracle.
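One common mitigation, which the poster did not mention trying, is to retry the batch when it is chosen as the deadlock victim (error 1205). A minimal T-SQL sketch with placeholder table and column names (THROW requires SQL Server 2012 or later):

DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- Placeholder work: copy rows between two example tables.
        INSERT INTO dbo.TargetTable (Col1)
        SELECT Col1 FROM dbo.SourceTable;
        COMMIT TRANSACTION;
        SET @retry = 0;                -- success, stop looping
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retry > 1
            SET @retry = @retry - 1;   -- deadlock victim: try again
        ELSE
            THROW;                     -- out of retries, or a different error
    END CATCH;
END;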
