SQL Server CE not picking up updates from another process? - c#

I've got two processes with connections to the same SQL CE .sdf database file. One inserts items into a table and the other reads all the records from the table. After the insert I can confirm the rows are there with the Server Explorer, but my query from the second process does not show them:
this.traceMessages.Clear();
SqlCeCommand command = new SqlCeCommand("SELECT AppName, Message, TraceId FROM Messages", this.connection);
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        // SqlCeDataReader.GetString takes a column ordinal, not a name.
        this.traceMessages.Add(
            new TraceMessage
            {
                AppName = reader.GetString(reader.GetOrdinal("AppName")),
                Message = reader.GetString(reader.GetOrdinal("Message")),
                TraceId = reader.GetString(reader.GetOrdinal("TraceId"))
            });
    }
}
It generally loads correctly the first time but doesn't pick up updates, even after restarting the process. The connection string just has a simple Data Source that I've confirmed points to the same file in both processes.
Anyone know why this is happening? Is there some setting I can enable to get updates from separate processes to work?

This is because, unlike "traditional" databases, the data you write is not flushed to disk immediately; the flush is deferred and happens some time later.
You have two choices in the writing program:
1) Add the Flush Interval parameter to your connection string and set it to 1. This will have a lag of up to a second before the data is flushed to the sdf.
2) When you call Commit, use the parameterized overload that allows you to specify CommitMode.Immediate. This will flush data to disk immediately.
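For illustration, a rough sketch of both options in one place (untested; the table and column names are borrowed from the question, the file path and values are made up):

using System.Data.SqlServerCe;

// Option 1: ask the engine to flush at most one second after a write.
var connection = new SqlCeConnection(@"Data Source=C:\data\Messages.sdf;Flush Interval=1");
connection.Open();

// Option 2: flush explicitly when committing a transaction.
using (SqlCeTransaction transaction = connection.BeginTransaction())
using (SqlCeCommand command = connection.CreateCommand())
{
    command.Transaction = transaction;
    command.CommandText = "INSERT INTO Messages (AppName, Message, TraceId) VALUES (@app, @msg, @id)";
    command.Parameters.AddWithValue("@app", "WriterProcess");
    command.Parameters.AddWithValue("@msg", "hello");
    command.Parameters.AddWithValue("@id", "42");
    command.ExecuteNonQuery();
    transaction.Commit(CommitMode.Immediate); // forces the data into the .sdf right away
}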

Related

How can I tell if a SQLite update/transaction was prevented due to the database being locked by another write?

Like the title says, my SQLite query is below. I am using C# and writing an application. I need to know if the query below was prevented from writing to the database because the database was locked by another write, since I understand SQLite only allows one write at a time. Do the others just fail? Or do they get queued somehow and wait for the lock to be released before they begin?
using (var cmd = SQLite_Connection.CreateCommand())
using (var SQLite_Transaction = SQLite_Connection.BeginTransaction())
{
    cmd.Transaction = SQLite_Transaction;
    cmd.CommandText = "UPDATE testDb.testTable SET testAge = 5 WHERE testName = 'Joseph'";
    cmd.ExecuteNonQuery();
    SQLite_Transaction.Commit();
}
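For what it's worth: a writer that cannot obtain the lock fails with SQLITE_BUSY ("database is locked") once any configured busy timeout expires; it is not queued indefinitely. A rough sketch of detecting that with System.Data.SQLite (member names may differ in other wrappers):

SQLite_Connection.BusyTimeout = 5000; // retry for up to 5 seconds before giving up
try
{
    using (var cmd = SQLite_Connection.CreateCommand())
    using (var SQLite_Transaction = SQLite_Connection.BeginTransaction())
    {
        cmd.Transaction = SQLite_Transaction;
        cmd.CommandText = "UPDATE testDb.testTable SET testAge = 5 WHERE testName = 'Joseph'";
        cmd.ExecuteNonQuery();
        SQLite_Transaction.Commit();
    }
}
catch (SQLiteException ex)
{
    if (ex.ResultCode == SQLiteErrorCode.Busy)
    {
        // The write was blocked past the busy timeout: it failed, it was not queued.
    }
}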

Get the execution time of an ADO.NET SQL Command

I have been searching to find out if there is an easy way to get the execution time of an ADO.NET command object.
I know I can manually start and stop a Stopwatch, but I wanted to know if there is an easier way to do it in ADO.NET.
There is a way, but it uses the SqlConnection object, not the command object. Example:
using (var c = new SqlConnection(connectionString))
{
    // important
    c.StatisticsEnabled = true;
    c.Open();
    using (var cmd = new SqlCommand("select * from Error", c))
    {
        cmd.ExecuteReader().Dispose();
    }
    var stats = c.RetrieveStatistics();
    var firstCommandExecutionTimeInMs = (long)stats["ExecutionTime"];
    // reset for next command
    c.ResetStatistics();
    using (var cmd = new SqlCommand("select * from Code", c))
    {
        cmd.ExecuteReader().Dispose();
    }
    stats = c.RetrieveStatistics();
    var secondCommandExecutionTimeInMs = (long)stats["ExecutionTime"];
}
Here you can find the other values contained in the dictionary returned by RetrieveStatistics.
Note that those values are client-side statistics (essentially, the internals of ADO.NET measure them), but since you asked for an analog of Stopwatch, I think that's fine.
The approach in @Evk's answer is very interesting and smart: it works client-side, and one of the main keys of such statistics is in fact NetworkServerTime, which
"Returns the cumulative amount of time (in milliseconds) that the provider spent waiting for replies from the server once the application has started using the provider and has enabled statistics."
so it includes the network time from the DB server to the ADO.NET client.
An alternative, more DB server oriented, would be running SET STATISTICS TIME ON and then retrieve the InfoMessage.
A draft of the delegate code (here I'm simply writing to the debug console, but you may want to replace that with a StringBuilder Append):
internal static void TrackInfo(object sender, SqlInfoMessageEventArgs e)
{
    Debug.WriteLine(e.Message);
    foreach (var element in e.Errors)
    {
        Debug.WriteLine(element.ToString());
    }
}
and the usage:
conn.InfoMessage += TrackInfo;
using (var cmd = new SqlCommand(@"SET STATISTICS TIME ON", conn))
{
    cmd.ExecuteNonQuery();
}
using (var cmd = new SqlCommand(yourQuery, conn))
using (var RD = cmd.ExecuteReader())
{
    while (RD.Read())
    {
        // read the columns
    }
}
I suggest you move to SQL Server 2016 and use the Query Store feature. It tracks execution time and performance changes over time for each query you submit, requires no changes to your application, and covers all queries (including those executed inside stored procedures) from any application, not only your own. It is available in all editions, including Express, and in the Azure SQL DB service.
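For example, a minimal sketch of turning it on and reading back the captured timings (assuming SQL Server 2016+ and a database named MyDb; the views are the documented Query Store catalog views):

using System;
using System.Data.SqlClient;

class QueryStoreSketch
{
    static void Main()
    {
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
        {
            conn.Open();

            // Enable Query Store once per database; no application changes needed.
            using (var cmd = new SqlCommand("ALTER DATABASE MyDb SET QUERY_STORE = ON", conn))
            {
                cmd.ExecuteNonQuery();
            }

            // avg_duration is reported in microseconds per query plan.
            const string sql = @"
SELECT TOP 10 qt.query_sql_text, rs.avg_duration
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC";
            using (var cmd = new SqlCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0:F0} us  {1}", reader.GetDouble(1), reader.GetString(0));
                }
            }
        }
    }
}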
If you track on the client side, you must measure the time yourself, using a wall clock. I would add and expose performance counters and then use the performance counters infrastructure to capture and store the measurements.
As a side note, simply tracking the execution time of a batch sent to SQL Server yields very coarse performance info and is seldom actionable. Read How to analyse SQL Server performance.

ORA-01000: maximum open cursors exceeded error

I have the below code:
using (System.Data.OracleClient.OracleConnection dataConn = new System.Data.OracleClient.OracleConnection(_connectionString))
{
    using (System.Data.OracleClient.OracleCommand cmd = new System.Data.OracleClient.OracleCommand())
    {
        cmd.Connection = dataConn;
        cmd.CommandText = "DELETE FROM Employees WHERE LOCATIONID = :LOCATIONID";
        cmd.Parameters.AddWithValue(":LOCATIONID", locationId);
        dataConn.Open();
        retVal += cmd.ExecuteNonQuery();
        dataConn.Close();
    }
    using (System.Data.OracleClient.OracleCommand cmd = new System.Data.OracleClient.OracleCommand())
    {
        cmd.Connection = dataConn;
        cmd.CommandText = "DELETE FROM Locations WHERE LocationId = :LOCATIONID";
        cmd.Parameters.AddWithValue(":LOCATIONID", locationId);
        dataConn.Open();
        retVal += cmd.ExecuteNonQuery();
        dataConn.Close();
    }
}
Just FYI:
- I am calling the above block in a loop of, say, 50 iterations, passing a new locationId in each iteration.
- The first query in each iteration potentially deletes around 500 records on average, as one location is assigned to 500+ employees.
As per this link, I think I am doing things correctly. Can anyone please point out why I am still getting the ORA-01000: maximum open cursors exceeded error?
Any help will be highly appreciated.
Thanks.
As per the accepted answer in the link in your post (ORA-01000: maximum open cursors exceeded in asp.net), when you call dataConn.Close(), the connection is not really closed but is left open in the connection pool. This is a hidden optimisation that makes it faster to open subsequent connections, but it can cause problems with Oracle when you exceed some of its limits. I suggest you investigate ways of limiting the size of the connection pool; this depends on what is hosting your code (IIS? Something else?).
You could also change your SQL to "DELETE FROM TABLE WHERE key IN (...list of values...)". This would remove the need to open 50 logical connections (and who knows how many physical connections; potentially lots).
Or do the looping within dataConn.Open()...dataConn.Close(), i.e. use the same open connection for all the commands, as sketched below.
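A rough sketch of that last option (locationIds and retVal are stand-ins for the question's loop variables):

using (var dataConn = new System.Data.OracleClient.OracleConnection(_connectionString))
{
    dataConn.Open(); // one physical open for all 50 iterations
    int retVal = 0;
    foreach (int locationId in locationIds) // hypothetical collection of ids
    {
        using (var cmd = dataConn.CreateCommand())
        {
            cmd.CommandText = "DELETE FROM Employees WHERE LOCATIONID = :LOCATIONID";
            cmd.Parameters.AddWithValue(":LOCATIONID", locationId);
            retVal += cmd.ExecuteNonQuery();
        }
        using (var cmd = dataConn.CreateCommand())
        {
            cmd.CommandText = "DELETE FROM Locations WHERE LocationId = :LOCATIONID";
            cmd.Parameters.AddWithValue(":LOCATIONID", locationId);
            retVal += cmd.ExecuteNonQuery();
        }
    }
    dataConn.Close();
}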
Edit: depending on which data provider you are using, the size of the connection pool may be controllable from within the connection string. See https://msdn.microsoft.com/en-us/library/ms254502(v=vs.110).aspx for an example.

Slow opening SQLite connection in C# app using System.Data.SQLite

Edit 3:
I guess my issue is resolved for the moment... I changed both my service and test app to run as the SYSTEM account instead of the NetworkService account. It remains to be seen if the benefits of changing the user account will persist, or if it will only be temporary.
Original Question:
I've noticed that my small 224kB SQLite DB is very slow to open in my C# application, taking anywhere from a few milliseconds to 1.5 seconds or more. Below is my code, with all the extra debugging statements I've added this afternoon. I've narrowed it down to the call to cnn.Open(), as shown in the logs here:
2014-03-27 15:05:39,864 DEBUG - Creating SQLiteConnection...
2014-03-27 15:05:39,927 DEBUG - SQLiteConnection Created!
2014-03-27 15:05:39,927 DEBUG - SQLiteConnection Opening...
2014-03-27 15:05:41,627 DEBUG - SQLiteConnection Opened!
2014-03-27 15:05:41,627 DEBUG - SQLiteCommand Creating...
2014-03-27 15:05:41,627 DEBUG - SQLiteCommand Created!
2014-03-27 15:05:41,627 DEBUG - SQLiteCommand executing reader...
2014-03-27 15:05:41,658 DEBUG - SQLiteCommand executed reader!
2014-03-27 15:05:41,658 DEBUG - DataTable Loading...
2014-03-27 15:05:41,767 DEBUG - DataTable Loaded!
As you can see, in this instance it took 1.7 SECONDS to open the connection. I've tried repeating this, and cannot predict whether subsequent connections will open nearly immediately, or be delayed like this.
I've considered using some form of connection pooling, but is it worthwhile to pursue that for a single-instance single-threaded application? Right now, I'm creating an instance of my SQLiteDatabase class, and calling the below function for each of my queries.
public DataTable GetDataTable(string sql)
{
    DataTable dt = new DataTable();
    try
    {
        Logging.LogDebug("Creating SQLiteConnection...");
        using (SQLiteConnection cnn = new SQLiteConnection(dbConnection))
        {
            Logging.LogDebug("SQLiteConnection Created!");
            Logging.LogDebug("SQLiteConnection Opening...");
            cnn.Open();
            Logging.LogDebug("SQLiteConnection Opened!");
            Logging.LogDebug("SQLiteCommand Creating...");
            using (SQLiteCommand mycommand = new SQLiteCommand(cnn))
            {
                Logging.LogDebug("SQLiteCommand Created!");
                mycommand.CommandText = sql;
                Logging.LogDebug("SQLiteCommand executing reader...");
                using (SQLiteDataReader reader = mycommand.ExecuteReader())
                {
                    Logging.LogDebug("SQLiteCommand executed reader!");
                    Logging.LogDebug("DataTable Loading...");
                    dt.Load(reader);
                    Logging.LogDebug("DataTable Loaded!");
                    reader.Close();
                }
            }
            cnn.Close();
        }
    }
    catch (Exception e)
    {
        throw new Exception(e.Message);
    }
    return dt;
}
Edit:
Sure, dbConnection is the connection string, set by the following function. inputFile is just the string path of the filename to open.
public SqLiteDatabase(String inputFile)
{
    dbConnection = String.Format("Data Source={0}", inputFile);
}
And at this point, I think sql is irrelevant, as it's not making it to that point when the cnn.Open() stalls.
Edit 2:
Ok, I've done some more testing. Running the testing locally, it completes a 1000 iteration loop in ~5 seconds, for about 5ms per call to cnn.Open(). Running the test from the same windows installer that I did on my local PC, it completes in ~25 minutes, averaging 1468ms per call to cnn.Open().
I made a small test program that only calls the TestOpenConn() function from the service program (same exact code that is running in the Windows service), running against a copy of the file located in a test directory. Running this on the server or my local PC results in acceptable performance (1.95ms per call on the server, 4ms per call on my local PC):
namespace EGC_Timing_Test
{
    class Program
    {
        static void Main(string[] args)
        {
            Logging.Init("log4net.xml", "test.log");
            var db = new SqLiteDatabase("config.sqlite");
            db.TestOpenConn();
        }
    }
}
Here's the test function:
public void TestOpenConn()
{
    // TODO: Remove this after testing loop of opening / closing SQLite DB repeatedly:
    const int iterations = 1000;
    Logging.LogDebug(String.Format("Running TestOpenConn for {0} opens...", iterations));
    var startTime = DateTime.Now;
    for (var i = 0; i < iterations; i++)
    {
        using (SQLiteConnection cnn = new SQLiteConnection(dbConnection))
        {
            Logging.LogDebug(String.Format("SQLiteConnection Opening, iteration {0} of {1}...", i, iterations));
            var startTimeInner = DateTime.Now;
            cnn.Open();
            var endTimeInner = DateTime.Now;
            var diffTimeInner = endTimeInner - startTimeInner;
            Logging.LogDebug(String.Format("SQLiteConnection Opened in {0}ms!", diffTimeInner.TotalMilliseconds));
            cnn.Close();
        }
    }
    var endTime = DateTime.Now;
    var diffTime = endTime - startTime;
    Logging.LogDebug(String.Format("Done running TestOpenConn for {0} opens!", iterations));
    Logging.LogInfo(String.Format("{0} iterations total:\t{1}", iterations, diffTime));
    Logging.LogInfo(String.Format("{0} iterations average:\t{1}ms", iterations, diffTime.TotalMilliseconds / iterations));
}
I'm assuming you're using the open source System.Data.SQLite library.
If that's the case, it's easy to see through the Visual Studio Performance Profiler that the Open method of the SQLiteConnection class has some serious performance issues.
Also, have a look through the source code for this class here: https://system.data.sqlite.org/index.html/artifact/97648754af51ffd6
There's an awful lot of disk access being made to read XML configuration and Windows environment variable(s).
My suggestion is to call Open() as seldom as possible and to keep a reference to the open SQLiteConnection object in memory, as sketched below.
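A minimal sketch of that idea against the asker's class (members beyond the original are illustrative; note that a single shared connection is not thread-safe without extra locking):

using System;
using System.Data.SQLite;

public class SqLiteDatabase
{
    private readonly string dbConnection;
    private SQLiteConnection sharedConnection; // kept open for the object's lifetime

    public SqLiteDatabase(String inputFile)
    {
        dbConnection = String.Format("Data Source={0}", inputFile);
    }

    private SQLiteConnection GetOpenConnection()
    {
        // Pay the expensive Open() once instead of on every query.
        if (sharedConnection == null)
        {
            sharedConnection = new SQLiteConnection(dbConnection);
            sharedConnection.Open();
        }
        return sharedConnection;
    }
}

GetDataTable would then use GetOpenConnection() instead of creating and opening a new SQLiteConnection on every call.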
A performance ticket has been raised on the SQLite forum.
Having had the same problem, I was looking into this, and it seems to be related to permissions on the file or its parent folders: who created it, and/or how it was created. In my case, the SQLite database file was being created by a script run as a regular user, and then an IIS-hosted service would access the file under a different domain service account.
Every time the service opened a connection, it took over 1.5 seconds, but otherwise operated correctly (it could eventually access the file). A stand-alone program running as the regular user could open a connection to the same file in the same place in a few milliseconds.
Analysis of a procmon trace revealed that in the case of the service, we were getting several ACCESS DENIED logs on the file over the course of about 1.5 seconds, that were not present in the trace when running as the regular user.
Not sure what's going on there. The service worked fine and was able to eventually query the data in the file, albeit slowly.
When we made the service account the owner of the parent folder of the file and gave it write permission, the ACCESS DENIED logs disappeared and the service operated at full speed.
You can add "Modify" permissions of appropriate user to folder with your database.
Right Click on folder > Properties > Security > Edit > Add (I added IIS_Users) > Select "Modify" checkbox > OK
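If you prefer to script it, a hedged equivalent of those Explorer steps using System.Security.AccessControl (the account name and path are placeholders):

using System.IO;
using System.Security.AccessControl;

var folder = new DirectoryInfo(@"C:\path\to\database\folder"); // placeholder path
DirectorySecurity security = folder.GetAccessControl();
security.AddAccessRule(new FileSystemAccessRule(
    "IIS_Users",                 // the account that actually opens the database
    FileSystemRights.Modify,
    InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
    PropagationFlags.None,
    AccessControlType.Allow));
folder.SetAccessControl(security);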

MSDTC Exception during Transaction: C#

I am getting an MSDTC exception in a transaction in a C# application. The functionality is to upload one lakh (one hundred thousand) zip code records into database tables after reading them from a CSV file. This operation is done in around 20 batch database operations (each batch containing 5,000 records). The functionality works fine if I don't use a transaction.
The interesting part is that other functionalities that use transactions are able to complete their transactions. This leads me to suspect that the exception message is a misleading one.
Any thoughts on what could be the issue?
Exception: “Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.”
Source: System.Transactions
Inner Exception: “The transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D024)”
Note: There is a for loop inside the transaction. Is it causing any issue?
The actual requirement is: there are some existing zip codes in the zipcode table. Each month the administrator uploads the new zip code CSV file. The new items from the CSV get inserted. Zip codes that are not in the CSV (but are present in the database) are considered retired and are to be deleted. The list of retired zip codes is to be returned to the user interface. The newly added zip codes also need to be returned.
private void ProcessZipCodes(StringBuilder dataStringToProcess, int UserID)
{
    int CountOfUnchangedZipCode = 0;
    string strRetiredZipCode = "";
    string strNewZipCode = "";
    dataStringToProcess.Remove(dataStringToProcess.Length - 1, 1);
    if (dataStringToProcess.Length > 0)
    {
        List<string> batchDataStringList = GetDataStringInBatches(dataStringToProcess);
        // TimeSpan.FromMinutes(0) - to make the transaction scope timeout infinite.
        using (TransactionScope transaction = TransactionScopeFactory.GetTransactionScope(TimeSpan.FromMinutes(0)))
        {
            foreach (string dataString in batchDataStringList)
            {
                PerformDatabaseOperation(dataString, UserID);
            }
            transaction.Complete();
        }
    }
}

private List<string> GetDataStringInBatches(StringBuilder dataStringToProcess)
{
    List<string> batchDataStringList = new List<string>();
    int loopCounter = 0;
    string currentBatchString = string.Empty;
    int numberOfRecordsinBacth = 5000;
    int sizeOfTheBatch = 0;
    List<string> individualEntriesList = new List<string>();
    string dataString = string.Empty;
    if (dataStringToProcess != null)
    {
        dataString = dataStringToProcess.ToString();
    }
    individualEntriesList.AddRange(dataString.Split(new char[] { '|' }));
    for (loopCounter = 0; loopCounter < individualEntriesList.Count; loopCounter++)
    {
        if (String.IsNullOrEmpty(currentBatchString))
        {
            currentBatchString = System.Convert.ToString(individualEntriesList[loopCounter]);
        }
        else
        {
            currentBatchString = currentBatchString + "|" + System.Convert.ToString(individualEntriesList[loopCounter]);
        }
        sizeOfTheBatch = sizeOfTheBatch + 1;
        if (sizeOfTheBatch == numberOfRecordsinBacth)
        {
            batchDataStringList.Add(currentBatchString);
            sizeOfTheBatch = 0;
            currentBatchString = String.Empty;
        }
    }
    return batchDataStringList;
}

private void PerformDatabaseOperation(string dataStringToProcess, int UserID)
{
    SqlConnection mySqlConnection = new SqlConnection("data source=myServer;initial catalog=myDB; Integrated Security=SSPI; Connection Timeout=0");
    SqlCommand mySqlCommand = new SqlCommand("aspInsertUSAZipCode", mySqlConnection);
    mySqlCommand.CommandType = CommandType.StoredProcedure;
    mySqlCommand.Parameters.AddWithValue("@DataRows", dataStringToProcess.ToString());
    mySqlCommand.Parameters.AddWithValue("@currDate", DateTime.Now);
    mySqlCommand.Parameters.AddWithValue("@userID", UserID);
    mySqlCommand.Parameters.AddWithValue("@CountOfUnchangedZipCode", 1000);
    mySqlCommand.CommandTimeout = 0;
    mySqlConnection.Open();
    int numberOfRows = mySqlCommand.ExecuteNonQuery();
}
Dev Env: Visual Studio 2005
Framework: .NET 3.0
DB: SQL Server 2005
When I run the query SELECT [Size], Max_Size, Data_Space_Id, [File_Id], Type_Desc, [Name] FROM MyDB.sys.database_files WHERE data_space_id = 0, it says the size (of the log) is 128.
UPDATE
We have three different databases used in our application: one for data, one for history, and one for logging. When I put Enlist=false in the above connection string, it works for the time being. But that is in my development environment; I am skeptical about whether it will also work in production. Any thoughts on potential risks?
Thanks
Lijo
When you are opening more than one connection within a TransactionScope, the running transaction will automatically be escalated to a distributed transaction. For distributed transactions to work, the MSDTC on both SQL Server and the machine running the application must be configured to allow network access. SQL Server and the local DTC communicate when running distributed transactions.
The problem in your case is most likely that MSDTC on the machine running your application does not allow network access because this is the default for workstations. To fix this do the following:
Go to "Control Panel" -> "Aministration" -> "Component Services".
Browse through the tree until you get to a node called "Local DTC" or something like that.
Right-click and choose "Properties".
Go to "Security" and make sure that you allow network access and also allow inbound and outbound communication with DTC.
Click "Ok".
You will probably be prompted to restart DTC. There seems to be a bug in the UI because even though you accept a restart of the DTC, it will not be restarted. Instead you have to restart the DTC service manually in the service manager.
BTW, remember to close the connection after use in PerformDatabaseOperation. It is good practice to put it in a using block:
using (SqlConnection mySqlConnection = new .....)
{
    // Some code here...
    mySqlConnection.Open();
    // Some more code ...
}
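Alternatively, if the stored procedure only touches a single database, you can often avoid the escalation entirely by opening one connection and reusing it for every batch inside the scope. A sketch against the question's code (TransactionScopeFactory, batchDataStringList and UserID are from the question):

using (TransactionScope transaction = TransactionScopeFactory.GetTransactionScope(TimeSpan.FromMinutes(0)))
using (SqlConnection connection = new SqlConnection("data source=myServer;initial catalog=myDB; Integrated Security=SSPI"))
{
    connection.Open(); // the only connection ever enlisted, so no DTC is needed
    foreach (string dataString in batchDataStringList)
    {
        using (SqlCommand command = new SqlCommand("aspInsertUSAZipCode", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 0;
            command.Parameters.AddWithValue("@DataRows", dataString);
            command.Parameters.AddWithValue("@currDate", DateTime.Now);
            command.Parameters.AddWithValue("@userID", UserID);
            command.Parameters.AddWithValue("@CountOfUnchangedZipCode", 1000);
            command.ExecuteNonQuery();
        }
    }
    transaction.Complete();
}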
Is it possible that aspInsertUSAZipCode interacts with a linked server? If it does then it will try and promote your current local transaction to a distributed transaction (with transactional integrity preserved between your server and the linked server).
It may be necessary to do this outside of a transaction if the MSDTC on the remote server cannot be configured for distributed transactions. I think the best way to do this is to create a temporary table, SqlBulkCopy your records into it, and then execute aspInsertUSAZipCode using the temporary table on the server. You may need to use a cursor.
The purpose of the temporary table is so that if something goes wrong, the table is removed at the termination of your connection.
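A rough sketch of the temporary-table idea (the table and column names are invented for illustration; 'connection' is assumed to be an open SqlConnection and 'zipCodes' a DataTable with matching columns):

using (var create = new SqlCommand(
    "CREATE TABLE #ZipStaging (ZipCode nvarchar(10), City nvarchar(50))", connection))
{
    create.ExecuteNonQuery();
}

using (var bulk = new SqlBulkCopy(connection))
{
    bulk.DestinationTableName = "#ZipStaging";
    bulk.WriteToServer(zipCodes); // streams all rows to the server in one bulk operation
}

// A server-side procedure can now read #ZipStaging; the temp table is
// dropped automatically when the connection terminates.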
You are probably hitting a limit on the maximum amount of data allowed in a single transaction.
Check your event log for any MSDTC errors: http://technet.microsoft.com/en-us/library/cc774415(WS.10).aspx
Having 100,000 rows in a single transaction is going to give you problems.
I do not think that a single transaction is going to work in this case; you need to look at why you are using a transaction here and find a different way to accomplish it.
