UPDATE faster in SQLite + BEGIN TRANSACTION - c#

This one also relates to SpatiaLite (not only SQLite).
I have a file database (xyz.db) which I access through SQLiteConnection (extended with SpatiaLite).
I have a large number of records that need to be updated in the database.
for (int y = 0; y < castarraylist.Count; y++)
{
    string s = Convert.ToString(castarraylist[y]);
    string[] h = s.Split(':');
    SQLiteCommand sqlqctSQL4 = new SQLiteCommand("UPDATE temp2 SET GEOM = " + h[0] + " WHERE " + dtsqlquery2.Columns[0] + " = " + h[1], con);
    sqlqctSQL4.ExecuteNonQuery();
    x = x + 1;
}
In the logic above, castarraylist is an ArrayList containing the values that need to be written to the database.
When I timed this code, it updated around 400 records per minute.
Is there any way to improve the performance?
NOTE: the file database is not thread-safe.
2. BEGIN TRANSACTION
Suppose I want to run two (or a million) UPDATE statements in a single transaction in SpatiaLite. Is that possible?
I read up online and prepared the statement below (without success):
BEGIN TRANSACTION;
UPDATE builtuparea_luxbel SET ADMIN_LEVEL = 6 where PK_UID = 2;
UPDATE builtuparea_luxbel SET ADMIN_LEVEL = 6 where PK_UID = 3;
COMMIT TRANSACTION;
The statement above does not update any records in my database.
Does SQLite not support BEGIN TRANSACTION?
Is there something I am missing?
And if I have to run each statement individually, it takes far too long, as described above.

SQLite supports transactions; you can try the code below.
using (var cmd = new SQLiteCommand(conn))
using (var transaction = conn.BeginTransaction())
{
    for (int y = 0; y < castarraylist.Count; y++)
    {
        //Add your query here.
        cmd.CommandText = "INSERT INTO TABLE (Field1, Field2) VALUES ('A', 'B');";
        cmd.ExecuteNonQuery();
    }
    transaction.Commit();
}

The primary goal of a database transaction is to get everything done, or nothing at all if something fails along the way.
Reusing the same SQLiteCommand object by changing its CommandText property and executing it again and again might be faster, but it leads to memory overhead: if you have a significant number of queries to perform, it is best to dispose of each command after use and create a new one.
A common pattern for an ADO.NET transaction is:
using (var tra = cn.BeginTransaction())
{
    try
    {
        foreach (var myQuery in myQueries)
        {
            using (var cd = new SQLiteCommand(myQuery, cn, tra))
            {
                cd.ExecuteNonQuery();
            }
        }
        tra.Commit();
    }
    catch (Exception ex)
    {
        tra.Rollback();
        Console.Error.WriteLine("I did nothing, because something wrong happened: {0}", ex);
        throw;
    }
}
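Applied to the UPDATE loop from the first question, the pattern might look like the sketch below. It assumes the open con connection and the "geom:id" pairs in castarraylist from the question, with ID standing in for the key column (dtsqlquery2.Columns[0] in the original); it also assumes GEOM can be bound as a plain value - if h[0] is really an SQL expression such as GeomFromText(...), it has to stay in the statement text:
// Sketch: one transaction around all updates, with bound parameters.
using (var tra = con.BeginTransaction())
{
    foreach (object item in castarraylist)
    {
        string[] h = Convert.ToString(item).Split(':');
        using (var cmd = new SQLiteCommand("UPDATE temp2 SET GEOM = @geom WHERE ID = @id", con, tra))
        {
            cmd.Parameters.AddWithValue("@geom", h[0]);
            cmd.Parameters.AddWithValue("@id", h[1]);
            cmd.ExecuteNonQuery();
        }
    }
    tra.Commit();
}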

Related

MySql Deadlocks / Commit not unlocking in c#?

Can anyone tell me why the following code can deadlock? I'm simulating our web server on multiple threads in a console app.
The console app has 5 threads and updates 250 records on each thread.
I am finding that transaction.Commit() is not enough; I still get deadlocks, so it clearly isn't releasing the locks at that point.
Unless I add the transaction.Dispose() and the Sleep(50 ms), I consistently get deadlocks on InnoDB. If I turn the code into a stored procedure, the sleep needs to be bigger to avoid deadlocks; I'm not sure it avoids them entirely, though - I need to run it with more threads.
Closing the connection after the transaction is more reliable, but in the web app we ideally want one connection per request for performance.
Also, calling transaction.Dispose() explicitly is far more reliable at avoiding deadlocks than using (var transaction = ...).
We are using .NET currently, not .NET Core.
I would bet that if I write the same program using SqlClient for SQL Server it will work - I'm going to try that tomorrow.
Can anyone explain this? What am I doing wrong?
static void Main(string[] args)
{
    Console.WriteLine("GenerateBarcodesTestConsoleApp");
    var connectionString = ConfigurationManager.ConnectionStrings["MyConnection"].ConnectionString;
    var threads = Enumerable.Range(1, 5);
    Parallel.ForEach(threads, t =>
    {
        GenerateBarcodes2(t, connectionString, 250);
    });
    Console.WriteLine("Press any key to exit...");
    Console.ReadKey();
}
static void GenerateBarcodes2(int thread, string connectionString, int numberToGenerate)
{
    using (var con = new MySqlConnection(connectionString))
    {
        con.Open();
        var sql1 = "SELECT p.barcode, p.barcode_id " +
                   "FROM p_barcode p " +
                   "WHERE p.company_id = 1 " +
                   "AND SUBSTRING(p.barcode,1,2) = 'OK' " +
                   "AND p.in_use = 0 " +
                   "LIMIT 1 " +
                   "FOR UPDATE;";
        var sql2 = "UPDATE p_barcode SET in_use = 1 WHERE company_id = 1 AND barcode_id = ?barcode_id AND in_use = 0";
        for (int b = 0; b < numberToGenerate; b++)
        {
            using (var transaction = con.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
            {
                string barcode = string.Empty;
                int barcodeId = 0;
                using (var cmd = new MySqlCommand(sql1, con, transaction))
                {
                    using (var rdr = cmd.ExecuteReader())
                    {
                        if (rdr.Read())
                        {
                            barcode = (string)rdr["barcode"];
                            barcodeId = (int)rdr["barcode_id"];
                        }
                    }
                    Console.WriteLine(barcode);
                }
                if (barcodeId != 0)
                {
                    using (var cmd = new MySqlCommand(sql2, con, transaction))
                    {
                        cmd.Parameters.AddWithValue("?barcode_id", barcodeId);
                        cmd.ExecuteNonQuery();
                    }
                }
                transaction.Commit();
                System.Threading.Thread.Sleep(50);
            }
            //transaction.Dispose();
        }
        con.Close();
    }
}
In MariaDb, SKIP LOCKED is the solution to prevent deadlocks.
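For instance, sql1 from the code above could be changed as below (a sketch; SKIP LOCKED requires MySQL 8.0+ or MariaDB 10.6+), so that each thread claims a different unlocked row instead of queueing behind the same one:
// Sketch: let each thread skip rows that another transaction has already locked.
var sql1 = "SELECT p.barcode, p.barcode_id " +
           "FROM p_barcode p " +
           "WHERE p.company_id = 1 " +
           "AND SUBSTRING(p.barcode,1,2) = 'OK' " +
           "AND p.in_use = 0 " +
           "LIMIT 1 " +
           "FOR UPDATE SKIP LOCKED;";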
There isn't a perfect solution to prevent deadlocks short of redesigning the system so that two threads never try to update the same record at the same time. However, adding a small sleep after the commit appears to help massively; 20 ms was about right on my dev machines.
Does this suggest that commit returns before the database has actually committed the transaction and released the locks? Either way, the behaviour is the same for InnoDB and MariaDB.

MySQL -> Transaction Context -> Code Review

I am hoping someone could check how I am using transactions with MySQL. I believe this should work as outlined below. Can someone look at my code and tell me if I am doing it correctly? Thank you.
I believe this should:
Instantiate the db connection.
Iterate through the DataTable rows of the given DataTable.
Check whether the table exists and, if it does not, execute the CREATE TABLE.
Execute Insert Command with Parameters of information into the newly created or existing table.
Commit the Transaction and then close the connection.
//Open the SQL Connection
var dbConnection = new MySqlConnection(GetConnectionString(WowDatabase));
dbConnection.Open();
//Instantiate the Command
using (var cmd = new MySqlCommand())
{
    cmd.Connection = dbConnection;
    //Create a new Transaction
    using (var transaction = dbConnection.BeginTransaction())
    {
        cmd.Transaction = transaction;
        uint lastId = 999999;
        for (int i = 0; i < dt.Rows.Count; i++)
        {
            //var identifier = dt.Rows[i].Field<int>("Identifier");
            var id = dt.Rows[i].Field<uint>("Entry");
            var name = dt.Rows[i].Field<string>("Name");
            var zone = dt.Rows[i].Field<uint>("ZoneID");
            var map = dt.Rows[i].Field<uint>("MapID");
            var state = dt.Rows[i].Field<Enums.ItemState>("State");
            var type = dt.Rows[i].Field<Enums.ObjectType>("Type");
            var faction = dt.Rows[i].Field<Enums.FactionType>("Faction");
            var x = dt.Rows[i].Field<float>("X");
            var y = dt.Rows[i].Field<float>("Y");
            var z = dt.Rows[i].Field<float>("Z");
            string dataTableName = "entry_" + id;
            //Create Table if it does not exist.
            if (id != lastId)
            {
                cmd.CommandText = $"CREATE TABLE IF NOT EXISTS `{dataTableName}` (" +
                    "`identifier` int NOT NULL AUTO_INCREMENT COMMENT 'Auto Increment Identifier' ," +
                    "`zone_id` int NULL COMMENT 'Zone Entry' ," +
                    "`x_axis` float NULL COMMENT 'X Axis on Map' ," +
                    "`y_axis` float NULL COMMENT 'Y Axis on Map' ," +
                    "`z_axis` float NULL COMMENT 'Z Axis on Map' ," +
                    "`situation` enum('') NULL COMMENT 'Location of the item (Underground, Indoors, Outdoors)' ," +
                    "`faction` enum('') NULL COMMENT 'Specifies the Faction which can safely access the item.' ," +
                    "PRIMARY KEY(`identifier`)" +
                    ")";
                cmd.ExecuteNonQuery();
                lastId = id;
            }
            //Create command to execute the insertion of Data into desired Table
            cmd.CommandText = $"INSERT INTO `{dataTableName}` " +
                "(`identifier`, `zone_id`, `x_axis`, `y_axis`, `z_axis`, `situation`, `faction`, `Create_Date`, `Update_Date`) " +
                "VALUES (@Identifier, @Zone_Id, @X_Axis, @Y_Axis, @Z_Axis, @Situation, @Faction, @Create_Date, @Update_Date)";
            //Add data values with Parameters.
            cmd.CommandType = CommandType.Text;
            cmd.Parameters.Clear(); //reset the parameters added in the previous iteration
            //cmd.Parameters.AddWithValue("@Identifier", identifier);
            cmd.Parameters.AddWithValue("@Identifier", id);
            cmd.Parameters.AddWithValue("@Zone_Id", zone);
            cmd.Parameters.AddWithValue("@X_Axis", x);
            cmd.Parameters.AddWithValue("@Y_Axis", y);
            cmd.Parameters.AddWithValue("@Z_Axis", z);
            cmd.Parameters.AddWithValue("@Situation", state);
            cmd.Parameters.AddWithValue("@Faction", faction);
            cmd.Parameters.AddWithValue("@Create_Date", DateTime.Now.Date);
            cmd.Parameters.AddWithValue("@Update_Date", DateTime.Now.Date);
            cmd.ExecuteNonQuery();
        } //for (int i = 0; i < dt.Rows.Count; i++)
        //Commit the Transaction
        transaction.Commit();
    } //using (var transaction = dbConnection.BeginTransaction())
} //using (var cmd = new MySqlCommand())
//Close the Connection
dbConnection.Close();
I don't think this will work (as expected) with MySql. There are a few statements that cause an implicit commit - CREATE TABLE is one of them.
http://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html
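One way around this (a sketch, untested against the original schema) is to run all the CREATE TABLE IF NOT EXISTS statements in a first pass, before calling BeginTransaction, so the implicit commits happen while no transaction is open. buildCreateTableSql below is a hypothetical helper standing in for the DDL string built in the question:
// First pass: DDL only, outside any transaction (the implicit commits are harmless here).
foreach (string tableName in dt.AsEnumerable()
    .Select(r => "entry_" + r.Field<uint>("Entry")).Distinct())
{
    using (var ddl = new MySqlCommand(buildCreateTableSql(tableName), dbConnection))
        ddl.ExecuteNonQuery();
}
// Second pass: wrap only the INSERTs in the transaction, as in the original code.
using (var transaction = dbConnection.BeginTransaction())
{
    // ... the INSERT loop from the question goes here ...
    transaction.Commit();
}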
Consider a using statement
You can actually wrap your existing dbConnection within a using statement to ensure that it is safely disposed of (similar to how you are handling your transactions, commands, etc.):
//Open the SQL Connection
using(var dbConnection = new MySqlConnection(GetConnectionString(WowDatabase))
{
// Other code omitted for brevity
}
Consistent String Interpolation
You have a few spots where you simply concatenate strings via +, but you are mostly taking advantage of C# 6's string interpolation feature. You might want to consider using it everywhere:
string dataTableName = $"entry_{id}";
No Need for Setting CommandType
Additionally, you could remove the setting of the CommandType property on your cmd object, as CommandType.Text is the default:
//cmd.CommandType = CommandType.Text;

SQLite UPDATE performance

I have a table that currently contains 350632 records. I have recently added a new column to the table, which I am trying to populate using this code in C#:
List<int> listOfInts = new List<int>();
dbConnect.Open();
int counter = 1;
string toExecute = "select * from tempwords";
string insertQuery = "update tempwords set rownum=@toInsert";
using (SQLiteTransaction transaction = dbConnect.BeginTransaction())
{
    using (SQLiteCommand newCommand = new SQLiteCommand(toExecute, dbConnect))
    {
        using (SQLiteDataReader reader = newCommand.ExecuteReader())
        {
            while (reader.Read())
            {
                listOfInts.Add(counter);
                counter++;
            }
        }
    }
    transaction.Commit();
    dbConnect.Dispose();
}
Console.WriteLine(listOfInts.Count.ToString());
dbConnect.Open();
int iterator = 0;
using (SQLiteTransaction transactionx = dbConnect.BeginTransaction())
{
    using (SQLiteCommand command = new SQLiteCommand(insertQuery, dbConnect))
    {
        command.Transaction = transactionx;
        while (iterator <= listOfInts.Count - 1)
        {
            command.Parameters.AddWithValue("@toInsert", listOfInts[iterator]);
            command.ExecuteNonQuery();
            iterator++;
            Console.WriteLine((iterator + 1).ToString() + Environment.NewLine);
        }
    }
    transactionx.Commit();
    dbConnect.Dispose();
}
I think the logic is fine and it should all work correctly, but the update is very slow (even though I have an index on the rownum column). Is there any way I can speed it up to a realistic time?
Thanks in advance.
This command:
update tempwords set rownum=@toInsert
updates all 350632 rows (with the same value).
When you execute this command 350632 times, you end up updating 122942799424 rows in total.
If you want to update only a single row with each command execution, you have to tell the database which row that is:
update tempwords set rownum = @toInsert where _id = @id_of_the_row
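Applied to the code in the question, a minimal sketch of the fixed loop might look like this. It assumes the table's implicit rowid can serve as the key and that the rowids run 1..N without gaps, which holds for a freshly populated table but is not guaranteed in general:
// One parameterized UPDATE per row, keyed on SQLite's implicit rowid.
string updateQuery = "update tempwords set rownum = @toInsert where rowid = @rowId";
using (SQLiteTransaction transactionx = dbConnect.BeginTransaction())
using (SQLiteCommand command = new SQLiteCommand(updateQuery, dbConnect))
{
    command.Transaction = transactionx;
    for (int i = 0; i < listOfInts.Count; i++)
    {
        command.Parameters.Clear(); // don't accumulate parameters across iterations
        command.Parameters.AddWithValue("@toInsert", listOfInts[i]);
        command.Parameters.AddWithValue("@rowId", i + 1); // assumes rowids 1..N
        command.ExecuteNonQuery();
    }
    transactionx.Commit();
}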

Fastest way to update more than 50.000 rows in a mdb database c#

I searched the net but nothing really helped me. I want to update a database with a list of articles, but the way I've found is really slow.
This is my code:
List<Article> costs = GetIdCosts(); //here there are 70.000 articles
conn = new OleDbConnection(string.Format(MDB_CONNECTION_STRING, PATH, PSW));
conn.Open();
transaction = conn.BeginTransaction();
using (var cmd = conn.CreateCommand())
{
    cmd.Transaction = transaction;
    cmd.CommandText = "UPDATE TABLE_RO SET TABLE_RO.COST = ? WHERE TABLE_RO.ID = ?;";
    for (int i = 0; i < costs.Count; i++)
    {
        double cost = costs[i].Cost;
        int id = costs[i].Id;
        cmd.Parameters.AddWithValue("data", cost);
        cmd.Parameters.AddWithValue("id", id);
        if (cmd.ExecuteNonQuery() != 1) throw new Exception();
    }
}
transaction.Commit();
But this way takes a lot of time, something like 10 minutes or more. Is there another way to speed up the updating? Thanks.
Try modifying your code to this:
List<Article> costs = GetIdCosts(); //here there are 70.000 articles
// Setup and open the database connection
conn = new OleDbConnection(string.Format(MDB_CONNECTION_STRING, PATH, PSW));
conn.Open();
// Setup a command
OleDbCommand cmd = new OleDbCommand();
cmd.Connection = conn;
cmd.CommandText = "UPDATE TABLE_RO SET TABLE_RO.COST = ? WHERE TABLE_RO.ID = ?;";
// Setup the parameters and prepare the command to be executed
cmd.Parameters.Add("?", OleDbType.Currency, 255);
cmd.Parameters.Add("?", OleDbType.Integer, 8); // Assuming your ID is never longer than 8 digits
cmd.Prepare();
OleDbTransaction transaction = conn.BeginTransaction();
cmd.Transaction = transaction;
// Start the loop
for (int i = 0; i < costs.Count; i++)
{
    cmd.Parameters[0].Value = costs[i].Cost;
    cmd.Parameters[1].Value = costs[i].Id;
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        // handle any exception here
    }
}
transaction.Commit();
conn.Close();
The cmd.Prepare method will speed things up since it creates a compiled version of the command on the data source.
Small change option:
Using StringBuilder and string.Format, construct one big command text.
var sb = new StringBuilder();
for (....)
{
    sb.AppendLine(string.Format("UPDATE TABLE_RO SET TABLE_RO.COST = '{0}' WHERE TABLE_RO.ID = '{1}';", cost, id));
}
Even faster option:
As in the first example, construct a SQL batch, but this time make the result look like:
-- create a temporary keyed table
create table #data (id int primary key, cost decimal(10,8))
-- insert the values into the table via union selects
insert into #data
select 1121 as id, 10.23 as cost
union select 1122 as id, 58.43 as cost
union select ...
-- update TABLE_RO using update-join syntax, inner joining #data
-- and copying the value from the #data column into the TABLE_RO column
update dest
set dest.cost = source.cost
from TABLE_RO dest
inner join #data source on dest.id = source.id
This is the fastest you can get without using bulk inserts.
Performing mass-updates with Ado.net and OleDb is painfully slow. If possible, you could consider performing the update via DAO. Just add the reference to the DAO-Library (COM-Object) and use something like the following code (caution -> untested):
// Import Reference to "Microsoft DAO 3.6 Object Library" (COM)
string TargetDBPath = "insert Path to .mdb file here";
DAO.DBEngine dbEngine = new DAO.DBEngine();
DAO.Database daodb = dbEngine.OpenDatabase(TargetDBPath, false, false, "MS Access;pwd=" + "insert your db password here (if you have any)");
DAO.Recordset rs = daodb.OpenRecordset("insert target Table name here", DAO.RecordsetTypeEnum.dbOpenDynaset);
if (rs.RecordCount > 0)
{
    rs.MoveFirst();
    while (!rs.EOF)
    {
        // Load id of row
        int rowid = rs.Fields["Id"].Value;
        // Iterate the list to find the entry with a matching ID
        for (int i = 0; i < costs.Count; i++)
        {
            double cost = costs[i].Cost;
            int id = costs[i].Id;
            if (rowid == id)
            {
                // Save the changed cost value
                rs.Edit();
                rs.Fields["Cost"].Value = cost;
                rs.Update();
            }
        }
        rs.MoveNext();
    }
}
rs.Close();
Note that we are doing a full table scan here. But unless the total number of records in the table is many orders of magnitude bigger than the number of updated records, it should still significantly outperform the Ado.net approach...
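If the inner loop over costs ever becomes a bottleneck, replacing it with a dictionary lookup turns the O(rows × list) scan into O(rows). A sketch, assuming the Id values are unique and System.Linq is imported:
// Build an Id -> Cost lookup once, then probe it for each record.
var costById = costs.ToDictionary(a => a.Id, a => a.Cost);
rs.MoveFirst();
while (!rs.EOF)
{
    int rowid = (int)rs.Fields["Id"].Value;
    double cost;
    if (costById.TryGetValue(rowid, out cost))
    {
        rs.Edit();
        rs.Fields["Cost"].Value = cost; // "Cost" assumed to be the column being updated
        rs.Update();
    }
    rs.MoveNext();
}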

sqlite commit performance problem with indexes

I have run into a problem where the time to do a commit keeps getting longer and longer. We are talking on the order of 250 ms for a table with ~20k rows and a disc size of around 2-3 MB, and it just keeps getting worse. I have tracked the performance problem down to something to do with indexes; it's almost as if SQLite were creating the index on every commit. Each commit consists of 100 INSERTs. I have made as small a program as I could in which I can reproduce the problem, and have tried running it on Linux as well; there the problem doesn't seem to occur. The problem exists with both WAL and truncate journaling modes. It doesn't seem to exist when I use an in-memory database instead of a file. I have tried both versions 3.6.23.1 and 3.7.6.3.
On Windows, where I'm experiencing the problem, I run SQLite from a C# program. I have checked the implementation of transaction support in the System.Data.SQLite wrapper, and it does absolutely nothing other than simply issue a COMMIT. Sadly I don't have a C compiler for Windows, so I can't check the behaviour outside the wrapper, but it should be the same.
System.IO.File.Delete("test.db");
var db_connection = new SQLiteConnection(@"Data Source=test.db");
db_connection.Open();
using (var cmd = db_connection.CreateCommand())
{
    cmd.CommandText = "CREATE TABLE test (id integer primary key, dato integer)";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "CREATE INDEX i on test(dato)";
    cmd.ExecuteNonQuery();
}
SQLiteTransaction trans = null;
var random = new Random();
for (var j = 0; j < 150; ++j)
{
    for (var i = 0; i < 1000; ++i)
    {
        if (i % 100 == 0)
        {
            trans = db_connection.BeginTransaction();
        }
        using (var cmd = db_connection.CreateCommand())
        {
            cmd.CommandText = String.Format("INSERT INTO test (dato) values ({0})", random.Next(1, 100000000));
            cmd.ExecuteNonQuery();
        }
        if (i % 100 == 99 && trans != null)
        {
            var now = DateTime.Now;
            trans.Commit();
            trans.Dispose();
            System.Console.WriteLine("commit {0}", (DateTime.Now - now).TotalMilliseconds);
        }
    }
}
Did you try reducing hard disk access, for example by adding this command before creating any table:
cmd.CommandText = "PRAGMA locking_mode = EXCLUSIVE";
cmd.ExecuteNonQuery();
provided your app can lock the database exclusively.
This can also help:
PRAGMA synchronous = OFF
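In C#, both pragmas can be issued right after opening the connection. A minimal sketch (note that synchronous = OFF trades durability for speed: a crash or power loss can lose recent transactions or even corrupt the database):
// Apply the pragmas once per connection, before creating any tables.
using (var cmd = db_connection.CreateCommand())
{
    cmd.CommandText = "PRAGMA locking_mode = EXCLUSIVE; PRAGMA synchronous = OFF;";
    cmd.ExecuteNonQuery();
}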
