I am programming a C# WPF application that uses a SQLite database. Say I have 25 secondary threads all waiting to perform an insert on the database, and then the main thread performs a select. I may be wrong, but the main thread will wait for some time; I am seeing "database is locked" errors in my log files. How do I ensure that the main thread gets the highest priority, so that my UI is not blocked? I am using a DbContext object to perform database operations.
A clever use of ReaderWriterLockSlim will definitely help you improve performance.
private ReaderWriterLockSlim _readerWriterLock = new ReaderWriterLockSlim();

private DataTable RunSelectSQL(string Sql)
{
    DataTable selectDataTable = null;
    try
    {
        _readerWriterLock.EnterReadLock();
        // Function to access your database and return the selected results
    }
    finally
    {
        _readerWriterLock.ExitReadLock();
    }
    return selectDataTable;
}
private DataTable RunInsertSQL(string Sql)
{
    DataTable selectDataTable = null;
    bool isbreaked = false;
    try
    {
        _readerWriterLock.EnterWriteLock();
        if (_readerWriterLock.WaitingReadCount > 0)
        {
            // A SELECT is waiting; back off so it can run first.
            isbreaked = true;
        }
        else
        {
            // Function to insert data in your database
        }
    }
    finally
    {
        _readerWriterLock.ExitWriteLock();
    }
    if (isbreaked)
    {
        Thread.Sleep(10);
        return RunInsertSQL(Sql);
    }
    return selectDataTable;
}
Try this; it will improve your responsiveness, and your SELECT queries will have higher priority than INSERTs.
Please note that if an insert is already running, a SELECT will still wait at least for that insert to complete.
This code will always give priority to SELECT over INSERT.
One more point: never perform a long-running operation on the main thread, like your select from the database. Run the operation in the background and then reflect the latest results on the UI from the main thread; this ensures your UI never freezes. (A sketch of that pattern follows at the end of this answer.)
EDIT: There can be a starvation case where all INSERTs wait indefinitely, if SELECT queries keep being fired continuously without any gap.
But I believe this will not happen in your case, as the UI will not be refreshing so frequently that there is never a time slice in between.
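For example, with the RunSelectSQL helper above, the background-then-UI pattern might look like this in WPF. This is a minimal sketch assuming .NET 4.5+ for async/await; the grid control and the query text are illustrative, not from the question.

// Requires: using System.Data; using System.Threading.Tasks; using System.Windows;
private async void RefreshButton_Click(object sender, RoutedEventArgs e)
{
    // Run the select on a thread-pool thread so the UI thread stays free.
    DataTable results = await Task.Run(() => RunSelectSQL("SELECT * FROM Documents"));

    // await resumes on the UI thread (the captured synchronization context),
    // so it is safe to touch controls here.
    documentsGrid.ItemsSource = results.DefaultView; // documentsGrid: a hypothetical DataGrid
}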
What mode are you running the database in?
SQLite supports three different threading modes:
Single-thread. In this mode, all mutexes are disabled and SQLite is unsafe to use in more than a single thread at once.
Multi-thread. In this mode, SQLite can be safely used by multiple threads provided that no single database connection is used simultaneously in two or more threads.
Serialized. In serialized mode, SQLite can be safely used by multiple threads with no restriction.
The default mode is serialized.
http://www.sqlite.org/threadsafe.html
It would seem that Multi-Thread is the one you want. Serializing database access is slow.
I had exactly the same problem in my multithreaded caching subsystem.
It looks like it is an issue specific to the 'System.Data.SQLite' library.
Adding this (found with Reflector)
"...;Version=3;Pooling=True;Max Pool Size=100;"
to connection string solved the issue.
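In context, a pooled connection might be opened like this. A minimal sketch with System.Data.SQLite; the database file name and table are made up for illustration.

using System.Data.SQLite;

class Example
{
    static void Main()
    {
        // Pooling is enabled through the connection string, as described above.
        var connectionString = "Data Source=app.db;Version=3;Pooling=True;Max Pool Size=100;";
        using (var connection = new SQLiteConnection(connectionString))
        {
            connection.Open(); // Dispose returns the connection to the pool rather than closing it
            using (var command = new SQLiteCommand("SELECT COUNT(*) FROM items", connection))
            {
                var count = command.ExecuteScalar();
            }
        }
    }
}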
I have written the following piece of code:
public void BulkUpdateItems(List<Items> items)
{
    var bulk = new BulkOperations();
    using (var trans = new TransactionScope())
    {
        using (SqlConnection conn = new SqlConnection(@"connstring"))
        {
            bulk.Setup()
                .ForCollection(items)
                .WithTable("Items")
                .AddColumn(x => x.QuantitySold)
                .BulkUpdate()
                .MatchTargetOn(x => x.ItemID)
                .Commit(conn);
        }
        trans.Complete();
    }
}
This uses the SQLBulkTools library. The problem is that when I run this procedure from multiple threads at a time, I run into deadlocks...
The error states that a certain process ID was deadlocked, or something like that...
Is there any alternative to perform a bulk update of 1 table from multiple threads in an efficient way?
Can someone help me out?
I don't know much about that API but a quick read suggests a few things you could try. I would try them in the order listed.
Use a smaller batch size, and/or set the batch timeout higher. This will let each thread take turns.
Use a temporary table. This will allow the threads to work independently.
Set the options to use a table lock. If you lock the whole table, different threads won't be able to lock different rows, so you shouldn't get any deadlocks.
The deadlock message is coming from SQL Server - it means that one of your connections is waiting on a resource locked by another, and that second connection is waiting on a resource held by the first.
If you are trying to update the same table, you are likely running into a plain SQL locking issue, not really a C# one. You need to think more thoroughly about the implications of doing a bulk update on multiple threads; depending on the percentage of the table you are updating, it's probably better to do this on a single connection and use a queue-style mechanism to de-conflict the individual calls.
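A minimal sketch of that queue idea, using BlockingCollection from the BCL: one consumer thread performs every bulk update, so no two updates ever run concurrently. The class shape here is an assumption; BulkUpdateItems is the method from the question, passed in as a delegate.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class ItemUpdateQueue
{
    private readonly BlockingCollection<List<Items>> _pending = new BlockingCollection<List<Items>>();
    private readonly Action<List<Items>> _bulkUpdate;

    public ItemUpdateQueue(Action<List<Items>> bulkUpdate)
    {
        _bulkUpdate = bulkUpdate; // e.g. BulkUpdateItems from the question
        // A single long-running consumer serializes all updates to the table.
        Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
    }

    // Worker threads enqueue batches instead of touching the table directly.
    public void Enqueue(List<Items> items)
    {
        _pending.Add(items);
    }

    private void Consume()
    {
        foreach (var batch in _pending.GetConsumingEnumerable())
        {
            _bulkUpdate(batch);
        }
    }
}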
Try
lock (_syncObject) // a shared object, e.g. private static readonly object _syncObject = new object();
{
    ....
}
What this will do is: while one thread is executing the code within the curly braces, any other thread will wait until the first one is finished. That way, only one thread executes the block at a time. (Note that lock only coordinates threads within a single process; it does nothing across processes.)
In a web application, we provide paginated search panels for various database tables in our application. We currently allow users to select individual rows, and via a UI, execute some operation in each selected instance.
For example, a panel of document records offers an ability to delete documents. A user may check 15 checkboxes representing 15 document identifiers, and choose Options > Delete. This works just fine.
I wish to offer the users an option to execute some operation for all rows matching the query used to display the data in the panel.
We may have 5,000 documents matching some search criteria, and wish to allow a user to delete all 5,000. (I understand this example is a bit contrived; let's ignore the 'wisdom' of allowing users to delete documents in bulk!)
Execution of a method for thousands of rows is a long-running operation, so I will queue the operation instead. Consider this an equivalent of Gmail's ability to apply a filter to all email conversations matching some search criteria.
I need to execute a query that will return an unknown number of rows, and for each row, insert a row into a queue (in the code below, the queue is represented by ImportFileQueue).
I coded it as follows:
using (var reader = await source.InvokeDataReaderAsync(operation, parameters))
{
    Parallel.ForEach<IDictionary<string, object>>(reader.Enumerate(), async properties =>
    {
        try
        {
            var instance = new ImportFileQueueObject(User)
            {
                // application tier calculation here; cannot do in SQL
            };
            await instance.SaveAsync();
        }
        catch (System.Exception ex)
        {
            // omitted for brevity
        }
    });
}
When running this in a unit test that wraps the call in a Transaction, I receive a System.Data.SqlClient.SqlException: "Transaction context in use by another session".
This is easily resolved by either:
changing the database call from async to sync, or
removing the Parallel.ForEach and iterating through the reader serially.
I opted for the former:
using (var reader = await source.InvokeDataReaderAsync(operation, parameters))
{
    Parallel.ForEach<IDictionary<string, object>>(reader.Enumerate(), properties =>
    {
        try
        {
            var instance = new ImportFileQueueObject(User)
            {
                // Omitted for brevity
            };
            instance.Save();
        }
        catch (System.Exception ex)
        {
            // omitted for brevity
        }
    });
}
My thought process is, in typical use cases:
the outer reader will often have thousands of rows
the instance.Save() call is "lightweight"; inserting a single row into the db
Two questions:
Is there a reasonable way to use async/await inside the Parallel.ForEach, where the inner code uses SqlConnection (avoiding the TransactionContext error)?
If not, given my expected typical use case, is my choice to leverage the TPL and forfeit async/await for the single-row saves reasonable?
The answer suggested in What is the reason of “Transaction context in use by another session” says:
Avoid multi-threaded data operations if it's possible (no matter loading or saving). E.g. save SELECT/UPDATE/etc. requests in a single queue and serve them with a single-thread worker;
but I'm trying to minimize total execution time, and figured the Parallel.ForEach was more likely to reduce execution time.
It's almost always a bad idea to open a transaction and then wait for I/O while holding it open. You'll get much better performance (and fewer deadlocks) by buffering the data first. If there's more total data than you can easily buffer in memory, buffer it into chunks of a thousand or so rows at a time. Put each of those in a separate transaction if possible.
Whenever you open a transaction, any locks taken remain held until it is committed (and locks get taken whether you want them or not when you're inserting data). Those locks cause other updates, and reads without WITH(NOLOCK), to sit and wait until the transaction is committed. In a high-performance system, if you're doing I/O while those locks are held, it's pretty much guaranteed to cause problems as other callers start an operation and then sit and wait while this operation finishes its I/O with the transaction still open.
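A minimal sketch of the buffer-then-chunk idea under those assumptions (SaveInChunks is a made-up name; ImportFileQueueObject and its Save() come from the question; the thousand-row chunk size is the one suggested above):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Transactions;

public void SaveInChunks(IEnumerable<ImportFileQueueObject> rows, int chunkSize = 1000)
{
    var buffer = rows.ToList(); // finish enumerating (all read I/O) before any transaction opens
    for (int i = 0; i < buffer.Count; i += chunkSize)
    {
        var chunk = buffer.GetRange(i, Math.Min(chunkSize, buffer.Count - i));
        using (var scope = new TransactionScope())
        {
            foreach (var row in chunk)
            {
                row.Save(); // the synchronous save from the question
            }
            scope.Complete(); // locks are held only for this chunk's duration
        }
    }
}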
In a C# program, I have 2 threads which launch a stored procedure.
This stored procedure reads and writes data in some tables.
When I start my program, I sometimes get a SQL Server exception (lock trouble).
To avoid deadlock, I tried to add a lock(this) { ... } in my program to prevent simultaneous calls of this stored procedure, but without success (same exception).
How can I fix that?
lock(this) will not solve your concurrency problems, if more than one instance of the class is running, as the locks will refer to different this references, i.e.
public class Locker
{
    public void Work()
    {
        lock (this)
        {
            // do something
        }
    }
}
used as (assume these calls run in parallel)
Locker first = new Locker();
Locker second = new Locker();

first.Work();  // <-- locks on first
second.Work(); // <-- locks on second
will lock on different objects and not really lock at all.
Using this pattern
public class Locker
{
    // a static doodad for locking
    private static object lockObject = new object();

    public void Work()
    {
        lock (lockObject)
        {
            // do something
        }
    }
}
will lock on the same thing in both cases, and make the second call wait.
However, in most cases from my experience, lock problems in SQL Server procedures were the fault of the procedure itself: holding transactions open longer than necessary, opening unneeded transactions, using suboptimal queries, etc. Making your sp calls wait in line in the C# code, instead of waiting in line at the SQL Server, does not solve those problems.
Also, deadlocks are a specific category of concurrency issues that can almost always be solved by refactoring the solution with data access in mind. Give us more info about the problem; there might be a solution that does not need application-level locks at all.
As explained by @SWeko, C#'s lock will only resolve concurrency issues among threads of the current AppDomain, so if more than one AppDomain is running (let us say two desktop clients, for simplicity), they can still run into deadlock. See Cross-Process Locking in C# and What is the difference between lock and Mutex? for more details.
It would be much better, even in the case of a desktop application, to deal with the deadlock issue within your stored procedure. The default behavior is that your second request will wait until timeout for the first to finish; if you don't want it to wait, use WITH(NOWAIT).
Database: SQL Server 2005
Programming language: C#
I have a method that does some processing with the User object passed to it. I want to control how this method behaves when it is called by multiple threads with the same user object. I have implemented simple locking that makes use of the database. I can't use C#'s lock statement, as this method is in an API that will be delivered to different machines, but the database is centralized.
The following code shows what I have (exception handling omitted for clarity).
E.g.:
void Process(User user)
{
    using (var transaction = BeginTransaction())
    {
        if (LockUser())
        {
            try
            {
                /* Other processing code */
            }
            finally
            {
                UnLockUser();
            }
        }
    }
}
LockUser() inserts a new entry into a database table. This table has a unique constraint on the user id, so when a second thread tries to insert the same data, the constraint is violated and an exception is thrown. LockUser() catches it and returns false. UnLockUser() just deletes the entry from the lock table.
Note: Please don't consider the possibility of the lock not getting deleted correctly. We have a SQL job that cleans up items that have been locked for a long time.
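For reference, a LockUser() along those lines might look like the sketch below. The table name and signature are assumptions; 2627 is SQL Server's error number for a unique constraint violation.

using System.Data.SqlClient;

private bool LockUser(SqlConnection connection, SqlTransaction transaction, int userId)
{
    try
    {
        using (var command = new SqlCommand(
            "INSERT INTO UserLocks (UserId, LockedAt) VALUES (@userId, GETDATE())",
            connection, transaction))
        {
            command.Parameters.AddWithValue("@userId", userId);
            command.ExecuteNonQuery();
        }
        return true; // insert succeeded: this thread owns the lock
    }
    catch (SqlException ex)
    {
        if (ex.Number == 2627) // unique constraint violated: another caller holds the lock
        {
            return false;
        }
        throw;
    }
}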
Question
Consider two threads executing this method at the same time, both of which have started the transaction. Since the transaction is committed only after all the processing logic, will the transaction started on thread 2 see the data inserted by thread 1 into the lock table?
Is this locking logic OK? Do you see any problems with this approach?
If the acquisition of the lock - by virtue of inserting an entry into the database table - is part of the same transaction then either all or none of the changes of that transaction will become visible to the second thread. This is true for the default isolation level (ReadCommitted).
In other words: Whichever thread has a successful commit of that single transaction has also successfully acquired the lock (= inserted successfully the entry into the database).
In your code example I'm missing the handling of Commit()/Rollback(). Make sure you consider this as part of your implementation.
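In other words, something along these lines; a sketch that assumes your BeginTransaction() returns an object exposing Commit() and Rollback():

void Process(User user)
{
    using (var transaction = BeginTransaction())
    {
        try
        {
            if (LockUser())
            {
                try
                {
                    /* Other processing code */
                }
                finally
                {
                    UnLockUser();
                }
            }
            transaction.Commit(); // work and lock-table changes become visible together
        }
        catch
        {
            transaction.Rollback(); // the lock row disappears with the rollback
            throw;
        }
    }
}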
It depends on the transaction isolation level that you use.
The default isolation (ReadCommitted) level assures that other connections cannot see the uncommitted changes that a connection is making.
When executing your SQL statement, you can explicitly acquire a lock by using locking hints.
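For example, one common hint combination is UPDLOCK with HOLDLOCK on the initial read, so two transactions cannot both conclude that the user is unlocked and then both insert. A sketch only; the UserLocks table name is an assumption carried over from above:

using (var command = new SqlCommand(
    "SELECT UserId FROM UserLocks WITH (UPDLOCK, HOLDLOCK) WHERE UserId = @userId",
    connection, transaction))
{
    command.Parameters.AddWithValue("@userId", userId);
    var existing = command.ExecuteScalar(); // null means no one holds the lock yet
}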
I have a GUI C# application that has a single button Start/Stop.
Originally this GUI created a single instance of a class that queries a database and performs some actions if there are results, fetching a single "task" at a time from the database.
I was then asked to try to utilize all the computing power on some of the 8-core systems. Using the number of processors, I figure I can create that many instances of my class and run them all, coming pretty close to using a fair amount of the computing power.
Environment.ProcessorCount;
Using this value in the GUI form, I have been trying to loop ProcessorCount times, each time starting a new thread that calls a "doWork"-type method on the class, then sleeping for 1 second (to ensure the initial query gets through) before proceeding with the next iteration of the loop.
I kept having issues with this, however, because it seemed to wait until the loop was completed before starting the queries, leading to a collision of some sort (getting the same value from the MySQL database).
In the main form, once it starts the "workers" it changes the button text to STOP, and if the button is hit again, it should execute a "stopWork" method on each "worker".
Does what I am trying to accomplish make sense? Is there a better way to do this (that doesn't involve restructuring the worker class)?
Restructure your design so you have one thread running in the background checking your database for work to do.
When it finds work to do, spawn a new thread for each work item.
Don't forget to use synchronization tools, such as semaphores and mutexes, for the key limited resources. Fine tuning the synchronization is worth your time.
You could also experiment with the maximum number of worker threads - my guess is that it would be a few over your current number of processors.
While an exhaustive answer on the best practices of multithreaded development is a little beyond what I can write here, a couple of things:
Don't use Sleep() to wait for something to continue unless ABSOLUTELY necessary. If you need to wait for another piece of code to complete, you can either Join() that thread or use a ManualResetEvent or AutoResetEvent. There is a lot of information on MSDN about their usage; take some time to read over it.
You can't really guarantee that your threads will each run on their own core. While it's entirely likely that the OS thread scheduler will do this, just be aware that it isn't guaranteed.
I would assume that the easiest way to increase your use of the processors would be to simply spawn the worker methods on threads from the ThreadPool (by calling ThreadPool.QueueUserWorkItem). If you do this in a loop, the runtime will pick up threads from the thread pool and run the worker threads in parallel.
ThreadPool.QueueUserWorkItem(state => DoWork());
Never use Sleep for thread synchronization.
Your question doesn't supply enough detail, but you might want to use a ManualResetEvent to make the workers wait for the initial query.
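For instance, a minimal sketch of that approach (the field and method names are illustrative):

using System.Threading;

private readonly ManualResetEvent _initialQueryDone = new ManualResetEvent(false);

void RunInitialQuery()
{
    // ... perform the initial query ...
    _initialQueryDone.Set(); // releases every waiting worker at once
}

void Worker()
{
    _initialQueryDone.WaitOne(); // blocks until the initial query has finished
    // ... query the database and process tasks ...
}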
Yes, it makes sense what you are trying to do.
It would make sense to make 8 workers, each consuming tasks from a queue. You should take care to synchronize threads properly, if they need to access shared state. From your description of your problem, it sounds like you are having a thread synchronization problem.
You should remember that you can only update the GUI from the GUI thread. That might also be the source of your problems.
There is really no way to tell, what exactly the problem is, without more information or a code example.
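Still, the worker/queue shape mentioned above might look roughly like this. A sketch only: WorkItem and GetTasksFromDb are placeholders, not names from the question.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class WorkItem { } // placeholder for whatever a task row maps to

public class TaskRunner
{
    private readonly BlockingCollection<WorkItem> _queue = new BlockingCollection<WorkItem>();

    public void Start()
    {
        // One producer reads tasks from the database...
        new Thread(() =>
        {
            foreach (var task in GetTasksFromDb())
                _queue.Add(task);
            _queue.CompleteAdding();
        }) { IsBackground = true }.Start();

        // ...and one consumer per core drains the queue.
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(() =>
            {
                foreach (var task in _queue.GetConsumingEnumerable())
                {
                    // process task
                }
            }) { IsBackground = true }.Start();
        }
    }

    private IEnumerable<WorkItem> GetTasksFromDb() { yield break; } // placeholder
}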
I'm suspecting you have a problem like this: You need to make a copy of the loop variable (task) into currenttask, otherwise the threads all actually share the same variable.
// main thread
var tasks = db.GetTasks();
foreach (var task in tasks)
{
    var currenttask = task;
    ThreadPool.QueueUserWorkItem(state => DoTask(currenttask));
    // or: new Thread(() => DoTask(currenttask)).Start();
    // ThreadPool.QueueUserWorkItem(state => DoTask(task)); // this doesn't work!
}
Note that you shouldn't Thread.Sleep() on the main thread to wait for the worker threads to finish. If you're using the thread pool, you can continue to queue work items; if you want to wait for the executing tasks to finish, use something like an AutoResetEvent to wait for the threads to complete.
You seem to be encountering a common issue with multithreaded programming. It's called a Race Condition, and you'd do well to do some research on this and other multithreading issues before proceeding too far. It's very easy to quickly mess up all your data.
The short of it is that you must ensure all your commands to your database (eg: Get an available task) are performed within the scope of a single transaction.
I don't know MySQL well enough to give a complete answer; however, a very basic example in T-SQL might look like this:
BEGIN TRAN

DECLARE @taskid int
SELECT @taskid = taskid FROM tasks WHERE assigned = 0
UPDATE tasks SET assigned = 1 WHERE taskid = @taskid
SELECT * FROM tasks WHERE taskid = @taskid

COMMIT TRAN
MySQL 5 and above has support for transactions too.
You could also put a lock around the "fetch task from DB" code; that way only one thread will query the database at a time, but obviously this decreases the performance gain somewhat.
Some code of what you're doing (and maybe some SQL, this really depends) would be a huge help.
However, assuming you're fetching tasks from the DB, and these tasks require some time in C#, you likely want something like this:
object myLock;

void StartWorking()
{
    myLock = new object(); // only new it once; could be done in your constructor too
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        ThreadPool.QueueUserWorkItem(DoWork);
    }
}

void DoWork(object state)
{
    object task;
    lock (myLock)
    {
        // Only one thread at a time fetches a task, so no two workers get the same row.
        task = GetTaskFromDB();
    }
    PerformTask(task);
}
There are some good ideas posted above. One of the things we ran into is that we wanted not only a multi-processor-capable application but a multi-server-capable one as well. Depending on your application, we use a queue that gets wrapped in a lock through a common web server (causing others to be blocked) while we get the next thing to be processed.
In our case we are processing lots of data. To keep things simple, we lock an object, get the id of the next unprocessed item, flag it as being processed, unlock the object, hand the record id back to the main thread on the calling server, and then it gets processed. This works well for us since the time it takes to lock, get, update, and release is very small, and while blocking does occur, we never run into a deadlock situation while waiting for resources (because we use lock(object) { } with a nice tight try/catch inside to ensure we handle errors gracefully).
As mentioned elsewhere, all of this is handled in the primary thread. Given the information to be processed, we push it to a new thread (which for us retrieves 100 MB of data and processes it per call). This approach has allowed us to scale beyond a single server. In the past we had to throw high-end hardware at the problem; now we can throw several cheaper, but still very capable, servers at it. We can also spread this across our virtualization farm in low-utilization periods.
One other thing I failed to mention: we also use locking mutexes inside our stored procedures, so if two apps on two servers call one at the same time, it's handled gracefully. So the concept above applies to our app and to the database. Our clients' backend is the MySQL 5.1 series, and it is done with just a few lines.
One of the things I think people forget when developing is that you want to get in and out of the lock relatively quickly. If you want to return large chunks of data, I personally wouldn't do it inside the lock unless you really had to. Otherwise you can't really do much multithreading if everyone is waiting to get data.
Okay, I found my MySQL code for doing just what you need.
DELIMITER //
CREATE PROCEDURE getnextid(
    I_service_entity_id INT(11)
    , OUT O_tag VARCHAR(36)
)
BEGIN
    DECLARE L_tag VARCHAR(36) DEFAULT '00000000-0000-0000-0000-000000000000';
    DECLARE L_locked INT DEFAULT 0;
    DECLARE C_next CURSOR FOR
        SELECT tag FROM workitems
        WHERE status IN (0)
          AND processable_date <= DATE_ADD(NOW(), INTERVAL 5 MINUTE);
    -- If the cursor finds no row, return the all-zero GUID and release the lock.
    DECLARE EXIT HANDLER FOR NOT FOUND
    BEGIN
        SET O_tag := '00000000-0000-0000-0000-000000000000';
        DO RELEASE_LOCK('myuniquelockis');
    END;

    SELECT COALESCE(GET_LOCK('myuniquelockis', 20), 0) INTO L_locked;
    IF L_locked > 0 THEN
        OPEN C_next;
        FETCH C_next INTO O_tag;
        IF O_tag <> '00000000-0000-0000-0000-000000000000' THEN
            -- Claim the work item while we still hold the lock.
            UPDATE workitems SET
                status = 1
                , service_entity_id = I_service_entity_id
                , date_locked = NOW()
            WHERE tag = O_tag;
        END IF;
        CLOSE C_next;
        DO RELEASE_LOCK('myuniquelockis');
    ELSE
        SET O_tag := L_tag;
    END IF;
END
//
DELIMITER ;
In our case, we return a GUID to C# as an OUT parameter. You could do away with the OUT parameter and simply SELECT the tag at the end instead, but we call this from another wrapper...
Hope this helps.