Thread execution order? - C#

I have this simple code (which I run in LINQPad):
void Main()
{
    for (int i = 0; i < 10; i++)
    {
        int tmp = i;
        new Thread(() => doWork(tmp)).Start();
    }
}

public void doWork(int h)
{
    h.Dump();
}
The int tmp = i; line is there to capture the loop variable, so each iteration gets its own value.
Two problems:
1) The numbers are not sequential, even though the threads are started in order!
2) Sometimes I get fewer than 10 numbers!
Here are some execution outputs:
Questions:
1) Why is case 1 happening, and how can I solve it?
2) Why is case 2 happening, and how can I solve it?

You should not expect them to be sequential. Each thread is given CPU time as the kernel chooses. They might happen to look sequential, purely because of when each one is started, but that is pure chance.
To ensure that they all complete, mark each new thread with IsBackground = false so that it keeps the process alive. For example:
new Thread(() => doWork(tmp)) { IsBackground = false }.Start();

Threads execute in unpredictable order, and if the main thread finishes before the others you won't get all the numbers (Dump() will never run for them). If you mark your threads with IsBackground = false you'll get them all. There's no real solution for the first problem except not using threads (or joining the threads, which is really the same thing).
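As a minimal sketch of the joining approach (with Console.WriteLine standing in for LINQPad's Dump), keeping a reference to each thread and calling Join guarantees all ten numbers appear before the program exits, though still not necessarily in order:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var threads = new Thread[10];
        for (int i = 0; i < 10; i++)
        {
            int tmp = i;                      // capture a copy per iteration
            threads[i] = new Thread(() => Console.WriteLine(tmp));
            threads[i].Start();
        }
        foreach (var t in threads)
            t.Join();                         // block until every worker finishes
    }
}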

You shouldn't expect any ordering between threads.
If you start a new thread, it is merely added to the operating system's management structures. Eventually the thread scheduler will come around and allocate a time slice for the thread. It may do this in a round-robin fashion, pick a random one, use some heuristics to determine which one looks most important (eg. one that owns a Window which is in the foreground) and so on.
If the order of the outputs is relevant, you can either sort them afterwards or - if you know the ordering before the work begins - use an array where each thread is given an index into which it writes its result, as sketched below.
Creating new threads the way your example does is also very slow. For micro-tasks, using the thread pool is at least an order of magnitude faster.
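A rough sketch of that indexed-array idea, with a made-up computation (index * index) standing in for real work; the output comes out ordered no matter how the threads were scheduled:

using System;
using System.Threading;

class OrderedResults
{
    static void Main()
    {
        int[] results = new int[10];
        var threads = new Thread[10];
        for (int i = 0; i < 10; i++)
        {
            int index = i;                               // capture a copy per iteration
            threads[i] = new Thread(() => results[index] = index * index);
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();             // wait for all workers
        Console.WriteLine(string.Join(",", results));    // 0,1,4,9,16,...
    }
}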

The nature of thread scheduling is effectively random. You can solve both problems, but the overhead is large:
1) Multiple threads contend for the console (or whatever Dump writes to); overriding the synchronization mechanism is possible but complicated and will cost performance.
2) The process exits before all the threads have run (see the answer by @Marc Gravell).

If ordering is important, you may want to use a shared queue and a semaphore to ensure only one thread operates on the head of the queue at a time; a sketch follows.
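Assuming a plain Queue<int> of work items, three workers, and a SemaphoreSlim guarding the queue head, it might look like this (items come off the queue strictly FIFO, though processing outside the gate is still scheduled freely):

using System;
using System.Collections.Generic;
using System.Threading;

class QueueWorkers
{
    static readonly Queue<int> queue = new Queue<int>();
    static readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1); // one dequeuer at a time

    static void Worker()
    {
        while (true)
        {
            gate.Wait();                         // exclusive access to the queue head
            int item;
            try
            {
                if (queue.Count == 0) return;    // the finally block still releases the gate
                item = queue.Dequeue();
            }
            finally
            {
                gate.Release();
            }
            Console.WriteLine(item);             // process outside the critical section
        }
    }

    static void Main()
    {
        for (int i = 0; i < 10; i++) queue.Enqueue(i);
        var threads = new Thread[3];
        for (int w = 0; w < threads.Length; w++)
            (threads[w] = new Thread(Worker)).Start();
        foreach (var t in threads) t.Join();
    }
}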

You can order thread execution, but you have to do it yourself, with a solution specific to your problem.
E.g. you might want threads 1, 2, and 3 to complete phase 1 of your code, and then proceed to the next phase in the order of their IDs (IDs that you have assigned).
You can use semaphores to achieve this behaviour - search for block synchronization, mutual exclusion, and the test-and-set method. A sketch of the idea follows.
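Here, a minimal sketch with SemaphoreSlim: every worker runs phase 1 in whatever order the scheduler picks, but each waits for its predecessor's signal before entering phase 2, so phase 2 runs strictly in ID order:

using System;
using System.Threading;

class PhaseOrdering
{
    static void Main()
    {
        const int n = 3;
        // gates[i] is released when thread i may enter phase 2; thread 0 may start at once.
        var gates = new SemaphoreSlim[n];
        for (int i = 0; i < n; i++) gates[i] = new SemaphoreSlim(i == 0 ? 1 : 0, 1);

        var threads = new Thread[n];
        for (int i = 0; i < n; i++)
        {
            int id = i;                                     // capture a copy per iteration
            threads[i] = new Thread(() =>
            {
                Console.WriteLine($"thread {id}: phase 1"); // any order
                gates[id].Wait();                           // block until it is this ID's turn
                Console.WriteLine($"thread {id}: phase 2"); // strictly 0, 1, 2
                if (id + 1 < n) gates[id + 1].Release();    // hand the turn to the next ID
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
    }
}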

Related

How to achieve sequential blocking behavior in a multithreaded application?

I'm writing an application that should simulate the behavior of a PLC. This means I have to run several threads while making sure that only one thread at a time is active and all others are suspended.
For example:
thread 1 repeats every 130ms and blocks all other threads. The effective runtime is 30ms and the remaining 100ms before the thread restarts can be used by other threads.
thread 2 repeats every 300ms and blocks all threads except for thread 1. The effective runtime is 50ms (the remaining 250ms can be used by other threads). Thread 2 is paused whenever thread 1 is executing its code, and once thread 1 is asleep (the remaining 100ms of thread 1's cycle) it resumes from where it was paused.
thread 3 repeats every 1000ms. The effective runtime is 100ms. This thread continues execution only if all other threads are suspended.
The highest priority is that each task completes before it is due to be called again; otherwise I have to react. A thread that is supposed to be blocked must therefore not run ahead; otherwise, on a multicore machine, it would execute its code anyway and merely wait to hand over its results.
I read several posts and learned that Thread.Suspend is not recommended, and that semaphore or monitor operations let code run up to a specific, fixed point, whereas I need to pause each thread exactly where its execution has arrived when another thread (with higher "priority") is called.
I also looked at priority settings, but they don't seem 100% reliable, since the system can override priorities.
Is there a correct or at least solid way to code the blocking mechanism?
I don't think you need to burden yourself with Threads at all. Instead, you can use Tasks with a prioritised TaskScheduler (it's not too hard to write or find by googling).
This makes the code quite easy to write, for example the highest priority thread might be something like:
while (!cancellationRequested)
{
    var repeatTask = Task.Delay(130);
    // Do your high priority work
    await repeatTask;
}
Your other tasks will have a similar basic layout, but they will be given a lower priority in the task scheduler (this is usually handled by the task scheduler having a separate queue for each of the task priorities). Once in a while, they can check whether there is a higher priority task, and if so, they can do await Task.Yield();. In fact, in your case, it seems like you don't even need real queues - that makes this a lot easier, and even better, allows you to use Task.Yield really efficiently.
The end result is that all three of your periodic tasks are efficiently run on just a single thread (or even no thread at all if they're all waiting).
This does rely on coöperative multi-tasking, of course. It's not really possible to handle full blown real-time like pre-emption on Windows - and partial pre-emption solutions tend to be full of problems. If you're in control of most of the time spent in the task (and offload any other work using asynchronous I/O), the coöperative solution is actually far more efficient, and can give you a lot less latency (though it really shouldn't matter much).
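To make the shape concrete, here is a hedged sketch of all three periodic loops running cooperatively on the default scheduler; a prioritised TaskScheduler, as suggested above, would be substituted in, and the inner Task.Delay stands in for the real work:

using System;
using System.Threading;
using System.Threading.Tasks;

class PeriodicTasks
{
    static async Task RunPeriodic(string name, int periodMs, int workMs, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var repeat = Task.Delay(periodMs, ct);   // start the period clock first
            await Task.Delay(workMs, ct);            // placeholder for the effective runtime
            Console.WriteLine($"{name} finished a cycle");
            await repeat;                            // wait out the rest of the period
        }
    }

    static async Task Main()
    {
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
        {
            try
            {
                await Task.WhenAll(
                    RunPeriodic("plc-task-1", 130, 30, cts.Token),
                    RunPeriodic("plc-task-2", 300, 50, cts.Token),
                    RunPeriodic("plc-task-3", 1000, 100, cts.Token));
            }
            catch (OperationCanceledException) { } // expected when the 5-second demo ends
        }
    }
}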
I hope I don't misunderstand your question :)
One possibility for your problem might be to use a concurrent queue: https://msdn.microsoft.com/de-de/library/dd267265(v=vs.110).aspx
For example, you create an enum to control your state and initialize the queue:
private ConcurrentQueue<Action> _clientActions = new ConcurrentQueue<Action>();
private Statuskatalog _status = Statuskatalog.Idle;

private enum Statuskatalog
{
    Idle,
    Busy
};
Create a timer and a timer function:
Timer _taskTimer = new Timer(ProcessPendingTasks, null, 100, 333); // create in the constructor if ProcessPendingTasks is an instance method

private void ProcessPendingTasks(object x)
{
    _status = Statuskatalog.Busy;
    _taskTimer.Change(Timeout.Infinite, Timeout.Infinite);

    Action currentTask;
    while (_clientActions.TryDequeue(out currentTask))
    {
        var task = new Task(currentTask);
        task.Start();
        task.Wait();
    }
    _status = Statuskatalog.Idle;
}
Now you only have to add your tasks as delegates to the queue:
_clientActions.Enqueue(delegate { /* your task */ });
if (_status == Statuskatalog.Idle) _taskTimer.Change(0, 333);
On this basis you can implement the special requirements you were asking about.
Hope this was what you were searching for.

Thread priority (how to get fixed order)

In the console, because the threads sleep for random intervals, the output shows the threads in varying order:
3,2,1 or 1,2,3 or ...
How can I get a fixed order?
And why does setting the priority not affect the code?
// ThreadTester.cs
// Multiple threads printing at different intervals.
using System;
using System.Threading;

namespace threadTester
{
    // class ThreadTester demonstrates basic threading concepts
    class ThreadTester
    {
        static void Main(string[] args)
        {
            // Create and name each thread. Use MessagePrinter's
            // Print method as argument to ThreadStart delegate.
            MessagePrinter printer1 = new MessagePrinter();
            Thread thread1 = new Thread(new ThreadStart(printer1.Print));
            thread1.Name = "thread1";

            MessagePrinter printer2 = new MessagePrinter();
            Thread thread2 = new Thread(new ThreadStart(printer2.Print));
            thread2.Name = "thread2";

            MessagePrinter printer3 = new MessagePrinter();
            Thread thread3 = new Thread(new ThreadStart(printer3.Print));
            thread3.Name = "thread3";

            Console.WriteLine("Starting threads");

            // call each thread's Start method to place each
            // thread in Started state
            thread1.Priority = ThreadPriority.Lowest;
            thread2.Priority = ThreadPriority.Normal;
            thread3.Priority = ThreadPriority.Highest;
            thread1.Start();
            thread2.Start();
            thread3.Start();

            Console.WriteLine("Threads started\n");
            Console.ReadLine();
        } // end method Main
    } // end class ThreadTester

    // Print method of this class used to control threads
    class MessagePrinter
    {
        private int sleepTime;
        private static Random random = new Random();

        // constructor to initialize a MessagePrinter object
        public MessagePrinter()
        {
            // pick random sleep time between 0 and 5 seconds
            sleepTime = random.Next(5001);
        }

        // method Print controls thread that prints messages
        public void Print()
        {
            // obtain reference to currently executing thread
            Thread current = Thread.CurrentThread;

            // put thread to sleep for sleepTime amount of time
            Console.WriteLine(current.Name + " going to sleep for " + sleepTime);
            Thread.Sleep(sleepTime);

            // print thread name
            Console.WriteLine(current.Name + " done sleeping");
        } // end method Print
    } // end class MessagePrinter
}
You use threads precisely because you do not care about having things happen in a particular order, but instead want one of the following:
Things happening at the same time, if there are enough cores to let them run together.
Some work making progress while other work is waiting on something.
Work interleaved with attention to I/O or user input, so the application stays responsive.
In each of these cases, you just don't care which bit of what will happen when.
However:
You may still care about the order of certain sequences. In the simplest case, you just have these things happen in sequence within the same thread, while other things happen in other threads. More complicated cases can be served by chaining tasks together.
You may want the results from different threads to end up in a particular order. The simplest approach is to put them all in order after they've finished, though you can also sort results as they arrive (which is tricky).
For ideal performance, there should be one thread running on each core (or possibly two on a hyperthreaded core, but that has further complications) at all times. Let's say you have a machine with 4 cores and 8 tasks you need done.
If the tasks involve a lot of waiting on I/O, then four will start, each will reach a point where it's waiting on that I/O, and allow one of the other tasks to make some progress. Chances are that even with twice as many tasks as cores, there will still be plenty of idle time. If each task was going to take 20 seconds, then doing them on different threads will probably have them all done in just a little over 20 seconds, since each spends most of its 20 seconds waiting on something else.
If you are doing tasks that keep the CPU busy all the time (not much waiting for memory and certainly not for I/O), then you will be able to have four such tasks going at a time, while the others wait for them to either finish or give up their slice of time. Here, if each took 20 seconds, the best you could hope for is a total time of about 40 seconds (and that's assuming no other thread from any process on the system wants the CPU, a perfect lack of overhead in setting up the threads, and so on).
In cases where there is more work to do (active work, rather than waiting for I/O to complete, another thread to release a lock, etc.) than cores, the OS's scheduler swaps between the threads that want to be active. The exact details differ from OS to OS (different Windows versions, including some important differences between desktop and server setups, take different approaches; different Linux versions, with particularly big changes from 2.4 to 2.6; different Unixes; etc. all have different strategies).
One thing they all have in common is the common goal of making sure stuff gets done.
Thread priorities and process priorities are ways to influence this scheduling. With Windows, whenever there's more threads waiting to work than cores to work, those of the highest priority get given CPU time in a round-robin fashion. Should there be no threads of that priority, then those of the next lowest are given CPU time, then the next and so on.
It is also a great way to grind things to a halt. It can lead to complications where a thread given high priority (presumably because its work is considered particularly crucial) is waiting on a thread given low priority (presumably because its work is considered less important and should always cede time to the others), and the low-priority thread keeps not being given CPU time, because there are always more threads of higher priority than available cores. Hence the supposedly high-priority thread gets no CPU time at all.
To fix this situation, Windows will occasionally promote threads that haven't run in a long time. This fixes things, but now the supposedly low-priority threads burst along at super-high priority, to the detriment not just of the rest of the application but also the rest of the system.
(One of the best things about having a multi-core system, is it means your computing experience is less affected by people who set the priority of threads!)
If you use a debugger to stop a multi-threaded .NET application and examine the threads, you'll probably find that all of them are at Normal except for one at Highest. This one at Highest will be the finalizer thread, and its running at highest priority is one of the reasons it's important that finalizers not take a long time to execute - having work done at highest priority is a bad thing, and while it is justified in this case, it must end as soon as possible.
At least 95% of all other cases where someone sets the priority of a thread are a logic bug - it'll do nothing most of the time and let things get very messed up the rest of the time. Priorities can be used well (or we wouldn't have the ability at all), but they definitely belong in the "advanced techniques" category. (I like to spend my free time experimenting with multi-threading techniques that would count as excessive and premature optimisation most of the time, and I still hardly ever touch priorities.)
In your example, priority will have little effect because each thread spends most of its time sleeping, so whichever thread does want CPU time can get it for the few nanoseconds it needs to run. What it could do, though, is make the whole thing needlessly slower if you run it on a machine whose cores are also busy with other normal-priority threads. In that case thread1 wouldn't get any CPU time at first (because there's always a higher-priority thread that wants the CPU); then after 3 seconds the scheduler would realise it has been starved for what is an eternity in CPU terms (9 billion CPU cycles or so) and give it a burst at highest priority - long enough to let it interfere with the timing of vital Windows services! Luckily it then sleeps and does only a minute amount of work before finishing, so it does no harm, but if it was doing anything real it could have some really nasty effects on the entire system's performance.
You can't guarantee when Windows will execute a particular thread. You can make suggestions to the OS (i.e. via the priority level), but ultimately Windows decides when, what, and where.
If you want to ensure that 1 starts before 2, which in turn starts before 3, you should make thread 1 start thread 2 and thread 2 start thread 3, as in the sketch below.
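A quick sketch of that chaining: each thread's body starts the next one, so the start order is guaranteed even though everything else is still scheduled freely:

using System;
using System.Threading;

class ChainedStart
{
    static void Main()
    {
        var thread3 = new Thread(() => Console.WriteLine("3 running"));
        var thread2 = new Thread(() => { Console.WriteLine("2 running"); thread3.Start(); });
        var thread1 = new Thread(() => { Console.WriteLine("1 running"); thread2.Start(); });

        thread1.Start();   // 1 starts 2, which starts 3
        thread1.Join();    // after this, thread2 has definitely been started
        thread2.Join();    // after this, thread3 has definitely been started
        thread3.Join();    // wait for the end of the chain
    }
}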
Threads are considered lightweight processes, in that they run completely independently of each other. If your task relies heavily on the order in which threads execute, you probably shouldn't be using threads.
Otherwise, you need to look at the thread synchronization constructs that the .NET framework provides.
You cannot synchronize threads like this. If you need the work done in a certain order, don't use separate threads, or use reset events (ManualResetEvent/AutoResetEvent) or something similar.
Thread scheduling is never guaranteed. Order is never preserved unless you explicitly force it through your code via locks/etc.

Use a limited set of threads to fire off similar tasks at regular intervals

I have asked a similar question before here, but after much thought, and implementations from those that answered me, I found that my approach might have been incorrect.
When I implemented the solution given in that previous question, the following test results appeared:
When I 'simulate' multiple tasks running concurrently on multiple threads from the thread pool (by making the threads sleep at random times from 1 to 20 seconds, for instance), the model seems to work fine. I set the system to poll every 1 second to see if it can spawn another thread, and all seems fine. Longer-running (sleeping) threads complete later on, and threads start and die all over the place. If I happen to run out of threads (I set it to spawn no more than 10), it sits and waits for one to become available.
When I however make the system do actual processing in each thread (anything from 3 seconds upwards), involving reading data, generating XMLs, saving data, sending emails and the like, the system would spawn 1, 2 or 3 threads, do the processing, and then just close the threads (3...2...1...) and report 0 threads running (I added Console.WriteLines everywhere to document the process). It would then stay at 0 threads, never picking up any more work!
So I decided to state my issue again in the hope that someone has a solution. I have tried various approaches so far:
ThreadPool: There's always the advice that you shouldn't overwork the ThreadPool and that jobs have to be 'quick', but what is the definition of 'quick'? How do I know how big/busy the ThreadPool is?
Threads: It's always stated that threads are expensive and that you have to handle their starting up and ending, but how do I limit them? I have tried semaphores, 'lock' objects, and public variables, but to no avail.
So here is what I would like to accomplish:
I have the same job that needs to run at regular intervals, i.e. like Gmail checking its server for new email every 5 seconds.
If there is work to be done (i.e. you have new emails to be delivered to your inbox), then spawn an async thread and have it start the work. This work will typically take longer than the interval stated in (1), hence the async thread; if an interval passes and the system checks again and sees there is more work, it will spawn another thread and have it start that work.
As in my example, all the jobs are the same kind of job (checking for new mail) and are totally independent of each other; they do not influence each other. If one of them fails, the rest can continue working without issue.
I need a limit on how many concurrent threads there can be. If I pick '10', the system should keep checking for jobs as in (1) and keep spawning threads as in (1) until it reaches 10 threads. All new attempts to spawn a thread on an interval should simply fail (do nothing) until a thread is released again. Here I suppose the choice is: (a) when a thread is released there is already work queued waiting to be handed to it, or (b) on the next interval, check whether there's new work and assign it to the newly freed thread.
If there is no work, the system should typically sit and wait with no threads; in essence the only thing running should be some sort of timer.
I currently use the sample in the previous question to do the following:
I start a timer that ticks every 1 second.
On every tick I call ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork)).
In DoWork I instantiate a class and call various methods that do some work.
...but this leads to what I mentioned before: only 3 threads that die off, and then nothing.
I was thinking of doing the following:
Set the ThreadPool to 10 threads.
Start a timer, on each tick call ThreadPool.QueueUserWorkItem, and just keep doing this, hoping that the ThreadPool will handle everything else. Isn't this what the ThreadPool is supposed to do?
Any help will be fantastic! (Sorry for the involved explanation!)
Try to have a look at the Semaphore class. You can use it to set a limit on how many threads can concurrently access a particular resource (and when I say resource, it can be anything).
Ok, edited for details:
In your class managing the threads, you create:
Semaphore concurrentThreadsEnforcer = new Semaphore(value1, value2);
Then, each thread you start will call:
concurrentThreadsEnforcer.WaitOne();
That will either take one slot from the semaphore and give it to the new thread, or block the new thread until a slot becomes available.
Whenever your new thread finishes its work, he (I like personalizing) MUST call, for obvious reasons:
concurrentThreadsEnforcer.Release().
Now, regarding the constructor, the second parameter is fairly simple: states how many concurrent threads can access the resource at any given time.
The first one is a bit trickier. The difference between the second parameter and the first one will state how many semaphore slots are reserved for the calling thread. That is, all your newly spawned threads will have access to the number of slots stated by the first parameter, and the rest of them up to the second parameter's value will be reserved for the original thread that created the semaphore (calling thread).
In your case, for 10 max threads, you would use:
... = new Semaphore(10, 10);
Since I posted a story anyway, let me give more details.
The way I would do it in the new threads is like this:
bool acquired = false;
try
{
    acquired = concurrentThreadsEnforcer.WaitOne();
    // Do some work here
} // Optional catch statements
finally
{
    if (acquired)
        concurrentThreadsEnforcer.Release();
}
I would use a combination of BlockingCollection and Parallel.ForEach
Something like this:
private BlockingCollection<Job> jobs = new BlockingCollection<Job>();
private Task jobprocessor;

public void StartWork() {
    timer.Start();
    jobprocessor = Task.Factory.StartNew(RunJobs);
}

public void EndWork() {
    timer.Stop();
    jobs.CompleteAdding();
    jobprocessor.Wait();
}

public void TimerTick() {
    var job = new Job();
    if (job.NeedsMoreWork())
        jobs.Add(job);
}

public void RunJobs() {
    var options = new ParallelOptions { MaxDegreeOfParallelism = 10 };
    // GetConsumingPartitioner comes from the ParallelExtensionsExtras samples;
    // it avoids the buffering you get feeding GetConsumingEnumerable to Parallel.ForEach.
    Parallel.ForEach(jobs.GetConsumingPartitioner(), options,
        job => job.DoSomething());
}

Comparison of Join and WaitAll

For multiple threads wait, can anyone compare the pros and cons of using WaitHandle.WaitAll and Thread.Join?
WaitHandle.WaitAll has a 64-handle limit, so that is obviously a huge limitation. On the other hand, it is a convenient way to wait for many signals in a single call. Thread.Join does not require creating any additional WaitHandle instances, and since it can be called individually on each thread, the 64-handle limit does not apply.
Personally, I have never used WaitHandle.WaitAll. I prefer a more scalable pattern when I want to wait on multiple signals. You can create a counting mechanism that counts up or down, and once a specific value is reached you signal a single shared event. The CountdownEvent class conveniently packages all of this into a single class.
var finished = new CountdownEvent(1);
for (int i = 0; i < NUM_WORK_ITEMS; i++)
{
    finished.AddCount();
    SpawnAsynchronousOperation(
        () =>
        {
            try
            {
                // Place logic to run in parallel here.
            }
            finally
            {
                finished.Signal();
            }
        });
}
finished.Signal();
finished.Wait();
Update:
The reason why you want to signal the event from the main thread is subtle. Basically, you want to treat the main thread as if it were just another work item. After all, it is running concurrently along with the real work items.
Consider for a moment what might happen if we did not treat the main thread as a work item. It goes through one iteration of the for loop and adds a count to our event (via AddCount), indicating that we have one pending work item, right? Let's say SpawnAsynchronousOperation completes and gets the work item queued on another thread. Now imagine the main thread gets preempted before swinging around to the next iteration of the loop. The thread executing the work item gets its fair share of the CPU, starts humming along, and actually completes the work item. The Signal call in the work item runs and decrements our pending work item count to zero, which changes the state of the CountdownEvent to signalled. In the meantime the main thread wakes up, goes through all remaining iterations of the loop, and hits the Wait call; but since the event got prematurely signalled, it passes right on by even though there are still pending work items.
Again, avoiding this subtle race condition is easy when you treat the main thread as a work item. That is why the CountdownEvent is initialized with one count and the Signal method is called before the Wait.
I like @Brian's answer as a comparison of the two mechanisms.
If you are on .NET 4, it is worth exploring the Task Parallel Library to achieve task parallelism via System.Threading.Tasks, which lets you manage tasks across multiple threads at a higher level of abstraction. The signalling you asked about in this question to manage thread interactions is hidden or much simplified, and you can concentrate on properly defining what each Task consists of and how to coordinate them.
This may seem off-topic, but as Microsoft themselves say in the MSDN docs: "in the .NET Framework 4, tasks are the preferred API for writing multi-threaded, asynchronous, and parallel code."
The WaitAll mechanism involves kernel-mode objects. I don't think the same is true for the Join mechanism. I would prefer Join, given the opportunity.
Technically though, the two are not equivalent: IIRC Join can only operate on one thread at a time, whereas WaitAll can wait for the signalling of multiple kernel objects in one call. Both patterns are sketched below.
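A small sketch of both, for comparison; the Join loop needs no extra handles, while WaitAll needs one event per worker and is capped at 64 handles:

using System;
using System.Threading;

class JoinVsWaitAll
{
    static void Main()
    {
        // Pattern 1: Thread.Join - one call per thread, no extra objects.
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
            (threads[i] = new Thread(() => Thread.Sleep(100))).Start();
        foreach (var t in threads) t.Join();

        // Pattern 2: WaitHandle.WaitAll over signalled events (max 64 handles).
        var done = new ManualResetEvent[4];
        for (int i = 0; i < done.Length; i++)
        {
            var evt = done[i] = new ManualResetEvent(false);
            new Thread(() => { Thread.Sleep(100); evt.Set(); }).Start();
        }
        WaitHandle.WaitAll(done);

        Console.WriteLine("all workers finished, both ways");
    }
}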

C# creating as many instances of a class as there are processors

I have a GUI C# application that has a single button Start/Stop.
Originally this GUI created a single instance of a class that queries a database, performs some actions if there are results, and gets a single "task" at a time from the database.
I was then asked to try to utilize all the computing power on some of the 8-core systems. Using the number of processors, I figure I can create that number of instances of my class, run them all, and come pretty close to using a fair amount of the computing power.
Environment.ProcessorCount;
Using this value, in the GUI form, I have been trying to loop ProcessorCount times, each time starting a new thread that calls a "doWork"-type method in the class, then sleeping for 1 second (to ensure the initial query gets through) before proceeding to the next iteration of the loop.
I kept having issues with this, however, because the queries seemed not to start until the loop completed, leading to collisions of some sort (workers getting the same value from the MySQL database).
In the main form, once it starts the "workers", it changes the button text to STOP; if the button is hit again, it calls a "stopWork" method on each "worker".
Does what I am trying to accomplish make sense? Is there a better way to do this (that doesn't involve restructuring the worker class)?
Restructure your design so you have one thread running in the background checking your database for work to do.
When it finds work to do, spawn a new thread for each work item.
Don't forget to use synchronization tools, such as semaphores and mutexes, for the key limited resources. Fine tuning the synchronization is worth your time.
You could also experiment with the maximum number of worker threads - my guess is that it would be a few over your current number of processors.
While an exhaustive answer on the best practices of multithreaded development is a little beyond what I can write here, a couple of things:
Don't use Sleep() to wait for something to happen unless it's ABSOLUTELY necessary. If you need to wait for another piece of code to complete, you can either Join() that thread or use a ManualResetEvent or AutoResetEvent. There is a lot of information on MSDN about their usage; take some time to read over it.
You can't really guarantee that your threads will each run on their own core. While it's entirely likely that the OS thread scheduler will do this, just be aware that it isn't guaranteed.
I would assume that the easiest way to increase your use of the processors would be to simply spawn the worker methods on threads from the ThreadPool (by calling ThreadPool.QueueUserWorkItem). If you do this in a loop, the runtime will pick up threads from the thread pool and run the worker threads in parallel.
ThreadPool.QueueUserWorkItem(state => DoWork());
Never use Sleep for thread synchronization.
Your question doesn't supply enough detail, but you might want to use a ManualResetEvent to make the workers wait for the initial query, as sketched below.
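A hedged sketch of that substitution, assuming each worker signals once its initial query has been issued (the query itself is simulated here by a WriteLine):

using System;
using System.Threading;

class InitialQueryGate
{
    static void Main()
    {
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            var queryIssued = new ManualResetEvent(false);
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("initial query issued"); // stand-in for the first DB call
                queryIssued.Set();                         // signal instead of Sleep(1000)
                // ... rest of the worker's processing ...
            });
            queryIssued.WaitOne();   // the loop proceeds only after the signal
        }
    }
}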
Yes, it makes sense what you are trying to do.
It would make sense to make 8 workers, each consuming tasks from a queue. You should take care to synchronize threads properly, if they need to access shared state. From your description of your problem, it sounds like you are having a thread synchronization problem.
You should remember, that you can only update the GUI from the GUI thread. That might also be the source of your problems.
There is really no way to tell, what exactly the problem is, without more information or a code example.
I'm suspecting you have a problem like this: You need to make a copy of the loop variable (task) into currenttask, otherwise the threads all actually share the same variable.
// main thread
var tasks = db.GetTasks();
foreach (var task in tasks) {
    var currenttask = task;
    ThreadPool.QueueUserWorkItem(state => DoTask(currenttask));
    // or: new Thread(() => DoTask(currenttask)).Start()
    // ThreadPool.QueueUserWorkItem(state => DoTask(task)); // this doesn't work!
}
Note that you shouldn't Thread.Sleep() on the main thread to wait for the worker threads to finish. If you're using the thread pool you can continue to queue work items; if you want to wait for the executing tasks to finish, you should use something like an AutoResetEvent to wait for the threads to complete.
You seem to be encountering a common issue with multithreaded programming. It's called a Race Condition, and you'd do well to do some research on this and other multithreading issues before proceeding too far. It's very easy to quickly mess up all your data.
The short of it is that you must ensure all your commands to your database (e.g. get an available task) are performed within the scope of a single transaction.
I don't know MySQL well enough to give a complete answer; however, a very basic example in T-SQL might look like this:
BEGIN TRAN
DECLARE @taskid int
SELECT @taskid = taskid FROM tasks WHERE assigned = 0
UPDATE tasks SET assigned = 1 WHERE taskid = @taskid
SELECT * FROM tasks WHERE taskid = @taskid
COMMIT TRAN
MySQL 5 and above has support for transactions too.
You could also put a lock around the "fetch task from DB" code; that way only one thread will query the database at a time - but obviously this decreases the performance gain somewhat.
Some code of what you're doing (and maybe some SQL, this really depends) would be a huge help.
However assuming you're fetching a task from DB, and these tasks require some time in C#, you likely want something like this:
object myLock = new object();   // only create it once; could be done in your constructor too

void StartWorking()
{
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        ThreadPool.QueueUserWorkItem(DoWork);
    }
}

void DoWork(object state)
{
    object task;
    lock (myLock)
    {
        task = GetTaskFromDB();   // only one thread queries the DB at a time
    }
    PerformTask(task);
}
There are some good ideas posted above. One of the things we ran into is that we wanted not only a multi-processor-capable application but a multi-server-capable one as well. Depending upon your application, we use a queue that gets wrapped in a lock through a common web server (causing others to be blocked) while we get the next thing to be processed.
In our case we are processing lots of data; to keep things simple, we lock an object, get the id of the next unprocessed item, flag it as being processed, unlock the object, and hand the record id back to the main thread on the calling server, where it then gets processed. This works well for us, since the time it takes to lock, get, update, and release is very small, and while blocking does occur, we never run into a deadlock situation while waiting for resources (because we use lock(object) { } with a nice tight try/catch inside to ensure we handle errors gracefully).
As mentioned elsewhere, all of this is handled in the primary thread. Given the information to be processed, we push it to a new thread (which for us goes and retrieves 100MB of data and processes it per call). This approach has allowed us to scale beyond a single server. In the past we had to throw high-end hardware at the problem; now we can throw several cheaper but still very capable servers at it. We can also spread this across our virtualization farm in low-utilization periods.
One other thing I failed to mention: we also use locking mutexes inside our stored proc, so if two apps on two servers call it at the same time, it's handled gracefully. So the concept above applies both to our app and to the database. Our clients' backend is the MySQL 5.1 series, and it is done with just a few lines.
One of the things I think people forget when developing is that you want to get in and out of the lock relatively quickly. If you want to return large chunks of data, I personally wouldn't do it inside the lock itself unless you really had to; otherwise you can't really do much multithreading if everyone is waiting to get at the data.
Okay, I found my MySQL code for doing just what you will need.
DELIMITER //
CREATE PROCEDURE getnextid(
      I_service_entity_id INT(11)
    , OUT O_tag VARCHAR(36)
)
BEGIN
    DECLARE L_tag VARCHAR(36) DEFAULT '00000000-0000-0000-0000-000000000000';
    DECLARE L_locked INT DEFAULT 0;
    DECLARE C_next CURSOR FOR
        SELECT tag FROM workitems
        WHERE status IN (0)
          AND processable_date <= DATE_ADD(NOW(), INTERVAL 5 MINUTE)
    ;
    DECLARE EXIT HANDLER FOR NOT FOUND
    BEGIN
        -- no work item found: return the all-zero GUID
        SET O_tag := L_tag;
        DO RELEASE_LOCK('myuniquelockis');
    END;

    SELECT COALESCE(GET_LOCK('myuniquelockis', 20), 0) INTO L_locked;
    IF L_locked > 0 THEN
        OPEN C_next;
        FETCH C_next INTO O_tag;
        IF O_tag <> '00000000-0000-0000-0000-000000000000' THEN
            UPDATE workitems SET
                  status = 1
                , service_entity_id = I_service_entity_id
                , date_locked = NOW()
            WHERE tag = O_tag;
        END IF;
        CLOSE C_next;
        DO RELEASE_LOCK('myuniquelockis');
    ELSE
        SET O_tag := L_tag;
    END IF;
END
//
DELIMITER ;
In our case, we return a GUID to C# as an OUT parameter. You could replace the SET at the end with SELECT O_tag; and be done with it, losing the OUT parameter, but we call this from another wrapper...
Hope this helps.
