Potential race conditions with ConcurrentBag and multithreaded application - c#

I've been wrestling for the past few months with how to improve a process where I'm using a DispatcherTimer to periodically check resources to see if they need to be updated/processed. After updating a resource ("Product"), it moves to the next step in the process, and so on. A resource may or may not be available immediately.
The reason I have been struggling is two-fold. One reason is that I want to implement this process asynchronously, since it is just synchronous at the moment. The second reason is that I have identified the area where my implementation is stuck and it seems like not an uncommon design pattern but I have no idea how to describe it succinctly, so I can't figure out how to get a useful answer from google.
A rather important note is that I am accessing these Products via direct USB connection, so I am using LibUsbDotNet to interface with the devices. I have made the USB connections asynchronous so I can connect to multiple Products at the same time and process an arbitrary number at once.
public class Product
{
public bool IsSoftwareUpdated = false;
public bool IsProductInformationCorrect = false;
public bool IsEOLProcessingCompleted = false;
public Product(){}
~Product(){}
}
public class ProcessProduct
{
List<Product> bagOfProducts = Enumerable.Range(0, 10).Select(i => new Product()).ToList(); //10 products to process (needs using System.Linq)
ConcurrentBag<Product> UnprocessedUnits = new ConcurrentBag<Product>();
ConcurrentBag<Product> CurrentlyUpdating = new ConcurrentBag<Product>();
ConcurrentBag<Product> CurrentlyVerifyingInfo = new ConcurrentBag<Product>();
ConcurrentBag<Product> FinishedProcessing = new ConcurrentBag<Product>();
DispatcherTimer _timer = new DispatcherTimer();
public ProcessProduct()
{
_timer.Tick += Timer_Tick; //Every 1 second, call Timer_Tick
_timer.Interval = new TimeSpan(0,0,1); //1 Second timer
bagOfProducts.ForEach(o => UnprocessedUnits.Add(o)); //Fill the UnprocessedUnits with all products
StartProcessing();
}
private void StartProcessing()
{
_timer.Start();
}
private void Timer_Tick(object sender, EventArgs e)
{
ProductOrganizationHandler();
foreach(Product prod in CurrentlyUpdating.ToList())
{
UpdateProcessHandler(prod); //Async function that uses await
}
foreach(Product prod in CurrentlyVerifyingInfo.ToList())
{
VerifyingInfoHandler(prod); //Async function that uses Await
}
if(FinishedProcessing.Count == bagOfProducts.Count)
{
_timer.Stop(); //If all items have finished processing, then stop the process
}
}
private void ProductOrganizationHandler()
{
//Takes (read: REMOVES) a Product from each ConcurrentBag one by one and moves that item to the bag it needs to go to,
//depending on which process step is finished
//(or puts it back in the same bag if that step was not finished).
//E.g., all items are moved from UnprocessedUnits to CurrentlyUpdating or CurrentlyVerifying etc.
//If a product is finished updating, it is moved from CurrentlyUpdating to CurrentlyVerifying or FinishedProcessing
}
private async void UpdateProcessHandler(Product prod)
{
await Task.Delay(1000).ConfigureAwait(false);
//Does some actual work validating USB communication and then running through the USB update
}
private async void VerifyingInfoHandler(Product prod)
{
await Task.Delay(1000).ConfigureAwait(false);
//Does actual work here and communicates with the product via USB
}
}
Full Compile-ready code example available via my code on Pastebin.
So, my question really is this: Are there any meaningful race conditions in this code? Specifically, with the ProductOrganizationHandler() code and the looping through the ConcurrentBags in Timer_Tick() (since a new call to Timer_Tick() happens every second). I'm sure this code works the majority of the time, but I am afraid of a hard-to-track bug later on that happens because of a rare race condition when, say, ProductOrganizationHandler() takes > 1 sec to run for some dumb reason.
As a secondary note: Is this even the best design pattern for this type of process? C# is my first OOP language and all self-taught on the job (nearly all of my job is Embedded C) so I don't have any formal experience with OOP design patterns.
My main goal is to asynchronously Update/Verify/Communicate with each device as it becomes available via USB. Once all products in the list are finished (or a timeout), then the process finishes. This project is in .NET 5.
EDIT: For anyone that comes along later with the same question, here's what I did.
I did not understand that DispatcherTimer adds Ticks to the Dispatcher queue. This implies that a tick will only run if another instance of Tick is not already running or, worded another way, Timer_Tick will run to completion before the next Timer_Tick instance runs.
So, most (all?) of the threading/concurrency concerns I had were unfounded and I can treat Timer_Tick as a single-threaded, non-concurrent function (which it is).
Also, to keep Ticks from piling up, I ran _timer.Stop() at the beginning of Timer_Tick and restarted the timer at the end of Timer_Tick.
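For reference, here is a minimal sketch of that stop/restart pattern, reusing the handler names from the example above (the actual processing is omitted):
private void Timer_Tick(object sender, EventArgs e)
{
    _timer.Stop(); // prevent further ticks from queuing while this one runs

    ProductOrganizationHandler();
    foreach (Product prod in CurrentlyUpdating.ToList())
        UpdateProcessHandler(prod);
    foreach (Product prod in CurrentlyVerifyingInfo.ToList())
        VerifyingInfoHandler(prod);

    if (FinishedProcessing.Count == bagOfProducts.Count)
        return; // everything is done: leave the timer stopped

    _timer.Start(); // re-arm only after this tick has finished its work
}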

First of all, you are using DispatcherTimer, which raises its ticks on the UI thread. So as far as I can see there is no multithreading going on in the example. There are other timers, like System.Timers.Timer, that raise events on a background thread if that is the intent. But if you just want to check and update status every so often, and are not running any code that blocks, just using the UI thread is fine and will simplify things a lot.
Even if we assume ProductOrganizationHandler did run on a worker thread, it would still generally be safe to remove items from one concurrent collection and put them in another. But it would not guarantee that items are processed in any particular order, nor that any specific item is processed by a given tick of the timer. Since the timer ticks periodically, though, all the items should eventually be processed. Keep in mind that most timers need to be disposed, so you need to handle that somehow, including when the processing is stopped prematurely.
Keep in mind that async does not mean concurrent, so I would not use it unless your USB library provides async methods. Even then I would avoid async void, since it propagates exceptions to the captured synchronization context, potentially crashing the application; it should mostly be used in the outermost layer, like button event handlers or timer callbacks, and then you should probably handle exceptions somehow.
As for the best way to do it, I would take a look at the TPL Dataflow library.
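For illustration only (not the poster's actual USB code), a product pipeline built with the TPL Dataflow library might look roughly like this; the block bodies, the degree of parallelism, and the reuse of bagOfProducts are assumptions:
// Requires the System.Threading.Tasks.Dataflow NuGet package; run from an async method.
var update = new TransformBlock<Product, Product>(async prod =>
{
    await Task.Delay(1000); // stand-in for the real USB update
    prod.IsSoftwareUpdated = true;
    return prod;
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });
var verify = new ActionBlock<Product>(async prod =>
{
    await Task.Delay(1000); // stand-in for the real USB verification
    prod.IsProductInformationCorrect = true;
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });
update.LinkTo(verify, new DataflowLinkOptions { PropagateCompletion = true });
foreach (var prod in bagOfProducts) // the same list the timer version iterates
    update.Post(prod);
update.Complete();
await verify.Completion; // completes once every product has passed through both stages
Each block applies back-pressure and schedules its own work, so there is no timer and no hand-rolled bag shuffling.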

Related

Trying to solve a core affinity problem results in UI stalling under heavy CPU load

I have literally no experience in threading, so bear with me, please.
I'm making a monitoring/testing tool that monitors hardware sensors and uses affinity masks and a for loop to cycle through the cores one by one, running a full-load single-core stress test.
The problem is, that when the user starts the test, and affinity is set, instead of assigning just the test method to that core, it assigns the entire program, which means UI can run only on the core that is currently tested and is under full load.
I guess it's clear that UI is just stalling during the test, and labels that output monitoring data are frozen, while it's crucial to have relevant readings during the test because this is the main purpose of this program.
After some research, I figured that I need to use threading but I never worked with it before.
Here is my shortened code. The problem is that when I use threading on any method that touches labels, it throws the error "Cross-thread operation not valid: Control 'CPUTempTDie' accessed from a thread other than the thread it was created on". I tried to start ReportSensors in a new thread, but it doesn't help. I literally tried starting every involved method and label in a new thread, but either it doesn't help, it breaks score (score is the result returned by TestingMethod, compared against a known-good number to confirm stability), or the program just skips some part of the code and says "Done".
The question is: Is it possible to set just a TestingMethod to a particular core, allowing the rest of the program (including UI) to use any other free core, and if not, what exactly should I start in a new thread to let UI update under the load?
//the method below updates labels and calls ReportSensors method that reads
//sensors on a timer tick
private void Monitoring()
{
sensor.ReportSensors(); //calls Method that reads sensors
//Two labels below are stalling when TestingMethod runs
CPUTempTDie.Value = (int)sensor.CpuTemp;
FrequencyLabel.Text = sensor.CoreFrequency.ToString("0") + "MHz";
}
private int TestingMethod()
{
while (true)
{
//Performs calculations to generate load, returns the "score"
if (timer.Elapsed.TotalSeconds > 60)
{
break;
}
}
return score;
}
private async void PerCoreTest()
{
try
{
await Task.Delay(3000);
for (int i = 0; i < (numberOfCores); i++)
{
coreCounter++;
Thread.BeginThreadAffinity();
SetThreadAffinityMask(GetCurrentThread(), new IntPtr(intptrVal));
//TestingMethod below being called twice, and results from both runs
//are later compared for consistency.
TestingMethod();
iter1 = score / 10000;
TestingMethod();
iter2 = score / 10000;
maxScore = Math.Max(iter1, iter2);
await Task.Delay(1000);
TestLabel.Text = score.ToString();
//Switches to the next thread mask
}
}
finally
{
Thread.EndThreadAffinity();
}
}
private void TestButton_Click(object sender, EventArgs e)
{
using (Process p = Process.GetCurrentProcess())
p.PriorityClass = ProcessPriorityClass.High;
PerCoreTest();
using (Process p = Process.GetCurrentProcess())
p.PriorityClass = ProcessPriorityClass.Normal;
}
Clarification: My question was closed as a duplicate even though the linked thread doesn't answer my question. I ask to reopen it because:
While "a large number of Remote Calls around 2000 - 3000 calls" mentioned in a linked thread might be heavy on some hardware, it's not the same as hammering the CPU with calculations in the while(true) loop, which squeeze all performance from any kind of hardware living nothing for UI if UI sits on the same core.
Suggested solution in the thread that I allegedly duplicated doesn't resolve the issue, and my original question is completely different: I can not figure out what exactly must be put in a task to make UI run smoothly under the load.
Suggestions from the comments under my thread don't answer the question either. I tried the solution from Panagiotis Kanavos (see below) but the problem persists:
while (true)
{
await Task.Delay(500);
await Task.Run(() => sensor.ReportSensors());
}
After researching similar topics it seems like none of them address my particular issue.
You're setting the CPU affinity for the UI thread, then running the test routine on the same thread so it makes sense your UI is hanging during the test. Simplify things and ensure your UI/threading is working properly before you jump into actually performing your test routine.
private int TestingMethod()
{
// set affinity for current thread here when ready
// mock a busy thread by sleeping
System.Threading.Thread.Sleep( 15 * 1000 );
return 42;
}
// don't use `async void`
private async Task PerCoreTest()
{
TestLabel.Text = "Running...";
// we're in the UI thread, so we want to run
// the test in another thread. run a new
// task to do so, await result so the continuation
// will execute back in the UI thread
var score = await Task.Run(() => TestingMethod());
TestLabel.Text = score.ToString();
}
private async Task TestButton_Click(object sender, EventArgs e)
{
await PerCoreTest();
}
Nice and simple. Add something else to the form that updates every second or so, or a button you can click, so you can verify the UI is updating properly while the test routine is running.
Once you've verified that the UI isn't locking up, then you may begin adding substance to your test routine. I suggest just getting a working test routine without processor affinity first.
private int TestingMethod()
{
var score = 0;
// set affinity for current thread here when ready
do
{
// your cpu-frying logic
}
while( /* sentinel condition */ );
return score;
}
Again, verify the UI is responsive during the test, and you can also verify one of your cores is getting abused. Once all that is verified, you may then set the thread affinity INSIDE the TestingMethod() method's implementation (abstracting it to another method call is fine as well, as long as it's called from within TestingMethod's body and isn't run in a Task or another thread). You can pass the mask into TestingMethod as a parameter from the PerCoreTest method.
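For example, a rough sketch of that final shape, assuming the SetThreadAffinityMask/GetCurrentThread P/Invoke declarations and the numberOfCores and TestLabel members from the question; the scoring loop is only a placeholder:
private int TestingMethod(IntPtr affinityMask)
{
    Thread.BeginThreadAffinity(); // pin the managed thread to its OS thread
    try
    {
        SetThreadAffinityMask(GetCurrentThread(), affinityMask);
        var score = 0;
        var timer = System.Diagnostics.Stopwatch.StartNew();
        while (timer.Elapsed.TotalSeconds < 60) // sentinel condition
        {
            score++; // placeholder for the cpu-frying logic
        }
        return score;
    }
    finally
    {
        Thread.EndThreadAffinity();
    }
}
private async Task PerCoreTest()
{
    for (int core = 0; core < numberOfCores; core++)
    {
        var mask = new IntPtr(1 << core);
        // Task.Run keeps the heavy loop off the UI thread; await returns to the UI thread afterwards
        var score = await Task.Run(() => TestingMethod(mask));
        TestLabel.Text = score.ToString();
    }
}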
This should get you on the right track to doing what you want to do. I suggest you spend some quality time reading about multithreading in general and .NET's threading/asynchronous programming model if you plan on continuing with it in the future.

.NET ThreadPool QueueUserWorkItem Synchronization

I am employing ThreadPool.QueueUserWorkItem to play some sound files without hanging up the GUI while doing so.
It is working but has an undesirable side effect.
While the QueueUserWorkItem callback proc is being executed, there is nothing to stop the pool from starting another one. This causes the samples in the threads to overlap.
How can I make it so that it waits for the already running thread to finish running and only then run the next request?
EDIT:
private object sync = new Object();
lock (sync) {
// ...do sound here
}
This works; it plays the sounds in order.
But some samples get played more than once when I keep sending sound requests before the one being played ends. Will investigate.
EDIT: Is the above a result of the lock convoy @Aaronaught mentioned?
This is a classic thread synchronization issue, where you have multiple clients that all want to use the same resource and need to control how they access it. In this particular case, the sound system is willing to play more than one sound at the same time (and this is often desirable), but since you don't want that behavior, you can use standard locking to gate access to the sound system:
public static class SequentialSoundPlayer
{
private static Object _soundLock = new object();
public static void PlaySound(Sound sound)
{
ThreadPool.QueueUserWorkItem(AsyncPlaySound, sound);
}
private static void AsyncPlaySound(Object state)
{
lock (_soundLock)
{
Sound sound = (Sound) state;
//Execute your sound playing here...
}
}
}
where Sound is whatever object you're using to represent a sound to be played. This mechanism is 'first come, first served' when multiple sounds vie for play time.
As mentioned in another response, be careful of excessive 'pile-up' of sounds, as you'll start to tie up the ThreadPool.
You could use a single thread with a queue to play all the sounds.
When you want to play a sound, insert a request into the queue and signal to the playing thread that there is a new sound file to be played. The sound playing thread sees the new request and plays it. Once the sound completes, it checks to see if there are any more sounds in the queue and if so plays the next, otherwise it waits for the next request.
One possible problem with this method is that if you have too many sounds that need to be played you can get an evergrowing backlog so that sounds may come several seconds or possibly even minutes late. To avoid this you might want to put a limit on the queue size and drop some sounds if you have too many.
A very simple producer/consumer queue would be ideal here - since you only have 1 producer and 1 consumer you can do it with minimal locking.
Don't use a critical section (lock statement) around the actual Play method/operation as some people are suggesting, you can very easily end up with a lock convoy. You do need to lock, but you should only be doing it for very short periods of time, not while a sound is actually playing, which is an eternity in computer time.
Something like this:
public class SoundPlayer : IDisposable
{
private int maxSize;
private Queue<Sound> sounds;
private object sync = new Object();
private Thread playThread;
private bool isTerminated;
public SoundPlayer(int maxSize)
{
if (maxSize < 1)
throw new ArgumentOutOfRangeException("maxSize", maxSize,
"Value must be > 1.");
this.maxSize = maxSize;
this.sounds = new Queue<Sound>();
this.playThread = new Thread(new ThreadStart(ThreadPlay));
this.playThread.Start();
}
public void Dispose()
{
isTerminated = true;
lock (sync)
{
Monitor.PulseAll(sync);
}
playThread.Join();
}
public void Play(Sound sound)
{
lock (sync)
{
if (sounds.Count == maxSize)
{
return; // Or throw exception, or block
}
sounds.Enqueue(sound);
Monitor.PulseAll(sync);
}
}
private void PlayInternal(Sound sound)
{
// Actually play the sound here
}
private void ThreadPlay()
{
while (true)
{
Sound sound;
lock (sync)
{
while (!isTerminated && (sounds.Count == 0))
Monitor.Wait(sync);
if (isTerminated)
{
return;
}
sound = sounds.Dequeue();
}
//Play outside the lock so new requests can still be queued while a sound plays
PlayInternal(sound);
}
}
}
This will allow you to throttle the number of sounds being played by setting maxSize to some reasonable limit, like 5, after which point it will simply discard new requests. The reason I use a Thread instead of ThreadPool is simply to maintain a reference to the managed thread and be able to provide proper cleanup.
This only uses one thread, and one lock, so you'll never have a lock convoy, and will never have sounds playing at the same time.
If you're having any trouble understanding this, or need more detail, have a look at Threading in C# and head over to the "Producer/Consumer Queue" section.
The simplest code you could write would be as follows:
private object playSoundSync = new object();
public void PlaySound(Sound someSound)
{
ThreadPool.QueueUserWorkItem(new WaitCallback(delegate
{
lock (this.playSoundSync)
{
PlaySoundInternal(someSound); //the method that actually plays the sound
}
}));
}
Although very simple, it could potentially cause problems:
If you play a lot of (longer) sounds simultaneously, there will be a lot of blocked locks and a lot of thread-pool threads will get used up.
The order you enqueue the sounds is not necessarily the order they will be played back.
In practice, these problems should only be relevant if you play a lot of sounds frequently or if the sounds are very long.
Another option, if you can make the (major) simplifying assumption that any attempts to play a second sound while the first is still playing will just be ignored, is to use a single event:
private AutoResetEvent playEvent = new AutoResetEvent(true);
public void Play(Sound sound)
{
ThreadPool.QueueUserWorkItem(s =>
{
if (playEvent.WaitOne(0))
{
// Play the sound here
playEvent.Set();
}
});
}
This one's dead easy, with the obvious disadvantage that it will simply discard "extra" sounds instead of queuing them. But in this case, it may be exactly what you want, and we get to use the thread pool because this function will return instantly if a sound is already playing. It's basically "lock-free."
As per your edit, create your thread like this:
MySounds sounds = new MySounds(...);
Thread th = new Thread(this.threadMethod);
th.Start(sounds);
And this will be your thread entry point.
private void threadMethod (object obj)
{
MySounds sounds = obj as MySounds;
if (sounds == null) { /* do something */ }
/* play your sounds */
}
The use of ThreadPool is not the error. The error is queueing every sound as a work item. Naturally the thread pool will start more threads. This is what it is supposed to do.
Build your own queue. I have one (AsyncActionQueue). It queues items, and when it has an item it will start a ThreadPool work item - not one per item, ONE (unless one is already queued and not finished). The callback basically dequeues items and processes them.
This allows me to have X queues share Y threads (i.e. not waste threads) and still get very nice async operations. I use that for a complex UI trading application - X windows (6, 8) communicating with a central service cluster (i.e. a number of services), and they all use async queues to move items back and forth (well, mostly forth, towards the UI).
One thing you NEED to be aware of - and that has been said already - is that if you overload your queue, it will fall behind. What to do then depends on you. I have a ping/pong message that gets queued regularly to a service (from the window), and if it is not returned in time, the window goes grey, marking "I am stale" until it catches up.
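A rough sketch of how such a queue might look (this is not the poster's actual AsyncActionQueue; the names and details are illustrative):
// Items are queued from anywhere, but at most ONE ThreadPool work item is in
// flight draining the queue at any time.
public class SingleWorkerQueue<T>
{
    private readonly ConcurrentQueue<T> _items = new ConcurrentQueue<T>();
    private readonly Action<T> _process;
    private int _isDraining; // 0 = idle, 1 = a drain work item is queued or running

    public SingleWorkerQueue(Action<T> process) { _process = process; }

    public void Enqueue(T item)
    {
        _items.Enqueue(item);
        // Only schedule a drain if one is not already scheduled.
        if (Interlocked.CompareExchange(ref _isDraining, 1, 0) == 0)
            ThreadPool.QueueUserWorkItem(_ => Drain());
    }

    private void Drain()
    {
        T item;
        while (_items.TryDequeue(out item))
            _process(item);
        Interlocked.Exchange(ref _isDraining, 0);
        // An item may have slipped in after the last TryDequeue but before the
        // flag reset; if so, schedule another drain.
        if (!_items.IsEmpty && Interlocked.CompareExchange(ref _isDraining, 1, 0) == 0)
            ThreadPool.QueueUserWorkItem(_ => Drain());
    }
}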
Microsoft's new TPL Dataflow Library could be a good solution for this sort of thing. Check out the video here - the first code example demonstrated fits your requirements pretty much exactly.
http://channel9.msdn.com/posts/TPL-Dataflow-Tour
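For illustration, a minimal sketch of that idea, assuming a Sound type like the one used in the answers above (an ActionBlock's default MaxDegreeOfParallelism of 1 gives strictly sequential playback):
// Requires the System.Threading.Tasks.Dataflow NuGet package.
var soundQueue = new ActionBlock<Sound>(sound =>
{
    // ...actually play the sound here...
},
new ExecutionDataflowBlockOptions { BoundedCapacity = 5 }); // cap the backlog; Post returns false when full

// Callers just post sounds from any thread:
soundQueue.Post(someSound);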

Multiple Threads

I post a lot here regarding multithreading, and the great Stack Overflow community has helped me a lot in understanding multithreading.
All the examples I have seen online only deal with one thread.
My application is a scraper for an insurance company (family company ... all free of charge). Anyway, the user is able to select how many threads they want to run. So let's say, for example, the user wants the application to scrape 5 sites at one time, and then later in the day he chooses 20 threads because his computer isn't doing anything else and has the resources to spare.
Basically the application builds a list of say 1000 sites to scrape. A thread goes off and does that and updates the UI and builds the list.
When thats finished another thread is called to start the scraping. Depending on the number of threads the user has set to use it will create x number of threads.
What's the best way to create these threads? Should I create 1000 threads in a list and loop through them? If the user has set 5 threads to run, it would loop through 5 at a time.
I understand threading, but it's the application logic which is catching me out.
Any ideas or resources on the web that can help me out?
You could consider using a thread pool for that:
using System;
using System.Threading;
public class Example
{
public static void Main()
{
ThreadPool.SetMaxThreads(100, 10);
// Queue the task.
ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc));
Console.WriteLine("Main thread does some work, then sleeps.");
Thread.Sleep(1000);
Console.WriteLine("Main thread exits.");
}
// This thread procedure performs the task.
static void ThreadProc(Object stateInfo)
{
Console.WriteLine("Hello from the thread pool.");
}
}
This scraper, does it use a lot of CPU when it's running?
If it does a lot of communication with these 1000 remote sites, downloading their pages, that may be taking more time than the actual analysis of the pages.
And how many CPU cores does your user have? If they have 2 (which is common these days) then beyond two simultaneous threads performing analysis, they aren't going to see any speed up.
So you probably need to "parallelize" the downloading of the pages. I doubt you need to do the same for the analysis of the pages.
Take a look into asynchronous IO, instead of explicit multi-threading. It lets you launch a bunch of downloads in parallel and then get called back when each one completes.
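As a rough, modern sketch of that approach (HttpClient with async/await; urls, maxConcurrency, and AnalyzePage are illustrative names, not from the question):
// Download pages with async IO, capping the number of concurrent downloads.
var http = new HttpClient();
var throttle = new SemaphoreSlim(maxConcurrency); // e.g. the user-selected 5 or 20
var downloads = urls.Select(async url =>
{
    await throttle.WaitAsync();
    try
    {
        var html = await http.GetStringAsync(url); // no thread is blocked while the download runs
        AnalyzePage(url, html);                    // hypothetical CPU-bound analysis
    }
    finally
    {
        throttle.Release();
    }
});
await Task.WhenAll(downloads); // run from an async method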
If you really just want the application, use something someone else already spent time developing and perfecting:
http://arachnode.net/
arachnode.net is a complete and comprehensive .NET web crawler for downloading, indexing and storing Internet content including e-mail addresses, files, hyperlinks, images, and Web pages.
Whether interested or involved in screen scraping, data mining, text mining, research or any other application where a high-performance crawling application is key to the success of your endeavors, arachnode.net provides the solution you need for success.
If you also want to write one yourself because it's a fun thing to write (I wrote one not long ago, and yes, it is a lot of fun), then you can refer to this PDF provided by arachnode.net, which explains in detail the theory behind a good web crawler:
http://arachnode.net/media/Default.aspx?Sort=Downloads&PageIndex=1
Download the pdf entitled: "Crawling the Web" (second link from top). Scroll to Section 2.6 entitled: "2.6 Multi-threaded Crawlers". That's what I used to build my crawler, and I must say, I think it works quite well.
I think this example is basically what you need.
public class WebScraper
{
private readonly int totalThreads;
private readonly List<System.Threading.Thread> threads;
private readonly List<Exception> exceptions;
private readonly object locker = new object();
private volatile bool stop;
public WebScraper(int totalThreads)
{
this.totalThreads = totalThreads;
threads = new List<System.Threading.Thread>(totalThreads);
exceptions = new List<Exception>();
for (int i = 0; i < totalThreads; i++)
{
var thread = new System.Threading.Thread(Execute);
thread.IsBackground = true;
threads.Add(thread);
}
}
public void Start()
{
foreach (var thread in threads)
{
thread.Start();
}
}
public void Stop()
{
stop = true;
foreach (var thread in threads)
{
if (thread.IsAlive)
{
thread.Join();
}
}
}
private void Execute()
{
try
{
while (!stop)
{
// Scrape away!
}
}
catch (Exception ex)
{
lock (locker)
{
// You could have a thread checking this collection and
// reporting it as you see fit.
exceptions.Add(ex);
}
}
}
}
The basic logic is:
You have a single queue into which you put the URLs to scrape; then you create your threads and give every thread access to that queue object. Let each thread run a loop (a sketch follows the list):
lock the queue
check if there are items in the queue, if not, unlock queue and end thread
dequeue first item in the queue
unlock queue
process item
invoke an event that updates the UI (Remember to lock the UI Controller)
return to step 1
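A minimal sketch of that loop, assuming a shared Queue<string> of URLs protected by a lock object; ProcessSite and OnSiteScraped are illustrative names:
// Shared state: created once, visible to every worker thread.
private readonly Queue<string> urlQueue = new Queue<string>();
private readonly object queueLock = new object();

private void WorkerLoop()
{
    while (true)
    {
        string url;
        lock (queueLock)              // step 1: lock the queue
        {
            if (urlQueue.Count == 0)  // step 2: nothing left,
                return;               //         so release the lock and end the thread
            url = urlQueue.Dequeue(); // step 3: take the first item
        }                             // step 4: the lock is released here
        ProcessSite(url);             // step 5: process the item outside the lock
        OnSiteScraped(url);           // step 6: raise an event; marshal any UI update to the UI thread
    }
}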
Just let the Threads do the "get stuff from the queue" part (pulling the jobs) instead of giving them the urls (pushing the jobs), that way you just say
YourThreadManager.StartThreads(numberOfThreadsTheUserWants);
and everything else happens automagically. See the other replies to find out how to create and manage the threads.
I solved a similar problem by creating a worker class that uses a callback to signal the main app that a worker is done. Then I create a queue of 1000 threads and then call a method that launches threads until the running thread limit is reached, keeping track of the active threads with a dictionary keyed by the thread's ManagedThreadId. As each thread completes, the callback removes its thread from the dictionary and calls the thread launcher.
If a connection is dropped or times out, the callback reinserts the thread back into the queue. Lock around the queue and the dictionary. I create threads rather than using the thread pool because the overhead of creating a thread is insignificant compared to the connection time, and it allows me to have a lot more threads in flight. The callback also provides a convenient place from which to update the user interface, even allowing you to change the thread limit while it's running. I've had over 50 open connections at one time. Remember to increase the maxconnection setting in your app.config (the default is two).
I would use a queue and a condition variable and mutex, and start just the requested number of threads, for example, 5 or 20 (and not start 1,000).
Each thread blocks on the condition variable. When woken up, it dequeues the first item, unlocks the queue, works with the item, locks the queue and checks for more items. If the queue is empty, sleep on the condition variable. If not, unlock, work, repeat.
While the mutex is locked, it can also check if the user has requested the count of threads to be reduced. Just check if count > max_count, and if so, the thread terminates itself.
Any time you have more sites to queue, just lock the mutex and add them to the queue, then broadcast on the condition variable. Any threads that are not already working will wake up and take new work.
Any time the user increases the requested thread count, just start them up and they will lock the queue, check for work, and either sleep on the condition variable or get going.
Each thread will be continually pulling more work from the queue, or sleeping. You don't need more than 5 or 20.
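A rough sketch of that pattern using Monitor as the condition variable; workQueue, queueLock, activeThreadCount, desiredThreadCount, and ScrapeSite are illustrative names rather than anything from the question:
// Each worker thread runs this loop.
private void Worker()
{
    while (true)
    {
        string url;
        lock (queueLock)
        {
            // Shrink the pool if the user lowered the thread count.
            if (activeThreadCount > desiredThreadCount)
            {
                activeThreadCount--;
                return; // this thread terminates itself
            }
            while (workQueue.Count == 0)
                Monitor.Wait(queueLock); // sleep until new work is pulsed in
            url = workQueue.Dequeue();
        }
        ScrapeSite(url); // work with the item outside the lock
    }
}

// Producer side: enqueue new sites and wake any sleeping workers.
private void AddWork(IEnumerable<string> urls)
{
    lock (queueLock)
    {
        foreach (var url in urls)
            workQueue.Enqueue(url);
        Monitor.PulseAll(queueLock); // "broadcast on the condition variable"
    }
}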
Consider using the event-based asynchronous pattern (AsyncOperation and AsyncOperationManager Classes)
You might want to take a look at the ProcessQueue article on CodeProject.
Essentially, you'll want to create (and start) the appropriate number of threads; in your case that number comes from the user. Each of these threads should process a site, then find the next site that needs processing. Even if you don't use the object itself (though it sounds like it would suit your purposes pretty well; I'm obviously biased!), it should give you some good insight into how this sort of thing would be done.

Waiting for either of these BackgroundWorker to complete

There is a sequence of FORMs (some UI) that should be downloaded using a service.
Currently, this download runs in a BackgroundWorker thread.
Now, since the performance is slow, we decided to split the FORMs into 2 categories and start downloading them in parallel using another BackgroundWorker on top of the existing thread.
Now, the scenario is that either of these BackgroundWorkers should wait for the other to complete.
So, how do I implement that?
I tried with AutoResetEvent, but I could not achieve this.
Any help is appreciated.
I don't think that the scenario is really that one BackgroundWorker should wait for another. What you really want is to fire some UI event after (and only after) both of them complete. It's a subtle but important difference; the second version is a lot easier to code.
public class Form1 : Form
{
private object download1Result;
private object download2Result;
private void BeginDownload()
{
// Next two lines are only necessary if this is called multiple times
download1Result = null;
download2Result = null;
bwDownload1.RunWorkerAsync();
bwDownload2.RunWorkerAsync();
}
private void bwDownload1_RunWorkerCompleted(object sender,
RunWorkerCompletedEventArgs e)
{
download1Result = e.Result;
if (download2Result != null)
DisplayResults();
}
private void bwDownload2_RunWorkerCompleted(object sender,
RunWorkerCompletedEventArgs e)
{
download2Result = e.Result;
if (download1Result != null)
DisplayResults();
}
private void DisplayResults()
{
// Do something with download1Result and download2Result
}
}
Note that those object references should be strongly-typed, I just used object because I don't know what you're downloading.
This is really all you need; the RunWorkerCompleted event runs in the foreground thread so you actually don't need to worry about synchronization or race conditions in there. No need for lock statements, AutoResetEvent, etc. Just use two member variables to hold the results, or two boolean flags if the result of either can actually be null.
You should be able to use two AutoResetEvents and WaitHandle.WaitAll to wait for both to complete. Call the Set method on the AutoResetEvent objects in the respective RunWorkerCompleted event handlers.
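If you do go that route, a sketch of the idea (reusing the bwDownload handler names from the answer above) is to Set an event in each completion handler and do the WaitAll on a worker thread, never on the UI thread:
private readonly AutoResetEvent done1 = new AutoResetEvent(false);
private readonly AutoResetEvent done2 = new AutoResetEvent(false);

private void bwDownload1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    done1.Set();
}
private void bwDownload2_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    done2.Set();
}

private void WaitForBothDownloads()
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        WaitHandle.WaitAll(new WaitHandle[] { done1, done2 }); // blocking, so keep it off the UI thread
        BeginInvoke(new Action(DisplayResults));               // marshal back to the UI thread
    });
}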
Jeffrey Richter is THE guru when it comes to multithreading, and he's written an amazing library called the Power Threading Library which makes tasks like downloading n files asynchronously and continuing after they are all completed (or one, or some) really simple.
Take a little time out to watch the video, learn about it and you won't regret it. Using the power threading library (which is free and has a Silverlight and Compact Framework version also) also makes your code easier to read, which is a big advantage when doing any async stuff.
Good luck,
Mark
int completedCount = 0;
void threadProc1() { //your thread1 proc
//do something
....
Interlocked.Increment(ref completedCount);
while (Volatile.Read(ref completedCount) < 2) Thread.Sleep(10);
//now both threads are done
}
void threadProc2() { //your thread2 proc
//do something
....
Interlocked.Increment(ref completedCount);
while (Volatile.Read(ref completedCount) < 2) Thread.Sleep(10);
//now both threads are done
}
Just use 2 BackgroundWorker objects, and have each one alert the UI when it completes. That way you can display a spinner, progress bar, whatever on the UI and update it as download results come back from the threads. You will also avoid any risks of thread deadlocking, etc.
By the way, just so we are all clear, you should NEVER call a blocking function such as WaitAll from the UI thread. It will cause the UI to completely lock up, which will make your users wonder WTF is going on :)

Windows service scheduled execution

If I have a Windows Service that needs to execute a task every 30 seconds, which is better to use: the Timer class, or a loop that executes the task and then sleeps for a number of seconds?
class MessageReceiver
{
public MessageReceiver()
{
}
public void CommencePolling()
{
while (true)
{
try
{
this.ExecuteTask();
System.Threading.Thread.Sleep(30000);
}
catch (Exception)
{
// log the exception
}
}
}
public void ExecuteTask()
{
// do stuff
}
}
class MessageReceiver
{
public MessageReceiver()
{
}
public void CommencePolling()
{
var timer = new Timer()
{
AutoReset = true,
Interval = 30000,
Enabled = true
};
timer.Elapsed += Timer_Tick;
}
public void Timer_Tick(object sender, ElapsedEventArgs args)
{
try
{
// do stuff
}
catch (Exception)
{
// log the exception
}
}
}
The Windows service will create an instance of the MessageReceiver class and execute the CommencePolling method on a new thread.
I think it really depends on your requirement.
case 1.
Suppose you want to run this.ExecuteTask() every five minutes starting from 12:00AM (i.e., 12:00, 12:05, ...) and suppose the execution time of this.ExecuteTask() varies (for example, from 30 sec to 2 min), maybe using timer instead of Thread.Sleep() seems to be an easier way of doing it (at least for me).
However, you can achieve this behavior with Thread.Sleep() as well, by calculating the offset from timestamps taken when the thread wakes up and when this.ExecuteTask() completes.
case 2.
Suppose you want to perform the task 5 minutes after each completion of this.ExecuteTask(); then using Thread.Sleep() seems to be easier. Again, you can achieve this behavior with a timer as well, by resetting the timer and recalculating the offset every time this.ExecuteTask() completes.
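For case 1, here is a rough sketch of the offset calculation with Thread.Sleep(), assuming a 5-minute period (see Note 1 below for what happens when the task overruns the period):
// Fixed-rate polling: sleep only for whatever is left of the current period.
var period = TimeSpan.FromMinutes(5);
var next = DateTime.UtcNow;
while (true)
{
    this.ExecuteTask();
    next = next + period;                   // the next scheduled start time
    var remaining = next - DateTime.UtcNow; // whatever the task's run time left over
    if (remaining > TimeSpan.Zero)
        System.Threading.Thread.Sleep(remaining);
    // if remaining <= zero, the task overran the period; see Note 1 below
}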
Note 1: for case 1, you should be very careful in the following scenario: what if this.ExecuteTask() sometimes takes longer than the period (e.g. it starts at 12:05 and completes at 12:13 in the example above)?
What does this mean to your application and how will it be handled?
a. Total failure - abort the service, or abort the current (12:05) execution at 12:10 and launch the 12:10 execution.
b. Not a big deal (skip 12:10 one and run this.ExecuteTask() at 12:15).
c. Not a big deal, but need to launch 12:10 execution immediately after 12:05 task finishes (what if it keeps taking more than 5 min??).
d. Need to launch 12:10 execution even though 12:05 execution is currently running.
e. anything else?
For the policy you select above, does your choice of implementation (either timer or Thread.Sleep()) make it easy to support that policy?
Note 2: There are several timers you can use in .NET. Please see the following document (even though it's a bit aged, it seems to be a good start): Comparing the Timer Classes in the .NET Framework Class Library
Are you doing anything else during that 30-second wait? Using Thread.Sleep would block, preventing you from doing other things. From a performance point of view I don't think you'd see too much difference, but I would avoid using Thread.Sleep myself.
There are three timers to choose from - System.Windows.Forms.Timer runs on the main (UI) thread, whereas System.Timers.Timer and System.Threading.Timer raise their callbacks on separate (thread pool) threads.
I believe both methods are equivalent. There will be a thread either way: either because you create one, or because the library implementing the Timer class creates one.
Using the Timer class might be slightly less expensive resource-wise, since the thread implementing timers probably monitors other timeouts as well.
I think the answers to this question will help.
Not answered by me but by John Saunders (above)... the answer can be found here: For a windows service, which is better, a wait-spin or a timer?
