I have written a window manager for my program which keeps certain windows open for the life of the program (on background threads), if the user wants them open.
I just implemented an action for the contacts window. The problem is that the action works when the window is already open, but if the action is invoked when the window isn't open yet, the window opens without carrying out the action (pressing the button again will carry out the action).
The code:
private static SetupContacts _contactsWindow;
private static Thread _contactthread;
public static void ShowContact(repUserObject uo, ContactFormAction action, int contactID)
{
if (_contactsWindow == null)
CreateContactThread(uo, contactID);
// make sure it is still alive
if (!_contactthread.IsAlive)
CreateContactThread(uo, contactID);
if (_contactsWindow != null)
{
_contactsWindow.BringToFront();
_contactsWindow.Focus();
switch (action)
{
case ContactFormAction.ViewContact:
if (contactID > 0)
_contactsWindow.LoadCustomer(contactID); // load the contact
break;
case ContactFormAction.AddNewContact:
_contactsWindow.AddCustomer();
break;
}
}
}
private static void CreateContactThread(repUserObject uo, int contactID)
{
if (_contactthread == null || !_contactthread.IsAlive)
{
_contactthread = new Thread(delegate()
{
_contactsWindow = new SetupContacts(uo, contactID);
_contactsWindow.CerberusContactScreenClosed += delegate { _contactsWindow = null; };
_contactsWindow.CerberusContactHasBeenSaved += delegate(object sender, ContactBeenSavedEventArgs args)
{
if (CerberusContactHasBeenSaved != null)
CerberusContactHasBeenSaved.Raise(sender, args);
};
Application.EnableVisualStyles();
BonusSkins.Register();
SkinManager.EnableFormSkins();
UserLookAndFeel.Default.SetSkinStyle("iMaginary");
Application.Run(_contactsWindow);
});
_contactthread.SetApartmentState(ApartmentState.STA);
_contactthread.Start();
}
}
What happens when the routine runs for the first time (by calling ShowContact) is that it hits the first if statement and goes into the CreateContactThread() routine. That does its job, but when it returns, _contactsWindow is still null. The next time the routine is called (i.e., by pressing the button a second time), it all works fine because _contactsWindow is no longer null.
How do I get it to do it all in one go?
I am in vehement agreement with commenter Blorgbeard, who advises that it's a bad idea to run more than one UI thread. The API itself works best when used in a single thread, and many of the kinds of actions and operations one might want to do in code with respect to the UI objects are most easily handled in a single thread, because doing so inherently ensures things happen in the order one expects (e.g. variables are initialized before being used).
That said, if for some reason you really must run your new window in a different thread, you can synchronize the two threads so that the initial thread cannot proceed until the new thread has gotten far enough for the operations you want to perform on the newly-initialized object to have a reasonable chance of success (including, of course, that object having been created in the first place).
There are lots of techniques for synchronizing threads, but I prefer the new TaskCompletionSource<T> object. It's simple to use, and if and when you update the code to use async/await, it will readily mesh with that.
For example:
public static void ShowContact(repUserObject uo, ContactFormAction action, int contactID)
{
CreateContactThread(uo, contactID);
if (_contactsWindow != null)
{
_contactsWindow.BringToFront();
_contactsWindow.Focus();
switch (action)
{
case ContactFormAction.ViewContact:
if (contactID > 0)
_contactsWindow.LoadCustomer(contactID); // load the contact
break;
case ContactFormAction.AddNewContact:
_contactsWindow.AddCustomer();
break;
}
}
}
private static void CreateContactThread(repUserObject uo, int contactID)
{
if (_contactthread == null || !_contactthread.IsAlive)
{
TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();
_contactthread = new Thread(delegate()
{
_contactsWindow = new SetupContacts(uo, contactID);
_contactsWindow.CerberusContactScreenClosed += delegate { _contactsWindow = null; };
_contactsWindow.CerberusContactHasBeenSaved += delegate(object sender, ContactBeenSavedEventArgs args)
{
if (CerberusContactHasBeenSaved != null)
CerberusContactHasBeenSaved.Raise(sender, args);
};
_contactsWindow.Load += (sender, e) =>
{
tcs.SetResult(true);
};
Application.EnableVisualStyles();
BonusSkins.Register();
SkinManager.EnableFormSkins();
UserLookAndFeel.Default.SetSkinStyle("iMaginary");
Application.Run(_contactsWindow);
});
_contactthread.SetApartmentState(ApartmentState.STA);
_contactthread.Start();
tcs.Task.Wait();
}
}
Notes:
You had what appear to me to be redundant checks in your code. The CreateContactThread() method itself checks for null and !IsAlive, and starts a new thread if the existing one is null or no longer running. So in theory, by the time that method returns, the caller should be guaranteed that everything has been initialized as desired, and you should only have to call the method once. So I changed the code to do just that: call the method exactly once, and do so unconditionally (since the method will simply do nothing if there is nothing to do).
The calling thread will wait in the CreateContactThread() method after starting the new thread, until the new window's Load event has been raised. Of course, the window object itself is created earlier than that, and you could in fact release the calling thread at that point. But it seems likely to me that you want the window object fully initialized before you start trying to do things to it, so I've delayed the synchronization until then.
As Blorgbeard has noted, one of the risks of running UI objects in multiple threads is that it's harder to access those objects without getting InvalidOperationExceptions. Even if it works, you should not really be accessing _contactsWindow outside of the thread where it was created, but the code above does just that (i.e. calls BringToFront(), Focus(), LoadCustomer(), and AddCustomer() from the original thread). I make no assurances that the code above is actually fully correct. Only that it addresses the primary synchronization issue that you are asking about.
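Purely as an illustration of marshaling those calls onto the window's own thread, here is a hedged sketch; the RunOnContactsWindowThread helper is a name I've made up, not part of the original code, and it assumes the same _contactsWindow field shown above.
private static void RunOnContactsWindowThread(Action action)
{
    SetupContacts window = _contactsWindow;   // copy the field once to reduce races
    if (window == null || window.IsDisposed)
        return;
    if (window.InvokeRequired)
        window.BeginInvoke(action);           // queue the work on the window's own UI thread
    else
        action();                             // already on the right thread
}
Calls such as _contactsWindow.BringToFront() or _contactsWindow.LoadCustomer(contactID) could then be wrapped in RunOnContactsWindowThread(() => ...) from ShowContact().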
Speaking of other possible bugs, you probably have an unresolved race condition, in that the new contacts-form thread might be exiting just as you are checking its IsAlive property. If you check the property just before it exits, but then try to access the thread and/or the window after it has exited, your code is likely to do something bad (like crash with an exception). This is yet another example of something that would be a lot easier to address if all of your UI objects were being handled in a single thread.
I admit that some of the above is speculative. It's impossible for me to say for sure how your code will behave without seeing a good, minimal, complete code example. But I feel the likelihood of all of the above being accurate and applicable is very high. :)
I have a simple C# WinForms app where I spawn a new thread to show another WinForm. After a process is completed I want to close that form using the code below. The issue I have is that when I call busyForm.BeginInvoke, it bypasses the null check and throws an error. How do I correctly close the WinForm in another thread?
static Indicator busyForm;
public static async Task Execute()
{
Thread busyIndicatorthread = new Thread(new ThreadStart(()=>FormThread()));
busyIndicatorthread.SetApartmentState(ApartmentState.STA);
busyIndicatorthread.Start();
}
private static void FormThread()
{
busyForm = new Indicator();
busyForm.Closed += (sender2, e2) => busyForm.Dispatcher.InvokeShutdown();
Dispatcher.Run();
}
public static Task Execute(){
Thread busyIndicatorthread = new Thread(new ThreadStart(()=>FormThread(hwind)));
busyIndicatorthread.SetApartmentState(ApartmentState.STA);
busyIndicatorthread.Start();
// dos some stuff
if (busyForm != null)
{
busyForm.BeginInvoke(new System.Action(() => busyForm.Close())); // <-- throws a null reference error
busyForm = null;
}
}
That is because, by the time the .Close() method is actually called, time has passed and it is not assured that busyForm still exists.
In fact, it is possible that, while the thread running new System.Action(() => busyForm.Close()) is starting, your main thread has already moved on to busyForm = null;.
You can try moving the null assignment into the secondary thread:
if (busyForm != null)
{
busyForm.BeginInvoke(new System.Action(() =>
{
lock(busyForm){
busyForm.Close();
busyForm = null;
}
}));
}
Almost no application starts another message pump to display notifications. It's not needed. In all applications, the busy and progress dialog boxes are generated and displayed by the UI thread. Operations that could block are performed in the background, e.g. in a background thread or, far better, using async/await and Task.Run. The UI is updated using events or callbacks, e.g. using the Progress<T> class.
In this case though, it seems all that's needed is to display a form before a long-running task and hide it afterward:
public async void btnDoStuff_Async(object sender, EventArgs args)
{
//Disable controls, display indicator, etc
btnDoStuff.Enabled=false;
using var busyForm = new Indicator();
busyForm.Show();
try
{
var result=await Task.Run(()=> ActuallyDoStuffAndReturnResult());
//Back in the UI form
//Do something with the result
}
finally
{
//Close the busy indicator, re-enable buttons etc.
busyForm.Close();
btnDoStuff.Enabled=true;
}
}
The finally block ensures the UI is enabled and the busy form hidden even in case of error.
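As an illustration of the Progress<T> callback mentioned above, here is a hedged sketch; the lblStatus label and the handler name are assumptions, not part of the original answer. Progress<T> captures the UI synchronization context when it is constructed, so the callback is posted back to the UI thread.
private async void btnDoStuffWithProgress_Click(object sender, EventArgs e)
{
    // Constructed on the UI thread, so the callback runs back on the UI thread.
    IProgress<int> progress = new Progress<int>(percent => lblStatus.Text = percent + "% done");
    await Task.Run(() =>
    {
        for (int i = 0; i <= 100; i++)
        {
            Thread.Sleep(50);      // stand-in for a unit of real work
            progress.Report(i);    // safe to call from the background thread
        }
    });
    lblStatus.Text = "Done";
}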
20+ years ago some Visual Basic 6 applications did start another Window message pump to act as a "server". Visual Basic 6 threading was very quirky, so people used various tricks to get around its limitations.
When you write this code:
busyForm.BeginInvoke(new System.Action(() => busyForm.Close())); // <-- throws a null reference error
busyForm = null;
The order in which it executes is almost certainly this:
busyForm = null;
busyForm.Close();
No wonder you're getting a null reference exception!
Simply set the form to null in your invoke. That'll fix it.
However, the correct way to do this is as Panagiotis Kanavos suggests.
I know the common ways of cancelling a BackgroundWorker using EventWaitHandles, but I want to know: is it right to use a while loop to trap and pause the work of a BackgroundWorker? I coded it like this:
Bool stop = false;
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
progressBar1.Minimum = 0;
progressBar1.Maximum = 100000;
progressBar1.Value = 0;
for (int i = 0; i < 100000; i++)
{
progressBar1.Value++;
if (i == 50000)
stop = true;
while (stop)
{ }
}
}
private void button1_Click(object sender, EventArgs e)
{
stop = !stop;
}
Did you try it? What happened? Was it what you wanted to happen? Did you notice your computer's fans speeding up, to handle all the heat from your CPU in a tight, "do-nothing" loop?
Fact is, you should not "pause" a background task in the first place; if you don't want it to keep running, interrupt it. If you want to be able to resume later, provide a mechanism to allow that. Even having your thread blocked efficiently, waiting on a WaitHandle object, would be the wrong thing to do, because it wastes a thread pool thread.
The code you've posted here is about the worst way to implement "pausing". Instead of waiting on some synchronization object such as a WaitHandle, you have the current thread spin in a tight loop, constantly checking the value of a flag. Even ignoring the question of whether you're using volatile (the code example doesn't show that, but then it also wouldn't compile, so…), it's terrible to force a CPU core to do so much work and yet get nowhere.
Don't pause your BackgroundWorker.DoWork handler in the first place. Really. Just don't do that. But if you insist, then at least use some kind of waitable object instead of a "spin-wait" loop as in the example you've posted here.
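If you do insist on the waitable-object route, a minimal sketch (my own, not from the question) using ManualResetEventSlim inside the DoWork handler might look like the following. Note that it still ties up the worker thread while paused, which the example further below avoids, and it assumes WorkerReportsProgress is true with a ProgressChanged handler that advances progressBar1.
// Assumed field on the form; starts signaled, i.e. not paused.
private readonly ManualResetEventSlim _resume = new ManualResetEventSlim(true);
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    for (int i = 0; i < 100000; i++)
    {
        backgroundWorker1.ReportProgress(i);  // marshals the progress update to the UI thread
        _resume.Wait();                       // blocks efficiently while "paused"
    }
}
private void button1_Click(object sender, EventArgs e)
{
    // Toggle between paused and running.
    if (_resume.IsSet) _resume.Reset();
    else _resume.Set();
}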
Here's an example of how your code might work if you wanted to avoid altogether tying up a thread while "paused". First, don't use BackgroundWorker, because it doesn't have a graceful way to do this. Second, do use await…that does specifically what you want: it allows the current method to return, but without losing track of its progress. The method will resume executing when the thing it waited on indicates completion.
In the example below, I've tried to guess at what the code that calls RunWorkerAsync() looks like. Or rather, I just assumed you've got a button2, which when clicked you call that method to start your worker task. If this is not enough to get you pointed in the right direction, please improve your question by including a good, minimal, complete code example showing what you're actually doing.
// These fields will work together to provide a way for the thread to interrupt
// itself temporarily without actually using a thread at all.
private TaskCompletionSource<object> _pause;
private readonly object _pauseLock = new object();
private void button2_Click(object sender, EventArgs e)
{
// Initialize ProgressBar. Note: in your version of the code, this was
// done in the DoWork event handler, but that handler isn't executed in
// the UI thread, and so accessing a UI object like progressBar1 is not
// a good idea. If you got away with it, you were lucky.
progressBar1.Minimum = 0;
progressBar1.Maximum = 100000;
progressBar1.Value = 0;
// This object will perform the duty of the BackgroundWorker's
// ProgressChanged event and ReportProgress() method.
IProgress<int> progress = new Progress<int>(i => progressBar1.Value++); // declared as IProgress<int> so Report() is accessible
// We do want the code to run in the background. Use Task.Run() to accomplish that
Task.Run(async () =>
{
for (int i = 0; i < 100000; i++)
{
progress.Report(i);
Task task = null;
// Locking ensures that the two threads which may be interacting
// with the _pause object do not interfere with each other.
lock (_pauseLock)
{
if (i == 50000)
{
// We want to pause. But it's possible we lost the race with
// the user, who also just pressed the pause button. So
// only allocate a new TCS if there isn't already one
if (_pause == null)
{
_pause = new TaskCompletionSource<object>();
}
}
// If by the time we get here, there's a TCS to wait on, then
// set our local variable for the Task to wait on. In this way
// we resolve any other race that might occur between the time
// we checked the _pause object and then later tried to wait on it
if (_pause != null)
{
task = _pause.Task;
}
}
if (task != null)
{
// This is the most important part: using "await" tells the method to
// return, but in a way that will allow execution to resume later.
// That is, when the TCS's Task transitions to the completed state,
// this method will resume executing, using any available thread
// in the thread pool.
await task;
// Once we resume execution here, reset the TCS, to allow the pause
// to go back to pausing again.
lock (_pauseLock)
{
_pause = null;   // TaskCompletionSource has no Dispose(); just drop the reference
}
}
}
});
}
private void button1_Click(object sender, EventArgs e)
{
lock (_pauseLock)
{
// A bit more complicated than toggling a flag, granted. But it achieves
// the desirable goal.
if (_pause == null)
{
// Creates the object to wait on. The worker thread will look for
// this and wait if it exists.
_pause = new TaskCompletionSource<object>();
}
else if (!_pause.Task.IsCompleted)
{
// Giving the TCS a result causes its corresponding Task to transition
// to the completed state, releasing any code that might be waiting
// on it.
_pause.SetResult(null);
}
}
}
Note that the above is just as contrived as your original example. If all you really had was a single loop variable iterating from 0 to 100,000 and stopping halfway through, nothing nearly so complicated as the above would be required. You'd just store the loop variable somewhere, exit the running task, and then when you want to resume, pass the current loop variable value back in so the method can resume at the right index.
But I'm assuming your real-world example is not so simple. And the above strategy will work for any stateful processing, with the compiler doing all the heavy-lifting of storing away intermediate state for you.
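To make that simpler store-the-index alternative concrete, here is a hedged sketch; the field and method names are mine, not from the original answer.
// Assumed field: where the loop left off last time.
private int _resumeIndex;
private Task RunCountingAsync(CancellationToken token)
{
    return Task.Run(() =>
    {
        for (int i = _resumeIndex; i < 100000; i++)
        {
            if (token.IsCancellationRequested)
            {
                _resumeIndex = i;   // remember where we stopped
                return;
            }
            // ... do one unit of work ...
        }
        _resumeIndex = 0;           // finished; start over next time
    });
}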
Many times in UI development I handle events in such a way that when an event first comes, I immediately start processing, but if a processing operation is already in progress, I wait for it to complete before I process another event. If more than one event occurs before the operation completes, I only process the most recent one.
The way I typically do that: my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put the current event arguments in another field that is basically a one-item-sized buffer; when the current processing pass completes, I check whether there is another event to process, and I loop until I am done.
Now this seems a bit too repetitive and possibly not the most elegant way to do it, though it seems to otherwise work fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
private Func<Task> next = null;
private bool isRunning = false;
public async void Run(Func<Task> action)
{
if (isRunning)
next = action;
else
{
isRunning = true;
try
{
await action();
while (next != null)
{
var nextCopy = next;
next = null;
await nextCopy();
}
}
finally
{
isRunning = false;
}
}
}
private static Lazy<EventThrottler> defaultInstance =
new Lazy<EventThrottler>(() => new EventThrottler());
public static EventThrottler Default
{
get { return defaultInstance.Value; }
}
}
Because the class is, at least generally, going to be used exclusively from the UI thread there will generally need to be only one, so I added a convenience property of a default instance, but since it may still make sense for there to be more than one in a program, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
public void SomeEventHandler(object sender, EventArgs args)
{
EventThrottler.Default.Run(async () =>
{
await Task.Delay(1000);
//do other stuff
});
}
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said that you assume they're all called from the UI thread, but I generalized it a bit. This means locking over all access to the instance fields of the type in a lock block, but not actually executing the function inside of a lock block. That last part is important not just for performance, to ensure we're not blocking items from just setting the next field, but also so that the action can itself call Run without running into re-entrancy issues or potential deadlocks. This pattern, of doing work in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
private object key = new object();
private Func<Task> next = null;
private bool isRunning = false;
public void Run(Func<Task> action)
{
bool shouldStartRunning = false;
lock (key)
{
if (isRunning)
next = action;
else
{
isRunning = true;
shouldStartRunning = true;
}
}
Action<Task> continuation = null;
continuation = task =>
{
Func<Task> nextCopy = null;
lock (key)
{
if (next != null)
{
nextCopy = next;
next = null;
}
else
{
isRunning = false;
}
}
if (nextCopy != null)
nextCopy().ContinueWith(continuation);
};
if (shouldStartRunning)
action().ContinueWith(continuation);
}
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of 1, as every item added to a non-empty queue will result in the existing item being evicted.
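Purely to illustrate the terminology, here is a hedged sketch of a single-slot collapsing buffer for equivalent events; it is my own illustration, not an existing type.
// A one-item collapsing buffer: a newer item silently replaces the pending one.
public sealed class CollapsingSlot<T>
{
    private readonly object _gate = new object();
    private bool _hasItem;
    private T _item;
    public void Offer(T item)
    {
        lock (_gate)
        {
            _item = item;       // overwrite any stale pending item
            _hasItem = true;
        }
    }
    public bool TryTake(out T item)
    {
        lock (_gate)
        {
            item = _item;
            bool had = _hasItem;
            _hasItem = false;
            _item = default(T);
            return had;
        }
    }
}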
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
var broadcastBlock = new BroadcastBlock<T>(null);
var actionBlock = new ActionBlock<T>(
executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
broadcastBlock.LinkTo(actionBlock);
return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
i =>
{
Console.WriteLine(i);
Thread.Sleep(i);
});
processor(100);
processor(1);
processor(2);
Output:
100
2
I'm currently making a program to simulate a set of ATMs in visual C#. It's supposed to stop somebody accessing their account if it has already been accessed from a different location. Is it possible to show a message that the account has already been accessed while a semaphore is waiting?
Here is the part of the code where the semaphore is used:
private void button1_Click(object sender, EventArgs e)
{
count++;
if (count == 1)
{
account = findAccount();
if (findAccount() != 5)
{
textBox1.Text = "Please Enter Your Pin";
}
else
{
textBox1.Text = "Please Enter Your Account Number";
count = 0;
}
textBox2.Clear();
}
if (count == 2)
{
if (findPin(account) == true)
{
semaphore.WaitOne();
textBox1.Text = "1: Take Out Cash \r\n2: Balance \r\n3: Exit";
}
else
{
semaphore.Release();
textBox1.Text = "Please Enter Your Account Number";
count = 0;
}
textBox2.Clear();
}
if (count == 3)
{
atm();
}
if (count == 4)
{
withdraw();
}
if (count == 5)
{
int value = Convert.ToInt32(textBox2.Text);
customWithdrawl(value);
}
}
Consider doing two calls to WaitOne. The first call will have a timeout of zero and return a bool that will tell you whether or not you got the semaphore, or someone else still owns it. Two things can happen from there:
1) If someone else owns it, pop up a message that says "Someone else owns the semaphore" and call WaitOne again, but without a timeout (like you're doing now). After the 2nd call to WaitOne returns, close the window that you popped up a moment ago.
2) If your call to waitOne with 0 timeout returns true, then you got the semaphore on the 1st try. No need to pop up a window.
Example:
if( semaphore.WaitOne(0) ) //This returns immediately
{
//We own the semaphore now.
DoWhateverYouNeedToDo();
}
else
{
//Looks like someone else already owns the semaphore.
PopUpNotification();
semaphore.WaitOne(); //This one will block until the semaphore is available
DoWhateverYouNeedToDo();
CloseNotification();
}
semaphore.Release();
Note, there are some other issues lurking here.
You probably want to use a try/finally block to release the semaphore to ensure that it gets released across all exception paths.
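For that first point, a minimal sketch of what the try/finally might look like, reusing the semaphore and DoWhateverYouNeedToDo() placeholders from above:
semaphore.WaitOne();
try
{
    DoWhateverYouNeedToDo();   // work that must not leave the semaphore held on failure
}
finally
{
    semaphore.Release();       // runs even if an exception is thrown above
}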
It's also probably a bad idea to call semaphore.WaitOne() from the GUI thread because the application will become non-responsive while it waits. In fact, you may not see the result of PopUpNotification() if you've hung the GUI thread while doing the 2nd Wait. Consider doing the long wait on a 2nd thread and raising an event back on the GUI thread once you own the semaphore
Consider the following design to resolve Issue 2:
private void button1_Click(object sender, EventArgs e)
{
if(AcquireSemaphoreAndGenerateCallback())
{
//Semaphore was acquired right away. Go ahead and do whatever we need to do
DoWhateverYouNeedToDo();
semaphore.Release();
}
else
{
//Semaphore was not acquired right away. Callback will occur in a bit
//Because we're not blocking the GUI thread, this text will appear right away
textBox1.Text = "Waiting on the Semaphore!";
//Notice that the method returns right here, so the GUI will be able to redraw itself
}
}
//This method will either acquire the semaphore right away and return true, or
//have a worker thread wait on the semaphore and return false. In the 2nd case,
//"CallbackMethod" will run on the GUI thread once the semaphore has been acquired
private bool AcquireSemaphoreAndGenerateCallback()
{
if( semaphore.WaitOne(0) ) //This returns immediately
{
return true; //We have the semaphore and didn't have to wait!
}
else
{
ThreadPool.QueueUserWorkItem(new WaitCallback(Waiter));
return false; //Indicate that we didn't acquire right away
}
}
//Wait on the semaphore and invoke "CallbackMethod" once we own it. This method
//is meant to run on a background thread.
private void Waiter(object unused)
{
//This is running on a separate thread
semaphore.WaitOne(); //Could take a while
//Because we're running on a separate thread, we need to use "BeginInvoke" so
//that the method we're calling runs on the GUI thread
this.BeginInvoke(new Action(CallbackMethod));
}
private void CallbackMethod()
{
textBox1.Text = string.Empty; //Get rid of the "Waiting For Semaphore" text. Can't do this if we're not running on the GUI thread
DoWhateverYouNeedToDo();
semaphore.Release();
}
Now, this solution could also be fraught with peril. It's kind of hard to follow the execution of the program because it jumps around from method to method. If you have an exception, it could be difficult to recover from and make sure all of your program state is correct. You also have to keep track of things like the account number and the pin numbers through all of these method calls. In order to do that, Waiter and CallbackMethod should probably take some parameter that tracks this state that gets passed along to each step. There's also no way to abort waiting (a time out). It will probably work, but shouldn't make it into any production code because it would be too difficult to maintain or extend.
If you really wanted to do it right, you should consider encapsulating the ATM logic in an object that raises events the GUI can subscribe to. You could have a method like ATM.LogInAsync(account, pin) that you could call. This method would return immediately, but some time later an event on the ATM class like "LogInComplete" would fire. This event would carry a custom EventArgs object containing data to identify which log-in occurred (mainly the account number). This is called the Event-based Asynchronous Pattern.
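A rough, hedged sketch of what that event-based shape could look like follows; every name here is illustrative, not an existing API, and the matching Release() is assumed to happen in a LogOut method that isn't shown.
public class LogInCompletedEventArgs : EventArgs
{
    public int Account { get; private set; }
    public bool Success { get; private set; }
    public LogInCompletedEventArgs(int account, bool success)
    {
        Account = account;
        Success = success;
    }
}
public class Atm
{
    private readonly Semaphore semaphore = new Semaphore(1, 1);
    // Raised on a worker thread; the form marshals back to the UI with BeginInvoke.
    public event EventHandler<LogInCompletedEventArgs> LogInCompleted;
    public void LogInAsync(int account, int pin)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            semaphore.WaitOne();                          // may block until the account is free
            bool ok = CheckPin(account, pin);             // placeholder for real validation
            var handler = LogInCompleted;
            if (handler != null)
                handler(this, new LogInCompletedEventArgs(account, ok));
        });
    }
    private bool CheckPin(int account, int pin) { return true; }  // stub
}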
Alternatively, if you're using C# 5.0, you can use the new async/await syntax in the AcquireSemaphoreAndGenerateCallback() method. That's probably the easiest way, because the compiler will handle most of the complexities for you.
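For instance, a hedged sketch of that async/await route, assuming you can switch from Semaphore to SemaphoreSlim (which has a WaitAsync method); the field name is mine.
private readonly SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1, 1);
private async void button1_Click(object sender, EventArgs e)
{
    if (!semaphoreSlim.Wait(0))                     // try to take it without blocking
    {
        textBox1.Text = "Waiting on the Semaphore!";
        await semaphoreSlim.WaitAsync();            // UI stays responsive while we wait
        textBox1.Text = string.Empty;
    }
    try
    {
        DoWhateverYouNeedToDo();
    }
    finally
    {
        semaphoreSlim.Release();
    }
}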
Yes, you may show your message/form/messagebox right before the Wait method. Then when it receives the signal to unblock, you hide your message.
I have a Form which "listens" to events that are raised elsewhere (not on the Form itself, nor one of its child controls). Events are raised by objects which exist even after the Form is disposed, and may be raised in threads other than the one on which the Form handle was created, meaning I need to do an Invoke in the event handler (to show the change on the form, for example).
In the Dispose(bool) method of the form (overridden) I unsubscribed from all events that may still be subscribed when this method is called. However, Invoke is still sometimes called from one of the event handlers. I assume this is because the event handler gets called just a moment before the event is unsubscribed, then the OS switches control to the dispose method, which executes, and then returns control back to the handler, which calls the Invoke method on a disposed object.
Locking the threads doesn't help because a call to Invoke will lock the calling thread until main thread processes the invoked method. This may never happen, because the main thread itself may be waiting for a release of the lock on the object that the Invoke-calling thread has taken, thus creating a deadlock.
So, in short, how do I correctly dispose of a Form, when it is subscribed to external events, which may be raised in different threads?
Here's how some key methods look at the moment. This approach is suffering the problems I described above, but I'm not sure how to correct them.
This is an event handler handling a change of Data part of the model:
private void updateData()
{
if (model != null && model.Data != null)
{
model.Data.SomeDataChanged -= new MyEventHandler(updateSomeData);
model.Data.SomeDataChanged += new MyEventHandler(updateSomeData);
}
updateSomeData();
}
This is an event handler which must make changes to the view:
private void updateSomeData()
{
if (this.InvokeRequired) this.myInvoke(new MethodInvoker(updateSomeData));
else
{
// do the necessary changes
}
}
And the myInvoke method:
private object myInvoke(Delegate method)
{
object res = null;
lock (lockObject)
{
if (!this.IsDisposed) res = this.Invoke(method);
}
return res;
}
My override of the Dispose(bool) method:
protected override void Dispose(bool disposing)
{
lock (lockObject)
{
if (disposing)
{
if (model != null)
{
if (model.Data != null)
{
model.Data.SomeDataChanged -= new MyEventHandler(updateSomeData);
}
// unsubscribe other events, omitted for brevity
}
if (components != null)
{
components.Dispose();
}
}
base.Dispose(disposing);
}
}
Update (as per Alan's request):
I never explicitly call the Dispose method, I let that be done by the framework. The deadlock has so far only happened when the application is closed. Before I did the locking I sometimes got some exceptions thrown when a form was simply closed.
There are two approaches to consider. One is to have a locking object within the Form, and have the calls to Dispose and BeginInvoke occur within the lock; since neither Dispose nor BeginInvoke should take very long, code should never have to wait long for the lock.
The other approach is to just declare that because of design mistakes in Control.BeginInvoke/Form.BeginInvoke, those methods will sometimes throw an exception that cannot practically be prevented and should simply be swallowed in cases where it won't really matter whether or not the action occurs on a form which has been disposed anyway.
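A hedged sketch of the first approach; the field names and the SafeBeginInvoke helper are mine, not from the answer.
// Fields assumed on the Form.
private readonly object _invokeLock = new object();
private bool _isDisposed;
private void SafeBeginInvoke(Action action)
{
    lock (_invokeLock)
    {
        // Dispose sets _isDisposed inside the same lock, so an event handler can
        // never queue work against a form that has already been torn down.
        if (!_isDisposed)
            BeginInvoke(action);
    }
}
protected override void Dispose(bool disposing)
{
    lock (_invokeLock)
    {
        _isDisposed = true;
        base.Dispose(disposing);
    }
}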
I'd like to provide a sort of addendum to supercat's answer that may be interesting.
Begin by making a CountdownEvent (we'll call it _invoke_counter) with an initial count of 1. This should be a member variable of the form (or control) itself:
private readonly CountdownEvent _invoke_counter = new CountdownEvent(1);
Wrap each use of Invoke/BeginInvoke as follows:
if(_invoke_counter.TryAddCount())
{
try
{
//code using Invoke/BeginInvoke goes here
}
finally { _invoke_counter.Signal(); }
}
Then in your Dispose you can do:
_invoke_counter.Signal();
_invoke_counter.Wait();
This also allows you to do a few other nice things. The CountdownEvent.Wait() function has an overload with a timeout. Perhaps you only want to wait a certain period of time to let the invoking functions finish before letting them die. You could also do something like Wait(100) in a loop with a DoEvents() to keep things responsive if you expect the Invokes to take a long time to finish. There's a lot of niftiness you can achieve with this method.
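For instance, a hedged sketch of that timeout-plus-DoEvents idea, to go in place of the plain Signal()/Wait() pair above; the five-second limit is an arbitrary assumption of mine.
// Wait up to ~5 seconds for in-flight Invokes to finish, pumping messages so
// any queued BeginInvoke delegates can actually run.
_invoke_counter.Signal();
var deadline = DateTime.UtcNow + TimeSpan.FromSeconds(5);
while (!_invoke_counter.Wait(100) && DateTime.UtcNow < deadline)
{
    Application.DoEvents();
}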
This should prevent any weird timing race condition type of issues and it's fairly simple to understand and implement. If anyone sees any glaring problems with this, I'd love to hear about them because I use this method in production software.
IMPORTANT: Make sure that the disposal code is on the Finalizer's thread (which it should be in a "natural" disposal). If you try to manually call the Dispose() method from the UI thread, it will deadlock because it will get stuck on the _invoke_counter.Wait(); and the Invokes won't run, etc.
I had the problem with the Invoke method while multithreading, and I found a solution that works like a charm!
I wanted to create a loop in a task that updates a label on a form, to do monitoring.
But when I closed the form window, my Invoke threw an exception because my Form was disposed!
Here is the pattern I implemented to resolve this problem:
class yourClass : Form
{
private bool isDisposed = false;
private CancellationTokenSource cts;
private bool stopTaskSignal = false;
public yourClass()
{
InitializeComponent();
this.FormClosing += (s, a) =>
{
cts.Cancel();
isDisposed = true;
if (!stopTaskSignal)
a.Cancel = true;
};
}
private void yourClass_Load(object sender, EventArgs e)
{
cts = new CancellationTokenSource();
CancellationToken token = cts.Token;
Task.Factory.StartNew(() =>
{
try
{
while (true)
{
if (token.IsCancellationRequested)
{
token.ThrowIfCancellationRequested();
}
if (this.InvokeRequired)
{
this.Invoke((MethodInvoker)delegate { methodToInvoke(); });
}
}
}
catch (OperationCanceledException ex)
{
this.Invoke((MethodInvoker)delegate { stopTaskSignalAndDispose(); });
}
}, token);
}
public void stopTaskSignalAndDispose()
{
stopTaskSignal = true;
this.Dispose();
}
public void methodToInvoke()
{
if (isDisposed) return;
label_in_form.Text = "text";
}
}
I execute methodToInvoke() in an invoke to update the label from the form's thread.
When I close the window, the FormClosing event is called. I take this opportunity to cancel the closing of the window (a.Cancel) and to call the Cancel method of the CancellationTokenSource to stop the task.
The loop then reaches the ThrowIfCancellationRequested() method, which throws an OperationCanceledException, allowing the code just after it to exit the loop and complete the task.
The Invoke method posts a "window message" to a queue.
Microsoft says : « For each thread that creates a window, the operating system creates a queue for window messages. »
So I call another method that will now really close the window but this time by using the Invoke method to make sure that this message will be the last of the Queue!
And then I close the window with the Dispose() method.