I'm writing a WPF application using the MVC pattern. The purpose of the application is to display data from a database, and this data is updated asynchronously.
I'm thinking about how to design the architecture, such that it will be thread-safe. In particular:
Each page (or its viewmodel) must be able to subscribe and unsubscribe from the service, which updates the database.
The service updating the database informs all subscribers, that new data arrived and that they should refresh their views.
Obviously, a page that is being closed should unsubscribe from the service, and a page that appears should (or may) subscribe.
I could put the subscription inside a critical section, as well as the broadcast about new data, but then imagine the following scenario (page ~= its viewmodel; the distinction does not matter much here):
Service enters critical section to broadcast information about new data (in separate thread)
Page tries to enter critical section to unsubscribe (in main thread)
Service informs page about new data (in separate thread).
Page populates its fields and raises the PropertyChanged event (in separate thread).
The PropertyChanged event is marshalled to the main thread, which is still waiting to enter the critical section.
And it looks like a deadlock to me.
How can I safely design this architecture to avoid such deadlocks? Maybe pages should never unsubscribe? Or is there another way to secure threads such that they won't deadlock?
Given that the post is tagged WPF and WP-8.1, and given the clarification in the comments, I would do the following:
Have the base Model class (the one with properties holding relevant data) implement INotifyPropertyChanged
Have the Model for ALL pages as ObservableCollection<BaseModel>. The model should also implement a mutex/lock property instantiated in the constructor.
Share the model across all viewmodels (i.e. share a single instance of the model).
In the 'Service' performing the async operation, I would lock only the section of code that adds or removes items from the Model's ObservableCollection, using the lock object from the Model itself. This section MUST be placed in a Dispatcher.Invoke() or equivalent platform call. This ensures that it is only the UI thread that waits to update the collection.
I would bind all the UI in the relevant pages to the model reference in the viewmodel.
This way the UI and viewmodels are oblivious to the specific service events, which eliminates the overhead of subscribing. You also limit the duplication of data if you share the model: even with 20 pages on screen, your service performs a single update that is propagated to the UI and viewmodels by the powers of the framework (binding).
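A rough sketch of that arrangement, assuming WPF-flavoured types (the names BaseModel, SharedModel and DataService are illustrative, not from the answer):

using System;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows;

// Illustrative names only; adapt to your own base model.
public class BaseModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _value;
    public string Value
    {
        get { return _value; }
        set
        {
            _value = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Value"));
        }
    }
}

// One shared instance of this model is handed to every viewmodel.
public class SharedModel
{
    public ObservableCollection<BaseModel> Items { get; private set; }
    public object SyncRoot { get; private set; }

    public SharedModel()
    {
        Items = new ObservableCollection<BaseModel>();
        SyncRoot = new object();
    }
}

public class DataService
{
    private readonly SharedModel _model;
    public DataService(SharedModel model) { _model = model; }

    // Called from a background thread when new data arrives.
    public void OnNewData(BaseModel item)
    {
        // Only the collection mutation is locked, and it always runs on the UI thread.
        Application.Current.Dispatcher.Invoke(new Action(() =>
        {
            lock (_model.SyncRoot)
            {
                _model.Items.Add(item);
            }
        }));
    }
}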
A simple solution could be: do not perform the unsubscribe operation on the UI thread. (In general, do not block the UI thread.) Do it asynchronously, fire and forget.
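A tiny sketch of that fire-and-forget unsubscribe, assuming a hypothetical IUpdateService (the interface and method names are invented for illustration):

using System.Threading.Tasks;

// Hypothetical service interface; the real API will differ.
public interface IUpdateService
{
    void Subscribe(object subscriber);
    void Unsubscribe(object subscriber); // may block on the service's internal lock
}

public class PageViewModel
{
    private readonly IUpdateService _service;
    public PageViewModel(IUpdateService service) { _service = service; }

    public void OnPageClosing()
    {
        // Fire and forget: the UI thread never waits for the service's critical section.
        Task.Run(() => _service.Unsubscribe(this));
    }
}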
Alternatively, you may take a look at Rx (Reactive Extensions), which exists exactly for this purpose: implementing the observer pattern in a multithreaded way.
Silently "just not unsubscribe" is probably not a good idea. Although I do not know your implementation details, if the event handlers are instance methods, then a reference to that instance implicitly will be kept by the service, and depending the reference chain maybe your page or other instances will be prevented to garbage collected.
"Or is there another way to secure threads such that they won't deadlock?" Currently in .NET framework there is no magic trick what automatically prevents deadlock. Other multithreaded environments may or may not provide an automatic deadlock resolution (note: not prevention) service what can detect a deadlock (after it happen) and automatically choose a victim. In .NET it could be an exception what occurs while your are waiting to a resource. (again this is not implemented yet)
Related
I know that if I am modifying a control from a different thread, I should take care, because WinForms and WPF don't allow modifying a control's state from other threads.
Why is this restriction in place?
If I can write thread-safe code, I should be able to modify control state safely. Then why is this restriction present?
Several GUI frameworks have this limitation. According to the book Java Concurrency in Practice the reason for this is to avoid complex locking. The problem is that GUI controls may have to react to both events from the UI, data binding and so forth, which leads to locking from several different sources and thus a risk of deadlocks. To avoid this .NET WinForms (and other UIs) restricts access to components to a single thread and thus avoids locking.
In the case of Windows, when a control is created, UI updates are performed via messages from a message pump. The programmer does not have direct control of the thread the pump is running on, therefore the arrival of a message for a control could possibly result in the changing of the state of the control. If another thread (that the programmer was in direct control of) were allowed to change the state of the control, then some sort of synchronization logic would have to be put in place to prevent corruption of the control state. The controls in .NET are not thread safe; this is, I suspect, by design. Putting synchronization logic in all controls would be expensive in terms of designing, developing, testing and supporting the code that provides this feature. The programmer could of course provide thread safety to the control for his own code, but not for the code in .NET that is running concurrently with his code. One solution to this issue is to restrict these types of actions to one thread and one thread only, which makes the control code in .NET simpler to maintain.
.NET reserves the right to access your control in the thread where you created it at any time. Therefore accesses that come from another thread can never be thread safe.
You might be able to make your own code thread-safe, but there is no way for you to inject the necessary synchronization primitives into the built-in WinForms and WPF code so that they match up with the ones in your code. Remember, there are a lot of messages being passed around behind the scenes that eventually cause the UI thread to access the control without you ever really realizing it.
Another interesting aspect of a control's thread affinity is that it could (though I suspect they never would) use the Thread Local Storage pattern. Obviously, if you accessed a control on a thread other than the one it was created on, it would not be able to access the correct TLS data, no matter how carefully you structured the code to guard against all of the normal problems of multithreaded code.
Windows supports many operations which, especially when used in combination, are inherently not thread-safe. What should happen, for example, if one thread is trying to insert some text into a text field starting at the 50th character, while another thread tries to delete the first 40 characters from that field? It would be possible for Windows to use locks to ensure that the second operation couldn't begin until the first one completed, but using locks would add overhead to every operation, and would also raise the possibility of deadlock if actions on one entity require manipulation of another. Requiring that actions involving a particular window must happen on a particular thread is a more stringent requirement than would be necessary to prevent unsafe combinations of operations from being performed simultaneously, but it's relatively easy to analyze. Using controls from multiple threads and avoiding clashes via some other means would generally be more difficult.
Actually, as far as I know, that WAS the plan from the beginning! Every control could be accessed from any thread! And precisely because locking was needed whenever another thread required access to a control (and because locking is expensive), a new threading model called "thread rental" was crafted. In that model, related controls would be aggregated into "contexts", each using only one thread, thus reducing the amount of locking needed.
Pretty cool, huh?
Unfortunately, that attempt was too bold to succeed (and a bit more complex, because locking was still required), so the good old Windows Forms threading model (a single UI thread, with the creating thread claiming ownership of the control) is used once again in WPF to make our lives ...easier?
MVVM approach is nice and well established. However picture the scene: you have an app page where user can initiate some long-running task. Like synchronization of local and remote databases. This task can be long and should only be interrupted gracefully. Then user leaves the page, by going to some details page. It doesn't make sense to cancel that long async operation, because app is still running. But then suddenly user receives a phone call, so that app is deactivated.
In my (maybe too primitive) understanding of MVVM, View Model should be used to control interactions with the Model (that long operation particularly). But View Model doesn't need to know about application lifetime events, since that will limit code reusability (on Windows 8 there's no such class as PhoneApplicationService). See a contradiction here? VM initiates operation, but should not be used to cancel it.
Of course, the View can take this responsibility to handle lifetime events, so that the event about app deactivation propagates like this: View -> ViewModel -> (cancels long operation) -> Model. But if the user has navigated away from the View, and some of the operations initiated in that View are still running, there's no way of cancelling them anymore - the View can be disposed of at any time.
I've come up with only one idea, which is handling app lifetime events in View Models. But, as I said before, I dislike this approach, because it limits the View Models' portability. Could anyone offer a better solution?
I actually do not see a problem here. In MVVM, the ViewModel is traditionally the "glue" that ties the View to the Model.
Having a small amount of custom ViewModel code for each platform doesn't necessarily limit the portability of the rest of the ViewModel, especially if this is abstracted and contained within its own project for each platform.
VM initiates operation, but should not be used to cancel it.
This strongly suggests that the VM should be the one to cancel it. If the VM creates these operations, it effectively has ownership of them, which suggests that it should manage their lifecycle, as well.
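For instance, as a sketch only (ISyncService and the method names are invented, not the poster's code), the view model can own a CancellationTokenSource for the operations it starts and cancel it from whichever signal reaches it:

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical service; only the ownership pattern matters here.
public interface ISyncService
{
    Task SynchronizeAsync(CancellationToken token);
}

public class SyncViewModel
{
    private readonly ISyncService _syncService;
    private CancellationTokenSource _cts;

    public SyncViewModel(ISyncService syncService) { _syncService = syncService; }

    public async Task StartSyncAsync()
    {
        _cts = new CancellationTokenSource();
        try
        {
            // The VM starts the operation, so it also owns the means to cancel it.
            await _syncService.SynchronizeAsync(_cts.Token);
        }
        catch (OperationCanceledException)
        {
            // Graceful interruption: nothing to clean up in this sketch.
        }
    }

    // Called by whichever layer observes deactivation (the view, the app, or an abstraction over both).
    public void CancelSync()
    {
        if (_cts != null) _cts.Cancel();
    }
}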
I am not sure if this breaks the MVVM principle, but I simply thought about it this way.
Regarding the subscription to PhoneApplicationService in the VM, are there any reasons not to take an approach like this:
App -> ViewModel
The App is the owner of the VMs, and if the App tells the VMs to activate/deactivate through an interface (like the view does to its VM), the VMs keep their reusability. Isn't it true that once a VM subscribes to PhoneApplicationService itself, the VM has a dependency on the application, which means the VM and the application depend on each other and reusability is limited?
About the long-running task: if it needs to live according to the application lifetime rather than the page lifetime, it can live in App scope, as an application-level model or something similar that can be shared by the VMs, but not in page (view) scope.
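A minimal sketch of that App -> ViewModel idea, assuming a hypothetical IApplicationLifetimeAware interface (none of these names come from the platform):

using System.Collections.Generic;

// Hypothetical interface the App calls into; the VM never references PhoneApplicationService directly.
public interface IApplicationLifetimeAware
{
    void OnActivated();
    void OnDeactivated();
}

// Owned by the App; forward the platform lifetime events from App.xaml.cs to the registered VMs.
public class AppLifetimeBroadcaster
{
    private readonly List<IApplicationLifetimeAware> _viewModels = new List<IApplicationLifetimeAware>();

    public void Register(IApplicationLifetimeAware vm) { _viewModels.Add(vm); }

    // Call from the platform-specific Deactivated handler.
    public void NotifyDeactivated()
    {
        foreach (var vm in _viewModels) vm.OnDeactivated();
    }

    // Call from the platform-specific Activated handler.
    public void NotifyActivated()
    {
        foreach (var vm in _viewModels) vm.OnActivated();
    }
}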
The description you give suggests to me that there is a missing layer of abstraction.
In particular:
Your ViewModel can of course start long-running operations that affect the model, but it does not have any ownership of those operations. This is exactly right and should not be broken. If you start a long-running operation (and your database synchronisation example is very good here), then the model should handle it.

The missing part, I believe, is in the model. When there are long-running tasks that affect the model, have a separate layer that handles them. Let's call them Transactions for simplicity's sake.

So: you start your long-running task in the model's domain. The model then executes this task; if it all worked well and it was not interrupted by some user or system interaction, then the task's data can be applied to the model (or the transaction can be committed).

If the user or the system cancels the task in some way, no data at all should be altered in the model. The model itself shall not change!

On the other side, if the model was successfully altered, the ViewModel should be notified and the view should be updated. But this is only one of the two main branches of execution you have here, and it is the one that is probably already handled by your MVVM and ViewModel implementation.

Overall: your ViewModel shall start and cancel tasks running on the model, but it does not need to control their lifetime in particular. If you present the possibility to cancel the long-running task, you can fire the cancel event, but it should gracefully end the task in the model.
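A sketch of that transaction idea, with made-up names (SyncTransaction, DataModel, Record): the model runs the long task against staged data and commits the result only if nothing cancelled it.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Illustrative only: a model-level "transaction" that applies its result only if it completes.
public class SyncTransaction
{
    private readonly DataModel _model;
    public SyncTransaction(DataModel model) { _model = model; }

    public async Task RunAsync(CancellationToken token)
    {
        // Work against staged data; the real model is untouched until commit.
        IList<Record> staged = await FetchRemoteChangesAsync(token);

        token.ThrowIfCancellationRequested();

        // Commit: only now is the model altered, which in turn notifies the ViewModel.
        _model.Apply(staged);
    }

    private Task<IList<Record>> FetchRemoteChangesAsync(CancellationToken token)
    {
        // Placeholder for the actual long-running synchronization work.
        return Task.FromResult<IList<Record>>(new List<Record>());
    }
}

public class DataModel
{
    public void Apply(IList<Record> changes) { /* merge the committed changes, raise change notifications */ }
}

public class Record { }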
I'm designing a desktop application with multiple layers: the GUI layer (WinForms MVP) holds references to interfaces of adapter classes, and these adapters call BL classes that do the actual work.
Apart from executing requests from the GUI, the BL also fires some events that the GUI can subscribe to through the interfaces. For example, there's a CurrentTime object in the BL that changes periodically and the GUI should reflect the changes.
There are two issues that involve multithreading:
I need to make some of the logic
calls asynchronous so that they don't block the GUI.
Some of the events the GUI receives are fired from non-GUI threads.
At what level is it best to handle the multithreading? My intuition says that the Presenter is the most suitable for that, am I right? Can you give me some example application that does what I need? And does it make sense for the presenter to hold a reference to the form so it can invoke delegates on it?
EDIT: The bounty will probably go to Henrik, unless someone gives an even better answer.
I would look at using a Task-based BLL for those parts that can be described as "background operations" (that is, they're started by the UI and have a definite completion point). The Visual Studio Async CTP includes a document describing the Task-based Asynchronous Pattern (TAP); I recommend designing your BLL API in this way (even though the async/await language extensions haven't been released yet).
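For illustration only (the names ReportService and GenerateReportAsync are made up, and this uses the async/await syntax previewed by the CTP), a TAP-shaped BLL operation returns a Task and optionally accepts an IProgress<T> and a CancellationToken:

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical BLL class shaped after the Task-based Asynchronous Pattern.
public class ReportService
{
    public async Task<Report> GenerateReportAsync(
        ReportCriteria criteria,
        IProgress<int> progress,             // percentage updates; the caller's context marshals them
        CancellationToken cancellationToken)
    {
        var report = new Report();
        for (int step = 0; step < 10; step++)
        {
            cancellationToken.ThrowIfCancellationRequested();
            await Task.Delay(100, cancellationToken);    // stand-in for real work
            if (progress != null) progress.Report((step + 1) * 10);
        }
        return report;
    }
}

public class ReportCriteria { }
public class Report { }

On the UI side, the progress callback can be wrapped in the CTP's EventProgress<T> (or Progress<T> in later framework versions), which captures the SynchronizationContext so reports arrive on the UI thread.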
For parts of your BLL that are "subscriptions" (that is, they're started by the UI and continue indefinitely), there are a few options (in order of my personal preference):
Use a Task-based API but with a TaskCompletionSource that never completes (or only completes by being cancelled as part of application shutdown). In this case, I recommend writing your own IProgress<T> and EventProgress<T> (in the Async CTP): the IProgress<T> gives your BLL an interface for reporting progress (replacing progress events) and EventProgress<T> handles capturing the SynchronizationContext for marshalling the "report progress" delegate to the UI thread.
Use Rx's IObservable framework; this is a good match design-wise but has a fairly steep learning curve and is less stable than I personally like (it's a pre-release library).
Use the old-fashioned Event-based Asynchronous Pattern (EAP), where you capture the SynchronizationContext in your BLL and raise events by queuing them to that context.
EDIT 2011-05-17: Since writing the above, the Async CTP team has stated that approach (1) is not recommended (since it somewhat abuses the "progress reporting" system), and the Rx team has released documentation that clarifies their semantics. I now recommend Rx for subscriptions.
It depends on what type of application you are writing - for example - do you accept bugs? What are your data requirements - soft realtime? acid? eventually consistent and/or partially connected/sometimes disconnected clients?
Beware that there's a distinction between concurrency and asynchronicity. You can have asynchronicity, and hence interleaving of method calls, without actually having a concurrently executing program.
One idea could be to have a read and write side of your application, where the write-side publishes events when it's been changed. This could lead to an event driven system -- the read side would be built from the published events, and could be rebuilt. The UI could be task-driven - in that a task to perform would produce a command that your BL would take (or domain layer if you so wish).
A logical next step, if you have the above, is to also go event-sourced. Then you would recreate internal state of the write-model through what has been previously committed. There's a google group about CQRS/DDD that could help you with this.
With regards to updating the UI, I've found that the IObservable interfaces in System.Reactive, System.Interactive, System.CoreEx are well suited. It allows you to skip around different concurrent invocation contexts - dispatcher - thread pool, etc, and it interops well with the Task Parallel Library.
You'd also have to consider where you put your business logic -- if you go domain driven I'd say you could put it in your application as you'd have an updating procedure in place for the binaries you distribute anyway, when time comes to upgrade, but there's also the choice of putting it on the server. Commands could be a nice way to perform the updates to the write-side and a convenient unit of work when connection-oriented code fails (they are small and serializable and the UI can be designed around them).
To give you an example, have a look at this thread, with this code that adds a priority parameter to the IObservable.ObserveOnDispatcher(...) call:
// Requires the Rx assemblies mentioned above; the namespace of Observable varies with the Rx version.
public static IObservable<T> ObserveOnDispatcher<T>(this IObservable<T> observable, DispatcherPriority priority)
{
    if (observable == null)
        throw new ArgumentNullException("observable");
    return observable.ObserveOn(Dispatcher.CurrentDispatcher, priority);
}

public static IObservable<T> ObserveOn<T>(this IObservable<T> observable, Dispatcher dispatcher, DispatcherPriority priority)
{
    if (observable == null)
        throw new ArgumentNullException("observable");
    if (dispatcher == null)
        throw new ArgumentNullException("dispatcher");

    return Observable.CreateWithDisposable<T>(o =>
    {
        // Marshal every notification onto the dispatcher at the requested priority.
        return observable.Subscribe(
            obj => dispatcher.Invoke((Action)(() => o.OnNext(obj)), priority),
            ex => dispatcher.Invoke((Action)(() => o.OnError(ex)), priority),
            () => dispatcher.Invoke((Action)(() => o.OnCompleted()), priority));
    });
}
The example above could be used as this blog entry discusses:
public void LoadCustomers()
{
    // Fetch on a background thread, then observe the results on the dispatcher at low priority.
    _customerService.GetCustomers()
        .SubscribeOn(Scheduler.NewThread)
        .ObserveOnDispatcher(DispatcherPriority.SystemIdle)
        .Subscribe(Customers.Add);
}
... So for example with a virtual starbucks shop, you'd have a domain entity that has something like a 'Barista' class, which produces events 'CustomerBoughtCappuccino' : { cost : '$3', timestamp : '2011-01-03 12:00:03.334556 GMT+0100', ... etc }. Your read-side would subscribe to these events. The read side could be some data model -- for each of your screens that present data -- the view would have a unique ViewModel-class -- which would be synchronized with the view in an observable dictionary like this. The repository would be (:IObservable), and your presenters would subscribe to all of that, or just a part of it. That way your GUI could be:
Task driven -> command driven BL, with focus on user operations
Async
Read-write-segregated
Given that your BL only takes commands and doesn't, on top of that, expose a 'good enough for all pages' type of read-model, you can make most things in it internal, protected internal and private, meaning you can use Code Contracts (System.Diagnostics.Contracts) to prove that you don't have any bugs in it (!). It would produce events that your read-model would read. You could take the main principles from Caliburn Micro about the orchestration of workflows of yielded asynchronous tasks (IAsyncResults).
There are some Rx design guidelines you could read. And cqrsinfo.com about event sourcing and cqrs. If you are indeed interested in going beyond the async programming sphere into the concurrent programming sphere, Microsoft has released a well written book for free, on how to program such code.
Hope it helps.
I would consider the "Thread Proxy Mediator Pattern". Example here on CodeProject
Basically all method calls on your Adaptors run on a worker thread and all results are returned on the UI thread.
The recommended way is to use threads from the GUI and then update your controls with Control.Invoke().
If you don't want to use threads in your GUI application, you can use the BackgroundWorker class.
The best practice is to have some logic in your Forms to update your controls from outside, normally a public method. When this call is made from a thread that is not the main thread, you must guard against illegal cross-thread access using control.InvokeRequired / control.Invoke() (where control is the target control to update).
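A minimal sketch of that pattern, assuming a form with a designer-created Label named statusLabel (illustrative names only):

using System;
using System.Windows.Forms;

public partial class MainForm : Form
{
    // Public entry point that any thread may call; statusLabel comes from the designer file.
    public void UpdateStatus(string text)
    {
        if (statusLabel.InvokeRequired)
        {
            // Marshal the call onto the UI thread that owns the control.
            statusLabel.Invoke(new Action<string>(UpdateStatus), text);
            return;
        }
        statusLabel.Text = text;
    }
}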
Take a look at this AsynCalculatePi example; maybe it's a good starting point.
I have a number of Windows Forms controls which are used to interact with my program objects. Currently they subscribe to an "Updated" event on the object and manually update values when needed. I would like to replace all (or as much as possible) of this boilerplate code using data binding.
The problem I'm running into is that the object state can be modified by any one of several different threads at any moment. Currently I use Invoke() to handle this, which works fine, but when I switch to data binding I get swamped by illegal cross-thread control exceptions. Is there a preferred method to handle this gracefully using data binding, or am I better off just leaving things the way they are now?
Thanks!
If you are data binding your controls to the data sources that are being updated from the underlying thread, then you will have to move the code that does the updating to the UI thread through a call to Invoke.
Or, if you want, you could get an ISynchronizeInvoke implementation (or a SynchronizationContext) and have all the events fire on the UI thread. Of course, this could cause unintended problems with your code, since you weren't firing the events on the UI thread in the first place.
I'm having a problem with handling an event on a different thread from where it is raised. The object that is handling the event is not a UI object, however, so I can't use Invoke to execute the delegate and automatically switch to the UI thread for event handling.
The situation is as follows: I have an MDI application containing multiple forms. Each form has its own controller class that handles communication between the coupled form and external objects. All forms are either overview or detail forms (e.g. ContactsOverview & ContactDetail) and share the same data.
In the situation where the error occurs the forms appear in a wizard-like sequence, say a detail form is followed by an overview form. In the detail form data used on the following overview form is changed and before switching to the overview form these changes need to be reflected there. An event is raised from the detail form and handled by the controller for the overview form which does the necessary updating of UI elements.
Now the saving of the changed data in the detail form can take a while so it is necessary that the UI remains responsive and other parts of the application can still be used. This is why a backgroundworker is started to handle this. When the data is saved the event is raised on the background thread. The controller for the overview handles this but when the UI needs to be update there are of course cross-thread exceptions.
So what I need is a way to raise the event on the UI thread, but since the handling doesn't happen on a UI element there's no way to switch threads automatically using Invoke.
From searching around the web I've found one possible solution which is using the producer/consumer pattern. But this would require each controller to listen to a queue of events in a separate thread as far as I understand. Since it's an MDI application there could theoretically be any number of forms with controllers and I don't want to be starting up that many threads.
Any suggestions are welcome. If there were a way to avoid using the BackgroundWorker altogether, that would be a suitable solution as well.
Thanks for reading,
Kevin
You can use SynchronizationContext, specifically SynchronizationContext.Current, to post messages to the main synchronization context (which is the main thread for a GUI application).
Unfortunately I don't know enough about the class and its usage to say this is a definite solution. In particular, I don't know what you should do if you don't require the main thread to handle your events, but instead a particular thread.
Perhaps the WindowsFormsSynchronizationContext class can help you out. It has a public parameterless constructor, and I'm thinking it might associate itself with the current thread, so if you construct that object on the thread that owns the controller and hand it to the background-thread code, it might work.
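As a rough sketch of the SynchronizationContext approach (the controller and handler names are made up): capture the UI thread's context when the controller is created and Post the handling back to it from the background thread.

using System;
using System.Threading;

public class OverviewController
{
    private readonly SynchronizationContext _uiContext;

    // Construct the controller on the UI thread so the captured context is the UI one.
    public OverviewController()
    {
        _uiContext = SynchronizationContext.Current ?? new SynchronizationContext();
    }

    // Event handler that may be raised from the BackgroundWorker's thread.
    public void OnDetailDataSaved(object sender, EventArgs e)
    {
        _uiContext.Post(_ => UpdateOverviewUi(), null);
    }

    private void UpdateOverviewUi()
    {
        // Safe to touch the overview's UI elements here: we are back on the UI thread.
    }
}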
You can have an event on the background object that the UI element subscribes to. In the event handler (of the subscription, so it is part of the window code) you can then do the invocation. This is how I solve this.
You can try this flag, but I don't think it's the best idea, just a workaround.
You could also try to instantiate the issuing objects in a non-graphical thread, which may fix your problem.
One more thing: can't you have your UI component handle RunWorkerCompleted (with indirections)?