The MVVM approach is nice and well established. However, picture the scene: you have an app page where the user can initiate some long-running task, like synchronization of local and remote databases. This task can take a long time and should only be interrupted gracefully. Then the user leaves the page by going to some details page. It doesn't make sense to cancel that long async operation, because the app is still running. But then the user suddenly receives a phone call, and the app is deactivated.
In my (maybe too primitive) understanding of MVVM, the View Model should be used to control interactions with the Model (that long operation in particular). But the View Model shouldn't need to know about application lifetime events, since that would limit code reusability (on Windows 8 there's no such class as PhoneApplicationService). See the contradiction here? The VM initiates the operation, but should not be the one to cancel it.
Of course, the View can take on the responsibility of handling lifetime events, so that the app-deactivating event propagates like this: View -> ViewModel -> (cancels long operation) -> Model. But if the user has navigated away from the View and some operation initiated in that View is still running, there's no way to cancel it anymore: the View can be disposed of at any time.
I've come up with only one idea: handling app lifetime events in the View Models. But, as I said before, I dislike this approach because it limits the View Models' portability. Can anyone offer a better solution?
I actually do not see a problem here. In MVVM, the ViewModel is traditionally the "glue" that ties the View to the Model.
Having a small amount of custom ViewModel code for each platform doesn't necessarily limit the portability of the rest of the ViewModel, especially if this is abstracted and contained within its own project for each platform.
The VM initiates the operation, but should not be the one to cancel it.
This strongly suggests that the VM should be the one to cancel it. If the VM creates these operations, it effectively owns them, which suggests that it should manage their lifecycle as well.
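For illustration, here is a minimal sketch of that per-platform abstraction; the interface and class names are hypothetical, not from any framework:

using System;
using Microsoft.Phone.Shell;   // PhoneApplicationService (Windows Phone only)

// Portable interface the ViewModel depends on; it lives in the shared
// project and knows nothing about the platform.
public interface IAppLifetimeService
{
    event EventHandler Deactivating;
}

// Windows Phone implementation, kept in a platform-specific project;
// it adapts PhoneApplicationService to the portable interface.
public class PhoneAppLifetimeService : IAppLifetimeService
{
    public event EventHandler Deactivating;

    public PhoneAppLifetimeService()
    {
        PhoneApplicationService.Current.Deactivated += (s, e) =>
        {
            var handler = Deactivating;
            if (handler != null) handler(this, EventArgs.Empty);
        };
    }
}

The ViewModel takes an IAppLifetimeService and cancels its operations when Deactivating fires; a Windows 8 project would supply a different implementation of the same interface.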
I am not sure whether this breaks MVVM principles, but here is how I thought about it.
Regarding subscribing to PhoneApplicationService in the VM, is there any reason not to take an approach like this:
App -> ViewModel
The App is the owner of the VMs, and if the App tells the VMs to activate/deactivate through an interface (just as a View does with its VM), the VMs keep their reusability. Isn't it true that once a VM subscribes to PhoneApplicationService itself, the VM has a dependency on the application, which means the VM and the application depend on each other, and that limits reusability?
As for the long-running task: if it needs to live according to the application lifetime rather than the page lifetime, it can live at App scope, as an application-level model or similar that can be shared by the VMs, instead of at page (View) scope.
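A rough sketch of that idea, with all names invented for illustration: an app-scoped model owns the long-running work, and the App, rather than a page, asks it to stop gracefully on deactivation.

using System.Threading;
using System.Threading.Tasks;

// App-scoped model owning the long-running synchronization; page VMs can
// start it and observe it, but only the App reacts to lifetime events.
public class SyncModel
{
    private CancellationTokenSource _cts;

    public Task StartSyncAsync()
    {
        _cts = new CancellationTokenSource();
        return Task.Factory.StartNew(() => SyncDatabases(_cts.Token), _cts.Token);
    }

    // Called by the App on Deactivated; the page VMs never need to know.
    public void RequestGracefulStop()
    {
        if (_cts != null) _cts.Cancel();
    }

    private void SyncDatabases(CancellationToken token)
    {
        // Hypothetical work loop: check the token at safe points so the
        // synchronization is only ever interrupted gracefully.
        // while (HasMoreWork() && !token.IsCancellationRequested) { ... }
    }
}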
The description you give suggests to me that there is a missing layer of abstraction.
In particular:
Your ViewModel can of course start long-running operations that affect the model, but it does not have any ownership of those operations. This is exactly right and should not be broken. If you start a long-running operation (and your database synchronisation example is very good here), then the model should handle it.
The missing part, I believe, is in the model. When there are long-running tasks that affect the model, have a separate layer that handles them. Let's call them Transactions for simplicity's sake.
So: you start your long-running task in the model's domain. The model then executes the task; if it all worked well and was not interrupted by some user or system interaction, the task's data can be applied to the model (or the transaction can be committed). If the user or system cancels the task in some way, no data at all should be altered in the model. The model itself shall not change!
On the other side, if the model was successfully altered, the ViewModel should be notified and the view should be updated. But this is only one of the two main branches of execution you have here, and it is the one that is probably already handled by your MVVM and ViewModel implementation.
Overall:
Your ViewModel shall start and cancel tasks running on the model, but it does not need to control their lifetime in particular. If you present the possibility of cancelling the long-running task, you can fire the cancel event, but it should be the model that gracefully ends the task.
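A minimal sketch of such a transaction; every name here is hypothetical, and the point is only the commit-on-success shape:

using System.Threading;
using System.Threading.Tasks;

// The model executes the long-running work against a staging area and
// only commits if nothing cancelled it; otherwise the model is untouched.
public class SyncTransaction
{
    private readonly DatabaseModel _model;   // hypothetical model type

    public SyncTransaction(DatabaseModel model)
    {
        _model = model;
    }

    public Task RunAsync(CancellationToken token)
    {
        return Task.Factory.StartNew(() =>
        {
            var staged = _model.CreateWorkingCopy();   // hypothetical
            staged.SynchronizeWithRemote(token);       // honours the token

            // Cancelled? Then we never reach the commit, and the model
            // itself has not changed.
            token.ThrowIfCancellationRequested();
            _model.Commit(staged);
        }, token);
    }
}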
Related
I'm writing a WPF application in the MVC pattern. The purpose of the application is to display some data from the database, and these data are updated asynchronously.
I'm thinking about how to design the architecture so that it will be thread-safe. In particular:
Each page (or its viewmodel) must be able to subscribe to and unsubscribe from the service that updates the database.
The service updating the database informs all subscribers that new data have arrived and that they should refresh their views.
Obviously, a page that is just being closed should unsubscribe from the service, and a page that has just appeared should (or may) subscribe.
I could put the subscription inside a critical section, as well as the broadcast about new data, but then imagine the following scenario (page ~= its viewmodel; the distinction does not matter much here):
The service enters the critical section to broadcast information about the new data (on a separate thread).
A page tries to enter the critical section to unsubscribe (on the main thread).
The service informs the page about the new data (on the separate thread).
The page populates its fields and raises a PropertyChanged event (on the separate thread).
The PropertyChanged event is marshalled to the main thread, which is still waiting for the critical section.
And it looks like a deadlock to me.
How can I design this architecture to avoid such deadlocks? Maybe pages should never unsubscribe? Or is there another way to secure the threads so that they won't deadlock?
Given that the post is tagged WPF and WP-8.1, and given the clarification in the comments, I would do the following:
Have the base Model class (the one with properties holding the relevant data) implement INotifyPropertyChanged.
Have the Model for ALL pages be an ObservableCollection<BaseModel>. The model should also expose a mutex/lock object instantiated in its constructor.
Share the model across all viewmodels (i.e., share the one instance of the model).
In the 'Service' performing the async operation, I would lock only the section of code that Adds or Removes items from the Model's ObservableCollection, using the lock object from the Model itself. This section MUST be placed in a Dispatcher.Invoke() or equivalent platform call. This ensures that it is only the UI thread that waits to update the collection.
I would bind all the UI in the relevant pages to the model reference in the viewmodel.
This way the UI and the viewmodels are oblivious to the specific service events, which eliminates the overhead of subscribing, and sharing the model also limits duplication of the data: even with 20 pages on screen, your service performs a single update that is propagated to the UI and viewmodels by the power of the framework (binding).
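A sketch of the locked, dispatcher-marshalled update this describes; the service internals and field names are placeholders:

// Inside the hypothetical update service, running on a worker thread.
private void OnNewDataArrived(IEnumerable<BaseModel> items)
{
    // Marshal to the UI thread first, then take the model's lock, so the
    // bound ObservableCollection is only ever mutated on the UI thread.
    Application.Current.Dispatcher.Invoke((Action)(() =>
    {
        lock (_model.SyncRoot)   // the lock object the Model instantiated
        {
            foreach (var item in items)
                _model.Items.Add(item);   // ObservableCollection<BaseModel>
        }
    }));
}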
A simple solution could be: do not perform the unsubscribe operation on the UI thread. (In general, do not block the UI thread.) Do it asynchronously, fire and forget.
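For instance, with placeholder names for the service and the handler:

// Unsubscribe on a thread-pool thread so the UI thread never waits on
// the service's critical section and cannot join the deadlock cycle.
private void OnPageClosing()
{
    ThreadPool.QueueUserWorkItem(_ => _service.Unsubscribe(OnDataArrived));
}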
Alternatively, you may take a look at Rx (Reactive Extensions), which exists exactly for this purpose: implementing the observer pattern in a multithreaded way.
Silently "just not unsubscribe" is probably not a good idea. Although I do not know your implementation details, if the event handlers are instance methods, then a reference to that instance implicitly will be kept by the service, and depending the reference chain maybe your page or other instances will be prevented to garbage collected.
"Or is there another way to secure threads such that they won't deadlock?" Currently in .NET framework there is no magic trick what automatically prevents deadlock. Other multithreaded environments may or may not provide an automatic deadlock resolution (note: not prevention) service what can detect a deadlock (after it happen) and automatically choose a victim. In .NET it could be an exception what occurs while your are waiting to a resource. (again this is not implemented yet)
I have a C# / MVVM application that works with a device. The application needs to constantly check whether it is connected to or disconnected from this device. I was given code in which the Model (the USB connection code project) starts a thread that continuously checks whether the device is connected. It then uses callbacks to the ViewModel to set the properties that need to be set.
But shouldn't the ViewModel start the thread and then call the appropriate methods in the "USB connection code project" to check this?
If I do keep the thread in the model, then from reading other threads it sounds like I should probably use INotifyPropertyChanged instead of delegates / callbacks... correct?
The existing code has it exactly right.
Checking a USB device has absolutely nothing to do with the view or view logic, so it belongs in the model. Using a delegate or event callback to tell the View Model to update its state is a perfectly reasonable notification mechanism.
Utilizing INotifyPropertyChanged yourself is really painful, and not very semantically clear. I wouldn't change a thing about the described design.
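To make the shape of that design concrete, here is a hedged sketch; all types and names are invented for illustration:

using System;
using System.Threading;

// Model: polls the device on its own background thread and raises a
// plain event; it knows nothing about views or view models.
public class UsbConnectionMonitor
{
    public event Action<bool> ConnectionChanged;   // true = connected

    public void Start()
    {
        var thread = new Thread(() =>
        {
            while (true)
            {
                bool connected = CheckDevice();   // hypothetical native call
                var handler = ConnectionChanged;
                if (handler != null) handler(connected);
                Thread.Sleep(500);
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }

    private bool CheckDevice() { return false; /* placeholder */ }
}

// ViewModel side: subscribe and update a bindable property.
//   _monitor.ConnectionChanged += connected => IsDeviceConnected = connected;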
I'm currently building a C# application which will automatically authenticate a user against certain network resources when they connect to specific wireless networks.
At the moment, I'm using the Managed Wifi API to discover when a user connects / disconnects from a wireless network. I have an event handler, so that when any of these activities occurs, one of my methods is called to inspect the current state of the wireless connection.
To manage the state of the application, I have another class which is called the "conductor", which performs the operations required to change the state of the application. For instance, when the wireless card connects to the correct network, the conductor needs to change the system state from "Monitoring" to "Authenticating". If authentication succeeds, the conductor needs to change the state to "Connected". Disconnection results in the "Monitoring" state again, and an authentication error results in an "Error" state. These state changes (if the user requests) can result in TrayIcon notifications, so the user knows that they are being authenticated.
My current idea is to have the method that inspects the current state of the wireless connection call the "authenticate" or "disconnect" methods on the state manager. However, I'm not sure whether this is an appropriate use of the event handler: should it instead be setting a flag or sending a message via some form of IPC to a separate thread, which would then begin the authentication / disconnection process?
In addition to the event handler being able to request connection / disconnection, a user can also perform it via the tray icon. As a result, I need to ensure these background operations do not block the tray's interactions with the user.
Only one component should be able to request a change of the system state at any time, so I would need to use a mutex to prevent concurrent state changes. However, how I should synchronize the rest of these components is a bit of a mystery to me.
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
EDIT: Most importantly, I want to verify whether an event will be executed on a separate thread, so that it cannot block the main UI. In addition, I want to verify that if I have an event handler subscribed to an event, it will handle events serially, not in parallel (so that if the user connects and disconnects before the first connection event is processed, two state changes will not occur simultaneously).
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
That explains a few things. :)
I would read up on threads, event handling, and creation of system tray icons/interfaces.
It is important to note the following:
Events are processed on the same thread they are raised from. If you do not want the processing of an event to lock up the GUI, then you will need to move the work to a different thread.
When an event is fired, it passes the appropriate arguments to all the methods in its invocation list. This is pretty much the same as calling one method which in turn calls all the others (see the EventFired example below). The purpose of events is not simply to call methods (we can do that already), but to call methods which may not be known when the code is compiled (the click event on a button control, for example, is not known when the library containing the control is compiled). In short, if you can call the method instead of using an event, then do so.
// Firing an event is effectively just calling each subscribed method in turn:
void EventFired(int arg1, object arg2)
{
    SubscribedMethod1(arg1, arg2);
    SubscribedMethod2(arg1, arg2);
    SubscribedMethod3(arg1, arg2);
    SubscribedMethod4(arg1, arg2);
    SubscribedMethod5(arg1, arg2);
    SubscribedMethod6(arg1, arg2);
    SubscribedMethod7(arg1, arg2);
}
If you want to prevent a user interface from locking up, do the work on another thread. Remember, though, that user interface elements (forms, buttons, grids, labels, etc.) can only be accessed from their host thread. Use the Control.Invoke method to call methods on their thread.
Removing an option from the interface is not a good way to prevent race conditions (the user starting a connect/disconnect while one is already running), as the user interface will be on a different thread and could be out of sync (it takes time for separate threads to sync up). While there are many ways to solve this problem, the easiest for someone new to threading is to use a lock around the value. That way .NET will make sure only one thread can change the setting at a time. You will still need to update the user interface so the user knows the update is occurring.
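For example, the guarded flag might look like this (names invented):

private readonly object _stateLock = new object();
private bool _operationInProgress;

// Returns true only for the one caller that wins the right to start;
// the lock guarantees no two threads change the flag at the same time.
private bool TryBeginOperation()
{
    lock (_stateLock)
    {
        if (_operationInProgress) return false;
        _operationInProgress = true;
        return true;
    }
}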
Your general design sounds fine. You could use 2-3 threads: one for the user interface (tray icon), one for checking for new network connections, and one (which could be merged with the connection check) for checking the internet connection.
Hope this helps, let us know if you need more (or accept an answer).
As an option, an alternative: if I were you, and since you're starting anew anyway, I would seriously consider the Rx Reactive Extensions. They give a completely fresh look at events and event-based programming and help a lot with exactly the things you're dealing with (including synchronizing, dealing with threads, combining events, stopping, starting, etc.).
It might be a bit of a steep curve to learn at the start, but again, it might be worth it.
Hope this helps.
To me it seems that you're about to overengineer the project.
You basically need to implement an event in the conductor and subscribe to it in the main application. That's it.
If only one component may make a change at a time, but more than one component can request it, then using some sync mechanism, like the Mutex you noted, is a perfectly valid choice.
Hope this helps.
If you want at most one state change pending at any time, it is probably best to have the event handlers of the external events you are listening to hold a lock during their execution. This ensures an easy programming model, because you are guaranteed that the state of your app does not change underneath you. A separate thread is not needed in this particular case.
You need to make a distinction between the current state of the application and the target state. The user dictates the target state ("connected", "disconnected"). The actual state might be different. Example: the user wants to be disconnected, but the actual state is authenticating. Once the authentication step has completed, the state machine must examine the target state:
targetState == connected => set current state to connected
targetState == disconnected => begin to disconnect and set state to disconnecting
Separating the actual and target states allows the user to change his mind at any time and lets the state machine steer towards the desired state.
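In code, the actual/target split might look something like this sketch; the states and method names are illustrative, not prescriptive:

public enum ConnState { Disconnected, Authenticating, Connected, Disconnecting }

public class Conductor
{
    private readonly object _gate = new object();
    private ConnState _current = ConnState.Disconnected;
    private ConnState _target = ConnState.Disconnected;

    // Called from the tray icon or the wireless event handler.
    public void SetTarget(ConnState target)
    {
        lock (_gate) { _target = target; Step(); }
    }

    // Called when an asynchronous step (e.g. authentication) completes.
    public void OnAuthenticated()
    {
        lock (_gate)
        {
            _current = ConnState.Connected;
            Step();   // re-examine the target: the user may have changed it
        }
    }

    private void Step()
    {
        if (_current == ConnState.Connected && _target == ConnState.Disconnected)
        {
            _current = ConnState.Disconnecting;
            BeginDisconnect();   // hypothetical async operation
        }
        // ... handle the other (current, target) combinations similarly
    }

    private void BeginDisconnect() { }
}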
It's hard to give a precise answer without seeing the whole (proposed) structure of your app. But in general, yes, it's OK to use an event handler for that sort of thing, though I'd probably move the actual implementation out to a separate method so that you can more easily trigger it from other locations.
The comment about disabling the "Connect" button sounds right on to me, though it's quite conceivable you might need other forms of synchronization as well. If your app doesn't need to be multi-threaded, though, I'd steer away from introducing multiple threads just for the sake of it. If you do need them, look into the new Task APIs that are included as part of the Task Parallel Library. They abstract a lot of that stuff fairly well.
And the comment about not over-thinking the issue is also well-taken. If I were in your shoes, just beginning with a new language, I'd avoid trying to get the architecture just right at the start. Dive in, and develop it with the cognitive toolset you've already got. As you explore more, you'll figure out, "Oh, crap, this is a much better way to do that." And then go and do it that way. Refactoring is your friend.
I'm designing a desktop application with multiple layers: the GUI layer (WinForms MVP) holds references to interfaces of adapter classes, and these adapters call BL classes that do the actual work.
Apart from executing requests from the GUI, the BL also fires some events that the GUI can subscribe to through the interfaces. For example, there's a CurrentTime object in the BL that changes periodically and the GUI should reflect the changes.
There are two issues that involve multithreading:
I need to make some of the logic calls asynchronous so that they don't block the GUI.
Some of the events the GUI receives are fired from non-GUI threads.
At what level is it best to handle the multithreading? My intuition says the Presenter is the most suitable place; am I right? Can you give me an example application that does what I need? And does it make sense for the presenter to hold a reference to the form so it can invoke delegates on it?
EDIT: The bounty will probably go to Henrik, unless someone gives an even better answer.
I would look at using a Task-based BLL for those parts that can be described as "background operations" (that is, they're started by the UI and have a definite completion point). The Visual Studio Async CTP includes a document describing the Task-based Asynchronous Pattern (TAP); I recommend designing your BLL API in this way (even though the async/await language extensions haven't been released yet).
For parts of your BLL that are "subscriptions" (that is, they're started by the UI and continue indefinitely), there are a few options (in order of my personal preference):
Use a Task-based API but with a TaskCompletionSource that never completes (or only completes by being cancelled as part of application shutdown). In this case, I recommend writing your own IProgress<T> and EventProgress<T> (in the Async CTP): the IProgress<T> gives your BLL an interface for reporting progress (replacing progress events) and EventProgress<T> handles capturing the SynchronizationContext for marshalling the "report progress" delegate to the UI thread.
Use Rx's IObservable framework; this is a good match design-wise but has a fairly steep learning curve and is less stable than I personally like (it's a pre-release library).
Use the old-fashioned Event-based Asynchronous Pattern (EAP), where you capture the SynchronizationContext in your BLL and raise events by queuing them to that context.
EDIT 2011-05-17: Since writing the above, the Async CTP team has stated that approach (1) is not recommended (since it somewhat abuses the "progress reporting" system), and the Rx team has released documentation that clarifies their semantics. I now recommend Rx for subscriptions.
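As an illustration of the TAP shape for the "background operations" half, here is a sketch; the service and the unit of work are invented, while CancellationToken and IProgress&lt;T&gt; are the actual TAP conventions:

using System;
using System.Threading;
using System.Threading.Tasks;

public class SyncService
{
    // TAP convention: an XxxAsync method returning Task, with cancellation
    // and progress as its trailing parameters.
    public Task SynchronizeAsync(CancellationToken cancellationToken,
                                 IProgress<int> progress)
    {
        return Task.Factory.StartNew(() =>
        {
            for (int percent = 0; percent <= 100; percent += 10)
            {
                cancellationToken.ThrowIfCancellationRequested();
                DoUnitOfWork();   // hypothetical unit of work
                if (progress != null) progress.Report(percent);
            }
        }, cancellationToken);
    }

    private void DoUnitOfWork() { }
}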
It depends on what type of application you are writing. For example: do you accept bugs? What are your data requirements? Soft real-time? ACID? Eventually consistent and/or partially connected / sometimes disconnected clients?
Beware that there's a distinction between concurrency and asynchrony. You can have asynchrony, and hence interleaving of method calls, without actually having a concurrently executing program.
One idea could be to have a read side and a write side in your application, where the write side publishes events when it has changed. This could lead to an event-driven system: the read side would be built from the published events and could be rebuilt. The UI could be task-driven, in that a task to perform produces a command that your BL (or domain layer, if you so wish) consumes.
A logical next step, if you have the above, is to go event-sourced as well. Then you would recreate the internal state of the write model from what has previously been committed. There's a Google group about CQRS/DDD that could help you with this.
With regard to updating the UI, I've found that the IObservable interfaces in System.Reactive, System.Interactive, and System.CoreEx are well suited. They allow you to hop between different concurrent invocation contexts (dispatcher, thread pool, etc.), and they interoperate well with the Task Parallel Library.
You'd also have to consider where you put your business logic. If you go domain-driven, I'd say you could put it in your application, as you'd have an update procedure in place for the binaries you distribute anyway when the time comes to upgrade; but there's also the choice of putting it on the server. Commands can be a nice way to perform updates to the write side, and they make a convenient unit of work when connection-oriented code fails (they are small and serializable, and the UI can be designed around them).
To give you an example, have a look at this thread, with this code that adds a priority to the IObservable.ObserveOnDispatcher(...) call:
public static IObservable<T> ObserveOnDispatcher<T>(this IObservable<T> observable, DispatcherPriority priority)
{
    if (observable == null)
        throw new ArgumentNullException("observable");

    return observable.ObserveOn(Dispatcher.CurrentDispatcher, priority);
}

public static IObservable<T> ObserveOn<T>(this IObservable<T> observable, Dispatcher dispatcher, DispatcherPriority priority)
{
    if (observable == null)
        throw new ArgumentNullException("observable");
    if (dispatcher == null)
        throw new ArgumentNullException("dispatcher");

    // Marshal every notification onto the given dispatcher at the requested priority.
    return Observable.CreateWithDisposable<T>(o =>
        observable.Subscribe(
            obj => dispatcher.Invoke((Action)(() => o.OnNext(obj)), priority),
            ex => dispatcher.Invoke((Action)(() => o.OnError(ex)), priority),
            () => dispatcher.Invoke((Action)(() => o.OnCompleted()), priority)));
}
The example above could be used like this blog entry discusses:
public void LoadCustomers()
{
    _customerService.GetCustomers()
        .SubscribeOn(Scheduler.NewThread)
        .ObserveOn(Scheduler.Dispatcher, DispatcherPriority.SystemIdle)
        .Subscribe(Customers.Add);
}
... So, for example, with a virtual Starbucks shop, you'd have a domain entity, something like a 'Barista' class, which produces events such as 'CustomerBoughtCappuccino' : { cost : '$3', timestamp : '2011-01-03 12:00:03.334556 GMT+0100', ... }. Your read side would subscribe to these events. The read side could be some data model: for each of your screens that presents data, the view would have a unique ViewModel class, which would be synchronized with the view in an observable dictionary like this. The repository would be (:IObservable), and your presenters would subscribe to all of that, or just a part of it. That way your GUI could be:
Task driven -> command driven BL, with focus on user operations
Async
Read-write-segregated
Given that your BL only takes commands, and doesn't on top of that expose a 'good enough for all pages' type of read model, you can make most things in it internal, internal protected, and private, meaning you can use Code Contracts (System.Diagnostics.Contracts) to prove that you don't have any bugs in it (!). It would produce events that your read model would consume. You could take the main principles from Caliburn Micro about orchestrating workflows of yielded asynchronous tasks (IAsyncResults).
There are some Rx design guidelines you could read, and cqrsinfo.com covers event sourcing and CQRS. If you are indeed interested in going beyond the async programming sphere into the concurrent programming sphere, Microsoft has released a well-written free book on how to program such code.
Hope it helps.
I would consider the "Thread Proxy Mediator Pattern". Example here on CodeProject
Basically, all method calls on your adapters run on a worker thread and all results are returned on the UI thread.
The recommended way is to use threads in the GUI and then update your controls with Control.Invoke().
If you don't want to use threads in your GUI application, you can use the BackgroundWorker class.
The best practice is to have some logic in your Forms to update your controls from the outside, normally a public method. When this call is made from a thread that is not the main thread, you must guard against illegal cross-thread access using control.InvokeRequired / control.Invoke() (where control is the target control to update).
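A typical guard of that kind looks like this (statusLabel is a placeholder control name):

// Public method on the Form, safe to call from any thread.
public void UpdateStatus(string text)
{
    if (statusLabel.InvokeRequired)
    {
        // We're on a worker thread: re-invoke this call on the UI thread.
        statusLabel.Invoke((Action)(() => statusLabel.Text = text));
    }
    else
    {
        statusLabel.Text = text;
    }
}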
Take a look at this AsynCalculatePi example; maybe it's a good starting point.
I am currently using sequential workflows in Windows WF, but need to break up the process because I now have multiple workflows that need to share a piece of functionality. I believe there's a way to create custom code activities in WF that would basically accomplish this, but my plan is to eventually ditch WF in favor of Stateless; therefore, I don't want to spend the time right now learning how to code custom activities.
The only thing I can think of is to create a new WF project that contains all of the "shared" behaviors, and then launch them from within the workflows that need them. I'm working on this now to see how it goes, but can anyone tell me if this is just a Bad Idea?
EDIT -- one "problem" I see right now is that I use a singleton for the WF runtime, as I have experienced massive memory leaks before, even when I dispose of the WF RT properly. I track all WF instances in the initial caller of the workflow, so in order to handle the events properly, I'd have to pass this List of WF instances into the workflow so it can add the WF that I'm launching internally. Seems a bit messy to me, although I can certainly still try it this way. I track the WF instances because I'm trying to use this to enable Pause / Abort / Resume functionality. When the user clicks the respective button in the GUI, it loops over all WF instances and calls the matching method.
Your main problem with splitting a workflow into separate parts is that they are completely disconnected; that is, the main workflow doesn't wait for the child workflows to finish. This can be done, but it takes some doing.
Another thing to keep in mind is error handling. When a child workflow faults, the main workflow is not aware of it, which is quite different behavior from adding child activities.
If you need to reuse logic, you can also create composite activities using the designer. This is very similar to developing workflows, and you can reuse these activities in multiple workflows as needed.