I am trying to understand the point of declaring an interface for a specific ViewModel. The only advantage I can think of is that we can specify common members shared with the ViewModel we use for design-time purposes. This way we can be sure that both the runtime and design-time view models will have the same members with the same names.
Is there any other advantage to this?
One advantage of using an interface is dependency injection: you can specify in the IoC container which concrete VM should be injected for that specific interface. Another advantage of an interface on a VM is unit testing, when you need to mock your VM: instead of using the concrete VM you mock it with your mocking library (e.g. Moq).
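A minimal sketch of both points, assuming a hypothetical IMainViewModel and using Moq for the test double; the container registration shown is Microsoft.Extensions.DependencyInjection, but any IoC container works the same way:

// Hypothetical view-model interface; names are illustrative.
public interface IMainViewModel
{
    string Title { get; }
    void Load();
}

public class MainViewModel : IMainViewModel
{
    public string Title { get; private set; }
    public void Load() { Title = "Loaded from the real services"; }
}

// IoC registration: the view (or any consumer) asks for IMainViewModel,
// and the container decides which concrete VM to inject.
// services.AddTransient<IMainViewModel, MainViewModel>();

// Unit test: no concrete VM is needed, the interface is mocked instead.
// var mock = new Moq.Mock<IMainViewModel>();
// mock.Setup(vm => vm.Title).Returns("Fake title");
// IMainViewModel vm = mock.Object;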
One thing I need the interface for is a list of different view models that share some common properties, like a "Caption" that can be used as the header content when they are displayed in a tab control. While this could basically be done without the interface (by using a list of object), the interface gives me more confidence that there will be fewer runtime errors.
Common interfaces like IDisposable are also something one comes across more frequently.
Another thing is if the view needs to interact with the view model (e.g. notify when the user clicks the "close" button). An interface may provide methods that can be invoked by the view in this case.
As Vlad already mentioned: using an interface will make it easier to mock the view model (but only if all properties are in the interface!).
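As a rough illustration of the points above; the names ITabViewModel, Caption and OnCloseRequested are made up for the example:

// Hypothetical common interface for anything hosted in the tab control.
public interface ITabViewModel
{
    string Caption { get; }        // bound to the tab header
    void OnCloseRequested();       // invoked by the view when the user clicks "close"
}

public class OrdersTabViewModel : ITabViewModel
{
    public string Caption { get { return "Orders"; } }
    public void OnCloseRequested() { /* save pending changes, unsubscribe, ... */ }
}

// The host can now keep a strongly typed list instead of a list of object:
// IList<ITabViewModel> tabs = new List<ITabViewModel> { new OrdersTabViewModel() };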
Related
I have a scenario where there are two pages that display orders in progress and delivered orders. The pages have almost identical bindings and commands, so making separate ViewModels would result in code duplication.
I was recommended to use the same ViewModel for both pages, since they would represent the same model and only the order statuses would differ. The data comes from different endpoints, so I have two service methods. I don't like the idea of making one ViewModel for both pages, because it would force me to tell the ViewModel which of the service methods it should get the orders from when it is initialized.
The solution I've thought of is to have OrdersViewModel as a base class containing all the common members, and to create two derived classes called OrdersInProgressViewModel and DeliveredOrdersViewModel.
Every one of my ViewModels has a method called InitializeAsync. If I go with the first approach and have a single ViewModel for both pages, I would probably have to pass the status down as navigation data to InitializeAsync and decide there which service method to use to fetch the orders.
With the second approach, I could have two separate ViewModels, each calling the corresponding service method in its InitializeAsync.
Which approach would adhere to MVVM more?
I also need to keep in mind that more page-specific behavior might be requested (another argument against a single ViewModel for both pages).
I would suggest using the second approach you've described: a base ViewModel class with derived classes.
EDIT
The answer to a similar question, in fact, is very similar to what I mean.
Generally I would recommend not having inheritance between different ViewModel classes, but instead having them all inherit directly from a common abstract base class.
This is to avoid introducing unnecessary complexity by polluting the ViewModel classes' interfaces with members that come from higher up in the hierarchy, but are not fully cohesive to the class's main purpose.
The coupling that comes with inheritance will also likely make it hard to change a ViewModel class without affecting any of its derived classes.
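A minimal sketch of that shape; IOrderService, Order, GetOrdersInProgressAsync and GetDeliveredOrdersAsync are illustrative names for your service abstraction and its two endpoint methods:

using System.Collections.Generic;
using System.Threading.Tasks;

public abstract class OrdersViewModelBase
{
    protected readonly IOrderService OrderService;   // hypothetical service abstraction
    public IList<Order> Orders { get; protected set; }

    protected OrdersViewModelBase(IOrderService orderService)
    {
        OrderService = orderService;
    }

    // Each page decides which endpoint to call; the rest of the members stay shared.
    public abstract Task InitializeAsync();
}

public class OrdersInProgressViewModel : OrdersViewModelBase
{
    public OrdersInProgressViewModel(IOrderService orderService) : base(orderService) { }

    public override async Task InitializeAsync()
    {
        Orders = await OrderService.GetOrdersInProgressAsync();
    }
}

public class DeliveredOrdersViewModel : OrdersViewModelBase
{
    public DeliveredOrdersViewModel(IOrderService orderService) : base(orderService) { }

    public override async Task InitializeAsync()
    {
        Orders = await OrderService.GetDeliveredOrdersAsync();
    }
}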
Still feeling my way through MVVM, and have come up against this issue:
I have a top-level ViewModel, let's call it ModelLevel1.
I have a sub-ViewModel that belongs to it, which a control uses for all its bindings; let's call that ControlViewModel1. The control only binds to ControlViewModel1.
In the top-level ViewModel there is a Repository, and a method to get a record from the repository by id.
What is the best way to allow ControlViewModel1 to access the method so it can get a record from the repository?
Cheers,
Rob
Following the Single Responsibility Principle, extract the logic for getting the record into a class that knows how to do it, and inject that dependency into both ViewModels.
This approach can be reused by other classes, can be easily tested and is performant (unlike the aggregated events proposal).
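A rough sketch of what that extraction could look like; IRecordLookup, IRepository and Record are placeholder names:

// Hypothetical abstraction that owns the "get a record by id" logic.
public interface IRecordLookup
{
    Record GetById(int id);
}

public class RepositoryRecordLookup : IRecordLookup
{
    private readonly IRepository repository;   // whatever repository ModelLevel1 already uses
    public RepositoryRecordLookup(IRepository repository) { this.repository = repository; }
    public Record GetById(int id) { return repository.Get(id); }
}

// Both view models receive the same dependency instead of reaching into each other:
public class ControlViewModel1
{
    private readonly IRecordLookup lookup;
    public ControlViewModel1(IRecordLookup lookup) { this.lookup = lookup; }

    public void LoadRecord(int id)
    {
        var record = lookup.GetById(id);
        // ... expose the record to the bound control
    }
}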
What is the best way to allow ControlViewModel1 to access the method so it can get a record from the repository?
IMHO controls are self-contained units with dependency properties that serve the function of a VM; hence no VM is needed.
Create a Dependency property on the control which takes in the target VM and hence has access to the method.
Create a static property on the app which will contain the VM in question and access it as a static call.
MVVM is simply a separation of concerns; remember that XAML ultimately gets compiled into code which is executed alongside your C# code. Whatever process you use in code to access methods and objects can also be carried over to control instances on the page.
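A sketch of the first option, assuming WPF: a dependency property that hands the target VM to the control. RecordControl and GetRecord are illustrative names, not something from your code:

using System.Windows;
using System.Windows.Controls;

public class RecordControl : Control
{
    public static readonly DependencyProperty TargetViewModelProperty =
        DependencyProperty.Register(
            "TargetViewModel",
            typeof(ModelLevel1),
            typeof(RecordControl),
            new PropertyMetadata(null));

    public ModelLevel1 TargetViewModel
    {
        get { return (ModelLevel1)GetValue(TargetViewModelProperty); }
        set { SetValue(TargetViewModelProperty, value); }
    }

    private void LoadRecord(int id)
    {
        if (TargetViewModel != null)
        {
            // GetRecord stands in for whatever repository method ModelLevel1 exposes.
            var record = TargetViewModel.GetRecord(id);
            // ...
        }
    }
}

// XAML usage: <local:RecordControl TargetViewModel="{Binding}" />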
I'm currently using Model-View-Presenter (Passive View) on my .NET Compact Framework project in C#. In my model I have lots of P/Invoke calls into a C/C++ DLL. The project is hardware testing equipment, typically with buttons and a large LCD touchscreen. It collects data (using a database) and transfers it to a PC.
I created a model interface, and the class that implements it invokes those P/Invoke methods. One reason is that I would like to encapsulate the P/Invoke, marshalling and interop inside the model.
Now I have a presenter. An example scenario: a user presses a button, the click event on the view calls the method on the presenter (via an interface), which finally calls the model's method (again via an interface).
Now it seems to me that the presenter is mostly becoming a wrapper around the model's business logic. If I add methods to the model, I also need to expose them through the interfaces, because the view's buttons need to invoke some of the methods in the model. I feel there is too much indirection. One example: in the model I have a thread waiting for events pushed by a C/C++ DLL, and in the presenter I have a thread which uses an observer pattern to queue and process the events coming from the model (changing screen views and telling the user what is happening).
Pseudo code:

From the interface of the view:

void viewChangeTestResultsText(string text);

From the interface of the presenter:

void PerformTest();

On the concrete class that implements the presenter interface:

public void PerformTest()
{
    // update the view, then delegate to the model
    interfaceView.viewChangeTestResultsText("Test Started");
    interfaceModel.PerformTest();
}

On the interface of the Model:

void PerformTest();

On the concrete class of the Model:

public void PerformTest()
{
    // the model encapsulates the P/Invoke call
    ModelPinvokeMethods.PerformTest();
}
In this code, the button click handler calls PerformTest on the presenter, the presenter calls PerformTest on the model, and the model calls the P/Invoke PerformTest. The indirection is already causing some pain, because I have lots of method calls to implement and the project is on a very tight deadline.
My project has another variant, and I know I will need a swappable presenter and, with it, a swappable model, because the business logic is somewhat different even though there are lots of similarities. Right now I am thinking of pushing all the logic from the model into the presenter, so that I only maintain the logic in the presenter and use the model purely for data handling (database, configuration, settings). I think that will be simpler in terms of development and code maintenance, but I am not sure about the impact on flexibility.
This is my first time using MVP with Passive View. I am not sure if I am missing something regarding the correct implementation of MVP. Any thoughts or suggestions?
Your understanding of MVP seems fine; you have correctly distinguished between presentation logic (perform test, synchronize view) and domain logic (PInvoke). With the interfaces you have set up, you can easily unit test the presenter (which is one of the main advantages of using MVP).
I would advise against putting all of the logic in the presenter as that can lead to a God Object.
Regarding your changeable-presenter problem, I'm not sure what you mean. Do you mean that you need a different presenter/model for each type of device? If so, it seems perfectly reasonable to have an MVP triad per device type if they are sufficiently distinct from one another. If you identify common traits between them, you can use either inheritance or utility classes to provide the common code, as sketched below.
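For instance, a common base presenter could carry the shared presentation flow while each device-specific presenter only overrides what differs. The view/model interface names below (ITestView, ITestModel, ShowStatus) are placeholders, not the ones from your code:

public abstract class TestPresenterBase
{
    protected readonly ITestView View;     // placeholder view interface
    protected readonly ITestModel Model;   // placeholder model interface

    protected TestPresenterBase(ITestView view, ITestModel model)
    {
        View = view;
        Model = model;
    }

    // Shared presentation flow.
    public void PerformTest()
    {
        View.ShowStatus("Test Started");
        Model.PerformTest();
        AfterTestStarted();
    }

    // Device-specific behaviour lives in the derived presenters.
    protected abstract void AfterTestStarted();
}

public class DeviceATestPresenter : TestPresenterBase
{
    public DeviceATestPresenter(ITestView view, ITestModel model) : base(view, model) { }
    protected override void AfterTestStarted() { /* device-A-specific screens, logging, ... */ }
}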
I've been experimenting with the oft-mentioned MVVM pattern and I'm having a hard time defining clear boundaries in some cases. In my application, I have a dialog that allows me to create a Connection to a Controller. There is a ViewModel class for the dialog, which is simple enough. However, the dialog also hosts an additional control (chosen by a ContentTemplateSelector), which varies depending on the particular type of Controller being connected. This control has its own ViewModel.
The issue I'm encountering is that, when I close the dialog by pressing OK, I need to actually create the requested connection, which requires information captured in the inner Controller-specific ViewModel class. It's tempting to simply have all of the Controller-specific ViewModel classes implement a common interface that constructs the connection, but should the inner ViewModel really be in charge of this construction?
My general question is: are there any generally accepted design patterns for how ViewModels should interact with each other, particularly when a 'parent' VM needs help from a 'child' VM in order to know what to do?
EDIT:
I did come up with a design that's a bit cleaner than I was originally thinking, but I'm still not sure if it's the 'right' way to do this. I have some back-end services that allow a ContentTemplateSelector to look at a Controller instance and pseudo-magically find a control to display for the connection builder. What was bugging me about this is that my top-level ViewModel would have to look at the DataContext for the generated control and cast it to an appropriate interface, which seems like a bad idea (why should the View's DataContext have anything to do with creating the connection?)
I wound up with something like this (simplifying):
public interface IController
{
    // Each controller type knows how to produce a builder for its own connection.
    IControllerConnectionBuilder CreateConnectionBuilder();
}

public interface IControllerConnectionBuilder
{
    // Builds the connection from whatever controller-specific data was collected.
    ControllerConnection BuildConnection();
}
I have my inner ViewModel class implement IControllerConnectionBuilder and the Controller returns the inner ViewModel. The top-level ViewModel then visualizes this IControllerConnectionBuilder (via the pseudo-magical mechanism). It still bothers me a little that it's my inner ViewModel performing the building, but at least now my top-level ViewModel doesn't have to know about the dirty details (it doesn't even know or care that the visualized control is using a ViewModel).
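Simplified, that arrangement could look roughly like this; the TcpControllerViewModel name, its HostName/Port properties and the ControllerConnection constructor are illustrative:

// Inner, controller-specific view model doubles as the connection builder.
public class TcpControllerViewModel : IControllerConnectionBuilder
{
    public string HostName { get; set; }   // collected by the nested control's bindings
    public int Port { get; set; }

    public ControllerConnection BuildConnection()
    {
        return new ControllerConnection(HostName, Port);
    }
}

// The top-level dialog view model never sees the concrete inner VM, only the interfaces.
public class ConnectionDialogViewModel
{
    private readonly IController controller;

    public ConnectionDialogViewModel(IController controller)
    {
        this.controller = controller;
    }

    public ControllerConnection CreateConnection()
    {
        return controller.CreateConnectionBuilder().BuildConnection();
    }
}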
I welcome additional thoughts if there are ways to clean this up further. It's still not clear to me how much responsibility it's 'okay' for the ViewModel to have.
An option which works well for interaction between viewmodels is to bind directly to observer classes sitting between the viewmodel classes.
I think you want to make your top-level ViewModel aware of the existence of the NestedViewModel; it makes sense from a hierarchical standpoint, since the master view contains the child view.
In my opinion, your instinct is right, it doesn't feel correct for the nested ViewModel to expose behaviours which are initiated by user actions on the top-level. Instead, the top-level ViewModel should be providing behaviors for the view it is associated with.
But I'd consider moving the responsibility for connection construction into an ICommand and exposing this command via your top-level ViewModel. You would then bind the OK button on your master dialog to this command, and the command would simply delegate to the top-level ViewModel, for example calling ViewModel.CreateConnection() when it is executed.
The responsibility of your nested control is then purely collecting the data and exposing it via its NestedViewModel, for consumption by the containing ViewModel, and it is theoretically more reusable in different contexts that require the same information to be entered (if any) - let's say you wanted to reuse it for editing already-created connections.
The only wrinkle would be if the different types of NestedViewModel expose a radically different set of data.
For example, one exposes HostName and Port as properties, and another exposes UserName and Password.
In that case you may need to do some infrastructural work to have your top-level ViewModel.CreateConnection() still work in a clean manner. Although if you have a small number of nested control types, it may not be worth the effort, and a simple NestedViewModel type check and cast may suffice.
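A minimal sketch of the command idea; DelegateCommand here is a hand-rolled stand-in for whatever RelayCommand/DelegateCommand helper your MVVM toolkit provides:

using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action execute;
    public DelegateCommand(Action execute) { this.execute = execute; }

    public event EventHandler CanExecuteChanged { add { } remove { } }
    public bool CanExecute(object parameter) { return true; }
    public void Execute(object parameter) { execute(); }
}

// On the top-level dialog ViewModel:
// public ICommand OkCommand { get; private set; }
// ...
// OkCommand = new DelegateCommand(() => CreateConnection());
//
// XAML: <Button Content="OK" Command="{Binding OkCommand}" />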
Does this sound viable?
I recently experimented with Unity (Microsoft Enterprise Library) for dependency injection. That might be a route to go, using interfaces that completely define what both viewmodels need to know from each other. MEF would be another option for dependency injection that I'm aware of.
HTH
Currently I have created an ABCFactory class that has a single method for creating ABC objects. Now that I think of it, maybe instead of having a factory I could just make a static method on my ABC class. What are the pros and cons of making this change? Wouldn't it amount to the same thing? I don't foresee having other classes inherit ABC, but one never knows!
Thanks
Having a single static method makes this much more difficult to test, whereas an instantiable object is easier to test. Also, dependency injection remains more of an option with the non-static solution.
Of course, if you don't need any of this, then these are not good arguments.
The main advantage of a factory is the ability to hide the reference to a specific class behind an interface. Since static methods cannot be part of an interface, a static factory method is basically the same as the constructor itself. The only useful application of static factory methods is to provide access to a private constructor, as is commonly done for singleton implementations.
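That private-constructor case looks like this (a classic eagerly initialized singleton; the Configuration name is just an example):

public sealed class Configuration
{
    private static readonly Configuration instance = new Configuration();

    // Callers cannot write "new Configuration()".
    private Configuration() { }

    // The static factory method is the only way to obtain the instance.
    public static Configuration GetInstance()
    {
        return instance;
    }
}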
In reality, if you want to get the benefits of a factory class, you need the static method in its own class. This will allow you to later create new factory classes, or reconfigure the existing one, to get different behaviors. For example, one factory class might create Unicorns, which implement the IFourHoovedAnimal interface. You might have an algorithm written that does things with IFourHoovedAnimals and needs to instantiate them. Later you can create a new factory class that instead instantiates Pegasuses, which also implement IFourHoovedAnimal. The old algorithm can now be reused for Pegasuses just by using the new factory! To make this work, both the PegasusFactory and the UnicornFactory must inherit from some common base class (usually an abstract class).
So you see, by placing the static method in its own factory class, you can swap out factory classes for newer ones to reuse old algorithms. This also improves testability, because unit tests can now be fed a factory that creates mock objects.
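A sketch of that example; the concrete types and the HerdingAlgorithm consumer are made up to show the swap:

public interface IFourHoovedAnimal
{
    void Gallop();
}

public class Unicorn : IFourHoovedAnimal { public void Gallop() { /* ... */ } }
public class Pegasus : IFourHoovedAnimal { public void Gallop() { /* ... */ } }

// Common base class so the factories themselves are interchangeable.
public abstract class AnimalFactory
{
    public abstract IFourHoovedAnimal Create();
}

public class UnicornFactory : AnimalFactory
{
    public override IFourHoovedAnimal Create() { return new Unicorn(); }
}

public class PegasusFactory : AnimalFactory
{
    public override IFourHoovedAnimal Create() { return new Pegasus(); }
}

// The old algorithm depends only on the base factory, so passing a
// PegasusFactory (or a factory of mocks in a unit test) reuses it unchanged.
public static class HerdingAlgorithm
{
    public static void Run(AnimalFactory factory)
    {
        IFourHoovedAnimal animal = factory.Create();
        animal.Gallop();
    }
}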
I have done the latter before (a static factory method on the class whose instances you are creating) for very small projects, but only because I needed it to help refactor some old code while keeping changes to a minimum. Basically, in that case I had factored out a chunk of code that created a bunch of ASP.NET controls and stuffed all of those controls into a user control. I wanted to make my new user control property-based, but it was easier for the old legacy code to create the user control with a parameter-based constructor.
So I created a static factory method that took all the parameters, then instantiated the user control and set its properties based on those parameters. The old legacy code used this static method to create the user control, and future code would use the "prettier" properties instead.
For concrete classes, factory methods are really just a layer of indirection around creating the actual type (which isn't to say they aren't useful, but as you've found, the factory method could really live anywhere).
Where the factory method really shines though is when your method creates instances of an interface type.
The "D" in Uncle Bob's SOLID Principles of Object Oriented Design is "The Dependency Inversion Priciple" Depend on abstractions, not on concretions.
An extreme following of that principle could have your main class create all your factories, with each factory using other factories via interfaces. The only appearance of "new" (creating concrete objects) would be in your main class, and your factories. All your objects would work with interfaces (abstractions), with the concrete dependencies obtained from supplied factory implementations.
You could then very easily adjust the wiring, or provide multiple Main classes customised for different scenarios.
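A small sketch of that arrangement (all names are illustrative); the only concretions created outside the factory live in Main:

public interface IMessageSender { void Send(string text); }
public interface IMessageSenderFactory { IMessageSender Create(); }

public class SmtpSender : IMessageSender
{
    public void Send(string text) { /* ... */ }
}

public class SmtpSenderFactory : IMessageSenderFactory
{
    public IMessageSender Create() { return new SmtpSender(); }
}

// Works entirely against abstractions.
public class NotificationService
{
    private readonly IMessageSenderFactory factory;
    public NotificationService(IMessageSenderFactory factory) { this.factory = factory; }
    public void Notify(string text) { factory.Create().Send(text); }
}

public static class Program
{
    public static void Main()
    {
        // The composition root: swap the factory here to customise the whole scenario.
        var service = new NotificationService(new SmtpSenderFactory());
        service.Notify("hello");
    }
}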
Overuse of design patterns is dangerous, and creational design patterns make sense when you have class hierarchies with defined interfaces, or when you need to build rather complex objects. If you have a simple design, use simple solutions. Therefore, in your case, Factory Method would be enough.
Yes, you are right, it is another design pattern :)