Apologies if this is a really stupid question, but I'm just getting started with Caliburn.Micro and I'm struggling with the EventAggregator: nothing seems to be subscribing...
I'm not sure whether the problem is with the view model or the bootstrapper. Here is the view model:
class MainWindowViewModel : Screen
{
    private readonly IEventAggregator _eventAggregator;

    public MainWindowViewModel(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
        _eventAggregator.Subscribe(this);
    }

    public void SayHello()
    {
        _eventAggregator.Publish("Hello World!");
    }

    public void Handle(string message)
    {
        MessageBox.Show(message);
    }
}
Bootstrapper:
class AppBootstrapper : Bootstrapper<MainWindowViewModel>
{
    public static readonly Container ContainerInstance = new Container();

    protected override void Configure()
    {
        ContainerInstance.Register<IWindowManager, WindowManager>();
        ContainerInstance.RegisterSingle<IEventAggregator, EventAggregator>();
        ContainerInstance.Register<MainWindowViewModel, MainWindowViewModel>();
        ContainerInstance.Verify();
    }

    protected override IEnumerable<object> GetAllInstances(Type service)
    {
        return ContainerInstance.GetAllInstances(service);
    }

    protected override object GetInstance(System.Type service, string key)
    {
        return ContainerInstance.GetInstance(service);
    }

    protected override void BuildUp(object instance)
    {
        ContainerInstance.InjectProperties(instance);
    }
}
Any ideas what I'm missing? I feel I must not be linking something up somewhere...
I am using SimpleInjector as the IoC container.
EDIT:
It seems like a very simple case of not knowing what I was doing. RTFM.
Implementing IHandle<string> does work. It seems to get called twice the first time the type is handled, though. I'll do some investigating as to why.
It sounds like you've already arrived at a solution of sorts.
I believe it should work provided you implement an IHandle<T> interface using a type compatible with the event you're publishing. E.g.:
class MainWindowViewModel : Screen, IHandle<string>
{
    // ... your code

    public void Handle(string myEventString)
    {
        // Do something.
    }
}
If at all helpful: when I use the EventAggregator, I tend to create a static EventAggregator instance (exposed from a small helper class) and use it in any ViewModels that require it. It may help in cases where you've accidentally initialised the EventAggregator multiple times (which might be the cause of your double event).
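For illustration, a minimal sketch of such a helper, assuming Caliburn.Micro's EventAggregator (the EventAggregatorProvider name is made up):
// Hypothetical helper exposing one shared EventAggregator for the whole app.
public static class EventAggregatorProvider
{
    private static readonly IEventAggregator _instance = new EventAggregator();

    public static IEventAggregator Instance
    {
        get { return _instance; }
    }
}
Any ViewModel can then call EventAggregatorProvider.Instance.Subscribe(this) and be certain it is talking to the same aggregator instance.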
I also sometimes create small helper classes to wrap up event information. E.g.:
public sealed class DownloadFinishedEvent
{
    public readonly string EventText = "Download Completed";

    // Additional download info here.

    public override string ToString()
    {
        return this.EventText;
    }
}
The Caliburn.Micro documentation example shows that the subscriber has to implement the IHandle interface. I think that's the problem.
Related
I am playing around with using Akka.NET in a new WPF .NET Framework application I am currently working on.
Mostly the process of using actors in your application seems pretty self-explanatory; however, when it comes to actually utilising the actor output at the application view level, I have gotten a bit stuck.
Specifically, there appear to be two options for how you might handle receiving and processing events in your actor.
Create an actor with publicly exposed event handlers. So maybe something like this:
public class DoActionActor : ReceiveActor
{
    public event EventHandler<EventArgs> MessageReceived;

    private readonly ActorSelection _doActionRemoteActor;

    public DoActionActor(ActorSelection doActionRemoteActor)
    {
        this._doActionRemoteActor = doActionRemoteActor
            ?? throw new ArgumentNullException(nameof(doActionRemoteActor), "doActionRemoteActor must be provided.");
        this.Receive<GetAllStuffRequest>(this.HandleGetAllStuffRequestReceived);
        this.Receive<GetAllStuffResponse>(this.HandleGetAllStuffResponseReceived);
    }

    public static Props Props(ActorSystem actorSystem, string doActionRemoteActorPath)
    {
        ActorSelection doActionRemoteActor = actorSystem.ActorSelection(doActionRemoteActorPath);
        return Akka.Actor.Props.Create(() => new DoActionActor(doActionRemoteActor));
    }

    private void HandleGetAllStuffResponseReceived(GetAllStuffResponse obj)
    {
        this.MessageReceived?.Invoke(this, new EventArgs());
    }

    private void HandleGetAllStuffRequestReceived(GetAllStuffRequest obj)
    {
        this._doActionRemoteActor.Tell(obj, this.Sender);
    }
}
So basically you can then create your view and invoke calls by doing something like _doActionActor.Tell(new GetStuffRequest()); and then handle the output through the event handler. This works well, but it seems to break the 'actors everywhere' model that Akka.NET encourages, and I am not sure about the concurrency implications of such an approach.
The alternative appears to be to make my ViewModels actors themselves. So basically I have something that looks like this:
public abstract class BaseViewModel : ReceiveActor, IViewModel
{
    public event PropertyChangedEventHandler PropertyChanged;

    public abstract Props GetProps();

    protected void RaisePropertyChanged(PropertyChangedEventArgs eventArgs)
    {
        this.PropertyChanged?.Invoke(this, eventArgs);
    }
}

public class MainWindowViewModel : BaseViewModel
{
    public MainWindowViewModel()
    {
        this.Receive<GetAllTablesResponse>(this.HandleGetAllTablesResponseReceived);
        ActorManager.Instance.Table.Tell(new GetAllTablesRequest(1), this.Self);
    }

    public override Props GetProps()
    {
        return Akka.Actor.Props.Create(() => new MainWindowViewModel());
    }

    private void HandleGetAllTablesResponseReceived(GetAllTablesResponse obj)
    {
    }
}
This way I can handle actor events directly in the actors themselves (which are actually my view models).
The problem I run into when trying to do this is correctly configuring my IoC container (Castle Windsor) to build the Akka.NET instances.
So I have some code to create the Akka.NET objects that looks like this:
Classes.FromThisAssembly()
    .BasedOn<BaseViewModel>()
    .Configure(config => config.UsingFactoryMethod((kernel, componentModel, context) =>
    {
        var props = Props.Create(context.RequestedType);
        var result = ActorManager.Instance.System.ActorOf(props, context.RequestedType.Name);
        return result;
    }))
This works great at actually creating an instance of IActorRef, BUT unfortunately I cannot cast the actor reference back to the actual object I need (in this case BaseViewModel).
So if I try to do return (BaseViewModel)result; I get an invalid cast exception, which obviously makes sense because I am getting an IActorRef object, not a BaseViewModel.
So in conclusion I am hoping to get two questions answered:
1. What is the best way to deal with Akka.NET actors in MVVM applications, specifically when it comes to handling received messages and displaying their output?
2. Is there a way to configure my IoC setup to both create an IActorRef instance and add it to the system, BUT return an instance of the concrete BaseViewModel implementation backing that actor?
Below is the current solution I am using, in the hope that someone might propose something better.
Basically I have abandoned my attempt at making my view models actors, and have currently settled on using an interface to communicate between the ViewModel and the actor.
The current solution looks like this:
public class MainWindowViewModel : BaseViewModel, ITableResponseHandler
{
    public void HandleResponse(IEnumerable<Entity> allEntities) { }
}

public interface ITableResponseHandler
{
    void HandleResponse(IEnumerable<Entity> allEntities);
}

public class MyActor : ReceiveActor
{
    private readonly ITableResponseHandler _viewModel;

    public MyActor(ITableResponseHandler viewModel)
    {
        this._viewModel = viewModel;
        this.Receive<GetAllEntitiesResponse>(this.HandleGetAllEntitiesResponseReceived);
    }

    private void HandleGetAllEntitiesResponseReceived(GetAllEntitiesResponse obj)
    {
        this._viewModel.HandleResponse(obj.Result);
    }
}
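Wiring this up might look roughly like the following sketch (ActorManager and the message types come from the code above; the actor name is arbitrary):
// Create the view model first, then hand it to the actor via Props so the
// actor can call back through the ITableResponseHandler interface.
var viewModel = new MainWindowViewModel();
var actorRef = ActorManager.Instance.System.ActorOf(
    Props.Create(() => new MyActor(viewModel)), "tableResponseActor");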
While I don't feel this is ideal, it basically lets me avoid all the extra complexity of trying to make my view models themselves actors, while sufficiently decoupling the actor from the view.
I hope someone else has faced this problem and might be able to provide some insight into a better solution for handling Akka.NET output in an MVVM application.
I have two .NET parties that need to be bound by a contract. Party1 and party2 need to be able to call some methods on each other (mostly making calls and reporting results back). I have a duplex contract in mind, but the parties are not using WCF.
Is there a design pattern for this?
Edit
The parties are part of the same application. I create the application (party1) and someone else creates a DLL (party2) that I load dynamically. Now, both of us should be able to call methods on each other, so I am out to create an interface contract between us. The intent is to find out whether there is a known pattern for doing that.
A common solution is to use some kind of pub/sub pattern. By doing so you can avoid circular dependencies.
Basically you create some kind of class which is used to subscribe to events (and publish them).
So both your classes do something like this (but with different events):
public class ClassA : IEventHandler<UserCreated>
{
    private readonly IEventManager _eventManager;

    public ClassA(IEventManager manager)
    {
        // I subscribe to this event (which is published by the other class)
        manager.Subscribe<UserCreated>(this);
        _eventManager = manager;
    }

    public void Handle(UserCreated theEvent)
    {
        // gets invoked when the event is published by the other class
    }

    private void SomeInternalMethod()
    {
        // some business logic
        // and I publish this event
        _eventManager.Publish(new EmailSent(someFields));
    }
}
The event manager (simplified and not thread safe):
public class EventManager
{
    private readonly List<Subscriber> _subscribers = new List<Subscriber>();

    public void Subscribe<T>(IEventHandler<T> subscriber)
    {
        _subscribers.Add(new Subscriber { EventType = typeof(T), Handler = subscriber });
    }

    public void Publish<T>(T theEvent)
    {
        foreach (var wrapper in _subscribers.Where(x => x.EventType == typeof(T)))
        {
            ((IEventHandler<T>)wrapper.Handler).Handle(theEvent);
        }
    }
}
The small wrapper (note the handler field cannot be named Subscriber, since a member may not share the name of its enclosing type):
public class Subscriber
{
    public Type EventType;
    public object Handler;
}
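For completeness, the IEventHandler<T> interface that both versions rely on could be as simple as:
// Ties a subscriber to the event type it can handle.
public interface IEventHandler<T>
{
    void Handle(T theEvent);
}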
Voilà. The two classes are now loosely coupled from each other (while still being able to communicate with each other).
If you use an inversion of control container it gets easier, since you can simplify the event manager and just use the container (service location) to resolve all subscribers:
public class EventManager
{
    private readonly IYourContainer _container;

    public EventManager(IYourContainer container)
    {
        _container = container;
    }

    public void Publish<T>(T theEvent)
    {
        foreach (var subscriber in _container.ResolveAll<IEventHandler<T>>())
        {
            subscriber.Handle(theEvent);
        }
    }
}
I think you can use the following logic:
class Class1 : Interface1 { }

class Class2 : Interface2 { }

class Manager
{
    public Manager(Interface1 managedPart1, Interface2 managedPart2)
    {
        // ... some logic to connect the two interfaces
    }
}
This approach reminds me of the Bridge pattern, but that is very subjective.
I am having trouble with the Prism event aggregator. If I subscribe to and publish an event in the same module, it works fine, like this:
public class InfrastructureModule : IModule
{
    private IEventAggregator eventAggregator;

    public InfrastructureModule(IEventAggregator eventAggregator)
    {
        this.eventAggregator = eventAggregator;
        eventAggregator.GetEvent<TestEvent>().Subscribe(TestSub);
    }

    public void Initialize()
    {
        eventAggregator.GetEvent<TestEvent>().Publish("Infrastructure module");
    }

    private void TestSub(string s)
    {
        MessageBox.Show(s);
    }
}
However, if I subscribe to the event in another module, nothing happens when eventAggregator.GetEvent<TestEvent>().Publish() is called:
public class OtherModule : IModule
{
    private IEventAggregator eventAggregator;

    public OtherModule(IEventAggregator eventAggregator)
    {
        this.eventAggregator = eventAggregator;
    }

    public void Initialize()
    {
        eventAggregator.GetEvent<TestEvent>().Publish("Other module");
    }
}
The Infrastructure module is registered first, so the problem is not that OtherModule publishes the event before there is a subscriber. Any ideas what's going wrong?
Edit: Here is where I am registering the modules:
class Bootstrapper : UnityBootstrapper
{
    protected override DependencyObject CreateShell()
    {
        return new Shell();
    }

    protected override void InitializeShell()
    {
        base.InitializeShell();
        App.Current.MainWindow = (Window)this.Shell;
        App.Current.MainWindow.Show();
    }

    protected override void ConfigureModuleCatalog()
    {
        base.ConfigureModuleCatalog();
        ModuleCatalog moduleCatalog = (ModuleCatalog)this.ModuleCatalog;
        // Infrastructure module
        moduleCatalog.AddModule(typeof(Infrastructure.InfrastructureModule));
        moduleCatalog.AddModule(typeof(Other.OtherModule));
    }
}
Based on the comments from the OP, the objects are instantiated and then destroyed right after.
This makes the Publish("Other module"); call do nothing, because the listener has already been destroyed.
Now indeed, if you set KeepSubscriberReferenceAlive to true, it will work, because your EventAggregator will keep a reference to the subscriber object (InfrastructureModule).
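For reference, that flag is the third argument of Prism's Subscribe overload, so the subscription in InfrastructureModule would become something like:
// Keep a strong reference to the subscriber so it is not garbage collected.
eventAggregator.GetEvent<TestEvent>()
    .Subscribe(TestSub, ThreadOption.PublisherThread, true);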
That is not ideal: you went from using a weak event pattern, where you don't risk memory leaks, to having to manage object lifetimes yourself, and thus risking memory leaks just like with a regular .NET event.
Don't get me wrong, I'm not saying you absolutely shouldn't use KeepSubscriberReferenceAlive, but it should only be used on rare occasions.
That being said, your test case is an odd scenario: the Bootstrapper will call Initialize on every module you define, and your shell does not hold on to those modules afterwards. Since nobody holds a reference to the modules, they are destroyed.
The "normal" usage of Initialize is to inject the module being initialized into the Shell (or any other UserControl), and it makes sense: you don't want to initialize something you will not use.
The command object pattern is one that I still haven't been able to truly grasp, and I found an implementation in the code I'm currently working on, so I studied it long and hard to see if I could finally get it from a real-world example. The problem is that I am sure this is not properly implemented, and is just an attempt by someone who had read about the pattern and thought it made sense here.
Allow me to show it to you (for confidentiality reasons it will be greatly simplified, but I'll do my best to show the main concepts):
public class CommandOne
{
    private readonly Manager m_manager;
    private readonly MyForm m_form;

    public CommandOne(Manager manager, MyForm form)
    {
        m_manager = manager;
        m_form = form;
    }

    public void Execute()
    {
        m_manager.CommandOne(m_form);
    }
}

public class CommandTwo
{
    private readonly Manager m_manager;
    private readonly MyForm m_form;

    public CommandTwo(Manager manager, MyForm form)
    {
        m_manager = manager;
        m_form = form;
    }

    public void Execute()
    {
        m_manager.CommandTwo(m_form);
    }
}
The first thing that strikes me as odd is that these two classes neither inherit from a common abstract class nor implement a common interface.
The code that uses these commands is as follows:
public class MyForm : System.Windows.Forms.Form
{
    private readonly Manager m_manager;

    public MyForm(Manager manager)
    {
        m_manager = manager;
    }

    private void SomeMethod()
    {
        ....
        var cmd = new CommandOne(m_manager, this);
        cmd.Execute();
        ...
    }

    private void OtherMethod()
    {
        ....
        var cmd = new CommandTwo(m_manager, this);
        cmd.Execute();
        ...
    }
}
So the way I see it, this form is absolutely coupled to all the classes involved, except the manager, which is injected through its constructor. With this code I really don't see any benefit in creating the "command" classes, which basically just delegate the call to the manager's methods, since the form instantiates them when it needs them and calls Execute right afterwards.
So could someone please explain what pieces, if any, this implementation is missing to truly be the command object pattern and, although it might be too subjective, what the benefit of implementing it here would be?
Thank you.
Based on what you're showing here, it looks like the benefit of the command pattern is lost. There are a few reasons you might want to use the command pattern in the context of a WinForms app.
You want to execute a command later:
public interface ICommand
{
    void Execute();
}
Keep a history of executed commands so they can be undone by the user:
public interface ICommand
{
    void Execute();
    void Undo();
}
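A sketch of what such a history could look like (CommandHistory is a hypothetical helper, not from the code in question):
// Tracks executed commands so the most recent one can be undone.
public class CommandHistory
{
    private readonly Stack<ICommand> _executed = new Stack<ICommand>();

    public void Execute(ICommand command)
    {
        command.Execute();
        _executed.Push(command);
    }

    public void UndoLast()
    {
        if (_executed.Count > 0)
            _executed.Pop().Undo();
    }
}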
Check permissions to see if the current user has the right to execute the command. For example, maybe you have a RefundCustomerCommand, and not all customer service agents have the right to issue a refund, so you want to disable a button on the form:
public interface ICommand
{
    void Execute();
    bool CanExecute { get; }
}
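Picking up that RefundCustomerCommand example, such a command might look roughly like this (the Customer and User types and the HasPermission call are made up for illustration):
// A command that only executes when the current user may issue refunds.
public class RefundCustomerCommand : ICommand
{
    private readonly Customer _customer;   // hypothetical domain type
    private readonly User _currentUser;    // hypothetical user type

    public RefundCustomerCommand(Customer customer, User currentUser)
    {
        _customer = customer;
        _currentUser = currentUser;
    }

    public bool CanExecute
    {
        get { return _currentUser.HasPermission("IssueRefund"); }
    }

    public void Execute()
    {
        if (!CanExecute)
            throw new InvalidOperationException("User may not issue refunds.");
        _customer.IssueRefund();
    }
}
The form can then set button.Enabled = command.CanExecute before wiring up the click handler.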
You can also roll multiple commands together in a composite like this:
public class CompositeCommand : ICommand
{
    private readonly List<ICommand> commands;

    public CompositeCommand()
    {
        commands = new List<ICommand>();
    }

    public void Add(ICommand command)
    {
        commands.Add(command);
    }

    public void Execute()
    {
        foreach (var command in commands) command.Execute();
    }
}
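Usage might then look like this (the individual commands are placeholders for any ICommand implementations):
// Execute several commands as a single unit.
var batch = new CompositeCommand();
batch.Add(commandOne);
batch.Add(commandTwo);
batch.Execute();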
The command pattern also works nicely with the decorator pattern. You can easily add cross-cutting behavior to your commands, like retry logic:
public class RetryOnTimeout : ICommand
{
    private readonly ICommand command;
    private readonly int maxRetries;
    private int retryCount;

    public RetryOnTimeout(ICommand command, int maxRetries)
    {
        this.command = command;
        this.maxRetries = maxRetries;
    }

    public void Execute()
    {
        try
        {
            command.Execute();
        }
        catch (TimeoutException)
        {
            if (++retryCount > maxRetries)
                throw;
            Execute();
        }
    }
}
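Wrapping any existing command is then a one-liner (innerCommand stands for any ICommand instance):
// Retry the inner command up to 3 times on timeout before giving up.
ICommand reliable = new RetryOnTimeout(innerCommand, 3);
reliable.Execute();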
We have a number of Castle Windsor components declared in a config file.
Some of the components, somewhere deep inside, might require the services of other components.
The problem occurs when the application is being closed and the container is being disposed. During Dispose()/Stop() of a Startable/Disposable component (A), if it requires the services of some other component (B), a ComponentNotFoundException is raised; by that time B has already been removed from the container.
I've noticed that the order of the component declarations in the app config file is important, and reordering A and B solves the problem.
Is there a better way to influence the order in which the components are disposed?
Edited:
Following a request in the comments, here is sample code that will throw ComponentNotFoundException:
class Program
{
    static void Main()
    {
        IoC.Resolve<ICriticalService>().DoStuff();
        IoC.Resolve<IEmailService>().SendEmail("Blah");
        IoC.Clear();
    }
}

internal class CriticalService : ICriticalService, IStartable
{
    public void Start()
    { }

    public void Stop()
    {
        // Should throw ComponentNotFoundException, as EmailService is already disposed and removed from the container
        IoC.Resolve<IEmailService>().SendEmail("Stopping");
    }

    public void DoStuff()
    { }
}

internal class EmailService : IEmailService
{
    public void SendEmail(string message)
    {
        Console.WriteLine(message);
    }

    public void Dispose()
    {
        Console.WriteLine("EmailService Disposed.");
        GC.SuppressFinalize(this);
    }
}

internal interface ICriticalService
{
    void DoStuff();
}

internal interface IEmailService : IDisposable
{
    void SendEmail(string message);
}

public static class IoC
{
    private static readonly IWindsorContainer _container = new WindsorContainer(new XmlInterpreter());

    static IoC()
    {
        _container.AddFacility<StartableFacility>();
        // Swapping the following 2 lines resolves the problem
        _container.AddComponent<ICriticalService, CriticalService>();
        _container.AddComponent<IEmailService, EmailService>();
    }

    public static void Clear()
    {
        _container.Dispose();
    }

    public static T Resolve<T>()
    {
        return (T)_container[typeof(T)];
    }
}
Note: see the comment in the code showing how swapping the order in which the components are registered resolves the problem.
By having a static IoC class you're actually using the container as a service locator, thus losing most of the benefits of dependency injection.
The problem is that without proper injection, Windsor doesn't know that CriticalService depends on IEmailService, so it can't ensure the proper order of disposal.
If you refactor to make this dependency explicit, Windsor disposes the components in the correct order:
internal class CriticalService : ICriticalService, IStartable
{
    private readonly IEmailService email;

    public CriticalService(IEmailService email)
    {
        this.email = email;
    }

    ...
}
Here's how it would look after the refactoring.
I personally feel that any system that requires Dispose() to be called in a specific order has a flaw in its design.
Dispose() should always be safe to call. Errors should only occur if a component is used after disposal, and then ObjectDisposedException makes the most sense. In a case like this, I would rework your components so that they don't use other componentry during their Dispose() method (disposal should really be about cleaning up each component's own private resources). This would eliminate the issue entirely.
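A minimal sketch of that guard, reusing EmailService from the sample above (the disposed-flag approach is just one way to do it):
internal class EmailService : IEmailService
{
    private bool _disposed;

    public void SendEmail(string message)
    {
        // Fail loudly if the component is used after disposal.
        if (_disposed)
            throw new ObjectDisposedException(nameof(EmailService));
        Console.WriteLine(message);
    }

    public void Dispose()
    {
        // Clean up only this component's own resources; never call into other components here.
        Console.WriteLine("EmailService Disposed.");
        _disposed = true;
        GC.SuppressFinalize(this);
    }
}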