I'm working in C# WPF with a proprietary framework (essentially a blend of Caliburn Micro and Castle Windsor) and I've got two singleton modules that have a race condition:
DeviceService - A service that manages a connection to a physical device emitting data. The service is "Startable" and hence is automatically constructed and initialized asynchronously.
ConnectionIndicatorViewModel - A client ViewModel that chiefly concerns itself with communicating to the user the status of the connection managed by DeviceService. Changes state mainly based on events fired by DeviceService.
My problem lies at application startup. In the constructor for the ViewModel, I set the default state to "Pending" because I assume that the Service has not finished initializing. Then the ViewModel simply handles the "Initialized" event fired by the Service. It's in this handler that I assess the actual connection state via a property on the Service and update the ViewModel.
Now, all of this works just fine because it is extremely unlikely that the race condition will poke its head in. However, in the unlikely case that the Service finishes its initialization before the ViewModel is constructed, the ViewModel will never handle that "Initialized" event and will just stay in its "Pending" state.
I've considered changing the Service interface to return awaitable types for properties, so that any module trying to access properties will have to wait for initialization to finish, but I'm not sure that this is the best approach. I'm also wary of having part of the client kick off the Service because then who should initialize it if several modules use it?
Is there some conventional way of dealing with this sort of asynchronous initialization that I am missing?
You mention using events to communicate between the service and the ViewModel. You could use Reactive Extensions (Rx) instead of events, and this can remove the race condition you describe above.
Put simply, this turns the service from a pull model into a push model: it pushes out data/events via a stream and allows you to compose LINQ queries over the stream. If you're not familiar with Rx there's plenty of good information out there.
In this scenario using Rx, I would have the service expose a property of type IObservable<T>, where T is your type (I guess some kind of State enum). The backing field for this property is the important part: it would be a ReplaySubject<T> with a buffer size of one. This means that any time someone subscribes to the property they will receive the last value published to the subject, so there isn't a race condition between publishing and subscribing to the stream.
This is probably a little easier to understand in code:
public enum State
{
    Initializing,
    Initialized,
}

public interface IMyService
{
    IObservable<State> Status { get; }
}

public class MyService : IMyService
{
    private ReplaySubject<State> _state;

    public MyService()
    {
        _state = new ReplaySubject<State>(1);
        _state.OnNext(State.Initializing);

        // Do initialisation stuff

        _state.OnNext(State.Initialized);
    }

    public IObservable<State> Status { get { return _state; } }
}
The example only accounts for initializing the service on the current thread (i.e. synchronously); this means it would block the calling thread, which I guess would be the Dispatcher thread if this is a XAML-based app.
If you require the initialization to be done asynchronously, you would look at using either Observable.Create<T> or Observable.Start<T> to start the work on a background thread so that it doesn't block the dispatcher (UI) thread.
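As a rough sketch of the asynchronous variant (the InitializeDevice method and the TaskPoolScheduler choice are illustrative, not taken from your code), the constructor could kick the work off with Observable.Start and publish the result into the same replayed stream:

public class MyService : IMyService
{
    private readonly ReplaySubject<State> _state = new ReplaySubject<State>(1);

    public MyService()
    {
        _state.OnNext(State.Initializing);

        // Run the initialisation on a background thread so the
        // dispatcher (UI) thread isn't blocked, then publish the result
        // into the replayed stream once it completes.
        Observable.Start(() => InitializeDevice(), TaskPoolScheduler.Default)
                  .Subscribe(
                      _ => _state.OnNext(State.Initialized),
                      ex => _state.OnError(ex));
    }

    public IObservable<State> Status { get { return _state; } }

    // Hypothetical long-running initialisation work.
    private void InitializeDevice() { /* connect to the device, etc. */ }
}

Because the ReplaySubject still replays the last value, any ViewModel constructed after initialization has completed will immediately receive Initialized on subscription.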
To consume this service you would do something like this in your ViewModel:
public class MyViewModel
{
    private State _state;

    public MyViewModel(IMyService myService)
    {
        myService.Status.ObserveOn(DispatcherScheduler.Current)
                 .Subscribe(x =>
                 {
                     _state = x;
                 });
    }

    public bool IsReady { get { return _state == State.Initialized; } }
}
Now there isn't a race condition between the Service and the ViewModel.
There can be a lot to learn about Reactive Extensions but it is a very good way to handle asynchronous calls when you're implementing an MVVM application.
Related
I have the following actor where I am trying to restart it and resend the failing message back to the actor:
public class BuildActor : ReceivePersistentActor
{
    public override string PersistenceId => "asdad3333";

    private readonly IActorRef _nextActorRef;

    public BuildActor(IActorRef nextActorRef)
    {
        _nextActorRef = nextActorRef;
        Command<Workload>(x => Build(x));
        RecoverAny(workload =>
        {
            Console.WriteLine("Recovering");
        });
    }

    public void Build(Workload Workload)
    {
        var context = Context;
        var self = Self;
        Persist(Workload, async x =>
        {
            // after this line executes
            // application goes into break mode
            // does not execute PreStart or Recover
            var workload = await BuildTask(Workload);
            _nextActorRef.Tell(workload);
            context.Stop(self);
        });
    }

    private Task<Workload> BuildTask(Workload Workload)
    {
        // works as expected if method made synchronous
        return Task.Run(() =>
        {
            // simulate exception
            if (Workload.ShowException)
            {
                throw new Exception();
            }
            return Workload;
        });
    }

    protected override void PreRestart(Exception reason, object message)
    {
        if (message is Workload workload)
        {
            Console.WriteLine("Prestart");
            workload.ShowException = false;
            Self.Tell(message);
        }
    }
}
Inside the success handler of Persist I am trying to simulate an exception being thrown, but on exception the application goes into break mode and the PreRestart hook is not invoked. However, if I make the BuildTask method synchronous by removing Task.Run, then on exception both the PreRestart and Recover<T> methods are invoked.
I would really appreciate it if someone could point me to the recommended pattern for this and tell me where I am going wrong.
Most probably, Akka.Persistence is not a good solution for your problem here.
Akka.Persistence uses event-sourcing principles for storing an actor's state. A few key points are important in this context:
What you're sending to the actor is a command. It describes a job you want done. Executing that command may result in doing some actual processing and may eventually lead to persisting the actor's linear state-change history in the form of events.
In Akka.NET the Persist method is used only to store events - they describe the fact that something has happened: because of that, they cannot be denied and they cannot fail (which is what you're doing in your Persist callback).
When an actor restarts at any point in time, it will always try to recreate its own state by replaying all events persisted up to the last known point in time. For this reason it's important that the Recover method only focuses on replaying the actor's state (it can be called multiple times over the same event) and never results in side effects (an example of a side effect is sending an email). Any exception thrown there means that the actor's state is irrecoverably corrupted and that the actor will be killed.
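A rough sketch of that command/event split applied to your actor (the BuildSucceeded event type and the simplified BuildTask are illustrative, not a prescribed API): do the fallible, asynchronous work outside of Persist, pipe the result back as a message, and only then persist an event that cannot fail.

// Illustrative event type - a fact that the build has completed.
public class BuildSucceeded
{
    public BuildSucceeded(Workload workload) { Workload = workload; }
    public Workload Workload { get; }
}

public class BuildActor : ReceivePersistentActor
{
    public override string PersistenceId => "build-actor-1";

    public BuildActor()
    {
        // The incoming Workload is a command: do the (possibly failing)
        // work outside of Persist and pipe the outcome back to Self.
        Command<Workload>(workload =>
        {
            BuildTask(workload)
                .ContinueWith(t => new BuildSucceeded(t.Result),
                              TaskContinuationOptions.OnlyOnRanToCompletion)
                .PipeTo(Self);
        });

        // The event is a fact: persisting it should never throw.
        Command<BuildSucceeded>(evt => Persist(evt, e => { /* update state */ }));

        // Recovery only replays state - no side effects.
        Recover<BuildSucceeded>(evt => { /* rebuild state */ });
    }

    // Stand-in for the real asynchronous build work.
    private Task<Workload> BuildTask(Workload workload) => Task.Run(() => workload);
}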
If you want to resend the message to your actor, you could:
Put a reliable message queue (e.g. RabbitMQ or Azure Service Bus) or a log (Kafka or Event Hub) in front of your actor processing pipeline. This is actually the most reasonable scenario in many cases.
Use at-least-once delivery semantics from Akka.Persistence - but IMHO only if for some reason you cannot use the first solution.
The most simplistic and unreliable option (since messages reside only in memory and are never persisted) is the dead-letter queue. Every undeliverable message is sent there. You can subscribe to it and filter the incoming data to detect which messages should be sent again to their recipients.
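As a rough illustration of that last option (the resend policy here is purely hypothetical), a monitoring actor can subscribe to the event stream and inspect dead letters:

public class DeadLetterMonitor : ReceiveActor
{
    public DeadLetterMonitor()
    {
        Receive<DeadLetter>(dead =>
        {
            // Decide whether the undelivered message is worth resending.
            if (dead.Message is Workload workload)
            {
                dead.Recipient.Tell(workload, dead.Sender);
            }
        });
    }

    protected override void PreStart()
    {
        // Receive every DeadLetter published on the system's event stream.
        Context.System.EventStream.Subscribe(Self, typeof(DeadLetter));
    }
}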
I'm trying to build a file download actor, using Akka.net. It should send messages on download completion but also report download progress.
In .NET there are classes supporting asynchronous operations using more than one event. For example WebClient.DownloadFileAsync has two events: DownloadProgressChanged and DownloadFileCompleted.
Preferably, one would use the task-based async version and the .PipeTo extension method. But I can't see how that would work with an async method exposing two events, as is the case with WebClient.DownloadFileAsync. Even with WebClient.DownloadFileTaskAsync you still need to handle DownloadProgressChanged using an event handler.
The only way I found to use this was to hook up two event handlers upon creation of my actor. Then, in the handlers, I send messages to Self and the Sender. For this, I must refer to some private fields of the actor from inside the event handlers. This feels wrong to me, but I cannot see another way out.
Is there a safer way to use multiple event handlers in an Actor?
Currently, my solution looks like this (_client is a WebClient instance created in the constructor of the actor):
public void HandleStartDownload(StartDownload message)
{
    _self = Self;
    _downloadRequestor = Sender;
    _uri = message.Uri;
    _guid = message.Guid;
    _tempPath = Path.GetTempFileName();
    _client.DownloadFileAsync(_uri, _tempPath);
}

private void Client_DownloadFileCompleted(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
{
    var completedMessage = new DownloadCompletedInternal(_guid, _tempPath);
    _downloadRequestor.Tell(completedMessage);
    _self.Tell(completedMessage);
}

private void Client_DownloadProgressChanged(object sender, DownloadProgressChangedEventArgs e)
{
    var progressedMessage = new DownloadProgressed(_guid, e.ProgressPercentage);
    _downloadRequestor.Tell(progressedMessage);
    _self.Tell(progressedMessage);
}
So when the download starts, some fields are set. Additionally, I make sure I Become a state where further StartDownload messages are stashed, until the DownloadCompleted message is received by Self:
public void Ready()
{
    Receive<StartDownload>(message =>
    {
        HandleStartDownload(message);
        Become(Downloading);
    });
}

public void Downloading()
{
    Receive<StartDownload>(message =>
    {
        Stash.Stash();
    });
    Receive<DownloadCompleted>(message =>
    {
        Become(Ready);
        Stash.UnstashAll();
    });
}
For reference, here's the entire Actor, but I think the important stuff is in this post directly: https://gist.github.com/AaronLenoir/4ce5480ecea580d5d283c5d08e8e71b5
I must refer to some private fields of the actor from inside the event handlers. This feels wrong to me, but I cannot see another way out.
Is there a safer way to use multiple event handlers in an Actor?
There's nothing inherently wrong with an actor having internal state, and members that are part of that state raising events which are handled within the actor. No more wrong than this would be if taking an OO approach.
The only real concern is if that internal state gets mixed between multiple file download requests, but I think your current code is sound.
A possibly more palatable approach may be to look at the FileDownloadActor as a single use actor, fire it up, download the file, tell the result to the sender and then kill the actor. Starting up actors is a cheap operation, and this completely sidesteps the possibility of sharing the internal state between multiple download requests.
Unless of course you specifically need to queue downloads to run sequentially as your current code does - but the queue could be managed by another actor altogether and still treat the download actors as temporary.
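A rough sketch of that idea (the coordinator and its wiring are illustrative): a parent spawns a throwaway child per request, so no download's state can leak into another.

public class DownloadCoordinator : ReceiveActor
{
    public DownloadCoordinator()
    {
        Receive<StartDownload>(message =>
        {
            // One short-lived child per download request.
            var worker = Context.ActorOf(Props.Create(() => new FileDownloadActor()));
            worker.Forward(message);
        });
    }
}

public class FileDownloadActor : ReceiveActor
{
    public FileDownloadActor()
    {
        Receive<StartDownload>(message =>
        {
            // ... perform the download, Tell the requester the result ...
            Context.Stop(Self);   // single use: stop once the result has been sent
        });
    }
}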
I don't know if that is your case, but I see people treating Actors as microservices when they are simply objects. Remember that Actors have internal state.
Now think about scalability: you can't scale messages to one Actor in a distributed Actor System. The messages you're sending to one Actor will be processed on the node executing that Actor.
If you want to execute download operations in parallel (for example), you can do as Patrick said and create one Actor per download operation; that Actor can then be executed on any available node.
How do people structure their code when using the C# stateless library?
https://github.com/nblumhardt/stateless
I'm particularly interested in how this ties in with injected dependencies, and in how to approach responsibilities and layering correctly.
My current structure involves the following:
public class AccountWf
{
    private readonly AspNetUser aspNetUser;

    private enum State { Unverified, VerificationRequestSent, Verfied, Registered }
    private enum Trigger { VerificationRequest, VerificationComplete, RegistrationComplete }

    private readonly StateMachine<State, Trigger> machine;

    public AccountWf(AspNetUser aspNetUser, AccountWfService userAccountWfService)
    {
        this.aspNetUser = aspNetUser;

        if (aspNetUser.WorkflowState == null)
        {
            aspNetUser.WorkflowState = State.Unverified.ToString();
        }

        machine = new StateMachine<State, Trigger>(
            () => (State)Enum.Parse(typeof(State), aspNetUser.WorkflowState),
            s => aspNetUser.WorkflowState = s.ToString()
        );

        machine.Configure(State.Unverified)
            .Permit(Trigger.VerificationRequest, State.VerificationRequestSent);

        machine.Configure(State.VerificationRequestSent)
            .OnEntry(() => userAccountWfService.SendVerificationRequest(aspNetUser))
            .PermitReentry(Trigger.VerificationRequest)
            .Permit(Trigger.VerificationComplete, State.Verfied);

        machine.Configure(State.Verfied)
            .Permit(Trigger.RegistrationComplete, State.Registered);
    }

    public void VerificationRequest()
    {
        machine.Fire(Trigger.VerificationRequest);
    }

    public void VerificationComplete()
    {
        machine.Fire(Trigger.VerificationComplete);
    }

    public void RegistrationComplete()
    {
        machine.Fire(Trigger.RegistrationComplete);
    }
}
Should we implement all processes (calls to services) within the OnEntry hook, or implement the processes on the outside after the state transition has been verified as allowed to take place? I'm wondering how to handle transaction management if so.
I guess what I'm after is some guidance from those who have already implemented something using stateless, and how they approached the code structure.
Before addressing the structure itself, a couple of remarks:
OnEntry actions are only executed if the trigger has been successfully fired.
Triggers fired that are not allowed in the current state will throw an InvalidOperationException. Consider configuring OnUnhandledTrigger if you're not expecting an exception (I've found that logging unhandled triggers is a good approach to finding flaws in the logic).
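A minimal sketch of that hook (the logging call is just a placeholder):

machine.OnUnhandledTrigger((state, trigger) =>
{
    // Swallow the invalid trigger instead of throwing, but record it so
    // gaps in the workflow logic stay visible.
    Console.WriteLine($"Unhandled trigger {trigger} in state {state}");
});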
My rule of thumb for the OnEntry/OnExit structuring is that any creation and logic is placed in OnEntry and any required clean-up is done in OnExit.
So in your case, given that you're using injected dependencies (and assuming you're not taking ownership of those, i.e., someone else manages their lifecycle), you can place all your logic in OnEntry.
With that in mind, the way that your state machine is currently structured is perfectly fine.
One last note: keep in mind that firing triggers from within the same thread that's advancing the state machine and doing the state machine logic can and will lead to stack overflow exceptions (see here on how to solve the auto-advance issue).
I've created an app which uses observable lists. I've made the ObservableList class thread-safe (I think) and it's working fine now in my application.
Now I'm trying to install my application as a service. This works fine as well, up until the point something gets added to the list. I think the thread there just dies. I've got the following code:
/// <summary>
/// Creates a new empty ObservableList of the provided type.
/// </summary>
public ObservableList()
{
    // Assign the current Dispatcher (owner of the collection)
    _currentDispatcher = Dispatcher.CurrentDispatcher;
}

/// <summary>
/// Executes this action in the right thread
/// </summary>
/// <param name="action">The action which should be executed</param>
private void DoDispatchedAction(Action action)
{
    if (_currentDispatcher.CheckAccess())
        action.Invoke();
    else
        _currentDispatcher.Invoke(DispatcherPriority.DataBind, action);
}

/// <summary>
/// Handles the event when a collection has changed.
/// </summary>
/// <param name="e"></param>
protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
    DoDispatchedAction(() => base.OnCollectionChanged(e));
}
While debugging, I've seen Collection.Add(object) being called. It starts the DoDispatchedAction function, and the last thing the debugger hits is _currentDispatcher.Invoke(DispatcherPriority.DataBind, action);. After this, the application continues but the code after Collection.Add(object) doesn't get executed anymore. The code which initially added the item to the ObservableList doesn't continue either. That's why I think the thread dies or something like that.
When checking the action in the debugger, I found out that the following message was there:
ApartmentState = '_currentDispatcher.Thread.ApartmentState' threw an exception of type 'System.Threading.ThreadStateException'
How can I solve this problem? Am I even thinking in the right direction?
As this is a hardware dependent service, this is a little bit different from the usual LOB-style application. The difference is: the changes which should trigger events come from the backend of the application, while the whole UI framework and service architecture is intended to be used so that the frontend asks for data which the backend provides.
You could bring the two together by creating some sort of "neutral ground" where they meet.
In the hardware handling component, I would have a background thread which runs continually or runs triggered by hardware interrupts, and updates its data structures with whatever data it collects from the hardware. Then, I would have a synchronized method which can create a consistent snapshot of the hardware data at the point of time when it is called.
In the WPF client, there would be a dispatcher timer which calls this method in set intervals and updates the ObservableCollections using the data snapshots. This is possible, because it would happen on the UI thread. Actually, if possible you should try to add and remove items from the ObservableCollections, not create new collection instances, unless the data in the collection changes completely from one call to the next.
The WCF service would only be a wrapper around the method which creates data snapshots: it would simply send back such a snapshot when called.
The WPF client for the WCF service would work as the local WPF client, only it would call the service instead of the hardware library directly, and probably I'd choose a longer interval for the DispatcherTimer, in order to avoid excessive network traffic. You could further optimize this by returning a special code which means "nothing has changed", in order to avoid sending the same data several times, or have separate methods for asking whether data has changed and retrieving the changed data.
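A rough sketch of the WPF side (IHardwareMonitor, Measurement and GetSnapshot are assumed names for the snapshot-producing component, not an existing API):

public class HardwareViewModel
{
    private readonly IHardwareMonitor _monitor;   // assumed snapshot provider
    private readonly DispatcherTimer _timer;

    public ObservableCollection<Measurement> Measurements { get; } =
        new ObservableCollection<Measurement>();

    public HardwareViewModel(IHardwareMonitor monitor)
    {
        _monitor = monitor;

        // The timer ticks on the UI thread, so updating the
        // ObservableCollection here is safe for any bound controls.
        _timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
        _timer.Tick += (s, e) =>
        {
            var snapshot = _monitor.GetSnapshot();
            Measurements.Clear();
            foreach (var item in snapshot)
                Measurements.Add(item);
        };
        _timer.Start();
    }
}

Clearing and re-adding is the blunt version; as noted above, updating only the changed items is preferable when the data rarely changes wholesale.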
From what I understand, you have a core code that should run as a Windows service, and a WPF application that uses the same core code.
So basically you should have something like 3 projects in your solution:
a core assembly that does some hardware-related job
an executable that will be installed as a Windows service. This executable references the core assembly
a WPF application that also references the core assembly
Dispatchers are helpful to marshall back an action to the UI thread. This is basically used to execute some code in the UI thread in WPF applications. For example, when you bind a collection to a DataGrid, the CollectionChanged event must be fired on the UI thread because it'll cause, thanks to the binding, the UI to be updated. And UI must be updated from the UI thread.
Your core assembly shouldn't have to deal with dispatchers as there is no UI to update. You could use simple Collection here, as you won't bind it to any UI component. Same for your Windows service executable.
For your WPF application, on the other hand, you could use an ObservableCollection bound to a UI component (a DataGrid, for example). Only in this assembly will you have to ensure UI components are always updated from the UI thread (which means you need the Dispatcher for that).
So, a code example:
Core assembly:
public IEnumerable<SomeClass> GetHardwareInfo()
{
    return new List<SomeClass> { ... };
}
Windows Service executable:
internal static void Main(string[] args)
{
    ...
    var objs = new MyCoreInstance().GetHardwareInfo();
    ...
}
WPF application (let's say it's the ViewModel):
// Some UI component is bound to this observable collection
public ObservableCollection<SomeClass> MyCol
{
    get
    {
        return this.myCol;
    }
    set
    {
        if (this.myCol != value)
        {
            this.myCol = value;
            this.RaisePropertyChanged("MyCol");
        }
    }
}

public void UpdateList()
{
    var info = new MyCoreInstance().GetHardwareInfo();

    // Now, marshall back to the UI thread to update the collection
    Application.Current.Dispatcher.Invoke(() =>
    {
        this.MyCol = new ObservableCollection<SomeClass>(info);
    });
}
I have just started using MvvmCross, but I didn't find any info about how I can execute UI code from a ViewModel.
In Caliburn there are coroutines, so I can access the view and keep the UI code separated from the ViewModel code.
In my first case I need to open a dialog from a command inside a ViewModel; what is the correct way?
Right now I'm developing a WinRT app.
Thanks
There isn't any hard/fast rule on this within MvvmCross.
Generally, when I need to do this I use the Messenger plugin.
This answer assumes you are using the latest Alpha v3 code. For older vNext code you'll have to do some translation - see notes below.
To use this approach:
I reference Cirrious.MvvmCross.Plugins.Messenger.dll from both Core and UI projects.
Then I add a line somewhere in Setup.cs (e.g. in InitializeLastChance) to:
Cirrious.MvvmCross.Plugins.Messenger.PluginLoader.Instance.EnsureLoaded();
Then in the Core project I add a message:
public class InputIsNeededMessage : MvxMessage
{
    public InputIsNeededMessage(object sender) : base(sender) { }
}
In the ViewModel I can get the Messenger by constructor injection or by:
var messenger = Mvx.Resolve<IMvxMessenger>();
and I can send messages by calling:
messenger.Publish(new InputIsNeededMessage(this));
In the View I can again get to the messenger and subscribe to messages using:
var messenger = Mvx.Resolve<IMvxMessenger>();
_token = messenger.SubscribeOnMainThread<InputIsNeededMessage>(OnInputIsNeeded);
where _token must be a member variable - if it isn't then the subscription won't persist - the subscription itself is weak by default (so you never have to unsubscribe)
and where OnInputIsNeeded is something like:
private void OnInputIsNeeded(InputIsNeededMessage message)
{
    if (message.Sender != ViewModel)
        return;

    // do stuff here - you are already on the UI thread
}
The above sequence is what I normally do for 'proper code'
To start with, using a Messenger/EventAggregator can feel uncomfortable - it certainly took me a while to get used to it - but now I use it everywhere. The pub/sub message decoupling is very flexible for testing and for future maintenance of code (IMO).
As alternatives to this approach above I do sometimes take shortcuts:
sometimes I fire normal C# events from the ViewModel and have the View respond to these
sometimes I have special marker properties and fire the UI code from them
Sorry for using v3 syntax - but the changeover is coming and it's what I'm now coding in...
To switch back to vNext I think you might need to:
use IMessenger instead of IMvxMessenger
use BaseMessage instead of the MvxMessage
use Subscribe instead of SubscribeOnMainThread - but then you will need to marshall the message onto the UI thread yourself.
There exists an easier way. Here is the method I use for executing any action on the main thread:
protected void RunOnUIThread(Action action)
{
    var dispatcher = Mvx.Resolve<IMvxMainThreadDispatcher>();
    dispatcher.RequestMainThreadAction(action);
}
Hope it helps. Cheers.