Blazor StateHasChanged() doesn't update global class values on page - c#

I'm trying to implement a multi-site server-side Blazor application that has two services implemented as singletons like this:
services.AddSingleton<MQTTService>();
services.AddHostedService(sp => sp.GetRequiredService<MQTTService>());
services.AddSingleton<DataCollectorService>();
services.AddHostedService(sp => sp.GetRequiredService<DataCollectorService>());
The MQTT service connects to the broker and manages the subscriptions, while the DataCollectorService subscribes to an event from the MQTT service so it is notified when a new message arrives. The business logic for the received data then happens in the DataCollectorService: interpreting the topic and the payload of the MQTT message. If it is valid, the DataCollectorService stores the data in a (for example) global static class:
if (mqtt.IsTopic(topic, MQTTService.TopicDesc.FirstTopic))
{
    if (topic.Contains("Data1"))
    {
        if (topic.Contains("Temperature"))
        {
            DataCenter.Data1.Temperature = Encoding.UTF8.GetString(message, 0, message.Length);
        }
    }
}
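For context, the wiring between the two singleton services might look roughly like the sketch below. This is only an illustration: the event name MessageReceived, its signature, and the HandleMessage method are assumptions, not taken from the question, and the IHostedService plumbing is omitted.
// Sketch only: event name and signature are assumptions, not from the question.
public class MQTTService
{
    // Raised by the MQTT client callback whenever a message arrives from the broker.
    public event Action<string, byte[]> MessageReceived;

    protected void RaiseMessageReceived(string topic, byte[] payload) =>
        MessageReceived?.Invoke(topic, payload);
}

public class DataCollectorService
{
    public DataCollectorService(MQTTService mqtt)
    {
        // Both services are singletons, so this subscription lives for the app's lifetime.
        mqtt.MessageReceived += HandleMessage;
    }

    private void HandleMessage(string topic, byte[] message)
    {
        // Topic/payload interpretation from the question goes here, e.g.
        // DataCenter.Data1.Temperature = Encoding.UTF8.GetString(message, 0, message.Length);
    }
}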
The DataCenter is just a static class in the namespace:
public static class DataCenter
{
    public static DataBlock Data1 = new DataBlock();
    public static DataBlock Data2 = new DataBlock();
    public static string SetMode;

    public class DataBlock
    {
        public string Temperature { get; set; }
        public string Name { get; set; }
    }
}
My goal with this approach is that every page in my project can simply bind to these global variables to display them.
The first problem that then occurs is that, obviously, the page is not aware of the change when the DataCollectorService updates a variable. That's why I implemented a notifying event for the pages, which can then call StateHasChanged. So my example page "Monitor" just wants to show all these values and injects the DataCollectorService:
#page "/monitor"
#inject DataCollectorService dcs
<MudText>DataBlock Data1: #DataCenter.Data1.Temperature/ Data2: #DataCenter.Data2.Temperature</MudText>
#code
{
protected override void OnInitialized()
{
dcs.OnRefresh += OnRefresh;
}
void OnRefresh()
{
InvokeAsync(() =>
{
Console.WriteLine("OnRefresh CALLED");
StateHasChanged();
});
}
}
This actually works, but it adds a new problem: every time I navigate to my Monitor page again, a NEW OnRefresh method gets hooked to the Action, and that results in multiple calls of OnRefresh. I find this behaviour rather logical, because I never remove an "old" OnRefresh method from the Action when I leave the page, since I don't know WHEN I leave the page.
Thinking about this problem, I came up with a solution:
if (!dcs.IsRegistered("monitor"))
{
    dcs.OnRefresh += OnRefresh;
    dcs.RegisterSubscription("monitor");
}
I wrapped the event subscription in a system that registers a token once the handler has been assigned. The problem now: the variables on the page don't refresh anymore!
And that's where I'm no longer sure how to understand what is going on. If I keep it like in the first example, just adding dcs.OnRefresh += OnRefresh; and letting it "stack up", it actually works - because there is always a "new" and "correctly" bound method which, in my limited understanding, has the correct context.
If I prevent this behaviour, I only have a somehow "old" method connected, which somehow can't execute StateHasChanged correctly. But I don't know why.
I'm not sure whether I could:
"Change" the context of the Invoke call so that StateHasChanged works again?
Change the way I register the handler on the Action?
I'm additionally confused as to why the first approach seems to call the method multiple times. Because if the old method is not able to correctly call StateHasChanged(), why can it be called at all?
I would very much appreciate some input here; googling this kind of thing was rather difficult because I don't know the exact root of the problem.

Not only do you have multiple calls, you also have a memory leak. The event subscription prevents the Monitor component from being collected. That is also why your registration guard broke the refresh: with the token in place, only the handler belonging to the first, now-discarded component instance stays subscribed, and calling StateHasChanged on that stale instance does nothing for the component currently on screen.
Make the page IDisposable:
#page "/monitor"
#inject DataCollectorService dcs
#implements IDisposable
...
#code
{
protected override void OnInitialized()
{
dcs.OnRefresh += OnRefresh;
}
...
public void Dispose()
{
dcs.OnRefresh -= OnRefresh;
}
}
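For completeness, the service side of this pattern might look roughly like the sketch below. The question only shows that OnRefresh exists; treating it as a plain Action and the place where it is raised are assumptions:
public class DataCollectorService
{
    // Pages subscribe in OnInitialized and unsubscribe in Dispose, as shown above.
    public event Action OnRefresh;

    private void StoreAndNotify(string topic, byte[] message)
    {
        // ...interpret topic/payload and update DataCenter as in the question...

        // Notify every currently subscribed component; each handler then calls
        // InvokeAsync(StateHasChanged) on its own component instance.
        OnRefresh?.Invoke();
    }
}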

Related

Blazor (Server) scoped object in dependency injection creating multiple instances

For demonstration purposes let's say I have a class called StateManager:
public class StateManager
{
    public StateManager()
    {
        IsRunning = false;
    }

    public void Initialize()
    {
        Id = Guid.NewGuid().ToString();
        IsRunning = true;
        KeepSession();
    }

    public void Dispose()
    {
        Id = null;
        IsRunning = false;
    }

    public string Id { get; private set; }
    public bool IsRunning { get; private set; }

    private async void KeepSession()
    {
        while (IsRunning)
        {
            Console.WriteLine($"{Id} checking in...");
            await Task.Delay(5000);
        }
    }
}
It has a method that runs after it is initialized and writes its Id to the console every 5 seconds.
In my Startup class I add it as a Scoped service:
services.AddScoped<StateManager>();
Maybe I am using the wrong location, but in my MainLayout.razor file I am initializing it in OnInitializedAsync():
@inject Models.StateManager StateManager
...
@code {
    protected override async Task OnInitializedAsync()
    {
        StateManager.Initialize();
    }
}
When running the application, after it renders the first page the console output shows that there are two instances running:
bcf76a96-e343-4186-bda8-f7622f18fb27 checking in...
e5c9824b-8c93-45e7-a5c3-6498b19ed647 checking in...
If I run Dispose() on the object, it ends the KeepSession() while loop on one of the instances, but the other keeps running. If I run Initialize(), a new instance appears, and every time I run Initialize() new instances are generated, all writing to the console with their unique IDs. I am able to create as many as I want, without limit.
I thought injecting a Scoped<> service into DI guaranteed a single instance of that object per circuit? I also tried initializing within the OnAfterRender() override in case the pre-rendering process was creating dual instances (although this does not explain why I can create so many within a page that has the service injected).
Is there something I am not handling properly? Is there a better location to initialize the StateManager aside from MainLayout?
I also tried initializing within the OnAfterRender() override in case the pre-rendering process was creating dual instances
It is caused by pre-rendering, combined with the fact that the StateManager is not disposed.
You cannot avoid it by putting the initialization within OnAfterRender(). An easy way to avoid it is to use RenderMode.Server instead:
<app>
    @*@(await Html.RenderComponentAsync<App>(RenderMode.ServerPrerendered))*@
    @(await Html.RenderComponentAsync<App>(RenderMode.Server))
</app>
Since your StateManager scenario requires a little more explanation, let's first take a dummy StateManagerEx as an example, which is simpler than your scenario:
public class StateManagerEx
{
    public StateManagerEx()
    {
        this.Id = Guid.NewGuid().ToString();
    }

    public string Id { get; private set; }
}
When you render it in the Layout in RenderMode.Server mode:
<p> @StateManagerEx.Id </p>
You'll get the Id only once. However, if you render it in RenderMode.ServerPrerendered mode, you'll find that:
When the browser sends a request to the server (before the Blazor connection has been established), the server pre-renders the App and returns an HTTP response. This is the first time a StateManagerEx is created.
And then after the Blazor connection is established, another StateManagerEx is created.
I created a screen recording (increasing the duration of each frame by 100 ms), and you can see that its behavior is exactly as described above: the Id changes.
The same goes for the StateManager. When you render in ServerPrerendered mode, there will be two StateManager instances: one is created before the Blazor connection has been established, and the other resides in the circuit. So you'll see two instances running.
If I run Initialize() a new instance appears and every time I run Initialize() new instances are generated and they are all writing to the console with their unique id's.
Whenever you run Initialize(), a new Guid is created. However, the StateManager instance stays the same (only StateManager.Id is changed by Initialize()).
Is there something I am not handling properly?
Your StateManager does not implement IDisposable. If I change the class as below:
public class StateManager : IDisposable
{
...
}
even if I render the App in ServerPrerendered mode, there is only one instance (e.g. 91238a28-9332-4860-b466-a30f8afa5173 checking in...) per connection at any given time.
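A minimal sketch of that change, assuming Dispose only needs to stop the KeepSession loop (the Dispose body mirrors the one already shown in the question):
public class StateManager : IDisposable
{
    public string Id { get; private set; }
    public bool IsRunning { get; private set; }

    public void Initialize()
    {
        Id = Guid.NewGuid().ToString();
        IsRunning = true;
        KeepSession();
    }

    // Called by the DI container when the scope is torn down, including the
    // throw-away pre-render scope, so the extra instance stops checking in.
    public void Dispose()
    {
        Id = null;
        IsRunning = false;
    }

    private async void KeepSession()
    {
        while (IsRunning)
        {
            Console.WriteLine($"{Id} checking in...");
            await Task.Delay(5000);
        }
    }
}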

How to buffer messages on signal hub and send them when the right client appears?

I have two types of clients connecting to my SignalR server (ASP.NET Core). Some of them are senders, and some of them are receivers. I need to route messages from senders to receivers, which is not a problem, but when there are no receivers, I need to buffer the messages somehow and not lose them (probably best a ConcurrentQueue in some kind of singleton class). When the first receiver connects, the message buffer needs to start dequeuing. What is the best approach for this?
I created a singleton class that wraps a ConcurrentQueue collection, and I enqueue and dequeue messages there. I also have a separate singleton class that persists a collection of the receivers' connection IDs, and I implemented an event in this second class that fires when the first receiver connects after the list of receivers was empty. But maybe this is not a good approach; I don't know how to use it in the hub, because there is more than one instance of a SignalR hub.
A second approach would be to mark the persistence class as a controller, inject the hub context and the message buffer into it, dequeue the buffer from there, and send messages directly to the receivers?
If I understood correctly, you want to defer SignalR message sending by using something like a synchronized call in some IHostedService. Here is what I managed to achieve so far.
Your approach of using a ConcurrentQueue containing invokable Action delegates to handle the concurrent hub calls is the right one. As you mention, it has to be injected as a singleton.
So here is the Queues class:
public class Queues {
    public ConcurrentQueue<Action<IHubContext<MyHub, IMyEvents>>> MessagesQueue { get; set; }
        = new ConcurrentQueue<Action<IHubContext<MyHub, IMyEvents>>>();
}
Now we need to capture the ConnectionId of the caller so a call can get an answer later. SendMessage enqueues the action delegate needed to perform a call against a hub instance received as a parameter.
As an example, SendMessage will trigger an answer back to the caller, and BroadcastMessage will send a message to all clients.
Using a captured hub instance instead would lead to an exception here, because the hub gets disposed quickly. That's why the hub context is injected later, in another class. Have a look at SendMessage_BAD.
Here are the MyHub class and the corresponding IMyEvents interface:
public interface IMyEvents {
    void ReceiveMessage(string myMessage);
}

public class MyHub : Hub<IMyEvents> {
    Queues queues;

    public MyHub(Queues queues) {
        this.queues = queues;
    }

    public void SendMessage(string message) {
        var callerId = Context.ConnectionId;
        queues.MessagesQueue.Enqueue(hub => hub.Clients.Client(callerId).ReceiveMessage(message));
    }

    // This will crash: the lambda captures the hub instance itself, which will already be disposed
    public void SendMessage_BAD(string message) {
        var callerId = Context.ConnectionId;
        queues.MessagesQueue.Enqueue(_ => this.Clients.Client(callerId).ReceiveMessage(message));
    }

    public void BroadcastMessage(string message) {
        queues.MessagesQueue.Enqueue(hub => hub.Clients.All.ReceiveMessage(message));
    }
}
Now, using a naive approach, this code will trigger the message sending in a deferred way. (At work, a timer ensures a regular cadence and the class is an IHostedService, but that does not appear here; a sketch of that wiring follows the class below.) This class has to be injected as a singleton.
Here is the DeferredMessageSender class:
public class DeferredMessageSender {
    Queues queues;
    IHubContext<MyHub, IMyEvents> hub;

    public DeferredMessageSender(Queues queues, IHubContext<MyHub, IMyEvents> hub) {
        this.queues = queues;
        this.hub = hub;
    }

    public void GlobalSend() {
        while (queues.MessagesQueue.TryDequeue(out var evt)) {
            evt.Invoke(hub);
        }
    }
}
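The timer-driven IHostedService mentioned above is not part of the answer's code, but a rough sketch of that wiring could look like this (the class name and the 500 ms cadence are arbitrary choices for illustration):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class DeferredSenderHostedService : BackgroundService
{
    private readonly DeferredMessageSender _sender;

    public DeferredSenderHostedService(DeferredMessageSender sender) => _sender = sender;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _sender.GlobalSend();                 // drain whatever has been queued so far
            await Task.Delay(500, stoppingToken); // arbitrary polling cadence
        }
    }
}
With this, Queues, DeferredMessageSender, and the hosted service would all be registered as singletons in Startup, e.g. services.AddSingleton<Queues>();, services.AddSingleton<DeferredMessageSender>(); and services.AddHostedService<DeferredSenderHostedService>();.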
Hope it helps.

How to clear data of ViewModel in MVVM Light Xamarin?

I am working with Xamarin Forms right now. I have a problem with clearing the data of a ViewModel.
When I log out and log in with a different user, it shows me the data of the previous user, because the values in UserProfileViewModel don't get cleared.
When the user logs out, I want to clear the user data from the UserProfileViewModel class. Currently I do this manually when the user clicks Logout. I want some default method, like Dispose, to clear all class members.
I have tried implementing the IDisposable interface and calling this.Dispose();, but that also didn't work.
I have also tried with a default constructor as follows, but it throws a
`System.TypeInitializationException`
on this line in App.xaml.cs: public static ViewModelLocator Locator => _locator ?? (_locator = new ViewModelLocator());
public UserProfileViewModel()
{
//initialize all class member
}
In the code below, you can see that on the Logout call I call the ClearProfileData method of UserProfileViewModel, which sets default (cleared) data. That is manual; I want the data to be cleared automatically when the user logs out.
View Model Logout Page
[ImplementPropertyChanged]
public class LogoutViewModel : ViewModelBase
{
    public LogoutViewModel(INavigationService nService, CurrentUserContext uContext, INotificationService inService)
    {
        //initialize all class members
    }

    private void Logout()
    {
        //call method of UserProfileViewModel
        App.Locator.UserProfile.ClearProfileData();
        //code for logout
    }
}
User Profile View Model
[ImplementPropertyChanged]
public class UserProfileViewModel : ViewModelBase
{
    public UserProfileViewModel(INavigationService nService, CurrentUserContext uContext, INotificationService inService)
    {
        //initialize all class members
    }

    //Is there any other way to clear the data rather than manually?
    public void ClearProfileData()
    {
        FirstName = LastName = UserName = string.Empty;
    }
}
ViewModel Locator
public class ViewModelLocator
{
    static ViewModelLocator()
    {
        MySol.Default.Register<UserProfileViewModel>();
    }

    public UserProfileViewModel UserProfile => ServiceLocator.Current.GetInstance<UserProfileViewModel>();
}
Firstly, there is no need to clean up these kinds of primitive data types; the GC will do that for you.
However, if you use Messages, or any other strong reference for that matter, you WILL have to unsubscribe from them; otherwise your viewmodel will hang around in memory and never go out of scope.
The garbage collector cannot collect an object in use by an application while the application's code can reach that object. The application is said to have a strong reference to the object.
With Xamarin, which approach you take to clean up your viewmodels really depends on how you are coupling your views to your viewmodels.
As it turns out, the MVVM Light ViewModelBase implements an ICleanup interface, which provides an overridable Cleanup method for you.
ViewModelBase.Cleanup Method
To cleanup additional resources, override this method, clean up and then call base.Cleanup().
public virtual void Cleanup()
{
    // clean up your subs and stuff here
    MessengerInstance.Unregister(this);
}
Now you're just left with deciding where to call ViewModelBase.Cleanup.
You can just call it when your view closes, if you get a reference to the DataContext (i.e. the ViewModelBase) in the DataContextChanged event.
Or you can wire up a BaseView that plumbs this for you, or implement your own NavigationService which calls Cleanup on Pop. It really depends on who is creating your views and viewmodels and how you are coupling them.
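Tying that back to the code from the question, an override in UserProfileViewModel might look roughly like the following sketch (whether you call it from Logout, as below, or from a navigation/base-view hook is exactly the coupling decision mentioned above):
[ImplementPropertyChanged]
public class UserProfileViewModel : ViewModelBase
{
    // ...constructor and properties as in the question...

    public override void Cleanup()
    {
        // Reset the per-user state.
        FirstName = LastName = UserName = string.Empty;

        // Unregisters this instance from the Messenger, plus any other base cleanup.
        base.Cleanup();
    }
}

// In LogoutViewModel.Logout(), instead of ClearProfileData():
// App.Locator.UserProfile.Cleanup();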

Using delegates and events with DI in a Controller

I have a system which fundamentally is used to resolve exceptions and output a CSV on demand which details every resolved item. Each day, there will be new exceptions which need to be dealt with. I have a POST method for this in my controller:
[HttpPost]
private ActionResult Resolve(ExceptionViewModel modifiedExceptionViewModel, string currentFilter)
{
    // resolve database records...
    return RedirectToAction("Index", "Exceptions");
}
I have had a new requirement, however: the user wants the system to identify when the last outstanding exception has been resolved and then automatically output the CSV to the file share, rather than having to go and do this manually.
I first created a method for checking whether or not that was the last exception, and called it WasLastException(). I knew that I could just wrap this in an if statement and, on true, call a method I have called OutputMasterFileCsv(). But before doing this I thought I would try out delegates/events for the first time, which has led me to a similar result but has also raised a few questions.
Some background to my application
This is an Entity Framework Code First MVC web application using Unity for DI. I have wrapped all my repository calls in a ProcessDataService class in my core layer, which has an interface IProcessDataService that is registered with Unity.
This is how I have tried to add my event:
Controller's constructor
public ExceptionsController(IProcessDataService service)
{
    _service = service; //publisher

    //event for delegate
    OutputService outputService = new OutputService(_service); //subscriber
    _service.LastException += outputService.OnLastException;
}
Output Service
public void OnLastException(object source, EventArgs e)
{
    // output the CSV
}
Process Data Service
public delegate void LastExceptionEventHandler(object source, EventArgs args);

public class ProcessDataService : IProcessDataService
{
    private readonly IExceptionRepository _exceptionRepository;

    public ProcessDataService(IExceptionRepository evpRepo)
    {
        _exceptionRepository = evpRepo;
    }

    public event LastExceptionEventHandler LastException;

    public void OnLastException()
    {
        if (LastException != null)
            LastException(this, EventArgs.Empty);
    }
}
New Resolve method in the Controller
[HttpPost]
private ActionResult Resolve(ExceptionViewModel modifiedExceptionViewModel, string currentFilter)
{
    // resolve database records...

    if (_service.WasLastException())
    {
        //raise the event
        _service.OnLastException();
    }

    return RedirectToAction("Index", "Exceptions");
}
This all works well; however, I feel like I am not using delegates and events in the right place here somehow. Instead of calling OnLastException() above and making use of the event, why wouldn't I simply call _service.OutputMasterFileCsv(), which is already located in my ProcessDataService class?
I believe this has something to do with loose coupling, but I don't fully understand what the benefits of this actually are. Or am I completely off the mark with all this?
I thought I would give it a go anyway while I had the chance and hopefully learn something new. If anyone with a bit more experience could step in and provide some guidance it would be greatly appreciated, as I am a little lost now.
As you are correctly pointing out, using events in this way does not make much sense:
if (_service.WasLastException())
{
    //raise the event
    _service.OnLastException();
}
You can fix this by making IProcessDataService expose a ResolveException method, and moving the resolving logic from the controller to the service:
[HttpPost]
private ActionResult Resolve(ExceptionViewModel modifiedExceptionViewModel, string currentFilter)
{
    // make needed preparations...
    _service.ResolveException(...prepared parameters...);
    return RedirectToAction("Index", "Exceptions");
}
Then, inside the ProcessDataService.ResolveException method, check whether you are currently processing the last exception, and if so raise the LastException event.
public class ProcessDataService : IProcessDataService
{
    //...

    public void ResolveException(...prepared parameters...)
    {
        // resolve an exception and set lastException
        if (lastException)
        {
            this.OnLastException();
        }
    }

    // notice the private modifier
    private void OnLastException()
    {
        if (LastException != null)
            LastException(this, EventArgs.Empty);
    }
}
This way the data processing service simply notifies the outside world when the last exception is processed. The service has no idea if anyone cares or does something when this happens. The controller knows even less. Only the output service contains processing logic for last exceptions.
With that said, the real power of events lies in the fact that there can be many subscribers, with each subscriber performing its own tasks without knowing anything about the other subscribers. So you could, for instance, add another event handler to, say, send an email to a supervisor saying that all the exceptions for the day have been resolved.
What matters is that in this case you would not need to modify the controller or other services to account for this newly introduced email sending functionality.
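To make that concrete, a second subscriber could be wired up next to the existing OutputService without touching the Resolve action at all. This is only a sketch; EmailNotificationService and its wiring are invented for illustration:
// Hypothetical second subscriber; the name and implementation are for illustration only.
public class EmailNotificationService
{
    public void OnLastException(object source, EventArgs e)
    {
        // e.g. email a supervisor that all of today's exceptions have been resolved
    }
}

// Wherever the subscriptions are composed (mirroring the controller's constructor above):
var emailService = new EmailNotificationService();
_service.LastException += emailService.OnLastException;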
You have decoupled the controller from the service and the service from storage. That is fine. But I don't really see the point of the LastException event in the ProcessDataService; this is already decoupled by the IProcessDataService interface, so why use an event?
Another thing I don't understand is: where is the last exception?
If you want to decouple the output service from ProcessDataService, you can do it like this:
public ProcessDataService(IExceptionRepository evpRepo, IOutputService outputService)
{
    _exceptionRepository = evpRepo;
    _outputService = outputService;
}

public void ProcessLastException()
{
    _outputService.Command(); //or whatever name suits your method
}
And in controller:
if (_service.WasLastException())
{
    //call service
    _service.ProcessLastException();
}
Or, even simpler, add a method for processing the last exception to IProcessDataService.
There are more ways to inject a dependency. You have injected the dependency through the constructor, and that is why you don't need the event for decoupling.

Async WCF: wait for another call

We have an old Silverlight UserControl + WCF component in our framework and we would like to increase the reusability of this feature. The component should work with basic functionality by default, but we would like to extend it based on the current project (without modifying the original, so more of this control can appear in the full system with different functionality).
So we made a plan, where everything looks great, except one thing. Here is a short summary:
Silverlight UserControl can be extended and manipulated via ContentPresenter at the UI and ViewModel inheritance, events and messaging in the client logic.
Back-end business logic can be manipulated with module loading.
This is going to be okay, I think. For example, you can disable/remove fields from the UI with overridden ViewModel properties, and at the back end you can skip some actions with custom modules.
The interesting part is when you add new fields via the ContentPresenter. OK, you add new properties to the inherited ViewModel, then you can bind to them; you have the additional data. When you save the base data and know it succeeded, you can start saving your additional data (the additional data can be anything, for example rows in a different table at the back end). Fine: we have extended our UserControl and the back-end logic, and the original UserControl still doesn't know anything about our extension.
But we lose the transaction. For example, we save the base data, but saving the additional data throws an exception; now we have the updated base data but nothing in the additional table. We really don't want this possibility, so I came up with this idea:
One WCF call should wait for the other at the back end, and once both have arrived, we can begin cross-thread communication between them. That way we can handle the base and the additional data in the same transaction, and the base component still doesn't know anything about the other (it just provides a hook to do something, but it doesn't know who is going to do it).
I made a very simplified proof-of-concept solution; this is the output:
1 send begins
Press return to send the second piece
2 send begins
2 send completed, returned: 1
1 send completed, returned: 2
Service
namespace MyService
{
    [ServiceContract]
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1
    {
        protected bool _sameArrived;
        protected Piece _same;

        [OperationContract]
        public Piece SendPiece(Piece piece)
        {
            _sameArrived = false;
            Mediator.Instance.WaitFor(piece, sameArrived);

            while (!_sameArrived)
            {
                Thread.Sleep(100);
            }

            return _same;
        }

        protected void sameArrived(Piece piece)
        {
            _same = piece;
            _sameArrived = true;
        }
    }
}
Piece (entity)
namespace MyService
{
    [DataContract]
    public class Piece
    {
        [DataMember]
        public long ID { get; set; }

        [DataMember]
        public string SameIdentifier { get; set; }
    }
}
Mediator
namespace MyService
{
    public sealed class Mediator
    {
        private static Mediator _instance;
        private static object syncRoot = new Object();

        private List<Tuple<Piece, Action<Piece>>> _waitsFor;

        private Mediator()
        {
            _waitsFor = new List<Tuple<Piece, Action<Piece>>>();
        }

        public static Mediator Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (syncRoot)
                    {
                        if (_instance == null)
                        {
                            _instance = new Mediator();
                        }
                    }
                }
                return _instance;
            }
        }

        public void WaitFor(Piece piece, Action<Piece> callback)
        {
            lock (_waitsFor)
            {
                var waiter = _waitsFor.Where(i => i.Item1.SameIdentifier == piece.SameIdentifier).FirstOrDefault();
                if (waiter != null)
                {
                    _waitsFor.Remove(waiter);
                    waiter.Item2(piece);
                    callback(waiter.Item1);
                }
                else
                {
                    _waitsFor.Add(new Tuple<Piece, Action<Piece>>(piece, callback));
                }
            }
        }
    }
}
And the client side code
namespace MyClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Client c1 = new Client(new Piece()
            {
                ID = 1,
                SameIdentifier = "customIdentifier"
            });

            Client c2 = new Client(new Piece()
            {
                ID = 2,
                SameIdentifier = "customIdentifier"
            });

            c1.SendPiece();
            Console.WriteLine("Press return to send the second piece");
            Console.ReadLine();

            c2.SendPiece();
            Console.ReadLine();
        }
    }

    class Client
    {
        protected Piece _piece;
        protected Service1Client _service;

        public Client(Piece piece)
        {
            _piece = piece;
            _service = new Service1Client();
        }

        public void SendPiece()
        {
            Console.WriteLine("{0} send begins", _piece.ID);
            _service.BeginSendPiece(_piece, new AsyncCallback(sendPieceCallback), null);
        }

        protected void sendPieceCallback(IAsyncResult result)
        {
            Piece returnedPiece = _service.EndSendPiece(result);
            Console.WriteLine("{0} send completed, returned: {1}", _piece.ID, returnedPiece.ID);
        }
    }
}
So is it a good idea to wait for another WCF call (which may or may not be invoked, so in a real example it would be more complex) and process them together with cross-thread communication? Or should I look for another solution?
Thanks in advance,
negra
If you want to extend your application without changing any existing code, you can use MEF, the Managed Extensibility Framework.
For using MEF with silverlight see: http://development-guides.silverbaylabs.org/Video/Silverlight-MEF
I would not wait for two WCF calls from Silverlight, for the following reasons:
You are making your code more complex and less maintainable
You are storing business knowledge (that two services should be called together) in the client
I would instead call a single service that aggregates the two services.
It doesn't feel like a great idea to me, to be honest. I think it would be neater if you could package up both "partial" requests in a single "full" request, and wait for that. Unfortunately I don't know the best way of doing that within WCF. It's possible that there's a generalized mechanism for this, but I don't know about it. Basically you'd need some loosely typed service layer where you could represent a generalized request and a generalized response, routing the requests appropriately in the server. You could then represent a collection of requests and responses easily.
That's the approach I'd look at, personally - but I don't know how neatly it will turn out in WCF.
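Both answers point in the same direction: put the two partial saves behind a single service operation so the server can make them atomic. A rough sketch of what that could look like follows; all names, the data contracts, and the use of TransactionScope are assumptions for illustration, not something taken from the question:
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Transactions;

[DataContract] public class BaseData { /* fields from the base component */ }
[DataContract] public class AdditionalData { /* fields added by the extension */ }

[ServiceContract]
public interface ICombinedSaveService
{
    [OperationContract]
    void SaveAll(BaseData baseData, AdditionalData additionalData);
}

public class CombinedSaveService : ICombinedSaveService
{
    public void SaveAll(BaseData baseData, AdditionalData additionalData)
    {
        using (var scope = new TransactionScope())
        {
            SaveBase(baseData);             // existing base logic
            SaveAdditional(additionalData); // project-specific extension
            scope.Complete();               // commit only if both saves succeed
        }
    }

    private void SaveBase(BaseData data) { /* ... */ }
    private void SaveAdditional(AdditionalData data) { /* ... */ }
}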
