Blazor (Server) scoped object in dependency injection creating multiple instances - c#

For demonstration purposes let's say I have a class called StateManager:
public class StateManager
{
    public StateManager()
    {
        IsRunning = false;
    }

    public void Initialize()
    {
        Id = Guid.NewGuid().ToString();
        IsRunning = true;
        KeepSession();
    }

    public void Dispose()
    {
        Id = null;
        IsRunning = false;
    }

    public string Id { get; private set; }
    public bool IsRunning { get; private set; }

    private async void KeepSession()
    {
        while (IsRunning)
        {
            Console.WriteLine($"{Id} checking in...");
            await Task.Delay(5000);
        }
    }
}
It has a method that runs after it is initialized and writes its Id to the console every 5 seconds.
In my Startup class I add it as a Scoped service:
services.AddScoped<StateManager>();
Maybe I am using the wrong location, but in my MainLayout.razor file I am initializing it in OnInitializedAsync():
@inject Models.StateManager StateManager
...
@code {
    protected override async Task OnInitializedAsync()
    {
        StateManager.Initialize();
    }
}
When running the application, after it renders the first page the console output shows that there are two instances running:
bcf76a96-e343-4186-bda8-f7622f18fb27 checking in...
e5c9824b-8c93-45e7-a5c3-6498b19ed647 checking in...
If I run Dispose() on the object, it ends the KeepSession() while loop on one of the instances, but the other keeps running. If I run Initialize(), a new instance appears, and every time I run Initialize() more instances are generated, all writing to the console with their unique IDs. I am able to create as many as I want, without limit.
I thought registering a scoped service in DI guaranteed a single instance of that object per circuit? I also tried initializing within the OnAfterRender() override in case the pre-rendering process was creating dual instances (although this does not explain why I can create so many within a page that has the service injected).
Is there something I am not handling properly? Is there a better location to initialize the StateManager aside from MainLayout?

I also tried initializing within the OnAfterRender() override in case the pre-rendering process was creating dual instances
It is caused by pre-rendering, combined with the fact that the StateManager is never disposed.
But you cannot avoid it just by putting the initialization within OnAfterRender(). An easy way is to use RenderMode.Server instead:
<app>
    @* before: @(await Html.RenderComponentAsync<App>(RenderMode.ServerPrerendered)) *@
    @(await Html.RenderComponentAsync<App>(RenderMode.Server))
</app>
Since your StateManager adds the complication of a background loop, let's first take a dummy class, StateManagerEx, as an example, which is simpler than your scenario:
public class StateManagerEx
{
    public StateManagerEx()
    {
        this.Id = Guid.NewGuid().ToString();
    }

    public string Id { get; private set; }
}
When you render it in the layout in RenderMode.Server mode:
<p>@StateManagerEx.Id</p>
You'll get the Id only once. However, if you render it in RenderMode.ServerPrerendered mode, you'll find that:
When the browser sends a request to the server (before the Blazor connection has been established), the server pre-renders the App and returns an HTTP response. This is the first time a StateManagerEx is created.
Then, after the Blazor connection is established, another StateManagerEx is created.
I created a screen recording (with each frame's duration increased by 100 ms), and its behavior is exactly as described above: the Id changes between the pre-render and the circuit render.
The same goes for the StateManager. When you render in ServerPrerendered mode, there will be two StateManager instances: one is created before the Blazor connection has been established, and the other resides in the circuit. So you'll see two instances running.
If I run Initialize(), a new instance appears, and every time I run Initialize() more instances are generated, all writing to the console with their unique IDs.
Whenever you run Initialize(), a new Guid is created. However, the StateManager instance stays the same (only StateManager.Id is changed by Initialize()), and each call to Initialize() starts another KeepSession() loop that logs to the console.
Is there something I am not handling properly?
Your StateManager does not implement IDisposable. If I change the class as below:
public class StateManager : IDisposable
{
...
}
even if I render the App in ServerPrerendered mode, there is only one 91238a28-9332-4860-b466-a30f8afa5173 checking in... per connection at any given time.
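For reference, here is a minimal sketch of the adjusted class. It is the same code from the question, only declaring IDisposable so that the DI container disposes the pre-rendered instance when its scope ends, which stops that instance's KeepSession() loop:
using System;
using System.Threading.Tasks;

public class StateManager : IDisposable
{
    public string Id { get; private set; }
    public bool IsRunning { get; private set; }

    public void Initialize()
    {
        Id = Guid.NewGuid().ToString();
        IsRunning = true;
        KeepSession();
    }

    private async void KeepSession()
    {
        while (IsRunning)
        {
            Console.WriteLine($"{Id} checking in...");
            await Task.Delay(5000);
        }
    }

    // Called by the container when the owning scope (pre-render request or circuit) is torn down.
    public void Dispose()
    {
        Id = null;
        IsRunning = false;
    }
}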

Related

Blazor StateHasChanged() doesn't update global class values on page

So I'm trying to implement a multi-site, server-side Blazor application that has two services implemented as singletons, like this:
services.AddSingleton<MQTTService>();
services.AddHostedService(sp => sp.GetRequiredService<MQTTService>());
services.AddSingleton<DataCollectorService>();
services.AddHostedService(sp => sp.GetRequiredService<DataCollectorService>());
The MQTT service connects to the broker and manages the subscriptions and so on, while the DataCollectorService subscribes to an event from the MQTT service so it is notified when a new message arrives. The business logic for the received data then happens within the DataCollectorService, such as interpreting the topic and the payload of the MQTT message. If it's valid, the DataCollectorService stores the data in a global static class (simplified example):
if (mqtt.IsTopic(topic, MQTTService.TopicDesc.FirstTopic))
{
    if (topic.Contains("Data1"))
    {
        if (topic.Contains("Temperature"))
        {
            DataCenter.Data1.Temperature = Encoding.UTF8.GetString(message, 0, message.Length);
        }
    }
}
The DataCenter is just a static class in the namespace:
public static class DataCenter
{
    public static DataBlock Data1 = new DataBlock();
    public static DataBlock Data2 = new DataBlock();
    public static string SetMode;

    public class DataBlock
    {
        public string Temperature { get; set; }
        public string Name { get; set; }
    }
}
My goal with this approach is that every page in my project can simply bind these global variables to display them.
The first problem is that the page is obviously not aware of the change when the DataCollectorService updates a variable. That's why I implemented a notification event for the pages, which can then call StateHasChanged. So my example page "Monitor" just wants to show all these values and injects the DataCollectorService:
@page "/monitor"
@inject DataCollectorService dcs
<MudText>DataBlock Data1: @DataCenter.Data1.Temperature / Data2: @DataCenter.Data2.Temperature</MudText>
@code {
    protected override void OnInitialized()
    {
        dcs.OnRefresh += OnRefresh;
    }

    void OnRefresh()
    {
        InvokeAsync(() =>
        {
            Console.WriteLine("OnRefresh CALLED");
            StateHasChanged();
        });
    }
}
This actually works, but it adds a new problem to the table: every time I switch to my monitor page again, a NEW OnRefresh method gets hooked to the Action, and that results in multiple calls of OnRefresh. I find this behaviour rather logical, because I never remove an "old" OnRefresh method from the Action when leaving the page, since I don't know WHEN I leave the page.
Thinking about this problem, I came up with a solution:
if (!dcs.IsRegistered("monitor"))
{
    dcs.OnRefresh += OnRefresh;
    dcs.RegisterSubscription("monitor");
}
I wrapped the Action subscription in a system that registers a token once the handler has been assigned. The problem now: the variables on the page don't refresh anymore!
And that's where I'm no longer sure what is going on. If I keep it like in the first example, just adding dcs.OnRefresh += OnRefresh; and letting it "stack up", it actually works, because there is always a "new" and "correctly" bound method which, in my limited understanding, has the correct context.
If I forbid this behaviour, I only have an "old" method connected, which somehow can't execute StateHasChanged correctly. But I don't know why.
I'm not sure whether I could:
"Change" the context of the Invoke call so that StateHasChanged works again?
Change the way I register the Action handler method?
I'm additionally confused as to why the first way seems to call the method multiple times. Because if the old method can't correctly call StateHasChanged(), why can it be called in the first place?
I would very much appreciate some input here; googling this kind of thing was rather difficult because I don't know the exact root of the problem.
Not only do you have multiple calls, you also have a memory leak. The event subscription will prevent the Monitor component from being garbage collected.
Make the page IDisposable:
@page "/monitor"
@inject DataCollectorService dcs
@implements IDisposable
...
@code {
    protected override void OnInitialized()
    {
        dcs.OnRefresh += OnRefresh;
    }
    ...
    public void Dispose()
    {
        dcs.OnRefresh -= OnRefresh;
    }
}
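For context, the question's actual DataCollectorService is not shown, but based on the description above it presumably raises the event roughly like this (the BackgroundService base class and the handler signature are assumptions, not the asker's code):
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class DataCollectorService : BackgroundService
{
    // Pages subscribe to this in OnInitialized and must unsubscribe again in Dispose, as shown above.
    public event Action OnRefresh;

    // Hypothetical handler hooked up to the MQTT service's "message received" event.
    private void HandleMessage(string topic, byte[] message)
    {
        // ... interpret the topic and payload, then update the static DataCenter ...
        DataCenter.Data1.Temperature = Encoding.UTF8.GetString(message, 0, message.Length);

        // Notify every currently subscribed page that the shared data has changed.
        OnRefresh?.Invoke();
    }

    // The hosted-service work loop is omitted here; it is not relevant to the event handling.
    protected override Task ExecuteAsync(CancellationToken stoppingToken) => Task.CompletedTask;
}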

How to efficiently count HTTP Calls in asp.net core?

I have an abstract class called HttpHelper; it has basic methods like GET, POST, PATCH, PUT.
What I need to achieve is this:
Store the URL, time & date in the database each time one of the functions (GET, POST, PATCH, PUT) is called.
I don't want to write directly to the database each time the functions are called (that would be slow); instead, I want to put the data somewhere (like a static in-memory queue/cache) that is faster and non-blocking, and have a long-running background process that reads from this cache-like storage and then stores the values in the database.
I have no clear idea how to do this, but the main purpose is to count the calls per hour or day, by domain, resource, and URL query.
I'm wondering whether I could do the following:
Create a static class which uses ConcurrentQueue<T> to store the data, and call that class from each function inside the HttpHelper class
Create a background task similar to this: ASP.NET Core long running/background task
Or use Hangfire, but that might be too much for a simple task
Or is there a built-in mechanism for this in .NET Core?
Both Hangfire and background tasks would do the trick as consumers of the queue items.
Hangfire existed before long-running background tasks did (pre .NET Core), so go with long-running tasks for .NET Core implementations.
There is a "but" here, though.
How important is it to you that you never miss a call? If it is critical, then neither approach can help you.
The queue, or whatever static construct you have, will be lost whenever your application crashes, the machine restarts, or the application pool is simply recycled.
In that case you need to consider some kind of external queuing mechanism, like RabbitMQ with persistence turned on.
You can also append to a file, but that might also introduce some delay due to read/write.
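To make the "in-memory queue plus background consumer" idea from the question concrete, here is a minimal sketch. The RequestLogEntry and RequestLogQueue types, the batch interval, and the writer class are illustrative names, not an existing API:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class RequestLogEntry
{
    public string Url { get; set; }
    public DateTime Timestamp { get; set; }
}

// Static, thread-safe buffer that HttpHelper's GET/POST/PATCH/PUT methods write to.
public static class RequestLogQueue
{
    public static readonly ConcurrentQueue<RequestLogEntry> Entries = new ConcurrentQueue<RequestLogEntry>();

    public static void Enqueue(string url) =>
        Entries.Enqueue(new RequestLogEntry { Url = url, Timestamp = DateTime.UtcNow });
}

// Long-running consumer, registered once with services.AddHostedService<RequestLogWriter>();
public class RequestLogWriter : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Drain whatever has accumulated since the last pass.
            var batch = new List<RequestLogEntry>();
            while (RequestLogQueue.Entries.TryDequeue(out var entry))
            {
                batch.Add(entry);
            }

            if (batch.Count > 0)
            {
                // Insert the whole batch into the database in one round trip here.
            }

            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}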
I do not know how complex your problem is, but I would consider two solutions.
The first is calling an async insert method which will not block your main thread but will start a task. You can return the response without waiting for your log entry to be appended to the database. Since you want it implemented in only some methods, I would do it using attributes and middleware.
Simplified example:
public IActionResult SomePostMethod()
{
    LogActionAsync("This Is Post Method");
    return StatusCode(201);
}

public static Task LogActionAsync(string someParameter)
{
    return Task.Run(() =>
    {
        // Communicate with database (X ms)
    });
}
A better solution is creating a buffer which does not communicate with the database on every call, but only when it is filled or at an interval. It would look like this:
public IActionResult SomePostMethod()
{
    APILog.Log(new APILog.Item() { Date = DateTime.Now, Item1 = "Something" });
    return StatusCode(201);
}
public partial class APILog
{
    private static List<APILog.Item> _buffer = null;
    private const int _msTimeout = 60000; // Timeout between database updates
    private static object _updateLock = new object();

    static APILog()
    {
        StartDBUpdateLoop();
    }

    private static void StartDBUpdateLoop()
    {
        // check whether it has already been started, and other housekeeping
        Task.Run(() =>
        {
            while (true) // Do not use true but some expression that tells you whether your application is still running.
            {
                Thread.Sleep(_msTimeout);
                lock (_updateLock)
                {
                    if (_buffer == null || _buffer.Count == 0)
                        continue;

                    foreach (APILog.Item item in _buffer)
                    {
                        // Import into database here
                    }

                    _buffer.Clear(); // Do not re-import the same items on the next pass
                }
            }
        });
    }

    public static void Log(APILog.Item item)
    {
        lock (_updateLock)
        {
            if (_buffer == null)
                _buffer = new List<APILog.Item>();
            _buffer.Add(item);
        }
    }
}
public partial class APILog
{
    public class Item
    {
        public string Item1 { get; set; }
        public DateTime Date { get; set; }
    }
}
Also, in this second example, I would not call APILog.Log() manually from each action; instead I would use middleware, possibly in combination with an attribute.
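A rough sketch of that middleware idea (the class name is hypothetical; it reuses the APILog buffer from the answer above so individual actions no longer have to call APILog.Log() themselves):
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RequestCountingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestCountingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Non-blocking: just appends to the in-memory buffer; the background loop persists it later.
        APILog.Log(new APILog.Item
        {
            Date = DateTime.Now,
            Item1 = context.Request.Path.Value + context.Request.QueryString.Value
        });

        await _next(context);
    }
}

// Registered in Startup.Configure:
// app.UseMiddleware<RequestCountingMiddleware>();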

Keeping a service alive throughout the lifetime of the application

I have a simple service interface I am using to synchronize data with a server via HTTP. The service interface has a method to start and stop the synchronization process. The idea is to start the synchronization process after the user signs in, and stop the synchronization at the end of the application before the user signs out. The synchronization service will check for new messages every few minutes, and then notify the ViewModel(s) of new/changed data using the MvxMessenger plugin.
What is the recommended way to ensure the synchronization service lives for the duration of the app? I am currently using a custom IMvxAppStart which registers the service interface as a singleton, and then holds a static reference to the service interface. Is that enough to keep the service alive for the lifetime of the app, or is there a better way?
public class App : MvxApplication
{
    public override void Initialize()
    {
        ...
        RegisterAppStart(new CustomAppStart());
    }
}

public class CustomAppStart : MvxNavigatingObject, IMvxAppStart
{
    public static ISyncClient SynchronizationClient { get; set; }

    public void Start(object hint = null)
    {
        SynchronizationClient = Mvx.Resolve<ISyncClient>();
        ShowViewModel<SignInViewModel>();
    }
}
public interface ISyncClient
{
    void StartSync();
    void StopSync();
    bool IsSyncActive { get; }
}
You don't need a static property for this. When you register the interface as a singleton, the IoC container does the work for you. Example: in one of our apps we need a state property holding important data for the whole lifetime of the app.
The classes that need this state just use the following code snippet:
private IApplicationState _appstate;

protected IApplicationState AppState
{
    get { return _appstate ?? (_appstate = Mvx.GetSingleton<IApplicationState>()); }
}
But you can also do it with a static property; in that case you don't need a singleton registration in the IoC container.
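For illustration, the registration could look like this during setup (SyncClient is a hypothetical implementation of ISyncClient, and the exact Mvx helper names can vary between MvvmCross versions):
// Construct SyncClient lazily on first resolve and reuse that single instance afterwards.
Mvx.LazyConstructAndRegisterSingleton<ISyncClient, SyncClient>();

// Or register an instance you construct yourself.
Mvx.RegisterSingleton<ISyncClient>(new SyncClient());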

Share an object instance between multiple session/application instances in ASP.NET MVC

I am developing a log monitor project, and I am using an ASP.NET application with SignalR.
The main objective of the application is to display error logs on multiple clients across different locations (LCD monitors). Whenever an error log is created in the database, the application should notify all the clients of the new error.
I am considering creating a static Timer variable in the web application that is started by the Application_Start method.
But, assuming the application has a separate thread per session, I think the web server will end up with a lot of timers running at the same time.
I need to know how to make this Timer instance unique across all the session instances on the web server.
Application_Start is not triggered by a new session, but by the start of the application. If you initialize your timer in Application_Start, you don't need to worry about multiple timer instances.
You can create a singleton class that holds the timer.
For instance:
public class MyTimerHolder
{
    private static Lazy<MyTimerHolder> _instance = new Lazy<MyTimerHolder>(() => new MyTimerHolder());
    private readonly TimeSpan _checkPeriod = TimeSpan.FromSeconds(3);
    private IHubContext _hubProxy;
    // Threaded timer
    private Timer _timer;

    // Private constructor so the Lazy<T> instance is the only one ever created.
    private MyTimerHolder()
    {
        _timer = new Timer(CheckDB, null, _checkPeriod, _checkPeriod);
    }

    public void BroadcastToHub(IHubContext context)
    {
        _hubProxy = context;
    }

    public void CheckDB(object state)
    {
        if (_hubProxy != null)
        {
            // Logic to check your database
            _hubProxy.Clients.All.foo("Whatever data you want to pass");
        }
    }

    public static MyTimerHolder Instance
    {
        get { return _instance.Value; }
    }
}
Then you can set the hub context at any point from any method. So let's say you want to broadcast to clients connected to the hub "MyDBCheckHub". At any point in your application, all you have to do is:
MyTimerHolder.Instance.BroadcastToHub(GlobalHost.ConnectionManager.GetHubContext<MyDBCheckHub>());
You could put this in your application start or wherever you please; there will only be one instance of MyTimerHolder within the app domain.
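For example, the wiring could live in Global.asax (a sketch assuming SignalR 2.x and a hub class named MyDBCheckHub, as above):
using System;
using System.Web;
using Microsoft.AspNet.SignalR;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Application_Start runs once per app domain, not once per session,
        // so only one MyTimerHolder (and one timer) is ever created.
        MyTimerHolder.Instance.BroadcastToHub(
            GlobalHost.ConnectionManager.GetHubContext<MyDBCheckHub>());
    }
}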

Async WCF: wait for another call

We have an old Silverlight UserControl + WCF component in our framework and we would like to increase the reusability of this feature. The component should work with basic functionality by default, but we would like to extend it based on the current project (without modifying the original, so more of this control can appear in the full system with different functionality).
So we made a plan, where everything looks great, except one thing. Here is a short summary:
Silverlight UserControl can be extended and manipulated via ContentPresenter at the UI and ViewModel inheritance, events and messaging in the client logic.
Back-end business logic can be manipulated with module loading.
This should be okay, I think. For example, you can disable/remove fields from the UI with overridden ViewModel properties, and at the back-end you can skip certain actions with custom modules.
The interesting part is when you add new fields via the ContentPresenter. OK, you add new properties to the inherited ViewModel, then you can bind to them, and you have the additional data. When the base data is saved and you know it succeeded, you can start saving your additional data (the additional data can be anything, for example rows in a different table at the back-end). Fine, we have extended our UserControl and the back-end logic, and the original UserControl still doesn't know anything about our extension.
But we lost transactionality. For example, we can save the base data, but if saving the additional data throws an exception, we end up with updated base data and nothing in the additional table. We really don't want that possibility, so I came up with this idea:
One WCF call should wait for the other at the back-end, and once both have arrived, we can begin cross-thread communication between them and, of course, handle the base and the additional data in the same transaction, while the base component still doesn't know anything about the other (it just provides a hook to do something, but it doesn't know who is going to do it).
I made a very simplified proof-of-concept solution; this is the output:
1 send begins
Press return to send the second piece
2 send begins
2 send completed, returned: 1
1 send completed, returned: 2
Service
namespace MyService
{
    [ServiceContract]
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1
    {
        protected bool _sameArrived;
        protected Piece _same;

        [OperationContract]
        public Piece SendPiece(Piece piece)
        {
            _sameArrived = false;
            Mediator.Instance.WaitFor(piece, sameArrived);
            while (!_sameArrived)
            {
                Thread.Sleep(100);
            }
            return _same;
        }

        protected void sameArrived(Piece piece)
        {
            _same = piece;
            _sameArrived = true;
        }
    }
}
Piece (entity)
namespace MyService
{
    [DataContract]
    public class Piece
    {
        [DataMember]
        public long ID { get; set; }

        [DataMember]
        public string SameIdentifier { get; set; }
    }
}
Mediator
namespace MyService
{
    public sealed class Mediator
    {
        private static Mediator _instance;
        private static object syncRoot = new Object();
        private List<Tuple<Piece, Action<Piece>>> _waitsFor;

        private Mediator()
        {
            _waitsFor = new List<Tuple<Piece, Action<Piece>>>();
        }

        public static Mediator Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (syncRoot)
                    {
                        // Re-check inside the lock so two threads cannot both create an instance.
                        if (_instance == null)
                        {
                            _instance = new Mediator();
                        }
                    }
                }
                return _instance;
            }
        }

        public void WaitFor(Piece piece, Action<Piece> callback)
        {
            lock (_waitsFor)
            {
                var waiter = _waitsFor.Where(i => i.Item1.SameIdentifier == piece.SameIdentifier).FirstOrDefault();
                if (waiter != null)
                {
                    _waitsFor.Remove(waiter);
                    waiter.Item2(piece);
                    callback(waiter.Item1);
                }
                else
                {
                    _waitsFor.Add(new Tuple<Piece, Action<Piece>>(piece, callback));
                }
            }
        }
    }
}
And the client-side code:
namespace MyClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Client c1 = new Client(new Piece()
            {
                ID = 1,
                SameIdentifier = "customIdentifier"
            });
            Client c2 = new Client(new Piece()
            {
                ID = 2,
                SameIdentifier = "customIdentifier"
            });

            c1.SendPiece();
            Console.WriteLine("Press return to send the second piece");
            Console.ReadLine();
            c2.SendPiece();
            Console.ReadLine();
        }
    }

    class Client
    {
        protected Piece _piece;
        protected Service1Client _service;

        public Client(Piece piece)
        {
            _piece = piece;
            _service = new Service1Client();
        }

        public void SendPiece()
        {
            Console.WriteLine("{0} send begins", _piece.ID);
            _service.BeginSendPiece(_piece, new AsyncCallback(sendPieceCallback), null);
        }

        protected void sendPieceCallback(IAsyncResult result)
        {
            Piece returnedPiece = _service.EndSendPiece(result);
            Console.WriteLine("{0} send completed, returned: {1}", _piece.ID, returnedPiece.ID);
        }
    }
}
So, is it a good idea to wait for another WCF call (which may or may not be invoked, so a real example would be more complex) and process them together with cross-thread communication? Or should I look for another solution?
Thanks in advance,
negra
If you want to extend your application without changing any existing code, you can use MEF, the Managed Extensibility Framework.
For using MEF with silverlight see: http://development-guides.silverbaylabs.org/Video/Silverlight-MEF
I would not wait for 2 WCF calls from Silverlight, for the following reasons:
You are making your code more complex and less maintainable
You are storing business knowledge, that two services should be called together, in the client
I would call a single service that aggregates the two services.
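A rough sketch of what such an aggregated contract could look like, reusing the Piece entity from the proof of concept (all of these names are hypothetical; the point is that both pieces travel in one request, so the server can persist them in a single transaction):
using System.Runtime.Serialization;
using System.ServiceModel;

namespace MyService
{
    [DataContract]
    public class CombinedSaveRequest
    {
        [DataMember]
        public Piece BasePiece { get; set; }

        // Null when the control is used without an extension.
        [DataMember]
        public Piece AdditionalPiece { get; set; }
    }

    [ServiceContract]
    public interface ICombinedService
    {
        // The implementation can wrap both saves in one TransactionScope.
        [OperationContract]
        void SendCombined(CombinedSaveRequest request);
    }
}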
It doesn't feel like a great idea to me, to be honest. I think it would be neater if you could package up both "partial" requests in a single "full" request, and wait for that. Unfortunately I don't know the best way of doing that within WCF. It's possible that there's a generalized mechanism for this, but I don't know about it. Basically you'd need some loosely typed service layer where you could represent a generalized request and a generalized response, routing the requests appropriately in the server. You could then represent a collection of requests and responses easily.
That's the approach I'd look at, personally - but I don't know how neatly it will turn out in WCF.
