I have an extension method to enqueue my view models, pointing to an implementation of an interface IBackgroundJob<T>.
These are my extension methods:
private static readonly ActivitySource activitySource = new("MC.Hangfire.Extensions");

public static string Enqueue<T>(this T job, IBackgroundJobClient client)
{
    return client.Enqueue<IBackgroundJob<T>>(ps => ps.AddTelemetry(null).EnqueueJob(null, job, JobCancellationToken.Null));
}

public static IBackgroundJob<T> AddTelemetry<T>(this IBackgroundJob<T> job, PerformContext context)
{
    using var activity = activitySource.StartActivity($"Start Job {typeof(T).FullName} id {context.BackgroundJob.Id}", ActivityKind.Server);
    activity?.SetTag("JobId", context.BackgroundJob.Id);
    activity?.SetTag("JobJson", Newtonsoft.Json.JsonConvert.SerializeObject(job));
    activity?.SetTag("Job", Newtonsoft.Json.JsonConvert.SerializeObject(context.BackgroundJob.Job));
    return job;
}
My problem is that the EnqueueJob method is called, but the AddTelemetry method is not called before it. How can I add the telemetry information before all of my jobs run, in the context of the jobs, and of course without adding this code to every enqueue method?
I'm looking at Hangfire filters, but I think there should be a way to register the filter through the DI container of the ASP.NET Core application.
I created this issue on GitHub because I think the problem with instrumentation sits a little deeper in the code:
https://github.com/HangfireIO/Hangfire/issues/2017
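A Hangfire server filter runs on the worker, in the context of the job, so it is one place to hook this in without touching every enqueue call. Below is a minimal, untested sketch under that assumption; the filter class name and tag names are mine, and the Activity is kept alive for the duration of the job by stashing it in the filter context's Items dictionary:

using System.Diagnostics;
using Hangfire.Server;

public class TelemetryJobFilter : IServerFilter
{
    private static readonly ActivitySource ActivitySource = new("MC.Hangfire.Extensions");

    public void OnPerforming(PerformingContext context)
    {
        // Runs on the server right before the job method is invoked.
        var activity = ActivitySource.StartActivity(
            $"Job {context.BackgroundJob.Job.Type.FullName} id {context.BackgroundJob.Id}",
            ActivityKind.Server);
        activity?.SetTag("JobId", context.BackgroundJob.Id);
        context.Items["TelemetryActivity"] = activity;
    }

    public void OnPerformed(PerformedContext context)
    {
        // Runs after the job method has finished; stop the activity started above.
        if (context.Items.TryGetValue("TelemetryActivity", out var value) && value is Activity activity)
        {
            activity.Dispose();
        }
    }
}

The filter could then be registered globally, for example with GlobalJobFilters.Filters.Add(new TelemetryJobFilter()), or via config.UseFilter(new TelemetryJobFilter()) inside services.AddHangfire(...).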
Related
There is a class, SomethingManager.cs, that implements an interface, like:
public class SomethingManager : ISomethingManager
This is a worker service in .NET 6, and there is another class library project in the same solution that contains the interface and the implementation of SomethingManager.
Dependencies are being registered in the worker service project like
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog()
        .ConfigureServices((hostContext, services) =>
        {
            //AddSingleton or Transient here?
            services.AddSingleton<ISomethingManager, SomethingManager>();
            ...
The problem is that, in the entry-point project, which works asynchronously, there is a loop like:
foreach (var xml in xmls)
{
    tasks.Add(StartProcessAsync(xml));
}
await Task.WhenAll(tasks);
Inside StartProcessAsync it uses the SomethingManager instance that was previously registered and injected in the constructor.
The problem is that the class SomethingManager has some private members that are supposed to be unique for every task, and I noticed that using it this way causes fatal errors between the tasks. Actually this class needs to hold a sessionId whose value is set by a .Connect() method; we have to call the .Connect() method once, before any other actions, inside every task.
So, my question is: how can I register SomethingManager with dependency injection so that every task that uses this instance (which is registered with DI) gets different values for its private members?
And if I can't do it this way, am I supposed to create a new instance every time?
public Task StartProcessAsync(xmlFileInfo xml)
{
    return Task.Run(async () =>
    {
        //this one doesn't work inside the tasks loop; it causes problems because
        //the sessionId that it contains has to be different for every task
        //_somethingManager.DoSomething();

        //Like this?
        var somethingManager = new SomethingManager(_someSettings);
        somethingManager.DoSomething();
        var mem = somethingManager.ThePrivateMember;
        //another object which also has private members in the same class.
    });
}
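One common way to give every task its own instance, while still letting the container build it, is to register the implementation as transient and resolve it from a scope created inside each task. This is only a sketch under those assumptions; ISomethingManager, SomethingManager, .Connect() and _someSettings come from the question, while the IServiceScopeFactory usage (injected here as a hypothetical _scopeFactory field) is mine:

// Registration in the worker project: transient, so every resolution yields a fresh instance.
services.AddTransient<ISomethingManager, SomethingManager>();

// In the class that starts the tasks, inject IServiceScopeFactory instead of ISomethingManager.
public Task StartProcessAsync(xmlFileInfo xml)
{
    return Task.Run(async () =>
    {
        // One DI scope per task: this manager (and its private sessionId) is not shared with other tasks.
        using var scope = _scopeFactory.CreateScope();
        var somethingManager = scope.ServiceProvider.GetRequiredService<ISomethingManager>();

        somethingManager.Connect();   // sets the per-task sessionId
        somethingManager.DoSomething();

        await Task.CompletedTask;     // placeholder for the real async work on the xml
    });
}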
I am interested in the architectural solution to the following situation.
I have:
public class GenericRepository<T> : IDisposable
{
    public GenericRepository(ISession session)
    {
        _session = session;
    }

    public T InsertAsync(T entity) {...}
    public IQueryable<T> Read() {...}
    public T UpdateAsync(T entity) {...}
    public void DeleteAsync(T entity) {...}

    public Task Commit()
    {
        return _session.Transaction.Commit();
    }

    public void Dispose()
    {
        if (_session.Transaction.IsActive)
        {
            _session.Transaction.Rollback();
        }
    }
}

public class UserService
{
    public UserService(GenericRepository<User> repository) {...}

    public long CreateUser(string userName)
    {
        ...
        _repository.Commit(); // [1]
    }
}

public class OrganizationService
{
    public OrganizationService(GenericRepository<Organization> repository) {...}

    public int CreateOrganization(string code)
    {
        ...
        _repository.Commit(); // [2]
    }
}
The following registration is used:
services.AddScoped<ISession>(x => x.GetRequiredService<NHSessionProvider>().OpenSession());
services.AddScoped(typeof(GenericRepository<>));
services.AddScoped<UserService>();
services.AddScoped<OrganizationService>();
These CreateOrganization and CreateUser methods can be used independently in any part of the code:
public IActionResult Post([FromServices] OrganizationService service, [FromBody] string code)
{
    service.CreateOrganization(code);
    return Ok();
}

public IActionResult Post([FromServices] UserService service, [FromBody] string userName)
{
    service.CreateUser(userName);
    return Ok();
}
However, now I have a new service:
public class MyBillingService
{
    public MyBillingService(GenericRepository<Contractor> repository, OrganizationService organizationService, UserService userService) {...}

    public int CreateNewContractor(string organizationCode, string userName)
    {
        ...
        _organizationService.CreateOrganization(organizationCode);
        ...
        _userService.CreateUser(userName); // [3]
        ...
        _repository.Commit(); // [4]
    }
}
In this implementation, CreateOrganization and CreateUser have their own transactions, and if [3] throws an exception, the organization will be created anyway.
OK, because ISession is registered as scoped, I could delete _repository.Commit from CreateOrganization and CreateUser ([1] and [2]). In that case, [4] would be responsible for committing all changes.
But what should I do when OrganizationService and UserService are used independently? They are no longer independent services and cannot save data without delegating the commit of changes to some other service:
public IActionResult Post([FromServices] UserService service, [FromServices] TransactionService transaction, [FromBody] string userName)
{
    service.CreateUser(userName);
    transaction.Commit();
    return Ok();
}
How good is this decision?
Transactions require a unit of work. There is no other way to coordinate repositories. The reason you're facing issues here is that your entire design is wrong.
First and foremost, you should not have these repositories at all. You're using NHibernate, which is an ORM and already implements the repository and unit-of-work patterns (ISession is your unit of work). Using an ORM is opting to use a third-party library for your DAL. Wrapping your own DAL layer around that is pointless and imposes needless maintenance and testing costs on your application with zero benefit. Your services should depend on the ISession directly.
Then, services should be self-contained units of functionality. If they depend on other services, you're doing it wrong. A service should correspond to a particular subdomain of your application. If users and organizations need to be managed together transactionally, then you should have one service that encompasses both.
Alternatively, if you want/need to keep the two separate, then you would need to incorporate the concept of sagas.
So I've started to move more towards what Chris mentioned in his answer and use the ISession directly, but I have used a generic repository in the past. Your repos can't correctly handle transactions that are already started.
So my generic repo has a couple of methods:
protected virtual TResult Transact<TResult>(Func<TResult> func)
{
    // Join an ambient transaction if one is already active on the session.
    if (_session.Transaction.IsActive)
        return func.Invoke();

    // Otherwise start (and own) a new transaction for this piece of work.
    TResult result;
    using (var tx = _session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        result = func.Invoke();
        tx.Commit();
    }
    return result;
}

protected virtual void Transact(System.Action action)
{
    Transact(() =>
    {
        action.Invoke();
        return false;
    });
}
Then the methods that implement the repo functionality look like this:
public bool Remove(T item)
{
    Transact(() => _session.Delete(item));
    return true;
}
This allows the method to use an existing Transaction if it is already started, otherwise create your transaction for this work.
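As an illustration only (the coordinating method, repositories and entity initializers here are hypothetical, and the class is assumed to have the same ISession injected), a caller that owns the transaction can wrap several repository calls, and the Transact helper above will join that transaction instead of opening its own:

public void CreateOrganizationWithUser(string code, string userName)
{
    using (var tx = _session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        // Both repository methods go through Transact; since _session.Transaction.IsActive
        // is already true, they run inside this transaction instead of starting new ones.
        _organizationRepository.Add(new Organization { Code = code });
        _userRepository.Add(new User { Name = userName });

        tx.Commit(); // a single commit covers the whole use case
    }
}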
You also should not have a Dispose in your repo, since you don't own the reference to ISession; its lifecycle should be handled by whoever created that instance.
The generic repository also shouldn't have commit functionality except when it explicitly starts a new transaction. So you need something that handles starting and committing that transaction. In a web scenario you are typically in a session-per-request setup, meaning you create the session in BeginRequest and dispose of it in EndRequest. I then use a transaction attribute to begin the transaction before the controller action executes and commit or roll it back after the controller method has run.
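A minimal sketch of what such a transaction attribute could look like in ASP.NET Core, assuming the scoped ISession registration from the question (the attribute name is mine):

using System.Data;
using Microsoft.AspNetCore.Mvc.Filters;
using NHibernate;

// Begins a transaction before the controller action runs and commits or rolls it back afterwards.
public class TransactionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var session = (ISession)context.HttpContext.RequestServices.GetService(typeof(ISession));
        session?.BeginTransaction(IsolationLevel.ReadCommitted);
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        var session = (ISession)context.HttpContext.RequestServices.GetService(typeof(ISession));
        var tx = session?.Transaction;
        if (tx == null || !tx.IsActive) return;

        if (context.Exception == null)
            tx.Commit();
        else
            tx.Rollback();
    }
}

With [Transaction] applied to an action (or registered as a global filter), UserService and OrganizationService no longer need their own Commit calls, and each request still ends up with exactly one transaction.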
Good morning everyone!
I just started working on a project where I see there is a memory leak.
The situation is as follows: there is a console application which basically runs all the time in a while(true) loop.
There are a bunch of classes that do some logic in the loop.
Each class has an Execute() method which internally uses Task.Run(), and the call is not awaited by anyone.
These classes are called engines. All engines are stateless classes which are stored in a list in the main Program.cs class.
The code basically looks like:
private static List<BaseEngine> Engines;

public static void Main(string[] args)
{
    InitializeDI();
    RunProgram();
}

private static void RunProgram()
{
    while (true)
    {
        try
        {
            foreach (var engine in Engines)
            {
                engine.Execute();
            }
        }
        catch (Exception ex)
        {
            //handle
        }
        finally
        {
            Thread.Sleep(TimeSpan.FromSeconds(3));
        }
    }
}

private static void InitializeDI()
{
    _kernel = new StandardKernel();
    ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(_kernel));
    NinjectConfig.Setup(_kernel);
}
The sample engine looks like:
public class SampleEngine : BaseEngine
{
    public override void Execute(Task task)
    {
        var someService = ServiceLocator.Current.GetInstance<IDbContext>();
        System.Threading.Tasks.Task.Run(() =>
        {
            // some action using dbcontext
        });
    }
}
In the above example, SampleEngine gets an IDbContext from the Ninject DI container; other engines could use other services registered in DI.
All the dependencies are registered as InCallScope().
Basically, for almost every engine it is fire-and-forget of the given method using Task.Run().
What I did was change the Execute method to return the Task, and after the task ran to completion I called Dispose() on it. This did not bring any value.
I did some investigation and saw that the problem is inside Ninject.Activation.Cache. A manual cache clean helps, but I know the problem is somewhere in the code and I cannot find it.
Since every dependency is registered as InCallScope(), they should be disposed at the end of each task. I don't see anything holding a reference to these objects, because every engine is stateless.
I used ANTS to profile this, and the memory usage just keeps growing each minute; the profiler points to the Ninject caching.
It looks like the DbContext is not disposed and still exists in the Ninject cache. Is it a problem of having a lot of tasks in the system, or am I doing something wrong?
Thanks in advance
Cheers!
The simplest approach seems to be embedding the using in your task. But it is a blind shot, as your code seems to be simplified; you don't use the task parameter in your method.
public class SampleEngine : BaseEngine
{
    public override void Execute(Task task)
    {
        System.Threading.Tasks.Task.Run(() =>
        {
            using (var someService = ServiceLocator.Current.GetInstance<IDbContext>())
            {
                // some action using dbcontext
            }
        });
    }
}
For a more advanced approach, here is an interesting link. It features an InTaskScope binding, based on AsyncLocal and custom tasks created through extensions of TaskFactory.
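If changing the binding model is not an option, another way to make the per-task lifetime explicit is a Ninject activation block: instances resolved through the block are deactivated (and disposed) when the block is disposed. This is only a sketch, not the InTaskScope approach from the link, and it assumes the engine can reach the kernel (shown here as a hypothetical _kernel field):

public class SampleEngine : BaseEngine
{
    public override void Execute(Task task)
    {
        System.Threading.Tasks.Task.Run(() =>
        {
            // One activation block per task: disposing it deactivates everything
            // that was resolved through it, including the IDbContext.
            using (var block = _kernel.BeginBlock())
            {
                var someService = block.Get<IDbContext>();
                // some action using dbcontext
            }
        });
    }
}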
How can I pass a SignalR hub context to a Hangfire job on ASP .NET Core 2.1?
Since passing arguments to Hangfire is done via serialization/deserialization, it seems that Hangfire has a hard time reconstructing the SignalR hub context.
I schedule the job (in my controller) using:
BackgroundJob.Schedule(() => _hubContext.Clients.All.SendAsync(
        "MyMessage",
        "MyMessageContent",
        System.Threading.CancellationToken.None),
    TimeSpan.FromMinutes(2));
Then after 2 minutes, when the job tries to execute, I get the error:
Newtonsoft.Json.JsonSerializationException: Could not create an instance of type Microsoft.AspNetCore.SignalR.IClientProxy. Type is an interface or abstract class and cannot be instantiated.
Any idea?
Update 1
I ended up using a static context defined in Startup.cs and assigned from Configure():
hbctx = app.ApplicationServices.GetRequiredService<IHubContext<MySignalRHub>>();
So now Hangfire schedules a hub helper instead, which uses the static context:
BackgroundJob.Schedule(() => new MyHubHelper().Send(), TimeSpan.FromMinutes(2));
and the hub helper gets the context from Startup.hbctx.
Even though this is working, it is a little smelly.
Update 2
I also tried the approach in Access SignalR Hub without Constructor Injection:
My background job scheduling became:
BackgroundJob.Schedule(() => Startup.GetService().SendOutAlert(2), TimeSpan.FromMinutes(2));
However, this time I get an exception when I reach the above line:
An unhandled exception has occurred while executing the request
System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'IServiceProvider'.
Update 3
Thanks all. The solution was to create a helper that gets the hub context injected via its constructor through DI, and then use Hangfire to schedule the helper's method as the background job.
public interface IMyHubHelper
{
    void SendOutAlert(String userId);
}

public class MyHubHelper : IMyHubHelper
{
    private readonly IHubContext<MySignalRHub> _hubContext;

    public MyHubHelper(IHubContext<MySignalRHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public void SendOutAlert(String userId)
    {
        _hubContext.Clients.All.SendAsync("ReceiveMessage", userId, "msg");
    }
}
Then launching the background job from anywhere with:
BackgroundJob.Schedule<MyHubHelper>( x => x.SendOutAlert(userId), TimeSpan.FromMinutes(2));
The answer from Nkosi, suggesting the use of the Schedule<T> generic overload, pointed me to the final solution I used:
First, my MySignalRHub is just an empty class inheriting from Hub.
public class MySignalRHub : Hub
{
}
Then I created a hub helper which holds a hub context for MySignalRHub. The hub context is injected into the helper class via the ASP.NET Core built-in DI mechanism (as explained here).
The helper class:
public class MyHubHelper : IMyHubHelper
{
    private readonly IHubContext<MySignalRHub> _hubContext;

    public MyHubHelper(IHubContext<MySignalRHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public void SendData(String data)
    {
        _hubContext.Clients.All.SendAsync("ReceiveMessage", data);
    }
}
The helper interface:
public interface IMyHubHelper
{
    void SendData(String data);
}
Finally, I can use Hangfire from anywhere in the code to schedule the hub helper's SendData() method as a background job with:
BackgroundJob.Schedule<MyHubHelper>(h => h.SendData(myData), TimeSpan.FromMinutes(2));
Using Schedule<T> generics you should be able to take advantage of the dependency injection capabilities of the framework.
BackgroundJob.Schedule<IHubContext<MySignalRHub>>(hubContext =>
    hubContext.Clients.All.SendAsync(
        "MyMessage",
        "MyMessageContent",
        System.Threading.CancellationToken.None),
    TimeSpan.FromMinutes(2));
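For completeness, either variant relies on Hangfire's job activator being able to resolve types from the ASP.NET Core container, which the Hangfire.AspNetCore package wires up. A rough sketch of the registration, assuming SQL Server storage and an illustrative connection-string name:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR();                            // makes IHubContext<MySignalRHub> resolvable
    services.AddScoped<IMyHubHelper, MyHubHelper>();  // optional: register the helper if you schedule via its interface

    services.AddHangfire(config =>
        config.UseSqlServerStorage(Configuration.GetConnectionString("HangfireDb")));
    services.AddHangfireServer();                     // the server activates jobs through the container
}

In Configure, the hub route would also be mapped (for example app.UseSignalR(routes => routes.MapHub<MySignalRHub>("/hub")) on ASP.NET Core 2.1) so that clients can actually receive the messages.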
We are starting with ASP.NET Core 2. We need a way for each element that is involved in a request to write a message to a message handler.
Some limitations:
We won't use HttpContext.Items (HttpContext is not available in the class that we are using inside the controller, and we don't want to forward the whole context there).
We tried to do it without dependency injection, because if we have multiple different services, we will have too many parameters in the constructors.
It must also work with async/await.
We tried an approach using AsyncLocal<T>.
For that we created a class:
public class NotificationExecutionContext
{
    private static readonly AsyncLocal<NotificationHandler> NotificationHandler =
        new AsyncLocal<NotificationHandler>();

    public static NotificationHandler Instance =>
        NotificationHandler.Value ?? (NotificationHandler.Value = new NotificationHandler());
}
A NotificationHandler will be created that should live per request. The NotificationHandler is a simple class where you can add messages to and get them from a collection:
public class NotificationHandler : INotificationHandler
{
    public List<NotificationBase> Notifications { get; } = new List<NotificationBase>();

    public void AddNotification(NotificationBase notification)
    {
        Notifications.Add(notification);
    }

    public void AddNotificationRange(List<NotificationBase> notifications)
    {
        Notifications.AddRange(notifications);
    }
}
With this solution, I can easily get the NotificationHandler for the current context and add a notification:
NotificationExecutionContext.Instance.AddNotification(new NotificationBase(){..})
Inside a middleware, we wait for the Response.OnStarting() event and then take all messages from the NotificationHandler and add them to the response header:
public async Task Invoke(HttpContext context)
{
    var e = NotificationExecutionContext.Instance; // Required so that the notification handler is created in this context

    context.Response.OnStarting((state) =>
    {
        List<NotificationBase> notifications = NotificationExecutionContext.Instance.Notifications;
        if (notifications.Count > 0)
        {
            string messageString = JsonConvert.SerializeObject(notifications, Formatting.None);
            context.Response.Headers.Add("NotificationHeader", messageString);
        }
        return Task.FromResult(0);
    }, null);

    await Next(context);
}
This code works, but are there pitfalls that we are not aware of? Or are there better solutions?
You should not use static singletons like that. Having static dependencies like that inside your code defeats the whole purpose of dependency injection. You should just embrace dependency injection here, which would make this super simple:
/* in Startup.ConfigureServices */

// register the notification handler as a scoped dependency; this automatically makes the
// instance shared per request but not outside of it
services.AddScoped<INotificationHandler, NotificationHandler>();

/* in Startup.Configure */

// register your custom middleware
app.UseMiddleware<NotificationHandlerMiddleware>();

public class NotificationHandlerMiddleware
{
    private readonly RequestDelegate _next;

    public NotificationHandlerMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // Scoped services are injected into Invoke (the middleware itself is constructed only once),
    // so each request gets its own INotificationHandler instance.
    public async Task Invoke(HttpContext context, INotificationHandler notificationHandler)
    {
        // do whatever with notificationHandler (e.g. hook Response.OnStarting as before)

        await _next(context);
    }
}
And that's all. No need to introduce statics; using full dependency injection makes your code completely testable and keeps all dependencies clear.
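For example, a consumer then just takes the handler as a constructor dependency; the same scoped instance flows to the middleware for that request (the controller name and action here are hypothetical):

public class OrdersController : Controller
{
    private readonly INotificationHandler _notifications;

    public OrdersController(INotificationHandler notifications)
    {
        _notifications = notifications;
    }

    [HttpPost]
    public IActionResult Post()
    {
        // Anything added here is visible to the middleware when it writes the response header.
        _notifications.AddNotification(new NotificationBase());
        return Ok();
    }
}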
"We tried to use it without dependency injection because if we have multiple different services we will have too many parameters in the constructors."
Too many constructor parameters is a clear sign of a violation of the single responsibility principle. If you find your services taking many dependencies, you should consider splitting them up. You may also want to consider refactoring to facade services, as sketched below.
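As a hedged illustration of that last point (every type name below is hypothetical), a facade groups closely related dependencies behind a single service, so consumers need only one constructor parameter:

public interface IOrderProcessingFacade
{
    void ProcessOrder(int orderId);
}

public class OrderProcessingFacade : IOrderProcessingFacade
{
    private readonly IPaymentService _payments;
    private readonly IShippingService _shipping;
    private readonly INotificationHandler _notifications;

    public OrderProcessingFacade(IPaymentService payments, IShippingService shipping, INotificationHandler notifications)
    {
        _payments = payments;
        _shipping = shipping;
        _notifications = notifications;
    }

    public void ProcessOrder(int orderId)
    {
        // The facade coordinates the underlying services; callers depend only on this one type.
        _payments.Charge(orderId);
        _shipping.Ship(orderId);
        _notifications.AddNotification(new NotificationBase());
    }
}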