Hangfire .NET Core - Get enqueued jobs list - c#

Is there a method in the Hangfire API to get an enqueued job (probably by a Job id or something)?
I have done some research on this, but I could not find anything.
Please help me.

I have found the answer in the official forum of Hangfire.
Here is the link:
https://discuss.hangfire.io/t/checking-for-a-job-state/57/4
According to an official developer of Hangfire, JobStorage.Current.GetMonitoringApi() gives you all the details regarding Jobs, Queues and the configured servers too!
It seems that this same API is being used by the Hangfire Dashboard.
:-)
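For example, here is a minimal sketch of listing enqueued jobs through that API (the "default" queue name and the page size are assumptions; adjust them to your setup):

```csharp
using System;
using Hangfire;

// Sketch: list the jobs currently enqueued in the "default" queue.
// EnqueuedJobs(queue, from, perPage) pages through the queue's contents.
var monitoringApi = JobStorage.Current.GetMonitoringApi();
var enqueuedJobs = monitoringApi.EnqueuedJobs("default", 0, 100);

foreach (var job in enqueuedJobs)
{
    Console.WriteLine($"Id: {job.Key}, State: {job.Value.State}");
}

// Related calls: monitoringApi.Queues() lists the queues with their lengths,
// and monitoringApi.ProcessingJobs(0, 100) lists jobs currently being processed.
```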

I ran into a case where I wanted to see ProcessingJobs, EnqueuedJobs, and AwaitingState jobs for a particular queue. I never found a great way to do this out of the box, but I did discover a way to create a "set" of jobs in Hangfire. My solution was to add each job to a set, then later query for all items in the matching set. When the job reaches a final state, remove the job from the set.
Here's the attribute to create the set:
public class ProcessQueueAttribute : JobFilterAttribute, IApplyStateFilter
{
    private readonly string _queueName;

    public ProcessQueueAttribute()
        : base() { }

    public ProcessQueueAttribute(string queueName)
        : this()
    {
        _queueName = queueName;
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        if (string.IsNullOrEmpty(context.OldStateName))
        {
            transaction.AddToSet(_queueName, context.BackgroundJob.Id);
        }
        else if (context.NewState.IsFinal)
        {
            transaction.RemoveFromSet(_queueName, context.BackgroundJob.Id);
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction) { }
}
You decorate your job this way:
[ProcessQueue("queueName")]
public async Task DoSomething() {}
Then you can query that set as follows:
using (var conn = JobStorage.Current.GetConnection())
{
    // Use a type test rather than a direct cast, so a storage provider that
    // doesn't derive from JobStorageConnection yields no match instead of
    // throwing an InvalidCastException.
    if (conn is JobStorageConnection storage)
    {
        var itemsInSet = storage.GetAllItemsFromSet("queueName");
    }
}
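From there you can look up each job's current details with the regular storage connection (a sketch; note that GetJobData returns null if the job has expired or been deleted):

```csharp
using System;
using Hangfire;
using Hangfire.Storage;

using (var connection = JobStorage.Current.GetConnection())
{
    if (connection is JobStorageConnection storage)
    {
        foreach (var jobId in storage.GetAllItemsFromSet("queueName"))
        {
            var jobData = connection.GetJobData(jobId); // null if the job no longer exists
            Console.WriteLine($"{jobId}: {jobData?.State}");
        }
    }
}
```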

Related

How to efficiently count HTTP Calls in asp.net core?

I have an abstract class called HttpHelper it has basic methods like, GET, POST, PATCH, PUT
What I need to achieve is this:
Store the url, time & date in the database each time the function is called GET, POST, PATCH, PUT
I don't want to write to the database directly each time these functions are called (that would be slow). Instead, I want to put the data somewhere (like a static in-memory queue or cache) that is fast and non-blocking, and have a long-running background process that reads from this cache and stores the values in the database.
I have no clear idea how to do this, but the main purpose is to take a count of the calls per hour or day, by domain, resource, and URL query.
I'm thinking if I could do the following:
Create a static class which uses ConcurrentQueue<T> to store data and call that class in each function inside HttpHelper class
Create a background task similar to this: Asp.Net core long running/background task
Or use Hangfire, but that might be too much for a simple task
Or is there a built-in method for this in .NET Core?
Both Hangfire and a hosted background task would do the trick as consumers of the queue items.
Hangfire predates .NET Core's long-running background tasks, so for a .NET Core implementation go with the long-running tasks.
There is a but here, though.
How important is it to you that you never miss a call? If it is important, then neither approach can help you.
The queue, or whatever static construct you have, will be lost the moment your application crashes, the machine restarts, or the application pool is recycled.
You need to consider some kind of external queuing mechanism, like RabbitMQ with persistence turned on.
You could also append to a file, but the reads/writes might cause some delays of their own.
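As a sketch of the in-process queue/consumer idea (before reaching for RabbitMQ), System.Threading.Channels plus a hosted BackgroundService keeps the request path non-blocking. The CallRecord type and the class names here are illustrative, not from any library:

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Illustrative record of one HTTP call; shape it to whatever you need to count.
public class CallRecord
{
    public string Url { get; set; }
    public string Method { get; set; }
    public DateTime Timestamp { get; set; }
}

public static class CallLog
{
    // Unbounded channel: TryWrite never blocks the request path.
    private static readonly Channel<CallRecord> _channel =
        Channel.CreateUnbounded<CallRecord>();

    public static void Add(CallRecord record) => _channel.Writer.TryWrite(record);
    public static ChannelReader<CallRecord> Reader => _channel.Reader;
}

public class CallLogConsumer : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Drain the channel in the background; batch records up and
        // insert them into the database here.
        await foreach (var record in CallLog.Reader.ReadAllAsync(stoppingToken))
        {
            // e.g. buffer records and flush on a timer or when the batch is full
        }
    }
}
```

Register the consumer with services.AddHostedService&lt;CallLogConsumer&gt;(). The caveat above still applies: anything still sitting in the channel is lost on a crash or recycle.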
I do not know how complex your problem is but I would consider two solutions.
The first is calling an async insert method, which will not block your main thread but will start a background task. You can return the response without waiting for your log to be appended to the database. Since you want it implemented in only some methods, I would do it using attributes and middleware.
Simplified example:
public IActionResult SomePostMethod()
{
    LogActionAsync("This Is Post Method");
    return StatusCode(201);
}

public static Task LogActionAsync(string someParameter)
{
    return Task.Run(() => {
        // Communicate with database (X ms)
    });
}
A better solution is to create a buffer that does not hit the database on every call, but only when it fills up or at a set interval. It would look like this:
public IActionResult SomePostMethod()
{
    APILog.Log(new APILog.Item() { Date = DateTime.Now, Item1 = "Something" });
    return StatusCode(201);
}

public partial class APILog
{
    private static List<APILog.Item> _buffer = new List<APILog.Item>();
    private const int _msTimeout = 60000; // Timeout between updates
    private static object _updateLock = new object();

    static APILog()
    {
        StartDBUpdateLoopAsync();
    }

    private static void StartDBUpdateLoopAsync()
    {
        // check whether it has already been started, and other stuff
        Task.Run(() => {
            while (true) // Do not use true, but some expression that tells you whether your application is still running.
            {
                Thread.Sleep(_msTimeout);
                lock (_updateLock)
                {
                    foreach (APILog.Item item in _buffer)
                    {
                        // Import into database here
                    }
                    _buffer.Clear(); // Don't import the same items twice
                }
            }
        });
    }

    public static void Log(APILog.Item item)
    {
        lock (_updateLock)
        {
            _buffer.Add(item);
        }
    }
}

public partial class APILog
{
    public class Item
    {
        public string Item1 { get; set; }
        public DateTime Date { get; set; }
    }
}
Also, in this second example I would not call APILog.Log() directly each time, but use middleware in combination with an attribute.
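For completeness, here is a rough sketch of that middleware-plus-attribute combination. The [LogApiCall] attribute name is made up, and reading endpoint metadata like this requires endpoint routing (ASP.NET Core 3.0+):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical marker attribute; apply it to the actions you want logged.
[AttributeUsage(AttributeTargets.Method)]
public class LogApiCallAttribute : Attribute { }

public class ApiLogMiddleware
{
    private readonly RequestDelegate _next;

    public ApiLogMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Only log requests whose endpoint carries the marker attribute.
        var endpoint = context.GetEndpoint();
        if (endpoint?.Metadata.GetMetadata<LogApiCallAttribute>() != null)
        {
            APILog.Log(new APILog.Item
            {
                Date = DateTime.Now,
                Item1 = $"{context.Request.Method} {context.Request.Path}"
            });
        }

        await _next(context);
    }
}
```

Register it with app.UseMiddleware&lt;ApiLogMiddleware&gt;() before UseEndpoints.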

Logging Hangfire jobs to Application Insights and correlating activity to an Operation Id

I feel like this should be a lot simpler than it's turning out to be, or I am just over thinking it too much.
I have a .NET Core 3.1 Web API application, which is using HangFire to process some jobs in the background. I have also configured Application Insights to log Telemetry from the .NET Core API.
I can see logging events and dependency telemetry data logged in Application Insights. However, each event/log/dependency is recorded against a unique OperationId and Parent Id.
I am trying to determine how to ensure that any activity which is logged, or any dependencies which are used in the context of the background job are logged against the OperationId and/or Parent Id of the original request which queued the background job.
When I queue a job, I can get the current OperationId of the incoming HTTP request, and I push that into the HangFire queue with the job. When the job is then performed, I can get back that OperationId. What I then need to do is make that OperationId available throughout the context/lifetime of the job execution, so that it is attached to any telemetry sent to Application Insights.
I thought I could create a IJobContext interface, which could be injected into the class which performs the job. Within that context I could push the OperationID. I could then create a ITelemetryInitializer which would also take the IJobContext as a dependency. In the ITelemetryInitializer I could then set the OperationID and ParentId of the telemetry being sent to Application Insights. Here's some simple code:
public class HangFirePanelMessageQueue : IMessageQueue
{
    private readonly MessageProcessor _messageProcessor;
    private readonly IIoTMessageSerializer _iotHubMessageSerialiser;
    private readonly IHangFireJobContext _jobContext;
    private readonly TelemetryClient _telemetryClient;

    public HangFirePanelMessageQueue(MessageProcessor panelMessageProcessor,
        IIoTMessageSerializer iotHubMessageSerialiser,
        IHangFireJobContext jobContext, TelemetryClient telemetryClient)
    {
        _messageProcessor = panelMessageProcessor;
        _iotHubMessageSerialiser = iotHubMessageSerialiser;
        _jobContext = jobContext;
        _telemetryClient = telemetryClient;
    }

    public async Task ProcessQueuedMessage(string message, string operationId)
    {
        var iotMessage = _iotHubMessageSerialiser.GetMessage(message);
        _jobContext?.Set(iotMessage.CorrelationID, iotMessage.MessageID);
        await _messageProcessor.ProcessMessage(iotMessage);
    }

    public Task QueueMessageForProcessing(string message)
    {
        var dummyTrace = new TraceTelemetry("Queuing message for processing", SeverityLevel.Information);
        _telemetryClient.TrackTrace(dummyTrace);

        string opId = dummyTrace.Context.Operation.Id;
        BackgroundJob.Enqueue(() =>
            ProcessQueuedMessage(message, opId));
        return Task.CompletedTask;
    }
}
The IJobContext would look something like this:
public interface IHangFireJobContext
{
    bool Initialised { get; }
    string OperationId { get; }
    string JobId { get; }

    void Set(string operationId, string jobId);
}
And then I would have an ITelemetryInitializer which enriches any ITelemetry:
public class EnrichBackgroundJobTelemetry : ITelemetryInitializer
{
    private readonly IHangFireJobContext jobContext;

    public EnrichBackgroundJobTelemetry(IHangFireJobContext jobContext)
    {
        this.jobContext = jobContext;
    }

    public void Initialize(ITelemetry telemetry)
    {
        if (!jobContext.Initialised)
        {
            return;
        }
        telemetry.Context.Operation.Id = jobContext.OperationId;
    }
}
The problem I have however is that the ITelemetryInitializer is a singleton, and so it would be instantiated once with a IHangFireJobContext which would then never update for any subsequent HangFire job.
I did find the https://github.com/skwasjer/Hangfire.Correlate project, which extends https://github.com/skwasjer/Correlate. Correlate creates a correlation context which can be accessed via a ICorrelationContextAccessor which is similar to the IHttpContextAccessor.
However, the footnotes for Correlate state "Please consider that .NET Core 3 now has built-in support for W3C TraceContext (blog) and that there are other distributed tracing libraries with more functionality than Correlate." which lists Application Insights as one of the alternatives for more Advanced distributed tracing.
So can anyone help me understand how I can enrich any telemetry going to Application Insights when it is created within the context of a HangFire job? I feel the correct answer is to use an ITelemetryInitializer and populate the OperationId on that ITelemetry item; however, I am not sure what dependency to inject into the ITelemetryInitializer in order to get access to the HangFire job context.
When I queue a job, I can get the current OperationId of the incoming HTTP request, and I push that into the HangFire queue with the job.
So, am I correct to say that you have a controller action that pushes work to Hangfire? If so, what you can do is get the operation id inside the controller method and pass it to the job, then use that operation id to start a new operation. That operation, together with all the telemetry generated during it, will be linked to the original request.
I have no Hangfire integration here, but the code below shows the general idea: some work is queued to be done in the background, and its telemetry should be linked to the request:
[HttpGet("/api/demo5")]
public ActionResult TrackWorker()
{
    var requestTelemetry = HttpContext.Features.Get<RequestTelemetry>();
    _taskQueue.QueueBackgroundWorkItem(async ct =>
    {
        using (var op = _telemetryClient.StartOperation<DependencyTelemetry>("QueuedWork", requestTelemetry.Context.Operation.Id))
        {
            _ = await new HttpClient().GetStringAsync("http://blank.org");
            await Task.Delay(250);
            op.Telemetry.ResultCode = "200";
            op.Telemetry.Success = true;
        }
    });
    return Accepted();
}
The full example can be found here.
Working from Peter Bons' example I did it like this:
Code originally triggered from a controller action:
// Get the current ApplicationInsights Id. Could use .RootId if
// you only want the OperationId, but I want the ParentId too
var activityId = System.Diagnostics.Activity.Current?.Id;
_backgroundJobClient.Enqueue<JobDefinition>(x =>
x.MyMethod(queueName, otherMethodParams, activityId));
In my JobDefinition class:
// I use different queues, but you don't need to.
// otherMethodParams is just an example. Have as many as you need, like normal.
[AutomaticRetry(OnAttemptsExceeded = AttemptsExceededAction.Delete, Attempts = 10)]
[QueueNameFromFirstParameter]
public async Task MyMethod(string queueName, string otherMethodParams,
    string activityId)
{
    var (operationId, parentId) = SplitCorrelationIdIntoOperationIdAndParentId(
        activityId);

    // Starting this new operation will initialise
    // System.Diagnostics.Activity.Current.
    using (var operation = _telemetryClient.StartOperation<DependencyTelemetry>(
        "JobDefinition.MyMethod", operationId, parentId))
    {
        try
        {
            operation.Telemetry.Data = $"something useful here";
            // If you have other state you'd like in App Insights logs,
            // call AddBaggage and they show up as a customDimension,
            // e.g. in any trace logs.
            System.Diagnostics.Activity.Current.AddBaggage("QueueName", queueName);
            // ... do the real background work here...
            operation.Telemetry.Success = true;
        }
        catch (Exception)
        {
            operation.Telemetry.Success = false;
            throw;
        }
    }
}
// Splits the full value from System.Diagnostics.Activity.Current.Id,
// like "00-12994526f1cb134bbddd0f256e8bc3f0-872b3bd78c345a46-00",
// into values ( "12994526f1cb134bbddd0f256e8bc3f0", "872b3bd78c345a46" ).
private static (string, string) SplitCorrelationIdIntoOperationIdAndParentId(string activityId)
{
    if (string.IsNullOrEmpty(activityId))
        return (null, null);

    var splits = activityId.Split('-');

    // This is what should happen
    if (splits.Length >= 3)
        return (splits[1], splits[2]);

    // Must be in a weird format. Try to return something useful.
    if (splits.Length == 2)
        return (splits[0], splits[1]);
    return (activityId, null);
}
I'm not sure using the OperationId and ParentId is quite right here, e.g. it does tie the background job to the originating request's OperationId, but if the originating Request has a ParentId then this background job should really have its ParentId set as the Request, not as the Request's ParentId. Anyone know?

Hangfire retry pattern

Is it possible to retry a task until a condition is met? e.g.
internal class Program
{
    private static void Main(string[] args)
    {
        var user = new User();
        var jobs = new Jobs();
        Hangfire.BackgroundJob.Enqueue(() => jobs.SendNotification(user));
    }
}

public class Jobs
{
    Rules rules = new Rules();

    public void SendNotification(User user)
    {
        if (rules.Rule1() && rules.Rule2())
        {
            // send notification
            return;
        }
        // somehow retry the execution of this method (throwing an exception does not seem right)
    }
}

public class Rules
{
    public bool Rule1() { return true; }
    public bool Rule2() { return true; }
}

public class User { }
I know that it is possible to retry execution of a method by throwing an exception, but that does not seem right: recovering from an exception is rather costly, and it will mark the job as failed in the Hangfire admin interface, which is not true.
I could write a retry pattern myself, but I like the way Hangfire saves all the information related to background job processing to persistent storage (SQL in my case); no data is kept in a process's memory. So I assume it can recover the queue from storage even after the server was shut down and continue processing.
Note: I would like to use Hangfire because we are already using it for our jobs, but if it is not suitable I have a free hand. Could you recommend some library that can do what I want and that you have good experience with?
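For what it's worth, here is one hedged sketch using only Hangfire's public API: instead of throwing, the job schedules a fresh attempt of itself with BackgroundJob.Schedule, so no attempt is ever marked as failed. Note that each attempt gets a new job id, so the dashboard shows them as separate, individually succeeded jobs:

```csharp
using System;
using Hangfire;

public class Jobs
{
    private readonly Rules rules = new Rules();

    public void SendNotification(User user)
    {
        if (rules.Rule1() && rules.Rule2())
        {
            // send notification
            return;
        }

        // Condition not met yet: schedule a fresh attempt instead of throwing.
        // The current execution completes successfully, so nothing shows as failed,
        // and the scheduled job survives a server restart because it is persisted.
        BackgroundJob.Schedule(() => SendNotification(user), TimeSpan.FromMinutes(5));
    }
}
```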

Asp.net Core DI: Using SemaphoreSlim for write AND read operations with Singleton

I am re-tooling an ASP.NET CORE 2.2 app to avoid using the service locator pattern in conjunction with static classes. Double bad!
The re-tooling involves the creation and injection of a Singleton object as a repository for some global data. The idea here is to avoid hits to my SQL server for some basic/global data that gets used over and over again in requests. However, this data needs to be updated on an hourly basis (not just at app startup). So, to manage the situation I am using SemaphoreSlim to handle one-at-a-time access to the data objects.
Here is a pared-down sketch of what I'm doing:
namespace MyApp.Global
{
    public interface IMyGlobalDataService
    {
        Task<List<ImportantDataItem>> GetFilteredDataOfMyList(string prop1);
        Task LoadMyImportantDataListAsync();
    }

    public class MyGlobalDataService : IMyGlobalDataService
    {
        private MyDbContext _myDbContext;
        private readonly SemaphoreSlim myImportantDataLock = new SemaphoreSlim(1, 1);
        private List<ImportantDataItem> myImportantDataList { get; set; }

        public async Task<List<ImportantDataItem>> GetFilteredDataOfMyList(string prop1)
        {
            List<ImportantDataItem> list;
            await myImportantDataLock.WaitAsync();
            try
            {
                list = myImportantDataList.Where(itm => itm.Prop1 == prop1).ToList();
            }
            finally
            {
                myImportantDataLock.Release();
            }
            return list;
        }

        public async Task LoadMyImportantDataListAsync()
        {
            // this method gets called when the service is created and once every hour thereafter
            await myImportantDataLock.WaitAsync();
            try
            {
                this.myImportantDataList = await _myDbContext.ImportantDataItems.ToListAsync();
            }
            finally
            {
                myImportantDataLock.Release();
            }
        }

        public MyGlobalDataService(MyDbContext myDbContext)
        {
            _myDbContext = myDbContext;
        }
    }
}
So in effect I am using the SemaphoreSlim to limit to one-thread-at-a-time access, for both READ and UPDATING to myImportantDataList. This is really uncertain territory for me. Does this seem like an appropriate approach to handle my injection of a global data Singleton throughout my app? Or should I expect insane thread locking/blocking?
The problem with using SemaphoreSlim is scalability.
As this is a web application, it's fair to assume that you want more than one reader to be able to access the data simultaneously. However, you are (understandably) limiting the number of concurrent holders of the semaphore to 1, to prevent concurrent read and write access. This means you will serialize all reads too.
You need to use something like ReaderWriterLockSlim to allow multiple threads for reading, but ensure exclusive access for writing.
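A minimal synchronous sketch of that suggestion (the class and method names here are illustrative; note that ReaderWriterLockSlim must not be held across an await, which is why async code needs an async-aware equivalent):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class GlobalDataCache
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private List<ImportantDataItem> _data = new List<ImportantDataItem>();

    // Many readers may hold the read lock at the same time.
    public List<ImportantDataItem> GetFiltered(string prop1)
    {
        _lock.EnterReadLock();
        try { return _data.Where(x => x.Prop1 == prop1).ToList(); }
        finally { _lock.ExitReadLock(); }
    }

    // Writers get exclusive access; readers block only during the swap.
    public void Reload(List<ImportantDataItem> fresh)
    {
        _lock.EnterWriteLock();
        try { _data = fresh; }
        finally { _lock.ExitWriteLock(); }
    }
}
```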
Creyke's answer hit the nail on the head for me: using ReaderWriterLockSlim. So I've marked it as the accepted answer. But I am posting my revised solution in case it might be helpful to anyone. Important to note that I'm using the following package to provide async functionality to ReaderWriterLockSlim: https://www.nuget.org/packages/Nito.AsyncEx/
using Microsoft.EntityFrameworkCore;
using Nito.AsyncEx;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace MyApp.Global
{
    public interface IMyGlobalDataService
    {
        Task<List<ImportantDataItem>> GetFilteredDataOfMyList(string prop1);
        Task LoadMyImportantDataListAsync();
    }

    public class MyGlobalDataService : IMyGlobalDataService
    {
        private MyDbContext _myDbContext;
        private readonly AsyncReaderWriterLock myImportantDataLock = new AsyncReaderWriterLock();
        private List<ImportantDataItem> myImportantDataList { get; set; }

        public async Task<List<ImportantDataItem>> GetFilteredDataOfMyList(string prop1)
        {
            List<ImportantDataItem> list;
            using (await myImportantDataLock.ReaderLockAsync())
            {
                list = myImportantDataList.Where(itm => itm.Prop1 == prop1).ToList();
            }
            return list;
        }

        public async Task LoadMyImportantDataListAsync()
        {
            // this method gets called when the service is created and once every hour thereafter
            using (await myImportantDataLock.WriterLockAsync())
            {
                this.myImportantDataList = await _myDbContext.ImportantDataItems.ToListAsync();
            }
        }

        public MyGlobalDataService(MyDbContext myDbContext)
        {
            _myDbContext = myDbContext;
        }
    }
}

DependencyInjection issue during using FluentScheduler in .NET Core API application

I want to update data in my database each hour. So, I found the FluentScheduler library for this and created my IJob:
public class InfoLoader : IJob
{
    private readonly DataContext _db;

    public InfoLoader(DataContext db)
    {
        _db = db;
    }

    public void Execute()
    {
        foreach (User user in _db.Users.ToList())
        {
            foreach (Info i in user.Info.ToList())
            {
                UpdateInfo(i);
            }
        }
    }

    private void UpdateInfo(Info info)
    {
        // do some operations to update information in db
    }
}
And of course I create my Registry implementation to schedule all tasks which I need:
public class LoadersRegistry : Registry
{
    public LoadersRegistry()
    {
        Schedule<InfoLoader>().ToRunNow().AndEvery(1).Hours();
    }
}
Also, I add the following code in my Program.cs file to initialize the scheduler and start it:
JobManager.JobException += (obj) => { logger.LogError(obj.Exception.Message); };
JobManager.Initialize(new LoadersRegistry());
But when I run my application I see the following error:
I understand that LoadersRegistry can't create an instance of InfoLoader (when JobManager initializes it in Program.cs), because InfoLoader takes a DataContext in its constructor. But I can't just drop the DataContext, because I need it to add data to my database.
Unfortunately, I can't find a way to fix this issue.
Thanks for any help.
P.S. I read about using FluentScheduler in asp.net core, but developers of this library said that this feature will not be available in the future because of this, so I still don't know how I can solve the issue.
As per the API documentation, you'll have to change the way you register the job.
Following is just one way of doing it.
public LoadersRegistry()
{
    var dataContext = new DataContext();
    Schedule(() => new InfoLoader(dataContext)).ToRunNow().AndEvery(1).Hours();
}
Here I'm doing new DataContext() but you could make dataContext available however you like as long as you are newing up InfoLoader with it.
If someone runs into a similar issue, this is the solution that helped me:
You just need to do the initialization inside the Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
    var provider = services.BuildServiceProvider();

    JobManager.Initialize(new LoadersRegistry(
        provider.GetRequiredService<DataContext>()
    ));

    services.AddMvc();
}
And of course, LoadersRegistry should receive the DataContext instance and pass it to InfoLoader's constructor:
public LoadersRegistry(DataContext db)
{
    Schedule(new InfoLoader(db)).ToRunNow().AndEvery(30).Seconds();
}
Good luck! :)
