I am using Hangfire to run jobs, and I'd like to change the default behaviour whereby succeeded jobs are deleted from the database after a day - I'd like them to be stored for a year.
Following the instructions in this thread, which is the same as in this SO question, I have created a class:
public class OneYearExpirationTimeAttribute : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        context.JobExpirationTimeout = TimeSpan.FromDays(365);
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        context.JobExpirationTimeout = TimeSpan.FromDays(365);
    }
}
and I register it in my ASP.NET Web API startup class as a global filter:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // ... other stuff here ...
        GlobalJobFilters.Filters.Add(new OneYearExpirationTimeAttribute());
        GlobalConfiguration.Configuration.UseSqlServerStorage("HangFireDBConnection");
        app.UseHangfireDashboard();
    }
}
The Web API is where jobs are posted (i.e., where the call to BackgroundJob.Enqueue(() => ...) happens). I have not changed the configuration of the clients that execute the actual jobs.
If I now post a job and it succeeds, it still has an expiry of one day, as you can see in the screenshot, which shows both the dashboard and the entry in the Hangfire database.
What am I doing wrong or what am I missing?
My mistake in the setup was that the attribute was set on the wrong application. As I stated in the question, I added the filter in the Startup.cs file of the ASP.NET Web API where jobs are posted.
Instead, I should have added the configuration in the console application where the jobs are executed, i.e., my console app starts with:
static void Main(string[] args)
{
    GlobalConfiguration.Configuration.UseSqlServerStorage("HangFireDBConnection");
    GlobalJobFilters.Filters.Add(new OneYearExpirationTimeAttribute());
    // ... more stuff ...
}
Then it works. The Hangfire documentation could be a bit clearer on where the filter should be configured.
Using version:
// Type: Hangfire.JobStorage
// Assembly: Hangfire.Core, Version=1.7.11.0, Culture=neutral, PublicKeyToken=null
This can be done directly (apparently):
JobStorage.Current.JobExpirationTimeout = TimeSpan.FromDays(6 * 7);
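For context, a minimal sketch of where such a line could live, assuming the same console-app setup as in the answer above (the 6 * 7 value, i.e. 42 days, is kept from the snippet):

static void Main(string[] args)
{
    GlobalConfiguration.Configuration.UseSqlServerStorage("HangFireDBConnection");

    // Storage-level expiration applied by Hangfire's cleanup of finished jobs.
    JobStorage.Current.JobExpirationTimeout = TimeSpan.FromDays(6 * 7);

    // ... start the Hangfire server, etc. ...
}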
I have a .net core 3.0 web application. In Startup.cs, I register an EventProcessor (from Microsoft.Azure.EventHubs.Processor) that listens to an Azure EventHub for events. I do it like this:
await eventProcessorHost.RegisterEventProcessorAsync<TwinChangesEventHandler>();
I'm interested in device twin changes in an IoT Hub that's connected to the EventHub.
So, in the EventProcessor, I want to access the SignalR IHubContext interface (from Microsoft.AspNetCore.SignalR - the new version of SignalR) to be able to notify connected browsers of a device twin property change. My problem is that the EventProcessor can't get a handle to IHubContext. How can I get it?
I see online that people are using dependency injection, but because my EventProcessor is created by RegisterEventProcessorAsync() as shown above, its default constructor is ALWAYS called, and NOT the one with IHubContext as a parameter. Even if I use a factory to create the EventProcessor and call RegisterEventProcessorFactoryAsync() in Startup.cs, I can't get the IHubContext handle in the factory, because the call does not originate from a controller. It originates either from Startup.ConfigureServices() or from a callback triggered by the EventHub, which is not a controller method. I'm really stuck, so any help would be much appreciated.
You can add your factory and processor to the service collection:
.ConfigureServices((hostContext, services) =>
{
    ...
    services.AddSingleton<IEventProcessorFactory, EventProcessorFactory>();
    services.AddSingleton<IEventProcessor, TwinChangesEventHandler>();
    ...
});
public class EventProcessorFactory : IEventProcessorFactory
{
    private readonly IEventProcessor _fluxEventProcessor;

    public EventProcessorFactory(IEventProcessor fluxEventProcessor)
    {
        _fluxEventProcessor = fluxEventProcessor;
    }

    public IEventProcessor CreateEventProcessor(PartitionContext context)
    {
        return _fluxEventProcessor;
    }
}
Then in your handler you have access to the injected hub context:
public class TwinChangesEventHandler : IEventProcessor
{
    private readonly IHubContext<MyHub> _myHubContext;

    // Assumed field (not shown in the original snippet), used below to throttle checkpoints.
    private readonly Stopwatch _checkpointStopWatch = Stopwatch.StartNew();

    public TwinChangesEventHandler(IHubContext<MyHub> myHubContext)
    {
        _myHubContext = myHubContext;
    }

    ...

    async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            await _myHubContext.Clients.All.SendAsync("Update", eventData);
        }

        // Checkpoint every 5 minutes, so that the worker can resume processing
        // from at most 5 minutes back if it restarts.
        if (_checkpointStopWatch.Elapsed > TimeSpan.FromMinutes(5))
        {
            await context.CheckpointAsync();
            _checkpointStopWatch.Restart();
        }
    }
}
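To complete the wiring, here is a sketch (my addition, not part of the original answer) of how the DI-built factory could be handed to the event processor host in Startup.Configure; eventProcessorHost is assumed to be the EventProcessorHost instance configured elsewhere:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Resolve the factory that DI wired up with TwinChangesEventHandler
    // (and therefore with IHubContext<MyHub>).
    var factory = app.ApplicationServices.GetRequiredService<IEventProcessorFactory>();

    // Register through the factory overload instead of RegisterEventProcessorAsync<T>(),
    // so the handler's default constructor is no longer required.
    eventProcessorHost.RegisterEventProcessorFactoryAsync(factory).GetAwaiter().GetResult();
}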
Good morning everyone!
I just started working on a project where I see there is a memory leak.
The situation is as follows: there is a console application which basically runs all the time in a while(true) loop.
There are a bunch of classes which do some logic in the loop.
Each class has an Execute() method which internally uses Task.Run(), and the call is not awaited by anyone.
These classes are called engines. All engines are stateless and are stored in a list in the main Program.cs class.
The code basically looks like:
private static IKernel _kernel; // assumed declaration; the original snippet uses this field without showing it
private static List<BaseEngine> Engines;

public static void Main(string[] args)
{
    InitializeDI();
    RunProgram();
}

private static void RunProgram()
{
    while (true)
    {
        try
        {
            foreach (var engine in Engines)
            {
                engine.Execute();
            }
        }
        catch (Exception ex)
        {
            // handle
        }
        finally
        {
            Thread.Sleep(TimeSpan.FromSeconds(3));
        }
    }
}

private static void InitializeDI()
{
    _kernel = new StandardKernel();
    ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(_kernel));
    NinjectConfig.Setup(_kernel);
}
The sample engine looks like:
public class SampleEngine : BaseEngine
{
    public override void Execute(Task task)
    {
        var someService = ServiceLocator.Current.GetInstance<IDbContext>();

        System.Threading.Tasks.Task.Run(() =>
        {
            // some action using dbcontext
        });
    }
}
In the above example, SampleEngine resolves IDbContext from the Ninject container; other engines may use other services registered in DI.
All the dependencies are registered as InCallScope().
Basically, almost every engine fires and forgets the given method using Task.Run().
What I did was change the Execute method to return the Task and Dispose() it after it ran to completion. This did not bring any value.
I did some investigation and saw that the problem is inside Ninject.Activation.Cache. A manual cache clean helps, but I know the problem is somewhere in the code and I cannot find it.
Since every dependency is registered as InCallScope(), they should be disposed after each task runs from beginning to end. I don't see anything holding a reference to these objects, because every engine is stateless.
I used ANTS to gather some information, and the memory usage just keeps growing each minute. The retained objects point to the Ninject caching as below: it looks like the DbContext is not disposed and still exists in the Ninject cache. Is it a problem of having a lot of tasks in the system, or am I doing something wrong?
Thanks in advance
Cheers!
The simplest approach seems to be embedding the using inside your task. But it is a blind shot, as your code seems to be simplified - you don't use the task parameter in your method.
public class SampleEngine : BaseEngine
{
    public override void Execute(Task task)
    {
        System.Threading.Tasks.Task.Run(() =>
        {
            using (var someService = ServiceLocator.Current.GetInstance<IDbContext>())
            {
                // some action using dbcontext
            }
        });
    }
}
For a more advanced approach, here is an interesting link. It features an InTaskScope binding, based on AsyncLocal and custom tasks through extensions of TaskFactory.
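Another option (my sketch, not the InTaskScope approach from the link): Ninject's built-in activation blocks tie the lifetime of everything resolved inside them to the block, so disposing the block at the end of the task releases the cached instances:

public class SampleEngine : BaseEngine
{
    private readonly IKernel _kernel; // assumed to be made available to the engine somehow

    public override void Execute(Task task)
    {
        System.Threading.Tasks.Task.Run(() =>
        {
            // Everything resolved from the block is released when the block is disposed,
            // so the DbContext no longer lingers in Ninject's activation cache.
            using (var block = _kernel.BeginBlock())
            {
                var someService = block.Get<IDbContext>(); // Get<T>() comes from the Ninject namespace
                // some action using dbcontext
            }
        });
    }
}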
I want to update data in my database each hour, so I found the FluentScheduler library for this and created my IJob:
public class InfoLoader : IJob
{
    private readonly DataContext _db;

    public InfoLoader(DataContext db)
    {
        _db = db;
    }

    public void Execute()
    {
        foreach (User user in _db.Users.ToList())
        {
            foreach (Info i in user.Info.ToList())
            {
                UpdateInfo(i);
            }
        }
    }

    private void UpdateInfo(Info info)
    {
        // do some operations to update information in db
    }
}
And of course I created my Registry implementation to schedule all the tasks which I need:
public class LoadersRegistry : Registry
{
    public LoadersRegistry()
    {
        Schedule<InfoLoader>().ToRunNow().AndEvery(1).Hours();
    }
}
I also added the following code to my Program.cs file to initialize the scheduler and start it:
JobManager.JobException += (obj) => { logger.LogError(obj.Exception.Message); };
JobManager.Initialize(new LoadersRegistry());
But when I run my application, I see the following error:
I understand that LoadersRegistry can't create an instance of InfoLoader (when JobManager initializes it in Program.cs), because InfoLoader receives a DataContext. But I can't stop receiving the DataContext, because I need it to add data to my database.
Unfortunately I can't find a way to fix this issue.
Thanks for any help.
P.S. I read about using FluentScheduler in ASP.NET Core, but the developers of this library said that this feature will not be available in the future because of this, so I still don't know how I can solve the issue.
As per the API documentation, you'll have to change the way you register. The following is just one way of doing it:
public LoadersRegistry()
{
    var dataContext = new DataContext();
    Schedule(() => new InfoLoader(dataContext)).ToRunNow().AndEvery(1).Hours();
}
Here I'm doing new DataContext(), but you could make dataContext available however you like, as long as you are newing up InfoLoader with it.
If someone meets with a similar issue, this is the solution that helped me:
You just need to do the initialization inside the Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
    var provider = services.BuildServiceProvider();

    JobManager.Initialize(new LoadersRegistry(
        provider.GetRequiredService<DataContext>()
    ));

    services.AddMvc();
}
And of course, LoadersRegistry should receive the DataContext instance, and InfoLoader should receive this instance in its constructor:
public LoadersRegistry(DataContext db)
{
    Schedule(new InfoLoader(db)).ToRunNow().AndEvery(30).Seconds();
}
Good luck! :)
I have some code in my ConfigureServices that fails when running a migration:
dotnet ef migrations list
I'm trying to add a certificate, but it can't find the file (it works when starting the project as a whole). So, is there a way to do something like this:
if (!CurrentEnvironment.IsMigration()) {
    doMyStuffThatFailsInMigration();
}
That way I could keep my code as it is, but only execute it when not running a migration.
Thanks
Just set a static flag in the Main method (which is not called by the dotnet-ef tool):
public class Program
{
    public static bool IsStartedWithMain { get; private set; }

    public static void Main(string[] args)
    {
        IsStartedWithMain = true;
        ...
    }
}
and then check it when needed:
internal static void ConfigureServices(WebHostBuilderContext context, IServiceCollection services)
{
    if (Program.IsStartedWithMain)
    {
        // do stuff which must not be run upon using the dotnet-ef tool
    }
}
EDIT: in .NET 6.0 there's no separate ConfigureServices method. Everything is initialized in the Main method (which can be created with dotnet new .. --use-program-main). In this case a flag can be used for skipping EF stuff:
// requires: using System.Reflection;
private static bool IsStartedWithMain =>
    Assembly.GetEntryAssembly() == Assembly.GetExecutingAssembly();
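A minimal sketch of how that flag might be consulted, assuming a .NET 6 minimal-hosting Program.cs with an explicit Main:

public class Program
{
    // True when this assembly is the process entry point; per the EDIT above,
    // this is not the case when the dotnet-ef tool loads the assembly.
    private static bool IsStartedWithMain =>
        Assembly.GetEntryAssembly() == Assembly.GetExecutingAssembly();

    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);

        if (IsStartedWithMain)
        {
            // configure things that must not run under dotnet-ef,
            // e.g. loading the certificate from the question
        }

        var app = builder.Build();
        app.Run();
    }
}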
My current solution to detecting if a migration has not occurred:
using System.Linq;
using Microsoft.EntityFrameworkCore;            // GetPendingMigrations()
using Microsoft.Extensions.DependencyInjection; // GetRequiredService / GetService / CreateScope

// app is of type IApplicationBuilder
// RegisteredDBContext is the DbContext I have dependency injected
using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>().CreateScope())
{
    var context = serviceScope.ServiceProvider.GetService<RegisteredDBContext>();
    if (context.Database.GetPendingMigrations().Any())
    {
        var msg = "There are pending migrations, the application will not start. Make sure migrations are run.";
        throw new InvalidProgramException(msg);
        // Instead of throwing, other code could run here
    }
}
This assumes that the migrations have already been synced to the database. If only EnsureCreated() has been called, then this approach does not work, because the migrations are all still pending.
There are other method options on context.Database: GetMigrations() and GetAppliedMigrations().
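A quick sketch of how those relate (these are EF Core's relational extension methods; the Except comparison is my illustration and needs System.Linq):

var all     = context.Database.GetMigrations();        // migrations defined in the assembly
var applied = context.Database.GetAppliedMigrations(); // migrations recorded in the database
var pending = all.Except(applied);                     // the same set GetPendingMigrations() returns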
I recently added OWIN to my existing ASP.NET MVC 5 project.
I'm using it to log the request and response data that comes to my server.
Everything is set up properly, and the logging works great, except for one issue that I'm not entirely thrilled about: it logs any static file requests.
How can I avoid logging .js/images/css/etc. requests using the OWIN pipeline?
One of my custom logs:
Request
Method: GET
Path: http://localhost:12345/content/stylesheets/site.css.map
Headers: ...
Body: ...
I find that when I load one of my web pages, I see 8 log entries generated from all of the static file loads. I only care about the main request.
Now, I could go in and whitelist or blacklist request paths, but I thought before I do that, there had to be an easier way.
I want to avoid doing this:
_urlsToNotLog = new[]
{
    "content/stylesheets/site.css.map",
    "Scripts/jquery.validate.unobtrusive.js",
    .
    .
};

if (_urlsToNotLog.Contains(environment["owin.RequestPath"]))
{
    // don't log
}
else
{
    // log request
}
My code:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Use<LoggingMiddleware>();
    }
}
public class LoggingMiddleware
{
    // Assumes the conventional OWIN alias: using AppFunc = Func<IDictionary<string, object>, Task>;
    private readonly AppFunc _next;

    public LoggingMiddleware(AppFunc next)
    {
        _next = next;
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        LogRequest(environment);
        await _next(environment);
        LogResponse(environment);
    }
}