I am using Quartz.NET 3.0.7 in my application for some scheduled tasks, but when I start the scheduler, memory usage keeps increasing while it runs. I also checked with a simple scheduled task that just prints a list of strings to the console, and the same issue happens there. I cannot understand what the actual issue is. My only observation is that the IJob object is released after the job's work is done; I have no idea beyond that.
I am creating jobs dynamically by passing the job type:
private IJobDetail GetJobDetailForType<TInput>(string jobKey, string jobGroup) where TInput : IJob
{
IJobDetail jobDetail = JobBuilder.Create<TInput>()
.WithIdentity(jobKey, jobGroup)
.Build();
return jobDetail;
}
And I initialize the scheduled jobs in this function:
private void InitializeSchedulerJob(ScheduleJob scheduleJob)
{
IJobDetail jobDetail = null;
ITrigger trigger = this.GetTrigger(scheduleJob.Code.ToString(), tenantCode, scheduleJob.CornSchedule);
if (scheduleJob.Code == (int)EnumHelper.Scheduler.Job.EmailJob)
{
this.logger.LogDebug($"EmailJob");
jobDetail = this.GetJobDetailForType<EmailJob>(scheduleJob.Code.ToString(), tenantCode);
}
this.scheduler.ScheduleJob(jobDetail, trigger);
}
The EmailJob code:
public class EmailJob : IJob
{
private IServiceProvider serviceProvider;
public EmailJob(IServiceProvider serviceProvider)
{
this.serviceProvider = serviceProvider;
}
public async Task Execute(IJobExecutionContext context)
{
if (this.serviceProvider != null)
{
JobKey jobKey = context.JobDetail.Key;
IEmailScheduleSendService emailScheduleSendServiceNew = this.serviceProvider.GetRequiredService<IEmailScheduleSendService>();
ILogger<EmailJob> logger = this.serviceProvider.GetRequiredService<ILogger<EmailJob>>();
logger.LogDebug($"FROM EXECUTE METHOD | {jobKey.Name} | {jobKey.Group} | START");
await emailScheduleSendServiceNew.EmailScheduleSendAsync(context);
logger.LogDebug($"FROM EXECUTE METHOD | {jobKey.Name} | {jobKey.Group} | END");
}
}
}
You may want to start running some performance counters to monitor CPU usage and memory statistics to figure out what's going on.
If that doesn't lead you to any obvious answers, it's time to start profiling.
JetBrains dotTrace (not free; 30-day trial)
Microsoft's CLR Profiler
My guess is that the services resolved by your service locator are not released after use. You can verify that by implementing the IDisposable interface in one of your requested services and checking whether Dispose() is called; I suspect it is not.
To fix that, register your services as scoped and open the scope by hand. This ensures that all services are disposed after use.
public async Task Execute(IJobExecutionContext context)
{
// this check is superfluous and can be omitted.
// if you want to ensure that the service locator is there, check this in the constructor.
// with a good DI framework, it can't be null.
if (this.serviceProvider != null)
{
// open the scope and dispose it after use.
using (var serviceScope = this.serviceProvider.CreateScope())
{
// get the service locator from the scope.
var services = serviceScope.ServiceProvider;
// resolve the logger before the try block so it is also available in the catch.
ILogger<EmailJob> logger = services.GetRequiredService<ILogger<EmailJob>>();
try
{
JobKey jobKey = context.JobDetail.Key;
IEmailScheduleSendService emailScheduleSendServiceNew = services.GetRequiredService<IEmailScheduleSendService>();
logger.LogDebug($"FROM EXECUTE METHOD | {jobKey.Name} | {jobKey.Group} | START");
await emailScheduleSendServiceNew.EmailScheduleSendAsync(context);
logger.LogDebug($"FROM EXECUTE METHOD | {jobKey.Name} | {jobKey.Group} | END");
}
catch (Exception ex)
{
logger.LogError(ex, "SOMETHING WENT WRONG!");
}
}
}
}
Related
We are trying to use Quartz for some of our scheduled tasks, such as email and record duplication. It seems to be a perfect fit; however, the documentation hasn't really been updated for DI and repository/unit-of-work patterns, so we have an issue. We have a separate project in our solution where we want to house our scheduler and different jobs, with access to our service layer, where each job calls one of our services. The issue is that we aren't sure how to get access to the scheduler after it gets instantiated in the job runner project's Main file.
Main file:
private static async Task Main(string[] args)
{
LogProvider.SetCurrentLogProvider(new ConsoleLogProvider());
// Grab the Scheduler instance from the Factory
StdSchedulerFactory factory = new StdSchedulerFactory();
IScheduler scheduler = await factory.GetScheduler();
// and start it off
await scheduler.Start();
// and last shut down the scheduler when you are ready to close your program
await scheduler.Shutdown();
}
Typical Job:
using Quartz;
using Compyl.AppLogic.AssessmentService;
namespace JobRunner.Jobs
{
public class AssessmentDuplicationJob : IJob
{
private readonly IAssessmentService _assessmentService;
public AssessmentDuplicationJob(IAssessmentService assessmentService)
{
_assessmentService = assessmentService;
}
public async Task Execute(IJobExecutionContext context)
{
var dataMap = context.MergedJobDataMap;
var id = dataMap.GetInt("id");
var shouldPopulateAnswers = dataMap.GetBoolean("shouldPopulateAnswers");
_assessmentService.DuplicateAssessment(id, shouldPopulateAnswers);
}
}
}
We want to have a class or method in the job runner project that houses the scheduler through DI, with a method like:
[Inject] IScheduler scheduler;
public void ScheduleEntity(T entity)
{
var job = CreateJob(entity); // another method that creates the job and job data map; easy enough
var trigger = CreateTrigger(entity); // same as above
scheduler.ScheduleJob(job, trigger);
}
Then we would call the ScheduleEntity method wherever we needed it in our webapp.
Not sure if this is the "correct" way, but what we're doing is registering the default scheduler in the DI container, resolving it in each component that needs it, and then using a static class from our other project to create and schedule jobs, passing the scheduler as a parameter.
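A minimal sketch of that registration with Microsoft.Extensions.DependencyInjection (the extension-method name here is made up for illustration, not from the original post):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Quartz;
using Quartz.Impl;

public static class SchedulerRegistration
{
    // Hypothetical helper: register one IScheduler for the whole application.
    public static IServiceCollection AddQuartzScheduler(this IServiceCollection services)
    {
        return services.AddSingleton<IScheduler>(_ =>
        {
            var factory = new StdSchedulerFactory();
            // GetScheduler is async; blocking once at startup is acceptable here.
            return factory.GetScheduler().GetAwaiter().GetResult();
        });
    }
}
```

With this, components that schedule entities can take IScheduler as a constructor parameter instead of receiving it as a method argument.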
In an MVC5 web application using .NET 4.8, I am trying to do the following:
When launching the application, I would like a scheduler to trigger a job every 2 Minutes.
The job reads and processes messages from a message queue.
I previously used Hangfire for this and it worked quite well.
However, I was told not to use Hangfire for this application. As an alternative, I opted for Quartz.NET, but I am currently having trouble setting up and triggering the desired action. Following the Quartz.NET documentation, this is my current setup:
public class DummyJob : IJob
{
private readonly ISomeInterface _someInterface;
public DummyJob(ISomeInterface someInterface)
{
_someInterface = someInterface;
}
public async Task Execute(IJobExecutionContext context)
{
await _someInterface.ProcessMessages();
}
}
Next, the job configuration:
public class JobScheduler
{
public static void Start()
{
// Run every 2 minutes
const string cron = "0 0/2 * * * ?";
ISchedulerFactory schedulerFactory = new StdSchedulerFactory();
var scheduler = schedulerFactory.GetScheduler().GetAwaiter().GetResult();
scheduler.Start();
var job = JobBuilder.Create<DummyJob>().Build();
var trigger = TriggerBuilder
.Create()
.WithCronSchedule(cron)
.Build();
scheduler.ScheduleJob(job, trigger);
}
}
I tried starting the job from both Startup.cs and Global.asax. Although the job was created, it did not fire when launching the application.
public void Configuration(IAppBuilder app)
{
[...]
JobScheduler.JobScheduler.Start();
}
What am I missing in terms of setting up Quartz.NET? Do I need to put JobScheduler.JobScheduler.Start(); into Global.asax.cs or Startup.cs?
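One detail worth checking: DummyJob takes ISomeInterface through its constructor, but Quartz's default job factory can only create jobs that have a public parameterless constructor, so the job may be failing to instantiate silently. A custom IJobFactory that resolves jobs from a container is the usual workaround; the sketch below is illustrative (the IServiceProvider field, and how you obtain it, are assumptions, not from the post):

```csharp
using System;
using Quartz;
using Quartz.Spi;

public class ContainerJobFactory : IJobFactory
{
    private readonly IServiceProvider _provider; // or whichever container you use

    public ContainerJobFactory(IServiceProvider provider)
    {
        _provider = provider;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // Let the container build the job so constructor dependencies are supplied.
        return (IJob)_provider.GetService(bundle.JobDetail.JobType);
    }

    public void ReturnJob(IJob job)
    {
        // Dispose here if your container does not manage the job's lifetime.
        (job as IDisposable)?.Dispose();
    }
}
```

Assigning it before starting the scheduler (scheduler.JobFactory = new ContainerJobFactory(provider);) would let Quartz build DummyJob with its dependency.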
I've implemented the BackgroundQueue as explained here, and as shown:
public ActionResult SomeAction()
{
backgroundQueue.QueueBackgroundWorkItem(async ct =>
{
//Do some work...
});
return Ok();
}
I registered the BackgroundQueue with Autofac as:
builder.RegisterType<BackgroundQueue>()
.As<IBackgroundQueue>()
.SingleInstance();
So far so good. I call my controller action and the task is added to the queue. And there it stays without being executed.
So how do I get the task to execute?
The BackgroundQueue implementation that you took from the documentation is only one part of the solution: the background queue just keeps track of the jobs that you want executed.
What you also need is right below that in the docs: the QueuedHostedService. This is a background service that gets registered with the DI container and is started when the application starts. From then on, it monitors your BackgroundQueue and works off jobs as they are queued.
A simplified example implementation of this background service, without logging or error handling, could look like this:
public class QueuedHostedService : BackgroundService
{
private readonly IBackgroundQueue _backgroundQueue;
public QueuedHostedService(IBackgroundQueue backgroundQueue)
{
_backgroundQueue = backgroundQueue;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
var workItem = await _backgroundQueue.DequeueAsync(stoppingToken);
await workItem(stoppingToken);
}
}
}
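For this to run, both the queue and the hosted service have to be registered; with the built-in container that looks roughly like this (if you stay on Autofac, the equivalent registrations apply):

```csharp
// In ConfigureServices (built-in DI container):
services.AddSingleton<IBackgroundQueue, BackgroundQueue>();
// Started together with the application; drains the queue in the background.
services.AddHostedService<QueuedHostedService>();
```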
I'm developing a web API with ASP.NET Core 2.2 and EF Core 2.2.1. The API, besides handling the RESTful requests made by an Angular app, is in charge of processing some XML files that are used as an interface with other software. Files are local to the application server and detected through a FileWatcher.
I've noticed during my tests that when I reprocess an XML test file multiple times, from the second reprocessing onward I get the exception:
System.InvalidOperationException: The instance of entity type
'QualityLot' cannot be tracked because another instance with the key
value '{QualityLotID: ...}' is already being tracked. When
attaching existing entities, ensure that only one entity instance with
a given key value is attached.
when I call the method DbContext.QualityLot.Update(qualityLot);
The "processing file" service and the service it uses are configured in Startup.cs as follows:
services.AddHostedService<InterfaceDownloadService>();
services.AddTransient<IQLDwnldService, QLDwnldService>();
and the DbContext is configured like this:
services.AddDbContext<MyDbContext>(cfg =>
{
cfg.UseSqlServer(_config.GetConnectionString("LIMSConnectionString"));
});
and the class looks like:
public class InterfaceDownloadService : BackgroundServiceBase
{
[...]
public InterfaceDownloadService(IHostingEnvironment env,
ILogger<InterfaceDownloadService> logger,
IServiceProvider serviceProvider)
{
_ServiceProvider = serviceProvider;
}
[...]
private void processFiles()
{
[...]
_ServiceProvider.GetService<IQLDwnldService>().QLDownloadAsync(ev);
}
}
public abstract class BackgroundServiceBase : IHostedService, IDisposable
{
private Task _executingTask;
private readonly CancellationTokenSource _stoppingCts =
new CancellationTokenSource();
protected abstract Task ExecuteAsync(CancellationToken stoppingToken);
public virtual Task StartAsync(CancellationToken cancellationToken)
{
// Store the task we're executing
_executingTask = ExecuteAsync(_stoppingCts.Token);
// If the task is completed then return it,
// this will bubble cancellation and failure to the caller
if (_executingTask.IsCompleted)
{
return _executingTask;
}
// Otherwise it's running
return Task.CompletedTask;
}
public virtual async Task StopAsync(CancellationToken cancellationToken)
{
// Stop called without start
if (_executingTask == null)
{
return;
}
try
{
// Signal cancellation to the executing method
_stoppingCts.Cancel();
}
finally
{
// Wait until the task completes or the stop token triggers
await Task.WhenAny(_executingTask, Task.Delay(Timeout.Infinite,
cancellationToken));
}
}
public virtual void Dispose()
{
_stoppingCts.Cancel();
}
}
Here the critical point, where I have the exception:
public async Task QLDownloadAsync(FileReceivedEvent fileReceivedEvent)
{
Logger.LogInformation($"QLDwnld file {fileReceivedEvent.Event.FullPath} received for Processing");
try
{
QualityLotDownload qualityRoutingDwnld = deserializeObject<QualityLotDownload>(fileReceivedEvent.XsltPath, fileReceivedEvent.Event.FullPath);
Logger.LogDebug($"QLDwnld file {fileReceivedEvent.Event.FullPath} deserialized correctly. Need to determinate whether Insert or Update QualityLot {qualityRoutingDwnld.QualityLots.QualityLot.QualityLotID}");
for (int remainingRetries = fileReceivedEvent.MaxRetries; remainingRetries > 0; remainingRetries--)
{
using (var transaction = await DbContext.Database.BeginTransactionAsync())
{
try
{
var qualityLotDeserialized = qualityRoutingDwnld.QualityLots.QualityLot;
// insert the object into the database
var qualityLot = await DbContext.QualityLot.Where(x => x.QualityLotID == qualityLotDeserialized.QualityLotID).FirstOrDefaultAsync();
if (qualityLot == null) // INSERT QL
{
await InsertQualityLot(qualityLotDeserialized);
}
else // UPDATE QL
{
await UpdateQualityLot(qualityLot, qualityLotDeserialized);
}
[...]
transaction.Commit();
}
catch (Exception ex)
{
Logger.LogError(ex, $"Retry {fileReceivedEvent.MaxRetries - remainingRetries +1}: Exception processing QLDwnld file {fileReceivedEvent.Event.FullPath}.");
transaction.Rollback();
if (remainingRetries == 1)
{
return;
}
}
The method UpdateQualityLot(qualityLot, qualityLotDeserialized) is invoked because the entity already exists in the database:
private async Task UpdateQualityLot(QualityLot qualityLot, QualityLotDownloadQualityLotsQualityLot qualityLotDeserialized)
{
[fields update]
DbContext.QualityLot.Update(qualityLot);
await DbContext.SaveChangesAsync();
}
The call to DbContext.QualityLot.Update(qualityLot); fails.
From what I can see, the instance of QLDwnldService is new for every file being processed; in other words, the following call returns a new object every time (as configured in Startup.cs):
_ServiceProvider.GetService<IQLDwnldService>().QLDownloadAsync(ev);
while the DbContext is reused, and that's probably why the entity is already tracked.
I also tried to set up the no-tracking option in the DbContext's OnConfiguring():
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
base.OnConfiguring(optionsBuilder);
optionsBuilder
.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
}
So my question is: what's wrong here? Is it an architectural problem, or a misconfiguration of EF Core? Thanks in advance for any support.
To be honest, I could not figure out from your code where your DbContext is actually injected.
But from the error message I'd say your context is reused in a place where it should not be. So it's injected once and then used over and over.
You have registered your service as "Scoped" (because that's the default).
You should register it as "Transient" to ensure you will get a new instance on every call to your service provider:
services.AddDbContext<MyDbContext>(cfg =>
{
cfg.UseSqlServer(_config.GetConnectionString("LIMSConnectionString"));
},
ServiceLifetime.Transient);
Brad mentioned that this will have consequences for the rest of your application, and he's right.
The better option might be to leave your DbContext scoped and inject the IServiceScopeFactory into your hosted service. Then create a new scope where you need it:
using(var scope = injectedServiceScopeFactory.CreateScope())
{
var dbContext = scope.ServiceProvider.GetService<MyDbContext>();
// do your processing with context
} // this will end the scope, the scoped dbcontext will be disposed here
Please note that this still does not mean you should access the DbContext in parallel. I don't know why your calls are all async, but if you are actually doing parallel work, make sure you create one DbContext per thread.
I am quite new to ASP.NET and have a website using Entity Framework. Every night, I need to do some work on my Person entities.
So I installed Quartz.NET and tried to use it this way in Global.asax:
<%@ Application Language="C#" %>
<%@ Import Namespace="Quartz" %>
<%@ Import Namespace="Quartz.Impl" %>
<script runat="server">
private IScheduler Scheduler { get; set; }
void Application_Start(object sender, EventArgs e)
{
Scheduler = StdSchedulerFactory.GetDefaultScheduler();
Scheduler.Start();
IJobDetail dailyReset = JobBuilder.Create<ApplicationJobs.DailyReset>()
.WithIdentity("dailyReset", "group1")
.Build();
ITrigger dailyResetTrigger = TriggerBuilder.Create()
.WithIdentity("dailyResetTrigger", "group1")
.StartAt(DateBuilder.DateOf(3, 0, 0))
.WithSimpleSchedule(x => x
.WithIntervalInHours(24)
.RepeatForever())
.Build();
Scheduler.ScheduleJob(dailyReset, dailyResetTrigger);
}
</script>
Then my ApplicationJobs class:
public class ApplicationJobs : System.Web.HttpApplication
{
public class DailyReset : IJob
{
public void Execute(IJobExecutionContext context)
{
using (var uow = new UnitOfWork())
{
foreach (Person person in uow.Context.Persons)
{
//do something
}
}
}
}
}
And finally the UnitOfWork:
public class UnitOfWork : IDisposable
{
private const string _httpContextKey = "_unitOfWork";
private MyEntities _dbContext;
public static UnitOfWork Current
{
get { return (UnitOfWork)HttpContext.Current.Items[_httpContextKey]; }
}
public UnitOfWork()
{
HttpContext.Current.Items[_httpContextKey] = this;
}
public MyEntities Context
{
get
{
if (_dbContext == null)
_dbContext = new MyEntities();
return _dbContext;
}
}
}
But using (var uow = new UnitOfWork()) is not working because of HttpContext.Current.Items[_httpContextKey] = this; in uow's constructor; I read that HttpContext.Current is not available in Application_Start.
I read the related posts, notably this one, but I don't really understand whether I need to create something like the UnitOfWorkScope described here, or whether there is a way to do it as it currently stands.
So is there any clean and safe way to schedule a task which would use my UnitOfWork to update entities?
Thanks a lot.
Your problem comes from the fact that when your job runs, it will be called by the Quartz scheduler, not from an HTTP request (even if the job lives in an ASP.NET website).
So HttpContext.Current will most likely be null.
Keep in mind when using Quartz that you should see it as a process completely parallel to your website, almost like a separate service.
If you need to pass arguments to your job, you can use the job data map:
JobDataMap dataMap = jobContext.JobDetail.JobDataMap;
(see here for more info : http://www.quartz-scheduler.net/documentation/quartz-2.x/tutorial/more-about-jobs.html)
If you need to access your job later, just use the same key and group when creating a JobKey (the ones you used in WithIdentity).
Note that it is recommended for an Entity Framework context to be alive only for the duration of the action that needs it, so you could probably just instantiate a new context at the start of the job and dispose it at the end.
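A short sketch of that last suggestion, using the asker's MyEntities context and keeping it alive only for the duration of the job run (the loop body is illustrative):

```csharp
public class DailyReset : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Create the context for this job run only; no HttpContext involved.
        using (var dbContext = new MyEntities())
        {
            foreach (Person person in dbContext.Persons)
            {
                // nightly work on each person
            }
            dbContext.SaveChanges();
        }
    } // the context is disposed here, when the job ends
}
```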
The issue is that you're not executing the job within a web request. As in: a web request starts, you check for outstanding work, do the work if required, and the request ends. Without a web request you have no context, since the HttpContext lives for the lifetime of the web request and is accessible via the request thread.
Another issue you're going to have is that the app pool, under default settings, may shut down when there's no activity, so you would need a way to keep it alive.
An alternative is to use something like the Windows Task Scheduler to hit the website and kick off the work.