How to persist data in a Quartz.net job Trigger between executions?

There are a couple of SO articles about this, but only one directly addresses the issue, and its solution doesn't make sense for me. I'm using a straight string.
UPDATE:
This:
[PersistJobDataAfterExecution]
public class BackgroundTaskTester : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        Debug.WriteLine("Value Is: " + context.Trigger.JobDataMap["field1"] as string);
        context.Trigger.JobDataMap["field1"] = DateTimeOffset.Now.ToString();
    }
}
Outputs This:
Value Is:
Value Is:
Value Is:
Value Is:
Value Is:
Value Is:
Value Is:
But this:
[PersistJobDataAfterExecution]
public class BackgroundTaskTester : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        Debug.WriteLine("Value Is: " + context.JobDetail.JobDataMap["field1"] as string);
        context.JobDetail.JobDataMap["field1"] = DateTimeOffset.Now.ToString();
    }
}
Outputs This:
Value Is: 10/6/2014 9:26:23 AM -05:00
Value Is: 10/6/2014 9:26:28 AM -05:00
Value Is: 10/6/2014 9:26:33 AM -05:00
However, I want to store things in the Trigger. How do I get the Trigger to persist?
ORIGINAL QUESTION:
I have a class:
[PersistJobDataAfterExecution]
public class BackgroundTaskNotification : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        <see below>
    }
}
The following code doesn't function as expected:
public void Execute(IJobExecutionContext context)
{
    string x = context.MergedJobDataMap.GetString("field1");
    context.Put("field1", "test string");
    string y = context.MergedJobDataMap.GetString("field1");
    // PROBLEM: x != y
}
I've tried context.JobDetail.JobDataMap.Put() and also context.Trigger.JobDataMap.Put(); neither of them updates MergedJobDataMap.
Maybe that's OK though. There is a JobDataMap on the JobDetail object and the Trigger. What I'm trying to do is this:
public void Execute(IJobExecutionContext context)
{
    string x = context.MergedJobDataMap.GetString("field1"); // get last x
    <do something with x>
    context.Put("field1", x); // save updated x
}
I'm trying to do something with x and have x persist between runs.
I'm not sure if it's relevant, but I'll add that when I create the job, I actually put field1 into the trigger's JobDataMap. This is because I have a single job and multiple triggers, and I want the data to be stored at the trigger level.

Original Answer
MergedJobDataMap is a combination of the trigger's JobDataMap and the job's JobDataMap (with trigger entries overriding job entries). Updating it will do nothing, since it doesn't propagate changes back to the original JobDataMap or TriggerDataMap, and it's only the JobDetail's JobDataMap that is re-persisted.
You want to set context.JobDetail.JobDataMap["field1"] in order for it to be persisted.
Update 1 (based on question edit):
If you want to save to the Trigger datamap, you do have to do a little more work.
If you look at the IJobExecutionContext you are given in Execute(), you have an instance of the scheduler that started the job and an instance of the trigger that fired it. Combine that with the info here:
Update Quart.NET Trigger
to update the trigger as part of the job execution. Note that this updates the trigger immediately, rather than after the job runs (as when Quartz manages your job data for you).
This can be adapted to work for the job's data map as well, and have the changes persisted immediately vs. automatically at the end of the job execution.
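For instance, a sketch for Quartz.NET 2.x along those lines (the key name and value are taken from the question; the rebuild-and-reschedule pattern is the part you have to add yourself):

```csharp
[PersistJobDataAfterExecution]
public class BackgroundTaskTester : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Read the value stored on the trigger that fired this execution.
        Debug.WriteLine("Value Is: " + context.Trigger.JobDataMap.GetString("field1"));

        // Quartz only re-persists the JobDetail map for you, so rebuild the
        // trigger with the updated value and re-register it explicitly.
        ITrigger oldTrigger = context.Trigger;
        ITrigger newTrigger = oldTrigger.GetTriggerBuilder()
            .UsingJobData("field1", DateTimeOffset.Now.ToString())
            .Build();
        context.Scheduler.RescheduleJob(oldTrigger.Key, newTrigger);
    }
}
```

Because the reschedule happens inside Execute(), the new value is visible to the next firing of that trigger, and each trigger keeps its own copy of field1.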

Related

How can I get a Hangfire job's end time?

Given a Hangfire job's ID, how can I get the time at which the job finished running?
I've tried the below, but the JobData class doesn't have a property for job end time.
IStorageConnection connection = JobStorage.Current.GetConnection();
JobData jobData = connection.GetJobData(jobId);
I have had a similar requirement before. Here is a method I wrote to get the SucceededAt property using the name of the running method and the current PerformContext:
public static DateTime? GetCompareDate(PerformContext context, string methodName)
{
    return long.TryParse(context.BackgroundJob.Id, out var currentJobId)
        ? JobStorage.Current
            ?.GetMonitoringApi()
            ?.SucceededJobs(0, (int)currentJobId)
            ?.LastOrDefault(x => x.Value?.Job?.Method?.Name == methodName).Value?.SucceededAt
        : null;
}
You could just as easily get DeletedJobs, EnqueuedJobs, FailedJobs, etc.
You can call it from a job method like this:
public async Task SomeJob(PerformContext context, CancellationToken token)
{
    ⋮
    var compareDate = GetCompareDate(context, nameof(SomeJob));
    ⋮
}
You just have to add the PerformContext when adding the job by passing in null:
RecurringJobManager.AddOrUpdate(
    recurringJobId: "1",
    job: Job.FromExpression(() => SomeJob(null, CancellationToken.None)),
    cronExpression: Cron.Hourly(15),
    options: new RecurringJobOptions
    {
        TimeZone = TimeZoneInfo.Local
    });
Note: It will only work if the succeeded job has not expired yet. Successful jobs expire after one day - if you need to keep them longer (to get the SucceededAt property), here is a reference for that: How to configure the retention time of job?
You could try using the Stopwatch class: start it when you first call the task and stop it when the task completes.
Then you could use a logging NuGet package to generate a text log file containing the start time and end time of your job, or save these values directly to your database so that you can review them later.
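A minimal sketch of that suggestion; RunTheJob and the log destination are placeholders for your own job body and logger:

```csharp
// Time the job body explicitly instead of asking Hangfire for the end time.
var startedAt = DateTime.UtcNow;
var stopwatch = System.Diagnostics.Stopwatch.StartNew();

RunTheJob(); // placeholder for your actual job body

stopwatch.Stop();
var finishedAt = DateTime.UtcNow;

// Persist or log both timestamps; here we just print them.
Console.WriteLine("Job started {0:o}, finished {1:o}, took {2}",
    startedAt, finishedAt, stopwatch.Elapsed);
```

This trades Hangfire's bookkeeping for your own, but it works regardless of how long succeeded jobs are retained.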

Filter Change Notifications in Active Directory: Create, Delete, Undelete

I am currently using the Change Notifications in Active Directory Domain Services in .NET as described in this blog. This returns all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The other solution is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see if the event represents the creation of an object? In my tests the value for usnChanged is always usnCreated + 1 for user objects, and both are equal for OUs, but can this be relied on in high-frequency ADs? It is also possible to compare the created and changed timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
    static void Main(string[] args)
    {
        using (LdapConnection connect = CreateConnection("localhost"))
        {
            using (ChangeNotifier notifier = new ChangeNotifier(connect))
            {
                //register some objects for notifications (limit 5)
                notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
                notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);
                notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);
                Console.WriteLine("Waiting for changes...");
                Console.WriteLine();
                Console.ReadLine();
            }
        }
    }
    static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
    {
        Console.WriteLine(e.Result.DistinguishedName);
        foreach (string attrib in e.Result.Attributes.AttributeNames)
        {
            foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
            {
                Console.WriteLine("\t{0}: {1}", attrib, item);
            }
        }
        Console.WriteLine();
        Console.WriteLine("====================");
        Console.WriteLine();
    }
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();
    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }
    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn, //root the search here
            "(objectClass=*)", //very inclusive
            scope, //any scope works
            null //we are interested in all attributes
        );
        //register our search
        request.Controls.Add(new DirectoryNotificationControl());
        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request
        );
        //store the hash for disposal later
        _results.Add(result);
    }
    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }
    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }
    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;
    #region IDisposable Members
    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }
    #endregion
}
public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }
    public SearchResultEntry Result { get; set; }
}
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We found that change notification generated enough events when monitoring a large directory that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall performance/latency were not the primary reason we switched.
Yes, the USN relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get usnCreated == usnChanged for a new user. We didn't test that extensively, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case, a missed object create could block email to a new user for an indeterminate time. That obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have taken more work and been hard to test. With DirSync it's more natural, and there's a fast-path resume mechanism that usually avoids a full resync. For safety, I think we triggered a full re-synchronization every day.
DirSync is not as real-time as change notification, but it's possible to get roughly 30-second average latency by issuing the DirSync query once a minute.
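A minimal DirSync polling sketch using System.DirectoryServices.Protocols; the base DN is the one from the blog's sample, and how you persist the cookie between runs is up to you (DirSync also requires the search base to be a naming-context root and appropriate rights):

```csharp
using System;
using System.DirectoryServices.Protocols;

class DirSyncPoller
{
    // Persist this cookie across polls (and across restarts) so you can
    // always resume from where the last successful poll ended.
    private byte[] _cookie;

    public void Poll(LdapConnection connection)
    {
        var request = new SearchRequest(
            "dc=dunnry,dc=net",      // must be the root of a naming context
            "(objectClass=*)",
            SearchScope.Subtree,
            null);
        request.Controls.Add(new DirSyncRequestControl(_cookie));

        var response = (SearchResponse)connection.SendRequest(request);
        foreach (SearchResultEntry entry in response.Entries)
        {
            // Only changed attributes come back; deleted objects are
            // returned as tombstones with isDeleted = TRUE.
            Console.WriteLine(entry.DistinguishedName);
        }

        // Save the new cookie so the next poll picks up from here.
        foreach (DirectoryControl control in response.Controls)
        {
            if (control is DirSyncResponseControl dirSync)
                _cookie = dirSync.Cookie;
        }
    }
}
```

Calling Poll on a timer (e.g. once a minute) gives the ~30-second average latency mentioned above, and replaying from a stored cookie after a reboot is what makes catch-up robust.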

How to seed an observable from a database

I'm trying to expose an observable sequence that gives observers all existing records in a database table plus any future items. For the sake of argument, let's say it's log entries. Therefore, I'd have something like this:
public class LogService
{
    private readonly Subject<LogEntry> entries;
    public LogService()
    {
        this.entries = new Subject<LogEntry>();
        this.entries
            .Buffer(...)
            .Subscribe(async x => await WriteLogEntriesToDatabaseAsync(x));
    }
    public IObservable<LogEntry> Entries
    {
        get { return this.entries; }
    }
    public IObservable<LogEntry> AllLogEntries
    {
        get
        {
            // how the heck?
        }
    }
    public void Log(string message)
    {
        this.entries.OnNext(new LogEntry(message));
    }
    private async Task<IEnumerable<LogEntry>> GetLogEntriesAsync()
    {
        // reads existing entries from DB table and returns them
    }
    private async Task WriteLogEntriesToDatabaseAsync(IList<LogEntry> entries)
    {
        // writes entries to the database
    }
}
My initial thought for the implementation of AllLogEntries was something like this:
return Observable.Create<LogEntry>(
    async observer =>
    {
        var existingEntries = await this.GetLogEntriesAsync();
        foreach (var existingEntry in existingEntries)
        {
            observer.OnNext(existingEntry);
        }
        return this.entries.Subscribe(observer);
    });
But the problem with this is that there could be log entries that have been buffered and not yet written to the database. Those entries will be missed, because they are not in the database and have already passed through the entries observable.
My next thought was to separate the buffered entries from the non-buffered and use the buffered when implementing AllLogEntries:
return Observable.Create<LogEntry>(
    async observer =>
    {
        var existingEntries = await this.GetLogEntriesAsync();
        foreach (var existingEntry in existingEntries)
        {
            observer.OnNext(existingEntry);
        }
        return this.bufferedEntries
            .SelectMany(x => x)
            .Subscribe(observer);
    });
There are two problems with this:
It means clients of AllLogEntries also have to wait for the buffer timespan to pass before they receive their log entries. I want them to see log entries instantaneously.
There is still a race condition in that log entries could be written to the database between the point at which I finish reading the existing ones and the point at which I return the future entries.
So my question is: how would I actually go about achieving my requirements here with no possibility of race conditions, and avoiding any major performance penalties?
To do this via the client code, you will probably have to implement a solution using polling and then look for differences between calls. Combining
Observable.Interval(): http://rxwiki.wikidot.com/101samples#toc28 , and
Observable.DistinctUntilChanged()
will probably give you a sufficient solution.
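A sketch of that polling approach, reusing the question's GetLogEntriesAsync; the LogEntry.Id key and the five-second interval are assumptions:

```csharp
// Poll the table periodically and de-duplicate by key, so each row is
// emitted to subscribers exactly once.
IObservable<LogEntry> allEntries = Observable
    .Interval(TimeSpan.FromSeconds(5))
    .SelectMany(_ => Observable.FromAsync(() => GetLogEntriesAsync()))
    .SelectMany(batch => batch)               // flatten each snapshot
    .Distinct(entry => entry.Id);             // assumes a unique Id column
```

Note that Distinct keeps every key it has seen in memory, so for an ever-growing log table you would want to track only the last-seen Id (or timestamp) and query for rows after it instead.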
Alternatively, I'd suggest you try to find a solution where the clients are notified when the DB/table is updated. In a web application, you could use something like SignalR to do this.
For example: http://techbrij.com/database-change-notifications-asp-net-signalr-sqldependency
If it's not a web application, a similar update mechanism via sockets may work.
See these links (these came from the accepted answer of SignalR polling database for updates):
http://xsockets.net/api/net-c#snippet61
https://github.com/codeplanner/XSocketsPollingLegacyDB

How do I create a hot RX observable from multiple DBContexts

DbContexts are short-lived, created and destroyed with every request. I have a number of tasks that I'd like to perform prior to and post save, and I'd like to handle these with some sort of eventing model. I'm wondering if Rx is the right route.
Is there some way of creating a singleton "hub", then having my DbContext raise observables before save (the SavingChanges event) and post save (no applicable event) and "push" them into the long-lived hub?
In effect I'd like to do this in my "hub" singleton
public IObservable<EventPattern<EventArgs>> Saves = new Subject<EventPattern<EventArgs>>();
public void AttachContext(DbContext context)
{
    Saves = Observable.FromEventPattern<EventArgs>(((IObjectContextAdapter)context).ObjectContext, "SavingChanges");
}
but in such a way that AttachContext simply feeds its generated observable into the existing Saves observable, rather than replacing it (and all of its subscriptions)?
Yes. Use a nested observable + merge:
private readonly Subject<IObservable<EventPattern<EventArgs>>> _contexts = new Subject<IObservable<EventPattern<EventArgs>>>();
private readonly IObservable<EventPattern<EventArgs>> _saves;
// The merge must be set up in the constructor, since a field initializer
// cannot reference another instance field ("Hub" stands in for your class name).
public Hub() { _saves = _contexts.Merge(); }
public IObservable<EventPattern<EventArgs>> Saves { get { return _saves; } }
public void AttachContext(DbContext context)
{
    _contexts.OnNext(Observable.FromEventPattern<EventArgs>(((IObjectContextAdapter)context).ObjectContext, "SavingChanges"));
}
The only problem with this is that the list of contexts being observed will grow unbounded since the Observable.FromEventPattern never completes. So this is effectively a memory leak as coded.
If you know that the db context will be used for a single save, then you could add a .FirstAsync() to the end of the call to Observable.FromEventPattern. This will cause your subject to stop watching the context once it has seen an event from it.
This still suffers from the problem that maybe a context is attached but its Save is never performed (due to logic, or an error or whatever).
The only way I know to resolve the problem is to change AttachContext to return an IDisposable that the caller must use when they want to detach the context:
public IDisposable AttachContext(DbContext context)
{
    var detachSignal = new AsyncSubject<Unit>();
    var disposable = Disposable.Create(() =>
    {
        detachSignal.OnNext(Unit.Default);
        detachSignal.OnCompleted();
    });
    var events = Observable.FromEventPattern<EventArgs>(((IObjectContextAdapter)context).ObjectContext, "SavingChanges");
    _contexts.OnNext(events.TakeUntil(detachSignal));
    return disposable;
}
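A possible usage sketch, where the hub variable and MyDbContext are hypothetical names for your singleton and your EF context:

```csharp
// Attach the request-scoped context for its lifetime; disposing the
// returned token fires the detach signal, so the hub stops watching it
// even if SaveChanges is never called.
using (var db = new MyDbContext())
using (hub.AttachContext(db))
{
    // Observers of hub.Saves now see this context's SavingChanges events.
    db.SaveChanges();
}
```

Tying detachment to request disposal is what closes the leak: every attached context is guaranteed to be removed from the merged stream when its scope ends.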

How to schedule custom task through c#

I am using Task Scheduler for scheduling tasks in my C# application, and I think I have a basic understanding of the library.
But now I'm stuck at a place where I want to create a custom action that will execute on the set schedule. Like the built-in actions, i.e. EmailAction (which sends mail on the set schedule) and ShowMessageAction (which shows an alert message on the set schedule), I want to create an action that runs my C# code, and that code will save some data to my database.
What I have tried so far: I created a class NewAction that inherits Action, like:
public class NewAction : Microsoft.Win32.TaskScheduler.Action
{
    public override string Id
    {
        get
        {
            return base.Id;
        }
        set
        {
            base.Id = value;
        }
    }
    public NewAction()
    {
    }
}
And here is my task scheduler code :
..
...
// Get the service on the local machine
using (TaskService ts = new TaskService())
{
    // Create a new task definition and assign properties
    TaskDefinition td = ts.NewTask();
    td.RegistrationInfo.Description = "Does something";
    // Create a trigger that will fire the task at this time every other day
    TimeTrigger tt = new TimeTrigger();
    tt.StartBoundary = DateTime.Today + TimeSpan.FromHours(19) + TimeSpan.FromMinutes(1);
    tt.EndBoundary = DateTime.Today + TimeSpan.FromHours(19) + TimeSpan.FromMinutes(3);
    tt.Repetition.Interval = TimeSpan.FromMinutes(1);
    td.Triggers.Add(tt);
    // Add the custom action
    td.Actions.Add(new NewAction()); <==========================
    // Register the task in the root folder
    ts.RootFolder.RegisterTaskDefinition(@"Test", td);
    // Remove the task we just created
    //ts.RootFolder.DeleteTask("Test");
}
...
....
On the line pointed to by the arrow, I get the exception:
Value does not fall within the expected range
I am not sure whether what I am trying to achieve is even possible. If it is, please guide me in the right direction.
Based on my understanding of your question: I implemented the same thing, but I used the "Quartz" scheduler instead of "Task Scheduler". It is very easy to implement; maybe you can also try it.
For reference:
http://quartznet.sourceforge.net/tutorial/
Please correct me if I am wrong.
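A minimal Quartz.NET sketch of that suggestion; SaveDataJob and its one-minute schedule are assumptions standing in for "run my C# code on the set schedule":

```csharp
// A job whose body is plain C# code - e.g. writing a row to your database.
public class SaveDataJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // save some data to the database here
    }
}

// Schedule it to run every minute.
IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
scheduler.Start();

IJobDetail job = JobBuilder.Create<SaveDataJob>()
    .WithIdentity("saveData")
    .Build();
ITrigger trigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(s => s.WithIntervalInMinutes(1).RepeatForever())
    .Build();

scheduler.ScheduleJob(job, trigger);
```

Unlike a Windows Task Scheduler action, the job runs in your own process, so it can call any code in your application directly.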
