I'm using Hangfire version "1.6.8".
var datetime = DateTime.Now;
var cron = Cron.Monthly(datetime.Day,datetime.Hour);
RecurringJob.AddOrUpdate<IService>(recurringId, x => x.CreateRecurring(id), cron);
How can I end this Recurring Job after 'n' times executing it?
The simplest way to do this is probably to pass the maximum number of times into the method that is called, and to stop it from doing anything once that number has been reached:
public class MyService : IService
{
    // Note: an instance field only survives between runs if the same service
    // instance is reused; otherwise make it static or persist it somewhere.
    public int runCount = 0;

    public void CreateRecurring(int id, int? maxTimes = null)
    {
        if (maxTimes.HasValue && runCount >= maxTimes.Value)
        {
            // Has run enough times now, don't do it again
            return;
        }

        runCount++;

        // do something...
    }
}
// Run a max of 5 times
RecurringJob.AddOrUpdate<IService>(recurringId, x => x.CreateRecurring(id, 5), cron);
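If you want the schedule itself to stop firing after the n-th run, rather than keep invoking a no-op, a rough sketch is to remove the recurring job from inside the method once the limit is reached. This assumes the recurring job id is passed into the job as an extra parameter (and an int id, as above); RecurringJob.RemoveIfExists is the Hangfire call that deregisters a recurring job.

public void CreateRecurring(int id, string recurringId, int? maxTimes = null)
{
    // do something...

    runCount++;
    if (maxTimes.HasValue && runCount >= maxTimes.Value)
    {
        // Deregister the recurring job so Hangfire stops scheduling it at all.
        RecurringJob.RemoveIfExists(recurringId);
    }
}

// Registration, passing the same id that was used in AddOrUpdate:
RecurringJob.AddOrUpdate<IService>(recurringId, x => x.CreateRecurring(id, recurringId, 5), cron);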
I've been working on a hobby project being developed in C# + Xamarin Forms + Prism + EF Core + Sqlite, debugging in UWP app.
I've written the following code to store tick data received from broker to Sqlite.
First, the OnTick call back that receives the ticks (approx. 1 tick per sec per instrument):
private void OnTick(Tick tickData)
{
    foreach (var instrument in IntradayInstruments.Where(i => i.InstrumentToken == tickData.InstrumentToken))
    {
        instrument.UpdateIntradayCandle(tickData);
    }
}
And the UpdateIntradayCandle method is:
public void UpdateIntradayCandle(Tick tick)
{
    if (LastIntradayCandle != null)
    {
        if (LastIntradayCandle.Open == 0m)
        {
            LastIntradayCandle.Open = tick.LastPrice;
        }

        if (LastIntradayCandle.High < tick.LastPrice)
        {
            LastIntradayCandle.High = tick.LastPrice;
        }

        if (LastIntradayCandle.Low == 0m)
        {
            LastIntradayCandle.Low = tick.LastPrice;
        }
        else if (LastIntradayCandle.Low > tick.LastPrice)
        {
            LastIntradayCandle.Low = tick.LastPrice;
        }

        LastIntradayCandle.Close = tick.LastPrice;
    }
}
The LastIntradayCandle is a property:
object _sync = new object();

private volatile IntradayCandle _lastIntradayCandle;

public IntradayCandle LastIntradayCandle
{
    get
    {
        lock (_sync)
        {
            return _lastIntradayCandle;
        }
    }
    set
    {
        lock (_sync)
        {
            _lastIntradayCandle = value;
        }
    }
}
Now, the LastIntradayCandle is replaced periodically, say every 5 minutes, with a new candle put in place for updating. This happens on a different thread, coming from a System.Threading.Timer which is scheduled to run every 5 minutes.
public void AddNewIntradayCandle()
{
    if (LastIntradayCandle != null)
    {
        LastIntradayCandle.IsClosed = true;
    }

    var newIntradayCandle = new IntradayCandle { Open = 0m, High = 0m, Low = 0m, Close = 0m };

    LastIntradayCandle = newIntradayCandle;
    IntradayCandles.Add(newIntradayCandle);
}
Now, the problem is, I'm getting 0s in Open, High or Low, but never in Close, with Open having the most zeroes. This happens very randomly.
My thinking is that if any of the Open, High, Low or Close values gets updated, the tick must have had a value to grab, yet somehow one or more of the assignments in the UpdateIntradayCandle method are not running. Having zeroes is a strict no-go for the purpose of the app.
I'm neither formally trained as a programmer nor an expert, but a self-learning hobbyist and definitely never attempted at multi-threading before.
So, I request you to please point me what I am doing wrong, or better still, what should I be doing to make it work.
Multithreading and EF Core are not compatible things. An EF Core context is not thread safe; you have to create a new context for each thread. Also, making your object thread safe is a waste of time here.
So, schematically you have to do the following and you can remove locks from your object.
private void OnTick(Tick tickData)
{
    using var ctx = new MyDbContext(...);

    foreach (var instrument in ctx.IntradayInstruments.Where(i => i.InstrumentToken == tickData.InstrumentToken))
    {
        instrument.UpdateIntradayCandle(tickData);
    }

    ctx.SaveChanges();
}
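For completeness, a rough sketch of what such a context could look like for the Sqlite setup described in the question; the connection string and the singular entity type names are assumptions, not code from the question:

using Microsoft.EntityFrameworkCore;

public class MyDbContext : DbContext
{
    public DbSet<IntradayInstrument> IntradayInstruments { get; set; }
    public DbSet<IntradayCandle> IntradayCandles { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Hypothetical database file name; point this at the app's actual Sqlite file.
        optionsBuilder.UseSqlite("Data Source=intraday.db");
    }
}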
I have a scenario where I have a worker method (DoWork()) which is called by the web service continuously.
This worker method writes data to a file using WriteToFile(), and I need to write to the file under 2 conditions:
a) when the number of records reaches 500, OR
b) 2 minutes have passed since the previous file was written.
my sample code is as follows:
public override void DoWork()
{
    //-- This worker method is called continuously by the webservice
    List<string> list1 = new List<string>();
    List<string> list2 = new List<string>();
    int count = 0;

    //--- Some code that writes the RawData
    list1.Add(rawData);

    if (list1.Count <= 500)
    {
        list2 = list1;
        count = WriteToFile(list2);
    }
}

public static int WriteToFile(List<string> list)
{
    //-- The writing of the file should happen when 500 records are reached OR 2 mins have passed.
    // ---- Logic for writing the list to a file using the StreamWriter is working fine
}
I need a logic check so that the file writing only happens if
500 records have been reached in the list, OR
2 mins have passed since the previous file was generated.
Thanks
:)
To make it a little more readable, I'd use a little helper, here...
Time check:
class TimeCheck
{
    private TimeSpan timeoutDuration;
    private DateTime nextTimeout;

    // Default = 2 minutes.
    public TimeCheck() : this(TimeSpan.FromMinutes(2))
    {}

    public TimeCheck(TimeSpan timeout)
    {
        this.timeoutDuration = timeout;
        this.nextTimeout = DateTime.Now + timeoutDuration;
    }

    public bool IsTimeoutReached => DateTime.Now >= nextTimeout;

    public void Reset()
    {
        nextTimeout = DateTime.Now + timeoutDuration;
    }
}
Usage:
// on class level
const int MAXITEMS = 500;
private Lazy<TimeCheck> timecheck = new Lazy<TimeCheck>(() => new TimeCheck());
private List<string> list1 = new List<string>();
private readonly object Locker = new object();

public override void DoWork()
{
    lock (Locker)
    {
        // ... add items to list1

        // Write if a) list1 has 500+ items or b) 2 minutes since last write have passed
        if (list1.Count >= MAXITEMS || timecheck.Value.IsTimeoutReached)
        {
            WriteToFile(list1);
            list1.Clear(); // assuming _all_ items are written.
            timecheck.Value.Reset();
        }
    }
}
Attention:
If the code is called by multiple threads, you need to make sure it's thread safe. I used lock, which will create a bottleneck. You may want to figure out a more sophisticated approach, but this answer concentrates on the condition requirements.
The above snippet assumes list1 is not accessed anywhere else.
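For reference, a bare-bones WriteToFile along the lines the question describes; the question says the real StreamWriter logic already works, so this is only an illustrative placeholder with a hypothetical file path:

public static int WriteToFile(List<string> list)
{
    // "output.txt" is a placeholder path; true = append to the existing file.
    using (var writer = new StreamWriter("output.txt", true))
    {
        foreach (var line in list)
        {
            writer.WriteLine(line);
        }
    }
    return list.Count;
}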
I have added a DisableConcurrentExecution(1) attribute on the job, but all that does is delay the execution of the second instance of a job until after the first one is done. I want to be able to detect when a concurrent job has been run, and then cancel it altogether.
I figured that if DisableConcurrentExecution(1) prevents two instances of the same recurring job from running at the same time, it will put the second job on "retry", thus changing its State. So I added an additional custom attribute on the job, which detects the failed state, like so:
public class StopConcurrentTask : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        var failedState = context.CandidateState as FailedState;
        if (failedState != null && failedState.Exception != null)
        {
            if (!string.IsNullOrEmpty(failedState.Exception.Message) && failedState.Exception.Message.Contains("Timeout expired. The timeout elapsed prior to obtaining a distributed lock on"))
            {
            }
        }
    }
}
This allows me to detect whether a job failed due to being run concurrently with another instance of same job. The problem is, I can't find a way to Cancel this specific failed job and remove it from being re-run. As it is now, the job will be put on retry schedule and Hangfire will attempt to run it a number of times.
I could of course put an attribute on the Job, ensuring it does not Retry at all. However, this is not a valid solution, because I want jobs to be Retried, except if they fail due to running concurrently.
You can prevent the retry from happening by putting validation in the OnPerformed method of the IServerFilter interface.
Implementation :
public class StopConcurrentTask : JobFilterAttribute, IElectStateFilter, IServerFilter
{
    // All failures after retry will be caught here; I don't know if you still need this,
    // but it is up to you
    public void OnStateElection(ElectStateContext context)
    {
        var failedState = context.CandidateState as FailedState;
        if (failedState != null && failedState.Exception != null)
        {
            if (!string.IsNullOrEmpty(failedState.Exception.Message) && failedState.Exception.Message.Contains("Timeout expired. The timeout elapsed prior to obtaining a distributed lock on"))
            {
            }
        }
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        // Do your exception handling or validation here
        if (filterContext.Exception == null) return;

        using (var connection = JobStorage.Current.GetConnection())
        {
            var storageConnection = connection as JobStorageConnection;
            if (storageConnection == null)
                return;

            var jobId = filterContext.BackgroundJob.Id;
            // var job = storageConnection.GetJobData(jobId); -- if you want the job details

            var failedState = new FailedState(filterContext.Exception)
            {
                Reason = "Your exception message or filterContext.Exception.Message"
            };

            using (var transaction = connection.CreateWriteTransaction())
            {
                transaction.RemoveFromSet("retries", jobId);  // remove from retry state
                transaction.RemoveFromSet("schedule", jobId); // remove from schedule state
                transaction.SetJobState(jobId, failedState);  // update status with the failed state
                transaction.Commit();
            }
        }
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        // Do nothing
    }
}
I hope this will help you.
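For the filter to actually run, it also needs to be registered: either decorate the job method with [StopConcurrentTask], or add it globally at startup. A minimal sketch of the global registration:

// In your startup/bootstrap code, after configuring Hangfire:
GlobalJobFilters.Filters.Add(new StopConcurrentTask());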
I actually ended up using a solution based on Jr Tabuloc's answer - it will delete a job if it was last executed less than 15 seconds ago - I noticed that the time between server wake-up and job execution varies. Usually it is in the milliseconds, but since my jobs are executed once a day, I figured 15 seconds won't hurt.
public class StopWakeUpExecution : JobFilterAttribute, IServerFilter
{
    public void OnPerformed(PerformedContext filterContext)
    {
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        using (var connection = JobStorage.Current.GetConnection())
        {
            var recurring = connection.GetRecurringJobs().FirstOrDefault(p => p.Job.ToString() == filterContext.BackgroundJob.Job.ToString());
            if (recurring == null || recurring.LastExecution == null)
                return;

            TimeSpan difference = DateTime.UtcNow.Subtract(recurring.LastExecution.Value);
            if (difference.TotalSeconds < 15)
            {
                // Execution was due in the past. We don't want to automatically execute jobs after a server crash though.
                var storageConnection = connection as JobStorageConnection;
                if (storageConnection == null)
                    return;

                var jobId = filterContext.BackgroundJob.Id;
                var deletedState = new DeletedState()
                {
                    Reason = "Task was due in the past. Please execute manually if required."
                };

                using (var transaction = connection.CreateWriteTransaction())
                {
                    transaction.RemoveFromSet("retries", jobId);  // remove from retry state
                    transaction.RemoveFromSet("schedule", jobId); // remove from schedule state
                    transaction.SetJobState(jobId, deletedState); // update status with the deleted state
                    transaction.Commit();
                }
            }
        }
    }
}
I'm trying to control access to an object so that it may only be accessed a certain number of times over a given timespan. In one unit test that I have, access is limited to once per second. So 5 accesses should take just over 4 seconds. However, the test is failing on our TFS server, taking only 2 seconds. A stripped down version of my code to do this is here:
public class RateLimitedSessionStrippedDown<T>
{
    private readonly int _rateLimit;
    private readonly TimeSpan _rateLimitSpan;
    private readonly T _instance;
    private readonly object _lock;
    private DateTime _lastReset;
    private DateTime _lastUse;
    private int _retrievalsSinceLastReset;

    public RateLimitedSessionStrippedDown(int limitAmount, TimeSpan limitSpan, T instance)
    {
        _rateLimit = limitAmount;
        _rateLimitSpan = limitSpan;
        _lastUse = DateTime.UtcNow;
        _instance = instance;
        _lock = new object();
    }

    private void IncreaseRetrievalCount()
    {
        _retrievalsSinceLastReset++;
    }

    public T GetRateLimitedSession()
    {
        lock (_lock)
        {
            _lastUse = DateTime.UtcNow;
            Block();
            IncreaseRetrievalCount();
            return _instance;
        }
    }

    private void Block()
    {
        while (_retrievalsSinceLastReset >= _rateLimit &&
               _lastReset.Add(_rateLimitSpan) > DateTime.UtcNow)
        {
            Thread.Sleep(TimeSpan.FromMilliseconds(10));
        }

        if (DateTime.UtcNow > _lastReset.Add(_rateLimitSpan))
        {
            _lastReset = DateTime.UtcNow;
            _retrievalsSinceLastReset = 0;
        }
    }
}
While running on my computer, in both Debug and Release, it works fine. However, I have a unit test that fails once I commit to our TFS build server. This is the test:
[Test]
public void TestRateLimitOnePerSecond_AssertTakesAtLeastNMinusOneSeconds()
{
    var rateLimiter = new RateLimitedSessionStrippedDown<object>(1, TimeSpan.FromSeconds(1), new object());

    DateTime start = DateTime.UtcNow;
    for (int i = 0; i < 5; i++)
    {
        rateLimiter.GetRateLimitedSession();
    }
    DateTime end = DateTime.UtcNow;

    Assert.GreaterOrEqual(end.Subtract(start), TimeSpan.FromSeconds(4));
}
I wonder if the loop in the test is being optimised in a way that it runs each iteration of the loop on a separate thread (or something similar), which means that the test completes quicker than it should because Thread.Sleep only blocks the thread that it is being called on?
Your problem is inside of the Block method, and now that I look at the comments, it appears that Henk Holterman has already brought this up.
It will only fail when _lastReset.Add(_rateLimitSpan) and DateTime.UtcNow are equal. This doesn't happen very often, hence the reason why it fails intermittently. A fix would be to change > to >= on this line:
if (DateTime.UtcNow > _lastReset.Add(_rateLimitSpan))
It's not intuitive why, unless you understand that DateTime.UtcNow doesn't necessarily return a new value on each call.
Even though DateTime.UtcNow is precise to 100 nanoseconds, its precision is not the same as its accuracy. It relies on the machine's timer interval, which ranges from about 1 ms to 15.6 ms, and is most often left at the default of roughly 15.6 ms unless you're doing something with multimedia.
You can see this in action with this dotnetfiddle. Unless you have a program open that is setting the timer to a different value, like 1ms, you'll notice that the difference between the ticks is about 150000 ticks, about 15ms, or the normal system timer interval.
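A quick way to reproduce this locally (a minimal sketch along the lines of that fiddle) is to watch how DateTime.UtcNow.Ticks advances:

long previous = DateTime.UtcNow.Ticks;
int observed = 0;
while (observed < 5)
{
    long current = DateTime.UtcNow.Ticks;
    if (current != previous)
    {
        // On a machine with the default ~15 ms timer this jump is around
        // 150,000 ticks rather than a single tick.
        Console.WriteLine(current - previous);
        previous = current;
        observed++;
    }
}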
We can also validate this by lifting out the calls to DateTime.UtcNow into temporary variables and comparing them at the end of the method:
private void Block()
{
    var first = DateTime.UtcNow;
    while (_retrievalsSinceLastReset >= _rateLimit &&
           _lastReset.Add(_rateLimitSpan) > first)
    {
        Thread.Sleep(TimeSpan.FromMilliseconds(10));
        first = DateTime.UtcNow;
    }

    var second = DateTime.UtcNow;
    if (second > _lastReset.Add(_rateLimitSpan))
    {
        _lastReset = DateTime.UtcNow;
        _retrievalsSinceLastReset = 0;
    }

    if (first == second)
    {
        Console.WriteLine("DateTime.UtcNow returned same value");
    }
}
On my machine, all five calls to Block printed out DateTime.UtcNow as being equal.
It has to be trivial, but I just cannot get my head around it.
I have to limit the number of tasks (let's say connections, emails sent, or clicks on a button) per amount of time. So e.g. I can send 1000 emails per hour.
How can I do that in C#? I don't know and don't care how much time each operation will take. I just want to make sure that, over the last hour, only 1000 will be executed.
class EventLimiter
{
    Queue<DateTime> requestTimes;
    int maxRequests;
    TimeSpan timeSpan;

    public EventLimiter(int maxRequests, TimeSpan timeSpan)
    {
        this.maxRequests = maxRequests;
        this.timeSpan = timeSpan;
        requestTimes = new Queue<DateTime>(maxRequests);
    }

    private void SynchronizeQueue()
    {
        while ((requestTimes.Count > 0) && (requestTimes.Peek().Add(timeSpan) < DateTime.UtcNow))
            requestTimes.Dequeue();
    }

    public bool CanRequestNow()
    {
        SynchronizeQueue();
        return requestTimes.Count < maxRequests;
    }

    public void EnqueueRequest()
    {
        while (!CanRequestNow())
            Thread.Sleep(requestTimes.Peek().Add(timeSpan).Subtract(DateTime.UtcNow));
        // Was: System.Threading.Thread.Sleep(1000);

        requestTimes.Enqueue(DateTime.UtcNow);
    }
}
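Usage for the 1000-emails-per-hour case would then look roughly like this (emailsToSend and SendEmail are placeholders for the real collection and send operation):

var limiter = new EventLimiter(1000, TimeSpan.FromHours(1));

foreach (var email in emailsToSend)
{
    limiter.EnqueueRequest(); // blocks until a slot in the rolling hour is free
    SendEmail(email);         // placeholder for the actual send
}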
Assuming a rolling hour window:
Maintain a list of when actions were done.
Each time you want to do your action, remove all in the list not within the hour.
If there are fewer than 1000 then do the action and add a record to your list.
Assuming hourly:
Create a proxy method and a variable that is incremented for every action, and reduced to zero on the hour.
Do your action if the counter is < 1000.
The above solution looked fine. Here is my trimmed down version:
public class EmailRateHelper
{
    private int _requestsPerInterval;
    private Queue<DateTime> _history;
    private TimeSpan _interval;

    public EmailRateHelper()
        : this(30, new TimeSpan(0, 1, 0)) { }

    public EmailRateHelper(int requestsPerInterval, TimeSpan interval)
    {
        _requestsPerInterval = requestsPerInterval;
        _history = new Queue<DateTime>();
        _interval = interval;
    }

    public void SleepAsNeeded()
    {
        DateTime now = DateTime.Now;
        _history.Enqueue(now);

        if (_history.Count >= _requestsPerInterval)
        {
            var last = _history.Dequeue();
            TimeSpan difference = now - last;

            if (difference < _interval)
            {
                System.Threading.Thread.Sleep(_interval - difference);
            }
        }
    }
}
You can use Rx extensions (How to use the new BufferWithTimeOrCount in Rx that returns IObservable<IObservable<T>> instead of IObservable<IList<T>>), but I would implement the buffering manually by adding an appropriate proxy object.
You may also consider storing {action, time, user} information in a database and getting the number of actions in the last hour from the DB (or similar persisted storage) if you need to handle application pool restarts / crashes. Otherwise a clever user may circumvent your in-memory protection by overloading your server.
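For the Rx route, a rough sketch using the System.Reactive Buffer overload that closes a buffer after either a time window or a count, whichever comes first (the observable source and the send call are placeholders):

using System;
using System.Reactive.Linq;

static IDisposable BatchEmails(IObservable<string> emailRequests)
{
    // Close each buffer after one hour or after 1000 items, whichever happens first.
    return emailRequests
        .Buffer(TimeSpan.FromHours(1), 1000)
        .Subscribe(batch =>
        {
            foreach (var request in batch)
            {
                Console.WriteLine($"sending {request}"); // placeholder for the real send
            }
        });
}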
You can create a persistent counter for every user. Every time you receive a request (for sending an email) you need to check the value of the counter and the date of the counter creation.
If the count is greater than the limit you refuse the request
If the date is older than an hour you reset the counter and set the new creation date
If the date is correct and the count is under the limit you increase the counter
Only in the last two cases the request is executed.
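A minimal in-memory sketch of that counter logic (a real implementation would persist the counter per user, as described above):

class HourlyCounter
{
    public int Count;
    public DateTime CreatedUtc;
}

class RequestGate
{
    private const int Limit = 1000;
    private readonly Dictionary<string, HourlyCounter> _counters = new Dictionary<string, HourlyCounter>();

    public bool TryAccept(string user)
    {
        if (!_counters.TryGetValue(user, out var counter) ||
            DateTime.UtcNow - counter.CreatedUtc > TimeSpan.FromHours(1))
        {
            // No counter yet, or it is older than an hour: reset it with a new creation date.
            counter = new HourlyCounter { Count = 0, CreatedUtc = DateTime.UtcNow };
            _counters[user] = counter;
        }

        if (counter.Count >= Limit)
            return false; // over the limit: refuse the request

        counter.Count++;  // under the limit: accept and count it
        return true;
    }
}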