When I do this:
testScheduler.Schedule("Hello world", (scheduler, state) => { Console.WriteLine(state); return Disposable.Empty; });
testScheduler.AdvanceTo(testScheduler.Clock);
I hit this code in VirtualTimeSchedulerBase:
public void AdvanceTo(TAbsolute time)
{
    int num = this.Comparer.Compare(time, this.Clock);
    if (num < 0)
        throw new ArgumentOutOfRangeException("time");
    if (num == 0)
        return;
    // ...
num == 0 is true, so I exit the method early and my scheduled action never runs.
I can call testScheduler.Start() and my action will execute, but then TestScheduler will carry on executing everything in its queue, whereas I want it to stop executing actions at the current time.
I can't see any other methods on TestScheduler that will get me the behaviour I want.
Is this a bug, or is it the correct behaviour but I'm missing something?
Edit:
I misunderstood. TestScheduler doesn't execute actions until after the date at which they are scheduled.
Scheduling an action immediately schedules it for the current value of testScheduler.Now. So it won't be executed until Now + 1.
var testScheduler = new TestScheduler();
var due = new DateTime();
testScheduler.Schedule("Hello world", due, (scheduler, s) =>
{
Console.WriteLine(s);
return Disposable.Empty;
});
testScheduler.AdvanceTo(due.Ticks);
// Nothing has happened
testScheduler.AdvanceTo(due.Ticks+1);
// -> "Hello world"
This still isn't the behaviour I would like, but there you go.
You might want to consider how you are using the TestScheduler:
It will in general execute at the due time. For example, this code will write to the Console:
var scheduler = new TestScheduler();
scheduler.Schedule(
TimeSpan.FromTicks(100),
() => Console.WriteLine("Hello"));
scheduler.AdvanceTo(100);
However, TestScheduler will only inspect its queue when time is moved. So if you schedule an action, you need to move time via AdvanceBy, AdvanceTo or Start to get it to process the queue. When it does, it will process everything up to the current time. For example, even this will output to the console despite scheduling "in the past":
var scheduler = new TestScheduler();
scheduler.AdvanceTo(TimeSpan.FromTicks(100).Ticks);
scheduler.Schedule(
DateTime.MinValue + TimeSpan.FromTicks(50),
() => Console.WriteLine("Hello"));
Console.WriteLine("No output until the scheduler is given time...");
scheduler.AdvanceBy(1);
Idiomatic use of TestScheduler usually involves queuing up all your work, running to completion via a Start() call, and then checking for expected state. Use of AdvanceBy and AdvanceTo tends to be for more demanding test scenarios where you need to test some intermediate state. Even then, you generally queue everything up first with appropriate due times, AdvanceBy or AdvanceTo your time of interest, check state, and then progress again with AdvanceBy, AdvanceTo or Start.
What you don't want to do generally is queue work, run a bit, queue more work, run a bit - especially if you are scheduling without a due time. Then you will hit the problem you have here.
That's not to say this is always avoidable in your specific scenario (if you need to make decisions about what to schedule after a particular time, for example), but consider whether you can set everything up in advance, as it's probably going to result in cleaner test code that more closely follows the Arrange Act Assert pattern.
I try to Arrange by scheduling, then Act by moving time, then Assert the results.
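A minimal sketch of that shape (the due times and the assertion are illustrative):
// Arrange: queue all the work up front with explicit due times.
var scheduler = new TestScheduler();
var output = new List<string>();
scheduler.Schedule(TimeSpan.FromTicks(100), () => output.Add("first"));
scheduler.Schedule(TimeSpan.FromTicks(200), () => output.Add("second"));
// Act: move time (here, run the whole queue to completion).
scheduler.Start();
// Assert: check the resulting state.
Assert.AreEqual(2, output.Count);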
Use AdvanceBy(1) to advance the scheduler by one tick. The scheduler only executes events when the clock actually advances.
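A minimal sketch of that workaround applied to the original snippet:
var testScheduler = new TestScheduler();
testScheduler.Schedule("Hello world", (scheduler, state) =>
{
    Console.WriteLine(state);
    return Disposable.Empty;
});
// One tick is enough: advancing drains everything due at or before the new clock value.
testScheduler.AdvanceBy(1); // -> "Hello world"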
Different schedulers behave differently when you schedule something for immediate execution. Some of them really do execute it immediately. Some put it in a queue to be executed at the first available opportunity.
There's not a good way for the TestScheduler to behave in this situation unless the API is modified to let you tell it which way it should behave.
The Start method will execute everything scheduled. You can schedule a Stop method call to pause the execution at a given point.
var testScheduler = new TestScheduler();
var due = new DateTime();
testScheduler.Schedule("Hello world", due, (scheduler, s) =>
{
Console.WriteLine(s);
return Disposable.Empty;
});
testScheduler.ScheduleAbsolute(due.Ticks + 1, () => testScheduler.Stop());
testScheduler.Schedule("Do more stuff", due.AddMinutes(1), (scheduler, s) => { Console.WriteLine(s); return Disposable.Empty; });
testScheduler.Start();
Assert.IsFalse(testScheduler.IsEnabled);
Assert.AreEqual(due.Ticks + 1, testScheduler.Clock);
I wrote a Windows service, and I would like it to work using the same exact logic that it currently has, but process everything in parallel.
The real codebase is fairly abstracted and private, so I can't post the source here but here's the gist of it.
The app is a persistent process scheduler. It leverages EntityFramework 6 to scan a database for records detailing (among other things): 1) a path for a process to run, 2) a date/time to run the process, and 3) the scheduled frequency it is on.
Basic Functionality
It loops through the database for active records and returns all scheduled job details
Checks the date and time against the current date and time, within a buffer
If the job should run, it starts a new process via new Process().Start(...) with the path from the record (provided the file is found and is executable), and then waits for an exit or for the configured timeout threshold to elapse
The exit code, or the lack of one (in the event of a hanging process), single-handedly determines whether the record remains active and continues to get cycled and re-scheduled dynamically into the future, or is instead deactivated, with errors logged to the associated record in the DB.
The process continues perpetually unless explicitly stopped.
Currently Working in Parallel (1000% faster, but it looks like it is possibly skipping records!). Maybe I need to add a lock before accessing the db?
As it turns out, I was using using (var process) {...} and it was throwing because the process was being disposed. After staring at the code for a few days, I saw this stupid mistake I had made while trying to be tidy ;p
var threads = new List<Thread>();
schedules.ForEach(schedule => {
    // I have also tried ThreadPool.QueueUserWorkItem(...), but then I read it's basically
    // long-hand for Task.Run() and I don't think it was working the same as new Thread
    // using this pattern.
    // Note: an async lambda here compiles to async void, so the thread can unwind at the
    // first await.
    var thread = new Thread(async () => await ProcessSchedule(schedule));
    // Actually using a slim semaphore in the wild, but for simplicity's sake...
    thread.Start();
    threads.Add(thread);
});
// Before exiting...
while (threads.Any(instance => instance.IsAlive))
{
    await Task.Delay(debounceValue);
}
Working in Sequence without Issue, Besides its Blocking Slowness...
var tasks = new List<Task>();
schedules.ForEach(schedule => {
    // I have also tried to just await this here, but that obviously blocks anything on
    // the same thread, so I add the tasks to a list and wait for completion after the
    // loop, once the processes are each doing their own work.
    tasks.Add(ProcessSchedule(schedule));
});
// Before exiting...
// I expected this to work but it still seems to go record by record :*(
// Also tried using Task.Run(async () => await task(...)) with no luck...
await Task.WhenAll(tasks);
Note: I am passing the list of tasks or threads up another level in the real code so it can be processed and awaited while everything is working, but this is simplified, borderline-pseudo code strictly for demonstrating the concept I am struggling with as concisely as possible.
Inside of ProcessSchedule
An async method which starts a new process and waits for an exit. When one is received, the success or exit state is written to the database (using EntityFramework 6) on the schedule record which drove this instance of the process. E.g.:
var process = new Process { StartInfo = startInfo, EnableRaisingEvents = true };
// Note: StandardError/StandardOutput are streams, not events; the corresponding
// events are ErrorDataReceived/OutputDataReceived.
process.ErrorDataReceived += (...) => handleProcExitListener(...);
process.OutputDataReceived += (...) => handleProcExitListener(...);
process.Exited += (...) => handleProcExitListener(...);
process.Start();
// Monitor process, persist exit state via:
await dbContext.SaveChangesAsync();
I can say that:
I have no non-awaited async methods, unless it's something like await Task.Run(MethodAsync), or await Task.WhenAll(tasks) in Main(args), etc.
Is async-await blocking me because DbContext is not thread-safe by default, or something? If this is the case, would someone please verify how I can achieve what I am looking for?
I have tried a number of techniques, but I am not able to get the application to run each process simultaneously and then wait and react upon the end state after spawning the processes, unless I use multithreading (new Thread directly, possibly also using ThreadPool).
I have not had to resort to using threads in a while, mostly since the introduction of async-await in C#. Therefore, I am questioning myself using it without first fully understanding why. I would really appreciate some help grasping what I am missing.
It seems to me async is just a fancy pattern and a facade for easy access to state-machine characteristics. Why, then, when I researched the subject, did I read that ThreadPool.QueueUserWorkItem(...) is rather obsolete since TPL async/await? If async/await does not give you new threads to work with, is running processes in parallel possible without it? Also, these processes take anywhere from 10 min to 45 min each to run, so you can see the importance of running them all together.
Since I am stuck with .NET 4.8, I unfortunately cannot use the WaitForExitAsync() method introduced in .NET 5.
Solution
I modeled a solution on the following: Async process start and wait for it to finish
public static Task<bool> WaitForExitAsync(this Process process, TimeSpan timeout)
{
    ManualResetEvent processWaitObject = new ManualResetEvent(false);
    processWaitObject.SafeWaitHandle = new SafeWaitHandle(process.Handle, false);

    TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();

    RegisteredWaitHandle registeredProcessWaitHandle = null;
    registeredProcessWaitHandle = ThreadPool.RegisterWaitForSingleObject(
        processWaitObject,
        delegate(object state, bool timedOut)
        {
            if (!timedOut)
            {
                registeredProcessWaitHandle.Unregister(null);
            }

            processWaitObject.Dispose();
            tcs.SetResult(!timedOut);
        },
        null /* state */,
        timeout,
        true /* executeOnlyOnce */);

    return tcs.Task;
}
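For what it's worth, here is roughly how the extension might be consumed from the ProcessSchedule path described earlier. This is only a sketch: the Schedule type, its properties, the 45-minute timeout, and the dbContext reference are placeholders standing in for the real (private) code.
private static async Task ProcessSchedule(Schedule schedule)
{
    var startInfo = new ProcessStartInfo(schedule.ExecutablePath);
    using (var process = Process.Start(startInfo))
    {
        // True means the process exited before the timeout elapsed.
        bool exited = await process.WaitForExitAsync(TimeSpan.FromMinutes(45));
        schedule.IsActive = exited && process.ExitCode == 0;
        await dbContext.SaveChangesAsync();
    }
}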
Even though you have omitted some of the Process code, I'm assuming that you are calling the blocking method Process.WaitForExit instead of an async equivalent. I have created a mock-type solution, and this runs in parallel.
private static async Task RunPowershellProcess()
{
using var process = new Process();
process.StartInfo.FileName = @"C:\windows\system32\windowspowershell\v1.0\powershell.exe";
process.StartInfo.UseShellExecute = true;
process.Exited += (a, _) =>
{
var p = a as Process;
Console.WriteLine(p?.ExitCode);
};
process.EnableRaisingEvents = true;
process.Start();
await process.WaitForExitAsync();
}
static async Task Main(string[] args)
{
var tasks = new List<Task>(10);
for (var x = 0; x < 10; x++)
{
tasks.Add(RunPowershellProcess());
}
await Task.WhenAll(tasks);
}
I have an existing Function App with 2 Functions and a storage queue. F1 is triggered by a message in a service bus topic. For each msg received, F1 calculates some sub-tasks (T1,T2,...) which have to be executed with varying amount of delay. Ex - T1 to be fired after 3 mins, T2 after 5min etc. F1 posts messages to a storage queue with appropriate visibility timeouts (to simulate the delay) and F2 is triggered whenever a message is visible in the queue. All works fine.
I now want to migrate this app to use 'Durable Functions'. F1 now only starts the orchestrator. The orchestrator code is as follows:
public static async Task Orchestrator([OrchestrationTrigger] DurableOrchestrationContext context, TraceWriter log)
{
var results = await context.CallActivityAsync<List<TaskInfo>>("CalculateTasks", "someinput");
List<Task> tasks = new List<Task>();
foreach (var value in results)
{
var pnTask = context.CallActivityAsync("PerformSubTask", value);
tasks.Add(pnTask);
}
//don't await as we want to fire and forget. No fan-in!
//await Task.WhenAll(tasks);
}
[FunctionName("PerformSubTask")]
public async static Task Run([ActivityTrigger]TaskInfo info, TraceWriter log)
{
TimeSpan timeDifference = DateTime.UtcNow - info.Origin.ToUniversalTime();
TimeSpan delay = TimeSpan.FromSeconds(info.DelayInSeconds);
var actualDelay = timeDifference > delay ? TimeSpan.Zero : delay - timeDifference;
//will this still keep the activity function running and incur costs??
await Task.Delay(actualDelay);
//perform subtask work after delay!
}
I would only like to fan out (no fan-in to collect the results) and start the subtasks. The orchestrator starts all the tasks and avoids calling await Task.WhenAll. The activity function calls Task.Delay to wait for the specified amount of time and then does its work.
My questions
Does it make sense to use Durable Functions for this workflow?
Is this the right approach to orchestrate 'Fan-out' workflow?
I do not like the fact that the activity function keeps running for the specified amount of time (3 or 5 mins) doing nothing. It will incur costs, won't it?
Also if a delay of more than 10 minutes is required there is no way for an activity function to succeed with this approach!
My earlier attempt to avoid this was to use 'CreateTimer' in the orchestrator and then add the activity as a continuation, but I see only timer entries in the 'History' table. The continuation does not fire! Am I violating the constraint for orchestrator code - 'Orchestrator code must never initiate any async operation' ?
foreach (var value in results)
{
    // calculate time to start
    var timeToStart = ...;
    var pnTask = context
        .CreateTimer(timeToStart, CancellationToken.None)
        .ContinueWith(t => context.CallActivityAsync("PerformSubTask", value));
    tasks.Add(pnTask);
}
UPDATE: using the approach suggested by Chris
Activity that calculates subtasks and delays
[FunctionName("CalculateTasks")]
public static List<TaskInfo> CalculateTasks([ActivityTrigger]string input,TraceWriter log)
{
//in reality time is obtained by calling an endpoint
DateTime currentTime = DateTime.UtcNow;
return new List<TaskInfo> {
new TaskInfo{ DelayInSeconds = 10, Origin = currentTime },
new TaskInfo{ DelayInSeconds = 20, Origin = currentTime },
new TaskInfo{ DelayInSeconds = 30, Origin = currentTime },
};
}
public static async Task Orchestrator([OrchestrationTrigger] DurableOrchestrationContext context, TraceWriter log)
{
var results = await context.CallActivityAsync<List<TaskInfo>>("CalculateTasks", "someinput");
var currentTime = context.CurrentUtcDateTime;
List<Task> tasks = new List<Task>();
foreach (var value in results)
{
TimeSpan timeDifference = currentTime - value.Origin;
TimeSpan delay = TimeSpan.FromSeconds(value.DelayInSeconds);
var actualDelay = timeDifference > delay ? TimeSpan.Zero : delay - timeDifference;
var timeToStart = currentTime.Add(actualDelay);
Task delayedActivityCall = context
.CreateTimer(timeToStart, CancellationToken.None)
.ContinueWith(t => context.CallActivityAsync("PerformSubtask", value));
tasks.Add(delayedActivityCall);
}
await Task.WhenAll(tasks);
}
Simply scheduling tasks from within the orchestrator seems to work. In my case, I am calculating the tasks and the delays in another activity (CalculateTasks) before the loop. I want the delays to be calculated using the 'current time' at which the activity was run. I am using DateTime.UtcNow in the activity. This somehow does not play well when used in the orchestrator. The activities specified by ContinueWith just don't run, and the orchestrator is always in the 'Running' state.
Can I not use the time recorded by an activity from within the orchestrator?
UPDATE 2
So the workaround suggested by Chris works!
Since I do not want to collect the results of the activities, I avoid calling await Task.WhenAll(tasks) after scheduling all the activities. I do this in order to reduce contention on the control queue, i.e. to be able to start another orchestration if required. Nevertheless, the status of the orchestrator is still 'Running' until all the activities finish running. I guess it moves to 'Completed' only after the last activity posts a 'done' message to the control queue.
Am I right? Is there any way to free the orchestrator earlier i.e right after scheduling all activities?
The ContinueWith approach worked fine for me. I was able to simulate a version of your scenario using the following orchestrator code:
[FunctionName("Orchestrator")]
public static async Task Orchestrator(
[OrchestrationTrigger] DurableOrchestrationContext context,
TraceWriter log)
{
var tasks = new List<Task>(10);
for (int i = 0; i < 10; i++)
{
int j = i;
DateTime timeToStart = context.CurrentUtcDateTime.AddSeconds(10 * j);
Task delayedActivityCall = context
.CreateTimer(timeToStart, CancellationToken.None)
.ContinueWith(t => context.CallActivityAsync("PerformSubtask", j));
tasks.Add(delayedActivityCall);
}
await Task.WhenAll(tasks);
}
And for what it's worth, here is the activity function code.
[FunctionName("PerformSubtask")]
public static void Activity([ActivityTrigger] int j, TraceWriter log)
{
log.Warning($"{DateTime.Now:o}: {j:00}");
}
From the log output, I saw that all activity invocations ran 10 seconds apart from each other.
Another approach would be to fan out to multiple sub-orchestrations (like jeffhollan suggested), which are simply a short sequence of a durable timer delay and your activity call.
UPDATE
I tried using your updated sample and was able to reproduce your problem! If you run locally in Visual Studio and configure the exception settings to always break on exceptions, then you should see the following:
System.InvalidOperationException: 'Multithreaded execution was detected. This can happen if the orchestrator function code awaits on a task that was not created by a DurableOrchestrationContext method. More details can be found in this article https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-checkpointing-and-replay#orchestrator-code-constraints.'
This means the thread which called context.CallActivityAsync("PerformSubtask", j) was not the same as the thread which called the orchestrator function. I don't know why my initial example didn't hit this, or why your version did. It has something to do with how the TPL decides which thread to use to run your ContinueWith delegate - something I need to look more into.
The good news is that there is a simple workaround, which is to specify TaskContinuationOptions.ExecuteSynchronously, like this:
Task delayedActivityCall = context
.CreateTimer(timeToStart, CancellationToken.None)
.ContinueWith(
t => context.CallActivityAsync("PerformSubtask", j),
TaskContinuationOptions.ExecuteSynchronously);
Please try that and let me know if that fixes the issue you're observing.
Ideally you wouldn't need to do this workaround when using Task.ContinueWith. I've opened an issue in GitHub to track this: https://github.com/Azure/azure-functions-durable-extension/issues/317
Since I do not want to collect the results of the activities, I avoid calling await Task.WhenAll(tasks) after scheduling all the activities. I do this in order to reduce contention on the control queue, i.e. to be able to start another orchestration if required. Nevertheless, the status of the orchestrator is still 'Running' until all the activities finish running. I guess it moves to 'Completed' only after the last activity posts a 'done' message to the control queue.
This is expected. Orchestrator functions never actually complete until all outstanding durable tasks have completed. There isn't any way to work around this. Note that you can still start other orchestrator instances, there just might be some contention if they happen to land on the same partition (there are 4 partitions by default).
await Task.Delay is definitely not the best option: you will pay for this time while your function does nothing useful. The max delay is also bound to 10 minutes on the Consumption plan.
In my opinion, raw Queue messages are the best option for fire-and-forget scenarios. Set the proper visibility timeouts, and your scenario will work reliably and efficiently.
The killer feature of Durable Functions is its awaits, which do their magic of pausing and resuming while keeping the scope. That makes it a great way to implement fan-in, but you don't need that here.
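As a point of reference, delayed delivery with a raw Storage queue is nearly a one-liner. A sketch, assuming the Microsoft.WindowsAzure.Storage SDK, an existing CloudQueue reference, and Json.NET for serialization:
// The message stays invisible for the computed delay, then triggers F2.
var message = new CloudQueueMessage(JsonConvert.SerializeObject(taskInfo));
await queue.AddMessageAsync(
    message,
    timeToLive: null,
    initialVisibilityDelay: TimeSpan.FromSeconds(taskInfo.DelayInSeconds),
    options: null,
    operationContext: null);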
I think durable definitely makes sense for this workflow. I do think the best option would be to leverage the delay / timer feature as you said, but based on the synchronous nature of execution I don't think I would add everything to a task list, which really implies a .WhenAll() or .WhenAny() that you aren't aiming for. I think I personally would just do a sequential foreach loop with a timer delay before each task. So, pseudocode:
for (int x = 0; x < results.Count; x++)
{
    await context.CreateTimer(context.CurrentUtcDateTime.AddMinutes(1), CancellationToken.None);
    await context.CallActivityAsync("PerformTaskAsync", results[x]);
}
You need those awaits in there regardless, so just avoiding the await Task.WhenAll(...) is likely causing some issues in the code sample above. Hope that helps.
You should be able to use the IDurableOrchestrationContext.StartNewOrchestration() method, added in 2019, to support this scenario. See https://github.com/Azure/azure-functions-durable-extension/issues/715 for context.
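A sketch of that shape, assuming the Durable Functions 2.x programming model (IDurableOrchestrationContext) and illustrative function names:
[FunctionName("Orchestrator")]
public static async Task Orchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var results = await context.CallActivityAsync<List<TaskInfo>>("CalculateTasks", "someinput");
    foreach (var value in results)
    {
        // Fire-and-forget: the parent completes without tracking these instances.
        context.StartNewOrchestration("SubtaskOrchestrator", value);
    }
}

[FunctionName("SubtaskOrchestrator")]
public static async Task SubtaskOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var value = context.GetInput<TaskInfo>();
    // Each sub-orchestration sleeps durably (no running code, no cost) before the activity.
    await context.CreateTimer(
        context.CurrentUtcDateTime.AddSeconds(value.DelayInSeconds),
        CancellationToken.None);
    await context.CallActivityAsync("PerformSubtask", value);
}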
I am trying to write a unit test around an async pub/sub system. In my unit test, I create a TaskCompletionSource<int> and assign it a value within the subscription callback. Within the subscription callback, I unsubscribe from the publications. The next time I publish, I want to verify that the callback never got hit.
[TestMethod]
[Owner("Johnathon Sullinger")]
[TestCategory("Domain")]
[TestCategory("Domain - Events")]
public async Task DomainEvents_subscription_stops_receiving_messages_after_unsubscribing()
{
// Arrange
string content = "Domain Test";
var completionTask = new TaskCompletionSource<int>();
DomainEvents.Subscribe<FakeDomainEvent>(
(domainEvent, subscription) =>
{
// Set the completion source so the awaited task can fetch the result.
completionTask.TrySetResult(1);
subscription.Unsubscribe();
return completionTask.Task;
});
// Act
// Publish the first message
DomainEvents.Publish(new FakeDomainEvent(content));
await completionTask.Task;
// Get the first result
int firstResult = completionTask.Task.Result;
// Publish the second message
completionTask = new TaskCompletionSource<int>();
DomainEvents.Publish(new FakeDomainEvent(content));
await completionTask.Task;
// Get the second result
int secondResult = completionTask.Task.Result;
// Assert
Assert.AreEqual(1, firstResult, "The first result did not receive the expected value from the subscription delegate.");
Assert.AreEqual(default(int), secondResult, "The second result had a value assigned to it when it shouldn't have. The unsubscription did not work.");
}
When I do this, the test hangs at the second await. I understand that this happens because the Task never completes. What I'm not sure about is how to work around it. I know I could easily create a local field that I just assign values to, like this:
[TestMethod]
[Owner("Johnathon Sullinger")]
[TestCategory("Domain")]
[TestCategory("Domain - Events")]
public void DomainEvents_subscription_stops_receiving_messages_after_unsubscribing()
{
// Arrange
string content = "Domain Test";
int callbackResult = 0;
DomainEvents.Subscribe<FakeDomainEvent>(
(domainEvent, subscription) =>
{
// Set the completion source so the awaited task can fetch the result.
callbackResult++;
subscription.Unsubscribe();
return Task.FromResult(callbackResult);
});
// Act
// Publish the first message
DomainEvents.Publish(new FakeDomainEvent(content));
// Publish the second message
DomainEvents.Publish(new FakeDomainEvent(content));
// Assert
Assert.AreEqual(1, callbackResult, "The callback was hit more than expected, or not hit at all.");
}
This feels wrong though. It assumes I never perform an await operation anywhere in the stack (which I do when there are subscribers). This isn't a safe test, as the test could finish before the publish is totally finished. The intent here is that my callbacks are asynchronous and publications are non-blocking background processes.
How do I handle the CompletionSource in this scenario?
It's difficult to test that something won't ever happen. About the best you can do is test that it didn't happen within a reasonable time. I have a library of asynchronous coordination primitives, and to unit test this scenario I had to resort to the hack of observing the task only for a period of time, and then assuming success (see AssertEx.NeverCompletesAsync).
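Applied to the first test above, the observe-for-a-while hack might look like this for the second publish (the 500 ms window is an arbitrary choice):
// Publish the second message after the callback has unsubscribed.
completionTask = new TaskCompletionSource<int>();
DomainEvents.Publish(new FakeDomainEvent(content));

// Race the callback's task against a short delay; if the delay wins,
// assume the callback was never invoked.
Task winner = await Task.WhenAny(completionTask.Task, Task.Delay(500));
Assert.AreNotEqual(completionTask.Task, winner,
    "The subscription callback ran after unsubscribing.");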
That's not the only solution, though. Perhaps the cleanest solution logically is to fake out time itself. That is, if your system has sufficient hooks for a fake time system, then you can actually write a test ensuring that a callback will never be called. This sounds really weird, but it's quite powerful. The disadvantage is that it would require significant code modifications - much more than just returning a Task. If you're interested, Rx is the place to start, with their TestScheduler type.
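To give a flavor of the fake-time approach, here is a sketch using Rx's TestScheduler, with a plain Observable standing in for the pub/sub system (your DomainEvents would need a scheduler hook to be testable this way):
var scheduler = new TestScheduler();
bool callbackRan = false;

// Stand-in for a subscription that should never fire once disposed.
var subscription = Observable
    .Timer(TimeSpan.FromSeconds(5), scheduler)
    .Subscribe(_ => callbackRan = true);
subscription.Dispose();

// Advance virtual time far past the due time; no real waiting happens.
scheduler.AdvanceBy(TimeSpan.FromDays(1).Ticks);
Assert.IsFalse(callbackRan, "The callback fired even though we unsubscribed.");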
I am very new to using Quartz and I have a question regarding triggers. Is it possible to trigger based on file existence? I would like to have Quartz run a job until a certain file is found, then stop running that job and perhaps move on to a different one.
For example, I would like to do something like this:
(1) Job1 checks if File.txt exists in a given directory every 60 seconds.
(2) If File.txt is found, trigger Job2 to start. Job1 stops checking for file existence.
Right now, I have:
// Job definitions
var Job1 = JobBuilder.Create<TestEmail>().WithIdentity("job1", "group1").Build();
var Job2 = JobBuilder.Create<TestFileTrigger>().WithIdentity("job2", "group2").Build();
// Triggers
ITrigger trigger1 = TriggerBuilder.Create()
.WithIdentity("trigger1", "group1").StartNow()
.WithSimpleSchedule(x => x.WithIntervalInSeconds(5).RepeatForever())
.Build();
ITrigger trigger2 = TriggerBuilder.Create()
.WithIdentity("trigger2", "group2").StartNow()
.Build();
// Schedule jobs
scheduler.ScheduleJob(Job1, trigger1);
if (TestFileTrigger.fileExistence == true)
{
scheduler.ScheduleJob(Job2, trigger2);
}
but it seems like Job2 never starts.
TestEmail and TestFileTrigger simply print to console at the moment. The boolean TestFileTrigger.fileExistence comes from checking if a file exists at a given location (which it does).
Edit:
TestFileTrigger.fileExistence is a boolean. Added definitions of Job1/Job2 if that helps.
Edit:
I found that if I put Thread.Sleep(TimeSpan.FromSeconds(x)); before the if statement (where x is some number of seconds), the if statement will run when the condition is met. Why does it work in this case, but not otherwise? I cannot always know how many seconds it will take for the condition to be met.
What type of application is this?
If this is, for example, a Windows service - to keep the scheduler alive so that it hangs around to execute the jobs according to your triggers, you need to do something like:
ThreadStart start = SetupSchedules;
var thread = new Thread(start) { Name = "mysvc" };
thread.Start();
.. this would go into the override void OnStart(string[] args) method of the Windows service.
The SetupSchedules method would be the thing that hooks into Quartz jobs and would be something like (The code you've written above in the OP would make a good start):
ISchedulerFactory factory = new StdSchedulerFactory();
JobScheduler = factory.GetScheduler();
JobScheduler.ScheduleJob(job1, trigger1);
This should keep it alive so that it executes the jobs. I've omitted a bunch of stuff here, but hopefully this should give you a few pointers to help weave it into your app.
You will also need something like this:
private void ManageThread()
{
    while (!_threadMustStop) // false by default; set this to true in a 'shutdown' process
    {
        Thread.Sleep(10000);
    }
}
...which you call from your SetupSchedules method
It looks like you're missing the concurrency and threading that is involved here.
The issue is as follows.
Your MAIN thread does the following.
Create two jobs
Give each job a trigger
Schedule Job1 to start
Check TestFileTrigger.fileExistence and if true, start Job2 (it is false so it doesn't run).
THEN a thread-pool thread will start Job1, most likely setting TestFileTrigger.fileExistence = true. But the main thread has already completed its work.
FIN.
At no point do you go back and check whether TestFileTrigger.fileExistence is true, so its result is irrelevant. You are, in fact, checking the result BEFORE you get a result.
By adding a Thread.Sleep you give Job1 enough time to complete and give you a result (Job1 runs asynchronously and concurrently, and it is clear you expected it to run synchronously). Imagine, for example, you tell your friend Fred to go to the shop to buy pizza and place it on your desk (asynchronous), then turn around straight away and wonder why there is no pizza on your desk.
Synchronous would be if you yourself went to the shop, bought a pizza, took it home and placed it on your desk, THEN ate pizza from your desk.
JobScheduler.ScheduleJob(job1, trigger1); does work asynchronously.
You should create a job to wrap up step 4 and schedule that to run periodically, OR use the built-in FileScanJob instead.
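A sketch of the wrapper-job idea, assuming the Quartz.NET 2.x synchronous IJob API; the file path and job identities are illustrative:
public class FileWatchJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        if (!File.Exists(@"C:\watched\File.txt")) return;

        // The file appeared: schedule Job2 now...
        var job2 = JobBuilder.Create<TestFileTrigger>()
            .WithIdentity("job2", "group2").Build();
        var trigger2 = TriggerBuilder.Create()
            .WithIdentity("trigger2", "group2").StartNow().Build();
        context.Scheduler.ScheduleJob(job2, trigger2);

        // ...and stop polling for the file.
        context.Scheduler.UnscheduleJob(context.Trigger.Key);
    }
}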
I'm trying to rewrite some code using Reactive Extensions for .NET but I need some guidance on how to achieve my goal.
I have a class that encapsulates some asynchronous behavior in a low-level library. Think of something that either reads or writes the network. When the class is started, it will try to connect to the environment, and when successful it will signal this back via a callback from a worker thread.
I want to turn this asynchronous behavior into a synchronous call, and I have created a greatly simplified example below of how that can be achieved:
ManualResetEvent readyEvent = new ManualResetEvent(false);
public void Start(TimeSpan timeout) {
// Simulate a background process
ThreadPool.QueueUserWorkItem(_ => AsyncStart(TimeSpan.FromSeconds(1)));
// Wait for startup to complete.
if (!this.readyEvent.WaitOne(timeout))
throw new TimeoutException();
}
void AsyncStart(TimeSpan delay) {
Thread.Sleep(delay); // Simulate startup delay.
this.readyEvent.Set();
}
Running AsyncStart on a worker thread is just a way to simulate the asynchronous behavior of the library; it is not part of my real code, where the low-level library supplies the thread and calls my code via a callback.
Notice that the Start method will throw a TimeoutException if start hasn't completed within the timeout interval.
I want to rewrite this code to use Rx. Here is my first attempt:
Subject<Unit> readySubject = new Subject<Unit>();
public void Start(TimeSpan timeout) {
ThreadPool.QueueUserWorkItem(_ => AsyncStart(TimeSpan.FromSeconds(1)));
// Point A - see below
this.readySubject.Timeout(timeout).First();
}
void AsyncStart(TimeSpan delay) {
Thread.Sleep(delay);
this.readySubject.OnNext(new Unit());
}
This is a decent attempt but unfortunately it contains a race condition. If the startup completes fast (e.g. if delay is 0) and there is an additional delay at point A, then OnNext will be called on readySubject before First has executed. In essence, the IObservable to which I'm applying Timeout and First never sees that startup has completed, and a TimeoutException will be thrown instead.
It seems that Observable.Defer was created to handle problems like this. Here is a slightly more complex attempt to use Rx:
Subject<Unit> readySubject = new Subject<Unit>();
void Start(TimeSpan timeout) {
var ready = Observable.Defer(() => {
ThreadPool.QueueUserWorkItem(_ => AsyncStart(TimeSpan.FromSeconds(1)));
// Point B - see below
return this.readySubject.AsObservable();
});
ready.Timeout(timeout).First();
}
void AsyncStart(TimeSpan delay) {
Thread.Sleep(delay);
this.readySubject.OnNext(new Unit());
}
Now the asynchronous operation is not started immediately, but only when the IObservable is used. Unfortunately there is still a race condition, this time at point B. If the newly started asynchronous operation calls OnNext before the Defer lambda returns, the notification is still lost and a TimeoutException will be thrown by Timeout.
I know I can use operators like Replay to buffer events, but my initial example without Rx doesn't use any kind of buffering. Is there a way for me to use Rx to solve my problem without race conditions? In essence, starting the asynchronous operation only after the IObservable has been subscribed to (in this case by Timeout and First)?
Based on Ana Betts's answer, here is a working solution:
void Start(TimeSpan timeout) {
var readySubject = new AsyncSubject<Unit>();
ThreadPool.QueueUserWorkItem(_ => AsyncStart(readySubject, TimeSpan.FromSeconds(1)));
// Point C - see below
readySubject.Timeout(timeout).First();
}
void AsyncStart(ISubject<Unit> readySubject, TimeSpan delay) {
Thread.Sleep(delay);
readySubject.OnNext(new Unit());
readySubject.OnCompleted();
}
The interesting case is when there is a delay at point C that is longer than the time it takes for AsyncStart to complete. AsyncSubject retains the last notification sent, and Timeout and First will still perform as expected.
So, one thing to know about Rx, and a mistake I think a lot of people make at first (myself included!): if you're using any traditional threading constructs like ResetEvents, Thread.Sleeps, or whatever, you're Doing It Wrong (tm). It's like casting things to Arrays in LINQ because you know that the underlying type happens to be an array.
The key thing to know is that an async func is represented by a function that returns IObservable<TResult> - that's the magic sauce that lets you signal when something has completed. So here's how you'd "Rx-ify" a more traditional async func, like you'd see in a Silverlight web service:
IObservable<byte[]> readFromNetwork()
{
var ret = new AsyncSubject<byte[]>();
// Here's a traditional async function that you provide a callback to
asyncReaderFunc(theFile, buffer => {
ret.OnNext(buffer);
ret.OnCompleted();
});
return ret;
}
This is a decent attempt but unfortunately it contains a race condition.
This is where AsyncSubject comes in - this makes sure that even if asyncReaderFunc beats the Subscribe to the punch, AsyncSubject will still "replay" what happened.
So, now that we've got our function, we can do lots of interesting things to it:
// Make it into a sync function
byte[] results = readFromNetwork().First();
// Keep reading blocks one at a time until we run out
readFromNetwork().Repeat().TakeUntil(x => x == null || x.Length == 0).Subscribe(bytes => {
Console.WriteLine("Read {0} bytes in chunk", bytes.Length);
});
// Read the entire stream and get notified when the whole deal is finished
readFromNetwork()
.Repeat().TakeUntil(x => x == null || x.Length == 0)
.Aggregate(new MemoryStream(), (ms, bytes) => { ms.Write(bytes, 0, bytes.Length); return ms; })
.Subscribe(ms => {
Console.WriteLine("Got {0} bytes in total", ms.ToArray().Length);
});
// Or just get the entire thing as a MemoryStream and wait for it
var memoryStream = readFromNetwork()
.Repeat().TakeUntil(x => x == null || x.Length == 0)
.Aggregate(new MemoryStream(), (ms, bytes) => { ms.Write(bytes, 0, bytes.Length); return ms; })
.First();
I would further add to Paul's comment that using WaitHandles means you are doing it wrong: using Subjects directly usually means you are doing it wrong too. ;-)
Try to consider your Rx code as working with sequences or pipelines. Subjects offer both read and write capabilities, which means you are no longer working with a pipeline or a sequence (unless you have pipelines that go both ways, or sequences that can reverse?!?)
So first, Paul's code is pretty cool, but let's "Rx the hell out of it".
1st: The AsyncStart method. Change it to this:
IObservable<Unit> AsyncStart(TimeSpan delay)
{
    return Observable.Timer(delay).Select(_ => Unit.Default);
}
So easy! Look: no subjects, and data only flows one way. The important thing here is the signature change: it will push stuff to us, and that is now very explicit. Passing in a Subject, to me, is very ambiguous.
2nd: We no longer need the Subject defined in the Start method. We can also leverage the Scheduler features instead of the old-skool ThreadPool.QueueUserWorkItem.
void Start(TimeSpan timeout)
{
var isReady = AsyncStart(TimeSpan.FromSeconds(1))
.SubscribeOn(Scheduler.ThreadPool)
.PublishLast();
isReady.Connect();
isReady.Timeout(timeout).First();
}
Now we have a clear pipeline or sequence of events
AsyncStart --> isReady --> Start
Instead of Start-->AsyncStart-->Start
If I knew more about your problem space, I am sure we could come up with an even better way of doing this that did not require the blocking nature of the Start method. The more you use Rx, the more you will find that your old assumptions about when you need to block, use wait handles, etc. can be thrown out the window.
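For example, a non-blocking variant of Start might simply hand the sequence back to the caller instead of blocking on First (a sketch built on the code above):
IObservable<Unit> Start(TimeSpan timeout)
{
    var isReady = AsyncStart(TimeSpan.FromSeconds(1))
        .SubscribeOn(Scheduler.ThreadPool)
        .Timeout(timeout)
        .PublishLast();
    isReady.Connect();
    // Callers subscribe (or compose further) instead of blocking inside Start.
    return isReady;
}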