Rx.NET - catching exceptions and higher-order observables - C#

I'm trying to learn Rx(.NET) and I'm losing my mind a bit. I have an observable whose exceptions I want to handle using Catch(). I want to be able to access the item T that is moving through the observable chain within that Catch(), and I thought this would be possible with a higher-order observable that is Concat()ed afterwards, like so:
IObservable<RunData> obs = ...;
var safeObs = obs.Select(rd => obs
.Select(innerRd => {
// simple toy example, could throw exception here in practice
// throw new Exception();
return (result: true, runData: innerRd);
})
.Catch((Exception e) => // try to catch any exception occurring within the stream, return a new tuple with result: false if that happens
{
return Observable.Return((result: false, runData: rd)); // possible to access rd here
})
).Concat();
So far, so good.
But while testing this pattern I noticed that it breaks the assumption that I'm able to see all RunData instances when I subscribe to that safeObs. I've written the following test to showcase this:
[Test]
[Explicit]
public async Task TestHigherOrderExceptionHandling()
{
var counter = new Counter();
var useHigherOrderExceptionHandling = true; // test succeeds when false, fails when true
var obs = Observable.Create<RunData>(async (o) =>
{
await Task.Delay(100); // just here to justify the async nature
o.OnNext(new RunData(counter)); // produce a new RunData object, must be disposed later!
o.OnCompleted();
return Disposable.Empty;
})
.Concat(Observable.Empty<RunData>().Delay(TimeSpan.FromSeconds(1)))
.Repeat() // Resubscribe indefinitely after source completes
.Publish().RefCount() // see http://northhorizon.net/2011/sharing-in-rx/
;
// transforms the stream, exceptions might be thrown inside of stream, would like to catch them and handle them appropriately
IObservable<(bool result, RunData runData)> TransformRunDataToResult(IObservable<RunData> obs)
{
return obs.Select(rd => {
// simple toy example, could throw exception here in practice
// throw new Exception();
return (result: true, runData: rd);
});
}
IObservable<(bool result, RunData runData)> safeObs;
if (useHigherOrderExceptionHandling)
{
safeObs = obs.Select(rd =>
TransformRunDataToResult(obs)
.Catch((Exception e) => // try to catch any exception occurring within the stream, return a new tuple with result: false if that happens
{
return (Observable.Return((result: false, runData: rd)));
})
).Concat();
}
else
{
safeObs = TransformRunDataToResult(obs);
}
safeObs.Subscribe(
async (t) =>
{
var (result, runData) = t;
try
{
await Task.Delay(100); // just here to justify the async nature
Console.WriteLine($"Result: {result}");
}
finally
{
t.runData.Dispose(); // dispose RunData instance that was created by the observable above
}
});
await Task.Delay(5000); // give observable enough time to produce a few items
Assert.AreEqual(0, counter.Value);
}
// simple counter class, just here so we have a reference-typed counter that we can pass around
public class Counter
{
public int Value { get; set; }
}
// data that is moving through observable pipeline, must be disposed at the end
public class RunData : IDisposable
{
private readonly Counter counter;
public RunData(Counter counter)
{
this.counter = counter;
Console.WriteLine("Created");
counter.Value++;
}
public void Dispose()
{
Console.WriteLine("Dispose called");
counter.Value--;
}
}
Running this test fails: there is one more instance of RunData created than disposed... why? Changing useHigherOrderExceptionHandling to false makes the test succeed.
EDIT:
I simplified the code (removed async code, limited repeats to make it predictable) and tried the suggestion, but I'm getting the same bad result... the test fails:
[Test]
[Explicit]
public async Task TestHigherOrderExceptionHandling2()
{
var counter = new Counter();
var useHigherOrderExceptionHandling = true; // test succeeds when false, fails when true
var obs = Observable.Create<RunData>(o =>
{
o.OnNext(new RunData(counter)); // produce a new RunData object, must be disposed later!
o.OnCompleted();
return Disposable.Empty;
})
.Concat(Observable.Empty<RunData>().Delay(TimeSpan.FromSeconds(1)))
.Repeat(3) // Resubscribe two more times after source completes
.Publish().RefCount() // see http://northhorizon.net/2011/sharing-in-rx/
;
// transforms the stream, exceptions might be thrown inside of stream, I would like to catch them and handle them appropriately
IObservable<(bool result, RunData runData)> TransformRunDataToResult(IObservable<RunData> obs)
{
return obs.Select(rd =>
{
// simple toy example, could throw exception here in practice
// throw new Exception();
return (result: true, runData: rd);
});
}
IObservable<(bool result, RunData runData)> safeObs;
if (useHigherOrderExceptionHandling)
{
safeObs = obs.Publish(_obs => _obs
.Select(rd => TransformRunDataToResult(_obs)
.Catch((Exception e) => Observable.Return((result: false, runData: rd)))
))
.Concat();
}
else
{
safeObs = TransformRunDataToResult(obs);
}
safeObs.Subscribe(
t =>
{
var (result, runData) = t;
try
{
Console.WriteLine($"Result: {result}");
}
finally
{
t.runData.Dispose(); // dispose RunData instance that was created by the observable above
}
});
await Task.Delay(4000); // give observable enough time to produce a few items
Assert.AreEqual(0, counter.Value);
}
Output:
Created
Created
Result: True
Dispose called
Created
Result: True
Dispose called
There's still a second subscription happening at the beginning(?) and there's one more RunData object created than is disposed.

It's not clear here what you're trying to accomplish.
First, your code mixes Tasks with Observables, which is generally something to avoid; you usually want to pick one or the other.
Second, I found that both versions of safeObs would fail tests, as I would expect: you're consistently incrementing, then consistently decrementing, but with (effectively) inconsistent time gaps between the increments and decrements. Run this enough times and you'll be wrong eventually.
You also have a multiple subscription bug. If you collapse all your code into one fluid chain, this bug should stand out:
// this is roughly equivalent to var obs in your code
var obs2 = Observable.Interval(TimeSpan.FromSeconds(2))
.Select(_ => new RunData(counter))
.Publish()
.RefCount();
// this is equivalent to the higher order version of safeObs in your code
var safeObs2HigherOrder = obs2
.Select(rd => obs2
.Select(innerRd => (result: true, runData: innerRd))
.Catch((Exception e) => Observable.Return((result: false, runData: rd)))
)
.Concat();
Notice how safeObs2HigherOrder references obs2 twice, effectively subscribing twice. You can fix that as follows:
var safeObs2HigherOrder = obs2.Publish(_obs => _obs
.Select(rd => _obs
.Select(innerRd => (result: true, runData: innerRd))
.Catch((Exception e) => Observable.Return((result: false, runData: rd)))
))
.Concat();
Lastly, the Concat at the end of safeObs2HigherOrder should probably be a Switch or a Merge. It's hard to tell while the larger problems aren't readily apparent.
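For illustration, here is a minimal sketch of the Merge variant (it reuses obs2 from the snippet above; whether Merge, Switch or Concat is the right choice depends on the ordering and overlap semantics you actually need):
var safeObs2Merged = obs2.Publish(_obs => _obs
    .Select(rd => _obs
        .Select(innerRd => (result: true, runData: innerRd))
        .Catch((Exception e) => Observable.Return((result: false, runData: rd)))
    ))
    .Merge(); // process the inner observables concurrently instead of one after another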
I don't know if you were looking for a code review or an answer to a particular question, but your code needs quite a bit of work.

Related

Exception handling in RX.Net when using ToEventPattern and Timeout

I am writing some code using RX in C# that must interface with an older system by emitting events.
In summary, I have an observable and need to emit one event when the observable completes and another event if a timeout exception is detected. The main problem is how best to handle the exception.
I'm relatively new to RX, so although I have found a solution, I can't be sure that there isn't a better or more appropriate way that uses the RX extensions better.
This is not the real code but indicates the pattern of my thinking:
public delegate void SuccessHandler(object sender, SuccessEventArgs e);
public event SuccessHandler OnSuccess;
public delegate void TimeoutHandler(object sender, TimeoutEventArgs e);
public event TimeoutHandler OnTimeout;
var id;
var o = Observable.Return() // <- this would be a fetch from an asynchronous source
.Where(r=>r.status=="OK")
.Timeout(new TimeSpan(0, 0, 30))
.Do(r => {
id = r.Id; // <-- Ugh! I know this shouldn't be done!
})
.Subscribe(r => {
var statusResponse = new StatusResponse()
{
Id = r.Id,
Name = r.Name,
Message = "The operation completed successfully",
Status = Status.Success
};
if (OnSuccess == null) return;
OnSuccess(this, new SuccessEventArgs(statusResponse));
},
e =>
{
_logger.LogError(e, "A matching response was not returned in a timely fashion");
if (OnTimeout == null) return;
OnTimeout(this, new TimeoutEventArgs(id));
});
If I didn't need to detect and act upon the timeout it would be fine; I have already worked out how to substitute the Subscribe for ToEventPattern:
...
.Select(r =>
{
var statusResponse= new StatusResponse()
{
Id = r.Id,
Name = r.Name,
Message = "The operation completed successfully",
Status = Status.Success
};
return new EventPattern<SuccessEventArgs>(this, new SuccessEventArgs(statusResponse));
})
.ToEventPattern();
However, I'd like to be able to detect the timeout (and possibly other exceptions). My experiments with Catch have been unsuccessful because I can't seem to get the types to line up correctly, probably because I don't really understand what is going on.
I'd very much appreciate opinions on this. Is this an acceptable solution? How can I improve it? Can anyone point me to some good online references that will explain how this kind of flow-control and exception handling can be done (all the examples I've seen so far seem to stop short of the real-world case where you want to emit an event and combine that with exception handling).
Thanks in advance
You can branch from observables quite easily, e.g.
var a = Observable.Range(0, 10);
var b = a.Select(x => x * x);
var c = a.Select(x => x * 10);
A word of warning - if the observable is cold, this will cause the producer function to run for each subscription. Look up the difference between hot and cold observables if this isn't clear.
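To make the hot/cold point concrete, here is a minimal sketch (the Timer/Do producer and the Console output are illustrative assumptions, not code from the question):
var cold = Observable.Timer(TimeSpan.FromSeconds(1))
    .Do(_ => Console.WriteLine("Producer ran")); // runs once per subscription while the source is cold

var shared = cold.Publish().RefCount(); // share a single subscription among all subscribers

shared.Select(x => x * 10).Subscribe(x => Console.WriteLine($"a: {x}"));
shared.Select(x => x + 1).Subscribe(x => Console.WriteLine($"b: {x}"));
// With Publish().RefCount(), "Producer ran" prints once; subscribing both branches
// directly to cold would print it twice.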
I've created a solution that creates two branches from the source observable and turns each into an event:
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
var service = new Service();
var apiCall = service.CallApi();
apiCall.OnSuccess.OnNext += (_, __) => Console.WriteLine("Success!");
apiCall.OnTimeout.OnNext += (_, __) => Console.WriteLine("Timeout!");
Console.ReadLine();
}
}
class SuccessEventArgs{}
class TimeoutEventArgs{}
class ApiCall
{
public IEventPatternSource<SuccessEventArgs> OnSuccess {get;}
public IEventPatternSource<TimeoutEventArgs> OnTimeout {get;}
public ApiCall(IEventPatternSource<SuccessEventArgs> onSuccess, IEventPatternSource<TimeoutEventArgs> onTimeout)
{
OnSuccess = onSuccess;
OnTimeout = onTimeout;
}
}
class Service
{
public ApiCall CallApi()
{
var apiCall = Observable
.Timer(TimeSpan.FromSeconds(3))
.Do(_ => Console.WriteLine("Api Called"))
.Select(_ => new EventPattern<SuccessEventArgs>(null, new SuccessEventArgs()))
// .Timeout(TimeSpan.FromSeconds(2)) // uncomment to time out
.Timeout(TimeSpan.FromSeconds(4))
// the following two lines turn the "cold" observable "hot"
// comment them out and see how often "Api Called" is logged
.Publish()
.RefCount();
var success = apiCall
// ignore the TimeoutException and return an empty observable
.Catch<EventPattern<SuccessEventArgs>, TimeoutException>(_ => Observable.Empty<EventPattern<SuccessEventArgs>>())
.ToEventPattern();
var timeout = apiCall
.Materialize() // turn the exception into a call to OnNext rather than OnError
.Where(x => x.Exception is TimeoutException)
.Select(_ => new EventPattern<TimeoutEventArgs>(null, new TimeoutEventArgs()))
.ToEventPattern();
return new ApiCall(success, timeout);
}
}

Best way to handle exception in multi task to call WCF operation

I have implemented a service named ExamClient which has two operations: Ping, which returns a basic string meaning the service is available, and FindStudy, which searches the DB and may take a long time to complete.
On the other side I have several endpoints of ExamClient. I want to run FindStudy per endpoint as a task, so in a Dispatcher I have something like this:
public FindStudies_DTO_OUT FindStudies(FindStudies_DTO_IN findStudies_DTO_IN)
{
List<Study_C> ret = new List<Study_C>();
List<Task> tasks = new List<Task>();
foreach (var sp in Cluster)
{
string serviceAddress = sp.GetLibraryAddress(ServiceLibrary_C.PCM) + "/Exam.svc";
var task = Task.Run(() =>
{
ExamClient examClient = new ExamClient(serviceAddress.GetBinding(), new EndpointAddress(serviceAddress), Token);
var ping = Task.Run(() =>
{
examClient.Ping();
});
if (!ping.Wait(examClient.Endpoint.Binding.OpenTimeout))
{
Logging.Log(LoggingMode.Warning, "Timeout on FindStudies for:{0}, address:{1}", sp.Name, serviceAddress);
return new List<Study_C>(); // if return null then need to manage it on ret.AddRange(t.Result);
}
return (examClient.FindStudies(findStudies_DTO_IN).Studies.Select(x =>
{
x.StudyInstanceUID = string.Format("{0}|{1}", sp.Name, x.StudyInstanceUID);
x.InstitutionName = sp.Name;
return x;
}));
});
task.ContinueWith(t =>
{
lock (ret)
{
ret.AddRange(t.Result);
}
}, TaskContinuationOptions.OnlyOnRanToCompletion);
task.ContinueWith(t =>
{
Logging.Log(LoggingMode.Error, "FindStudies failed for :{0}, address:{1}, EXP:{2}", sp.Name, serviceAddress, t.Exception.ToString());
}, TaskContinuationOptions.OnlyOnFaulted);
tasks.Add(task);
}
try
{
Task.WaitAll(tasks.ToArray());
}
catch (AggregateException aggEx)
{
foreach (Exception exp in aggEx.InnerExceptions)
{
Logging.Log(LoggingMode.Error, "Error while FindStudies EXP:{0}", exp.ToString());
}
}
return new FindStudies_DTO_OUT(ret.Sort(findStudies_DTO_IN.SortColumnName, findStudies_DTO_IN.SortOrderBy));
}
First I have to run Ping per endpoint to know the connection is established,
and after that FindStudy.
If there are three endpoints in Cluster, six tasks run in parallel: 3 for Ping and 3 for FindStudy.
I think something is wrong with how my code handles exceptions...
So what is the best way to implement this scenario?
Thanks in advance.
Let me throw in my answer to simplify things and remove unnecessary code blocks, with a bit of explanation along the way.
public FindStudies_DTO_OUT FindStudies(FindStudies_DTO_IN findStudies_DTO_IN)
{
// Thread-safe collection
var ret = new ConcurrentBag<Study_C>();
// Loop over the cluster list, process each item in parallel and wait for all of them to finish.
// This handles the parallelism better than raw Task.Run.
Parallel.ForEach(Cluster, sp =>
{
var serviceAddress = sp.GetLibraryAddress(ServiceLibrary_C.PCM) + "/Exam.svc";
ExamClient examClient = new ExamClient(serviceAddress.GetBinding(), new EndpointAddress(serviceAddress), Token);
try
{
examClient.Ping();
var result = examClient.FindStudies(findStudies_DTO_IN);
if (result == null)
return;
var study_c = result.Studies.Select(x =>
{
x.StudyInstanceUID = string.Format("{0}|{1}", sp.Name, x.StudyInstanceUID);
x.InstitutionName = sp.Name;
return x;
});
// ConcurrentBag<T> is thread-safe but has no AddRange; add the items one by one
foreach (var study in study_c)
ret.Add(study);
}
catch (TimeoutException timeoutEx)
{
// abort examClient here to dispose the channel properly
Logging.Log(LoggingMode.Warning, "Timeout on FindStudies for:{0}, address:{1}", sp.Name, serviceAddress);
}
catch (FaultException fault)
{
Logging.Log(LoggingMode.Error, "FindStudies failed for :{0}, address:{1}, EXP:{2}", sp.Name, serviceAddress, fault.ToString());
}
catch (Exception ex)
{
// anything else; add exception types as needed for proper logging
}
});
// ConcurrentBag has no Sort; convert to a List first so the custom Sort extension can be applied
return new FindStudies_DTO_OUT(ret.ToList().Sort(findStudies_DTO_IN.SortColumnName, findStudies_DTO_IN.SortOrderBy));
}
Note: I haven't tested the code, but the gist is there. I also feel like Task.Run inside Task.Run is a bad idea; I can't remember which article I read that in (probably one of Stephen Cleary's, not sure).

Observable.Retry doesn't work as expected

I have a sequence of numbers that are processed using an async method. I'm simulating a remote service call that may fail. In case of failure, I would like to retry until the call succeeds.
The problem is that with the code I'm trying, every time an exception is thrown in the async method, the sequence seems to hang forever.
You can test it with this simple code snippet (it's tested in LINQPad)
Random rnd = new Random();
void Main()
{
var numbers = Enumerable.Range(1, 10).ToObservable();
var processed = numbers.SelectMany(n => Process(n).ToObservable().Retry());
processed.Subscribe( f => Console.WriteLine(f));
}
public async Task<int> Process(int n)
{
if (rnd.Next(2) == 1)
{
throw new InvalidOperationException();
}
await Task.Delay(2000);
return n*10;
}
It should process every element, retrying the ones that have failed. Instead, it never ends and I don't know why.
How can I make it do what I want?
EDIT: (thanks #CharlesNRice and #JonSkeet for the clues!):
This works!
Random rnd = new Random();
void Main()
{
var numbers = Enumerable.Range(1, 10).ToObservable();
var processed = numbers.SelectMany(n => RetryTask(() => MyTask(n)).ToObservable());
processed.Subscribe(f => Console.WriteLine(f));
}
private async Task<int> MyTask(int n)
{
if (rnd.Next(2) == 1)
{
throw new InvalidOperationException();
}
await System.Threading.Tasks.Task.Delay(2000);
return n * 10;
}
async Task<T> RetryTask<T>(Func<Task<T>> myTask, int? retryCount = null)
{
while (true)
{
try
{
return await myTask();
}
catch (Exception)
{
Debug.WriteLine("Retrying...");
if (retryCount.HasValue)
{
if (retryCount == 0)
{
throw;
}
retryCount--;
}
}
}
}
Rolling your own Retry is overkill in this case. You can achieve the same thing by simply wrapping your method call in a Defer block and it will be re-executed when the retry occurs.
var numbers = Enumerable.Range(1, 10).ToObservable();
var processed = numbers.SelectMany(n =>
//Defer calls the passed method every time it is subscribed to,
//allowing the Retry to work correctly.
Observable.Defer(() =>
Process(n).ToObservable()).Retry()
);
processed.Subscribe( f => Console.WriteLine(f));
You are retrying against the same Task, which is in a faulted state. Retry resubscribes to the observable source, and the source of your retry is the ToObservable(). It does not act like a task factory and create a new Task, so since the task is faulted it keeps retrying against the faulted task and will never succeed.
You can check out this answer on how to make your own retry wrapper:
https://stackoverflow.com/a/6090049/1798889
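For completeness, a hedged sketch that combines the Defer approach above with a bounded retry count, assuming you eventually want to give up on an element instead of retrying forever:
var processed = numbers.SelectMany(n =>
    Observable.Defer(() => Process(n).ToObservable())
        .Retry(3)                       // subscribe (and therefore run Process) at most 3 times
        .Catch(Observable.Empty<int>()) // drop the element silently once the retries are exhausted
);
processed.Subscribe(f => Console.WriteLine(f));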

Lazy observable sequence that replays value or error

I am trying to create an observable pipeline with the following characteristics:
is lazy (does nothing until somebody subscribes)
executes at most once regardless of how many subscriptions are received
replays its resulting value, if any OR
replays its resulting error, if any
For the life of me, I can't figure out the correct semantics to accomplish this. I thought it would be a simple case of doing something like this:
Observable
.Defer(() => Observable
.Start(() => { /* do something */ })
.PublishLast()
.ConnectUntilCompleted());
Where ConnectUntilCompleted just does what it sounds like:
public static IObservable<T> ConnectUntilCompleted<T>(this IConnectableObservable<T> @this)
{
@this.Connect();
return @this;
}
This seems to work when the observable terminates successfully, but not when there's an error. Subscribers do not receive the error:
[Fact]
public void test()
{
var o = Observable
.Defer(() => Observable
.Start(() => { throw new InvalidOperationException(); })
.PublishLast()
.ConnectUntilCompleted());
// this does not throw!
o.Subscribe();
}
Can anyone tell me what I'm doing wrong? Why doesn't Publish replay any error it receives?
UPDATE: it gets even stranger:
[Fact]
public void test()
{
var o = Observable
.Defer(() => Observable
.Start(() => { throw new InvalidOperationException(); })
.PublishLast()
.ConnectUntilCompleted())
.Do(
_ => { },
ex => { /* this executes */ });
// this does not throw!
o.Subscribe();
o.Subscribe(
_ => { },
ex => { /* even though this executes */ });
}
Try this version of your ConnectUntilCompleted method:
public static IObservable<T> ConnectUntilCompleted<T>(this IConnectableObservable<T> @this)
{
return Observable.Create<T>(o =>
{
var subscription = @this.Subscribe(o);
var connection = @this.Connect();
return new CompositeDisposable(subscription, connection);
});
}
This allows Rx to behave properly.
Now I've added to it to help show what's going on:
public static IObservable<T> ConnectUntilCompleted<T>(this IConnectableObservable<T> @this)
{
return Observable.Create<T>(o =>
{
var disposed = Disposable.Create(() => Console.WriteLine("Disposed!"));
var subscription = Observable
.Defer<T>(() => { Console.WriteLine("Subscribing!"); return @this; })
.Subscribe(o);
Console.WriteLine("Connecting!");
var connection = @this.Connect();
return new CompositeDisposable(disposed, subscription, connection);
});
}
Now your observable looks like this:
var o =
Observable
.Defer(() =>
Observable
.Start(() =>
{
Console.WriteLine("Started.");
throw new InvalidOperationException();
}))
.PublishLast()
.ConnectUntilCompleted();
The final key thing is to actually handle the errors in the subscription - so it's not enough to simply do o.Subscribe().
So do this:
o.Subscribe(
x => Console.WriteLine(x),
e => Console.WriteLine(e.Message),
() => Console.WriteLine("Done."));
o.Subscribe(
x => Console.WriteLine(x),
e => Console.WriteLine(e.Message),
() => Console.WriteLine("Done."));
o.Subscribe(
x => Console.WriteLine(x),
e => Console.WriteLine(e.Message),
() => Console.WriteLine("Done."));
When I run that I get this:
Subscribing!
Connecting!
Subscribing!
Connecting!
Subscribing!
Connecting!
Started.
Operation is not valid due to the current state of the object.
Disposed!
Operation is not valid due to the current state of the object.
Disposed!
Operation is not valid due to the current state of the object.
Disposed!
Note that "Started" only appears once, but the error is reported three times.
(Sometimes Started appears higher up in the list after the first subscription.)
I think this is what you wanted from your description.
Just to support @Enigmativity's answer, I want to show how you should be running your tests so you stop getting these "surprises". Your tests are non-deterministic because they are multi-threaded/concurrent. Your use of Observable.Start without providing an IScheduler is problematic. If you run your tests with a TestScheduler, your tests will be single-threaded and deterministic:
[Test]
public void Test()
{
var testScheduler = new TestScheduler();
var o = Observable
.Defer(() => Observable
.Start(() => { throw new InvalidOperationException(); }, testScheduler)
.PublishLast()
.ConnectUntilCompleted());
var observer = testScheduler.CreateObserver<Unit>();
o.Subscribe(observer);
testScheduler.Start();
CollectionAssert.IsNotEmpty(observer.Messages);
Assert.AreEqual(NotificationKind.OnError, observer.Messages[0].Value.Kind);
}
An alternative way to achieve your requirements could be:
var lazy = new Lazy<Task>(async () => { /* execute once */ }, isThreadSafe: true);
var o = Observable.FromAsync(() => lazy.Value);
When subscribed for the first time, lazy would create (and execute) the task. For other subscriptions, lazy would return the same (possibly already completed or failed) task.
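A small usage sketch of that idea (the int result and the Console output are illustrative assumptions, not part of the original answer):
var lazy = new Lazy<Task<int>>(async () =>
{
    Console.WriteLine("Executing once");
    await Task.Delay(100);
    return 42;
}, isThreadSafe: true);

var o = Observable.FromAsync(() => lazy.Value);

// Both subscriptions observe the result (or the error) of the same single execution.
o.Subscribe(x => Console.WriteLine($"first: {x}"), e => Console.WriteLine($"first error: {e.Message}"));
o.Subscribe(x => Console.WriteLine($"second: {x}"), e => Console.WriteLine($"second error: {e.Message}"));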

Cancel Operations / UI Notification and UI Information?

I'm currently working on a small project that uses Tasks.Dataflow and I'm a little bit confused about UI notifications. I want to separate my "Pipeline" from the UI in another class called PipelineService, but I'm unable to notify the UI about cancelled operations or data that should be shown in the UI. How can this be handled in the right manner?
Code:
private void btnStartPipeline_Click(object sender, EventArgs e)
{
btnStartPipeline.Enabled = false;
btnStopPipeline.Enabled = true;
cancellationToken = new CancellationTokenSource();
if (head == null)
{
head = pipeline.SearchPipeline();
}
head.Post(AppDirectoryNames.STORE_PATH);
}
private void btnStopPipeline_Click(object sender, EventArgs e)
{
cancellationToken.Cancel();
}
These methods belong to Form1.cs. head is of type ITargetBlock<string>.
public ITargetBlock<string> SearchPipeline()
{
var search = new TransformBlock<string, IEnumerable<FileInfo>>(path =>
{
try
{
return Search(path);
}
catch (OperationCanceledException)
{
return Enumerable.Empty<FileInfo>();
}
});
var move = new ActionBlock<IEnumerable<FileInfo>>(files =>
{
try
{
Move(files);
}
catch (OperationCanceledException ex)
{
throw ex;
}
});
var operationCancelled = new ActionBlock<object>(delegate
{
form.Invoke(form._update);
},
new ExecutionDataflowBlockOptions
{
TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
});
search.LinkTo(move);
search.LinkTo(operationCancelled);
return search;
}
Invoke doesn't take effect with delegate methods. What am I doing wrong here?
At first, I didn't understand why you thought your code should work. The way you set up your dataflow network, each IEnumerable<FileInfo> generated by the search block is first offered to the move block. Only if the move block didn't accept it (which never happens here) would it be sent to the operationCancelled block. That doesn't seem to be what you want at all.
After looking at the walkthrough you seem to be basing your code on, it does cancellation similarly to you, but with one significant difference: it uses LinkTo() with a predicate, so the block for normal processing rejects the message that signifies cancellation and it flows on to the cancellation block. If you want to do the same, you also need to use LinkTo() with a predicate. And since I don't think an empty sequence is a good choice to signify cancellation, I think you should switch to null too.
Also, you don't need to use form.Invoke() if you're already using TaskScheduler.FromCurrentSynchronizationContext(); they do basically the same thing.
public ITargetBlock<string> SearchPipeline()
{
var search = new TransformBlock<string, IEnumerable<FileInfo>>(path =>
{
try
{
return Search(path);
}
catch (OperationCanceledException)
{
return null;
}
});
var move = new ActionBlock<IEnumerable<FileInfo>>(files =>
{
try
{
Move(files);
}
catch (OperationCanceledException)
{
// swallow the exception; we don't want to fault the block
}
});
var operationCancelled = new ActionBlock<object>(_ => form._update(),
new ExecutionDataflowBlockOptions
{
TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
});
search.LinkTo(move, files => files != null);
search.LinkTo(operationCancelled);
return search;
}
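Neither snippet above wires the form's CancellationTokenSource into the pipeline; here is a hedged sketch of what that could look like, assuming the token is passed into SearchPipeline (ExecutionDataflowBlockOptions does expose a CancellationToken property):
public ITargetBlock<string> SearchPipeline(CancellationToken token)
{
    var options = new ExecutionDataflowBlockOptions
    {
        // hypothetical wiring: the token comes from the form's CancellationTokenSource,
        // so btnStopPipeline's Cancel() stops the block from accepting and processing messages
        CancellationToken = token
    };

    var search = new TransformBlock<string, IEnumerable<FileInfo>>(path => Search(path), options);

    // ...link move/operationCancelled exactly as in the code above...
    return search;
}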
