How to enforce object construction on first subscription only? (C#)

Taking my first steps with Rx I am stuck here:
public class DisposableResourceDemo : IDisposable
{
    public DisposableResourceDemo() {
        Console.WriteLine("DisposableResourceDemo constructor.");
    }
    public void Dispose() {
        Console.WriteLine("DisposableResourceDemo.Dispose()");
    }
    public void SideEffect() {
        Console.WriteLine("DisposableResourceDemo.SideEffect()");
    }
}
[Test]
public void ShowBehaviourOfRxUsing()
{
    var test = Observable.Using(
        () =>
        {
            // This should happen exactly once, independent of number of subscriptions,
            // object should be disposed on last subscription disposal or OnCompleted call
            return new DisposableResourceDemo();
        },
        (dr) =>
        {
            return Observable.Create<string>(
                (IObserver<string> observer) =>
                {
                    dr.SideEffect();
                    var dummySource = Observable.Return<string>("Some Text");
                    return dummySource.Subscribe(observer);
                });
        }).Publish().RefCount();

    Console.WriteLine("before 1st subscription.");
    test.Subscribe(Console.WriteLine, () => Console.WriteLine("OnCompleted in 1st."));
    Console.WriteLine("before 2nd subscription.");
    test.Subscribe(Console.WriteLine, () => Console.WriteLine("OnCompleted in 2nd."));
}
To my surprise, the code above yields:
before 1st subscription.
DisposableResourceDemo constructor.
DisposableResourceDemo.SideEffect()
Some Text
OnCompleted in 1st.
DisposableResourceDemo.Dispose()
before 2nd subscription.
--> [happy with missing "Some Text" here]
OnCompleted in 2nd.
--> [unhappy with second instantiation here]
DisposableResourceDemo constructor.
DisposableResourceDemo.SideEffect()
DisposableResourceDemo.Dispose()
Please note that calling Connect() manually after both subscriptions is not what I want here, even though the output is then as expected.

I am not totally sure what you are trying to achieve here. It seems that you want to share the observable sequence and its related resources. The standard way to do this is with the ConnectableObservable types you get from .Replay(), .Publish(), etc.
You say you don't want to use .Connect(), and instead you use .RefCount(), which is very common. However, your sequence completes. You are also using the Subscribe(...) extension method, which internally creates an auto-detaching observer, i.e. when the sequence completes, it will disconnect.
So my question is: should the internal sequence actually complete?
If the answer is yes, then why would the 2nd subscription get the OnCompleted notification? It has already happened; it is in the past. Maybe you do want to replay the OnCompleted, in which case .Replay(1) may be what you want.
If the answer is no, then you can easily fix this by putting a Concat(Observable.Never<string>()) either before the .Publish() or after the Observable.Return.
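To make that concrete, here is a minimal sketch of the test above with the Concat(Observable.Never<string>()) applied after the Observable.Return (names reused from the question; the explicit Dispose calls are only there to show when the resource is released):

    var test = Observable.Using(
            () => new DisposableResourceDemo(),
            dr => Observable.Create<string>(observer =>
            {
                dr.SideEffect();
                return Observable.Return("Some Text")
                                 .Concat(Observable.Never<string>())
                                 .Subscribe(observer);
            }))
        .Publish().RefCount();

    var first = test.Subscribe(Console.WriteLine);   // constructor + SideEffect happen here
    var second = test.Subscribe(Console.WriteLine);  // no second construction

    first.Dispose();
    second.Dispose();   // the resource is disposed here, when the ref count drops to zero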


Delay Returns with NSubstitute

I have an interface IDiscosClient. For testing/demo purposes while I'm developing the app, I want a mock that returns a new model when the .GetSingle<T>() method is called, with a random delay of between 1 and 5 seconds. This is mostly so I can see that all of my various loading spinner components and whatnot work.
So, I thought I'd be able to do something like this:
Fixture fixture = new();
fixture.Customize(new DiscosModelFixtureCustomizationNoLinks());

builder.Services.AddTransient(_ =>
{
    IDiscosClient client = Substitute.For<IDiscosClient>();
    DiscosObject obj = fixture.Create<DiscosObject>();
    client.GetSingle<DiscosObject>(Arg.Any<string>())
          .Returns(Task.Delay(Random.Shared.Next(1000, 5000)).ContinueWith(_ => obj));
    return client;
});
However, while there seems to be a delay when I first call the method, once this has resolved, it just seems to return the completed task with the same model in it every time I call it for that IDiscosClient instance.
Is there a simple enough way to accomplish this?
So the issue is that the code above creates the Task once, when the Returns call is configured, and then hands back that same (already completed) task on every subsequent call.
To fix this, we can either change the code above to:
client.GetSingle<DiscosObject>(Arg.Any<string>())
      .Returns(_ => Task.Delay(Random.Shared.Next(1000, 5000)).ContinueWith(_ => obj));
Or, for legibility's sake, we can extract it into a method and make the whole code block:
builder.Services.AddTransient(_ =>
{
    IDiscosClient client = Substitute.For<IDiscosClient>();
    client.GetSingle<DiscosObject>(Arg.Any<string>()).Returns(GetDiscosObject);
    return client;
});

async Task<DiscosObject> GetDiscosObject(CallInfo _)
{
    await Task.Delay(Random.Shared.Next(1000, 5000));
    return fixture.Create<DiscosObject>();
}
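As a quick sanity check (a hypothetical snippet, assuming client is an IDiscosClient resolved from the container registered above), each call now builds a fresh task, so the delay and the returned object differ between calls:

    // Hypothetical usage in an async method: each await gets its own random delay
    // and a freshly generated DiscosObject.
    DiscosObject first = await client.GetSingle<DiscosObject>("some-id");   // ~1-5 s
    DiscosObject second = await client.GetSingle<DiscosObject>("other-id"); // new delay, new object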

How to mock a method within a non-mocked class using Moq?

I want to test the Save() method, which resides inside the Contributor class. This method, in turn, opens a dialog which, when it has finished loading, raises an event that triggers the method I want to mock: PushToPortal.
internal class Contributor
{
    private readonly IEntryPointWrapper _entryPoint;
    private readonly ISyncDialog _dialog;

    public event Action OnPushToWeldCompleted;

    internal void Save(Document document)
    {
        ...
        void Ptp() => PushToPortal(information);
        _dialog.ShowDialog(Ptp);
    }
}
When ShowDialog is called, OnLoadingCompleted += Ptp; is invoked.
This is where the problem starts. PushToPortal looks like this:
internal virtual void PushToPortal(Information information)
{
    var pushToPortal = Task.Run(() =>
    {
        var result = _entryPoint.ProcessElements(information);
    });
    pushToPortal.ContinueWith(
        task => OnPushToPortalCompleted?.Invoke(),
        TaskScheduler.FromCurrentSynchronizationContext());
}
Basically, when I run the tests, they keep going while the async work is still processing, and by the time I assert, the callback has not retrieved the needed information unless I use Thread.Sleep, which is not a solution I'd like.
The working solution (using Thread.Sleep) would be:
_entryPointMock.Setup(epm => epm.ProcessElements(It.IsAny<Information>()))
    .Callback<WeldInformation>(information => actualWeldInformation = information)
    .Returns(new InfoResult { Status = Status.Succeed, InfoCount = 1 });

Thread.Sleep(5000);
_contributor.Save(document);
I tried mocking the PushToPortal method and using a Callback to retrieve its arguments, without going into ProcessElements, but it does not seem to do anything at all.
_entryPointMock.Setup(epm => epm.ProcessElements(It.IsAny<Information>()))
    .Returns(new InfoResult { Status = Status.Succeed, InfoCount = 1 });

var mock = new Mock<Contributor>();
mock.CallBase = true;
mock.Setup(x => x.PushToPortal(It.IsAny<Information>()))
    .Callback<Information>(information => actualInformation = information);

_contributor.Save(document);
Therefore, how can I mock PushToPortal properly so that I retrieve the information without actually entering the new thread that processes the elements? I am looking for a solution that does not involve changing the current code very much; one option I thought of (and would not like to implement) would be breaking the functionality down and returning a Task.
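One thing worth noting about the attempt above: Save is still called on _contributor rather than on the mock's object, so the PushToPortal setup never takes effect. A rough sketch of how such a partial mock is usually wired up with Moq follows; the Contributor constructor parameters and the ISyncDialog.ShowDialog(Action) shape are assumptions, since the full class is not shown, and mocking internal virtual members also requires [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")] in the production assembly.

    // Sketch only: constructor shape and dialog setup are assumptions.
    var entryPointMock = new Mock<IEntryPointWrapper>();
    entryPointMock.Setup(epm => epm.ProcessElements(It.IsAny<Information>()))
        .Returns(new InfoResult { Status = Status.Succeed, InfoCount = 1 });

    var dialogMock = new Mock<ISyncDialog>();
    // Make the dialog invoke the callback it is given, so Save reaches PushToPortal.
    dialogMock.Setup(d => d.ShowDialog(It.IsAny<Action>()))
        .Callback<Action>(ptp => ptp());

    Information actualInformation = null;

    var contributorMock = new Mock<Contributor>(entryPointMock.Object, dialogMock.Object)
    {
        CallBase = true // run the real Save, but intercept PushToPortal below
    };
    contributorMock.Setup(c => c.PushToPortal(It.IsAny<Information>()))
        .Callback<Information>(information => actualInformation = information);

    // Call Save on the mock's object, not on a separately constructed _contributor.
    contributorMock.Object.Save(document);

    // actualInformation should now hold the argument passed to PushToPortal,
    // and ProcessElements is never reached because the real PushToPortal body is bypassed.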

How to unit test that tasks are run synchronously

In my code I have a method such as:
void PerformWork(List<Item> items)
{
    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        foreach (var item in items)
        {
            await itemHandler.PerformIndividualWork(item);
        }
    });
}
Here Item is just a known model and itemHandler just does some work based on the model (the ItemHandler class is defined in a separately maintained code base, shipped as a NuGet package, that I'd rather not modify).
The purpose of this code is to have work done for a list of items in the background but synchronously.
As part of the work, I would like to create a unit test to verify that when this method is called, the items are handled synchronously. I'm pretty sure the issue can be simplified down to this:
await MyTask(1);
await MyTask(2);
Assert.IsTrue(/* MyTask with arg 1 was completed before MyTask with arg 2 */);
The part I can easily unit test is that the call order is maintained. For example, using NSubstitute I can check the method call order on the library code:
Received.InOrder(() =>
{
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "First item"));
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Second item"));
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Third item"));
});
But I'm not quite sure how to ensure that they aren't run in parallel. I've had several ideas that seem bad, like mocking the library to add an artificial delay when PerformIndividualWork is called and then either checking the elapsed time of the whole queued background task or checking the timestamps of the itemHandler's received calls for a minimum gap between them. For instance, if I mock PerformIndividualWork to delay 500 milliseconds and I'm expecting three items, I could check the elapsed time:
stopwatch.Start();
// I have an interface instead of directly calling HostingEnvironment, so I can access the task being queued here
backgroundTask.Invoke(...);
stopwatch.Stop();
Assert.IsTrue(stopwatch.ElapsedMilliseconds > 1500);
But that doesn't feel right and could lead to false positives. Perhaps the solution lies in modifying the code itself; however, I can't think of a way of meaningfully changing it to make this sort of unit test (testing that tasks are run in order) possible. We'll definitely have system/integration testing to ensure the issue caused by the items being processed asynchronously doesn't happen, but I would like to have test coverage at this level as well.
Not sure if this is a good idea, but one approach could be to use an itemHandler that will detect when items are handled in parallel. Here is a quick and dirty example:
public class AssertSynchronousItemHandler : IItemHandler
{
    private volatile int concurrentWork = 0;
    public List<Item> Items = new List<Item>();

    public Task PerformIndividualWork(Item item) =>
        Task.Run(() => {
            var result = Interlocked.Increment(ref concurrentWork);
            if (result != 1) {
                throw new Exception($"Expected 1 work item running at a time, but got {result}");
            }
            Items.Add(item);
            var after = Interlocked.Decrement(ref concurrentWork);
            if (after != 0) {
                throw new Exception($"Expected 0 work items running once this item finished, but got {after}");
            }
        });
}
There are probably big problems with this, but the basic idea is to check how many items are already being handled when we enter the method, then decrement the counter and check there are still no other items being handled. With threading stuff I think it is very hard to make guarantees about things from tests alone, but with enough items processed this can give us a little confidence that it is working as expected:
[Fact]
public void Sample()
{
    var handler = new AssertSynchronousItemHandler();
    var subject = new Subject(handler);
    var input = Enumerable.Range(0, 100).Select(x => new Item(x.ToString())).ToList();

    subject.PerformWork(input);

    // With the code from the question we don't have a way of detecting
    // when `PerformWork` finishes. If we can't change this we need to make
    // sure we wait "long enough". Yes this is yuck. :)
    Thread.Sleep(1000);

    Assert.Equal(input, handler.Items);
}
If I modify PerformWork to do things in parallel, the test fails:
public void PerformWork2(List<Item> items)
{
    Task.WhenAll(
        items.Select(item => itemHandler.PerformIndividualWork(item))
    ).Wait(2000);
}

// ---- System.Exception : Expected 1 work item running at a time, but got 4
That said, if it is very important to run synchronously, and that is not apparent from glancing at the async/await implementation, then it may be worth using a more obviously synchronous design, like a queue serviced by only one thread, so that you're guaranteed synchronous execution by design and people won't inadvertently change it to async during refactoring (i.e. it is deliberately synchronous and documented that way).
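For illustration, a rough sketch of that deliberately sequential shape using System.Threading.Channels (IItemHandler and Item are reused from above; the class name and everything else are assumptions, not the code base's actual API):

    using System.Threading.Channels;

    public class SequentialWorkQueue
    {
        private readonly Channel<Item> _channel = Channel.CreateUnbounded<Item>();
        private readonly IItemHandler _itemHandler;

        public SequentialWorkQueue(IItemHandler itemHandler)
        {
            _itemHandler = itemHandler;
            // A single reader loop services the queue, so items can never overlap.
            _ = Task.Run(ProcessAsync);
        }

        public void PerformWork(List<Item> items)
        {
            foreach (var item in items)
            {
                _channel.Writer.TryWrite(item);
            }
        }

        private async Task ProcessAsync()
        {
            await foreach (var item in _channel.Reader.ReadAllAsync())
            {
                await _itemHandler.PerformIndividualWork(item);
            }
        }
    }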

Alternative in a situation of recurring Task demand

I have an observer module which takes care of subscriptions to a reactive stream I have created from Kafka. Sadly, I need to Poll in order to receive messages from Kafka, so I need to dedicate one background thread to that. My first solution was this one:
public void Poll()
{
    if (Interlocked.Exchange(ref _state, POLLING) == NOTPOLLING)
    {
        Task.Run(() =>
        {
            while (CurrentSubscriptions.Count != 0)
            {
                _consumer.Poll(TimeSpan.FromSeconds(1));
            }
            _state = NOTPOLLING;
        });
    }
}
Now my reviewer suggested that I should use a Task field, because it has statuses that can be checked to see whether it is running or not. This led to this code:
public void Poll()
{
    // checks for statuses: WaitingForActivation, WaitingToRun, Running
    if (_runningStatuses.Contains(_pollingTask.Status)) return;

    // this obviously throws exception once Task already completes and then I want to start it again
    _pollingTask.Start();
}
The Task body remained pretty much the same, but the check changed. Since my logic is that I want to start polling when I have subscriptions and stop when I don't, I need to somehow re-use the Task; but since I can't, I am wondering: do I need to go back to my first implementation, or is there some other neat way of doing this that I am missing?
"Do I need to go back to my first implementation, or is there some other neat way of doing this that I am missing?"
Your first implementation looks fine. You might use a ManualResetEventSlim instead of the enum and Interlocked.Exchange, but that's essentially the same, as long as you have just two states.
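One way to read that suggestion (sketch only; the Start() method, the field names, and the long-running loop are assumptions, not part of the original code): keep a single polling loop alive and park it on the event whenever there are no subscriptions, so Poll() only has to signal it.

    private readonly ManualResetEventSlim _wake = new ManualResetEventSlim(false);

    public void Start() =>
        Task.Factory.StartNew(() =>
        {
            while (true)
            {
                _wake.Wait();   // park here until someone asks us to poll
                _wake.Reset();
                while (CurrentSubscriptions.Count != 0)
                {
                    _consumer.Poll(TimeSpan.FromSeconds(1));
                }
            }
        }, TaskCreationOptions.LongRunning);

    public void Poll() => _wake.Set();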
I think I made a compromise and swapped the Interlocked API for [MethodImpl(MethodImplOptions.Synchronized)]; it lets me keep a simple method body without potentially confusing Interlocked code for a newcomer or less experienced colleague.
[MethodImpl(MethodImplOptions.Synchronized)]
public void Poll()
{
    if (!_polling)
    {
        _polling = true;
        new Task(() =>
        {
            while (_currentSubscriptions.Count != 0)
            {
                _consumer.Poll(TimeSpan.FromSeconds(1));
            }
            _polling = false;
        }, TaskCreationOptions.LongRunning).Start();
    }
}

Does Parallel.ForEach Block?

Does the .NET method Parallel.ForEach block the calling thread? My guess as to the behavior is one of these:
1. Yes, it blocks until the slowest executing item returns.
2. No, it doesn't block and returns control immediately; the items to run in parallel are executed on background threads.
Or perhaps something else is happening; does anyone know for sure?
This question came up when implementing this in a logging class:
public class MultipleLoggingService : LoggingServiceBase
{
    private readonly List<LoggingServiceBase> loggingServices;

    public MultipleLoggingService(List<LoggingServiceBase> loggingServices)
    {
        this.loggingServices = loggingServices;
        LogLevelChanged += OnLogLevelChanged;
    }

    private void OnLogLevelChanged(object sender, LogLevelChangedArgs args)
    {
        loggingServices.ForEach(l => l.LogLevel = LogLevel);
    }

    public override LogMessageResponse LogMessage(LogMessageRequest request)
    {
        if (request.LogMessage)
            Parallel.ForEach(loggingServices, l => l.LogMessage(request));

        return new LogMessageResponse { MessageLogged = request.LogMessage };
    }
}
Notice the LogMessage method calls some other logging services. I need that part to return immediately, so it doesn't block the calling thread.
Update: based on comments from others, we have confirmed the behavior is #1, so I have taken the advice to use the Task library and rewritten the loop like this:
if (request.LogMessage)
    foreach (var loggingService in loggingServices)
        Task.Factory.StartNew(() => loggingService.LogMessage(request));
Number 1 is correct; Parallel.ForEach does not return until the loop has completed. If you don't want that behavior, you can simply execute your loop as a Task and run it on another thread.
Regarding your update (StartNew in a normal foreach): this may not be optimal for large collections, and you don't get a single place to handle errors. Your loggingServices probably doesn't hold thousands of items, but the error handling remains a concern.
Consider:
Task.Factory.StartNew(() =>
{
    try
    {
        Parallel.ForEach(loggingServices, l => l.LogMessage(request));
    }
    catch (SomeException ex)
    {
        // at least try to log it ...
    }
});
