Why does Observable.Do throw InvalidOperationException "Sequence contains no elements" - c#

I get an (IMHO) unexpected InvalidOperationException ("Sequence contains no elements") in the code below, where I believe this should not happen. Please see the code, background info, and description below.
The Issue
For the sake of easy reproduction and demonstration, the code in the unit test using my SUT Effect is commented out and replaced with code that should make clear what I am trying to do and what I am trying to test.
So, I can reduce the question to: "Why do I get a 'Sequence contains no elements' exception when executing Observable.Do, when this is a perfectly valid use case?":
IObservable<int> observable = Observable.Empty<int>();
And I want to test exactly this: an empty sequence that completes.
Unit Test
public class EffectTests {
    [Fact]
    public async Task TestEmpty() {
        // IObservable<int> observable = Effect<Unit, int>.Empty.Invoke()
        //     .Delay(System.TimeSpan.FromSeconds(1.2));
        IObservable<int> observable = Observable.Empty<int>()
            .Delay(System.TimeSpan.FromSeconds(1.2));
        // Exception has occurred: CLR/System.InvalidOperationException
        // Exception thrown: 'System.InvalidOperationException' in System.Reactive.dll: 'Sequence contains no elements.'
        //   at System.Reactive.Subjects.AsyncSubject`1.GetResult()
        //   at EffectTests.<TestEmpty>d__0.MoveNext() in
        await observable.Do(
            onNext: _ => AssertEx.Fail()
        );
    }
}

public static class AssertEx {
    public static void Fail(string message = "")
        => throw new Xunit.Sdk.XunitException(message);
}
public static class AssertEx {
public static void Fail(string message = "")
=> throw new Xunit.Sdk.XunitException(message);
}
Background
I hope the commented-out code and the xUnit test setup show what I am trying to do:
I have a custom struct Effect which basically stores a function returning an IObservable<T>. This already shows that an Effect can actually return any kind of observable (via the function Invoke()). That observable may emit values synchronously or asynchronously, it might fail immediately or succeed, and it might emit one or more values, or none at all.
In my unit test, in the case of Effect.Empty, the observable is actually an empty sequence and thus intended not to emit any values. So I deliberately added an AssertEx.Fail() for when the onNext callback is called.
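For illustration, here is a minimal sketch of what such an Effect wrapper might look like. The names and shape are assumptions based on the description above, not the actual implementation:

```csharp
using System;

// Hypothetical sketch (not the actual implementation): an Effect merely
// stores a function that produces an IObservable<T> when invoked.
public readonly struct Effect<TEnv, T>
{
    private readonly Func<IObservable<T>> run;

    public Effect(Func<IObservable<T>> run) => this.run = run;

    // The Effect makes no assumptions about how the returned observable
    // behaves: sync or async, failing, empty, or emitting many values.
    public IObservable<T> Invoke() => run();
}
```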
Additional Info
So, the documentation to Observable.Do says:
// Summary:
// Invokes an action for each element in the observable sequence, and propagates
// all observer messages through the result sequence. This method can be used for
// debugging, logging, etc. of query behavior by intercepting the message stream
// to run arbitrary actions for messages on the pipeline.
which clearly does not state that it would throw an exception if the sequence is empty. It also clearly states the use case, "debugging, logging, etc.", so I thought it would be a perfect fit for my use case, i.e. unit tests.
My Questions
Why does Observable.Do throw the exception?
How can I alleviate the issue in my Unit Tests?
There are a lot of questions about "Sequence contains no elements" on SO already, but as far as I could tell, none specifically mentions it in regard to Observable.Do. Any existing answer is appreciated, too.
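For context on where the exception originates: the stack trace above points at AsyncSubject`1.GetResult(), i.e. the await, not Do itself. Awaiting an IObservable<T> in System.Reactive awaits its last element, so an empty sequence throws. A sketch of two possible workarounds (assuming the System.Reactive package; this is not from the original post):

```csharp
using System;
using System.Reactive.Linq;
using System.Threading.Tasks;

public static class AwaitingEmptySequences
{
    public static async Task<int> DemoAsync()
    {
        IObservable<int> observable = Observable.Empty<int>();

        // Option 1: inject default(T) when the sequence completes empty,
        // so awaiting the last element can no longer throw.
        int a = await observable.DefaultIfEmpty();

        // Option 2: LastOrDefaultAsync yields default(T) instead of
        // throwing "Sequence contains no elements".
        int b = await observable.LastOrDefaultAsync();

        return a + b;
    }
}
```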

Related

More flexible Assert.ThrowsException?

MSTest now allows you to test that a certain piece of code throws a certain exception type, through:
Assert.ThrowsException<MyException>(() => foo.Bar());
However, I need a more flexible exception test; I need to not only test the type of the exception, but check that its message starts with a certain string (rather than matching the exact exception string). Is there a way I can do this with MSTest? If not, what's the best way for me to do it? Is there another testing framework that handles this better?
Ideally, the Assert would take in a second func that passed in the Exception thrown, and this func could test the Exception however it wanted and return true or false to indicate that the assert had succeeded or failed.
var ex = Assert.ThrowsException<MyException>(() => foo.Bar());
Assert.IsTrue(ex.Message.StartsWith("prefix"));
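Framework aside, the shape described (an assert that hands the caught exception to a predicate) can be sketched with a plain helper. The names here are hypothetical, not part of MSTest:

```csharp
using System;

public static class AssertThat
{
    // Hypothetical helper: runs the action, expects an exception of type
    // TEx, and lets a predicate perform arbitrary further checks on it.
    public static TEx Throws<TEx>(Action action, Func<TEx, bool> check)
        where TEx : Exception
    {
        try
        {
            action();
        }
        catch (TEx ex) when (check(ex))
        {
            return ex; // expected type, and the predicate passed
        }
        // A wrong-type exception, or one failing the predicate, propagates
        // above; reaching here means no exception was thrown at all.
        throw new Exception($"Expected {typeof(TEx).Name} matching the predicate.");
    }
}
```

Usage then reads close to the ideal described in the question: `AssertThat.Throws<MyException>(() => foo.Bar(), ex => ex.Message.StartsWith("prefix"));`.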

Asserting against handled exceptions in NUnit

I have a method that takes a JSON object and puts it through several stages of processing, updating values in the database at each stage. We wanted this method to be fault tolerant, and decided that the desired behaviour would be, if any processing stage failed, log an error to the database and carry on with the next stage of processing, rather than aborting.
I've just made several changes to the behaviour of one of the processing steps. I then ran our unit test suite, expecting several of the tests to fail due to the new behaviour and point me at potential problem areas. Instead, the tests all passed.
After investigating, I realised that the mock data the tests run against didn't include certain key values important for the new behaviour. The tests were in fact throwing exceptions when they ran, but the exceptions were being caught and handled - and, because the tests don't run with a logger enabled, they were completely suppressed. So the new code didn't change the data in a way that would cause the tests to fail, because it was silently erroring instead.
This seems like the sort of problem that unit tests are there to catch, and the fact that they showed no trace means they're not serving their purpose. Is there any way that I can use NUnit to assert that no exception was ever thrown, even if it was handled? Or alternatively, is there a sensible way to refactor that would expose this issue better?
(Working in C#, but the question seems fairly language-agnostic)
First and foremost, in the scenario you describe, the presence or absence of exceptions is secondary. If you wrote the code to produce a desired result while catching and handling exceptions, then that result - whether it's a return value or some other effect - is the most important thing to test.
If the exceptions that you didn't see caused that result to be incorrect, then testing for the correct result will always reveal the problems. If you don't know what the expected result will be and are only interested in whether or not exceptions are getting handled, something is wrong. We can never determine whether or not anything works correctly according to whether or not it throws exceptions.
That aside, here's how to test whether or not your code is catching and logging exceptions that you wouldn't otherwise be able to observe:
If you're injecting a logger that looks something like this:
public interface ILogger
{
    void LogError(Exception ex);
    void LogMessage(string message);
}
...then a simple approach is to create a test double which stores the exceptions so that you can inspect it and see what was logged.
public class ListLoggerDouble : ILogger
{
    public List<Exception> Exceptions = new List<Exception>();
    public List<string> Messages = new List<string>();

    public void LogError(Exception ex)
    {
        Exceptions.Add(ex);
    }

    public void LogMessage(string message)
    {
        Messages.Add(message);
    }
}
After you've executed the method you're testing you can assert that a collection contains the exception(s) or message(s) you expect. If you wish you can also verify that there are none, although it seems like that might be redundant if the result you're testing for is correct.
I wouldn't create a logger that throws an exception and then write a test that checks for a thrown exception. That makes it look like the expected behavior of your code is to throw an exception, which is exactly the opposite of what it does. Tests help us to document expected behaviors. Also, what will you do if you want to verify that you caught and logged two exceptions?
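Sketched usage, assuming some hypothetical method under test that catches and logs instead of rethrowing (the Processor class below is illustrative, not from the question):

```csharp
using System;
using System.Collections.Generic;

public interface ILogger
{
    void LogError(Exception ex);
    void LogMessage(string message);
}

// Test double that records everything logged, so handled exceptions
// become observable to the test.
public class ListLoggerDouble : ILogger
{
    public List<Exception> Exceptions = new List<Exception>();
    public List<string> Messages = new List<string>();
    public void LogError(Exception ex) => Exceptions.Add(ex);
    public void LogMessage(string message) => Messages.Add(message);
}

// Hypothetical method under test that swallows and logs failures.
public class Processor
{
    private readonly ILogger logger;
    public Processor(ILogger logger) => this.logger = logger;

    public void Process(string json)
    {
        try
        {
            if (json == null) throw new ArgumentNullException(nameof(json));
            // ... further processing stages ...
        }
        catch (Exception ex)
        {
            logger.LogError(ex); // handled, but now visible to the test
        }
    }
}
```

A test can then assert that `logger.Exceptions` is empty after a successful run, or contains exactly the expected failures after a faulty one.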

Returning multiple assert messages in one test

I'm running some tests on my code at the moment. My main test method is used to verify some data, but within that check there are many points at which it could fail.
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I type is displayed as expected. However, if my method fails multiple times, it only shows the first error. Only when I fix that do I discover the second error.
None of my tests are dependent on any others that I'm running. Ideally, what I'd like is for my failure message to display every failed message in one pass. Is such a thing possible?
As per the comments, here are how I'm setting up a couple of my tests in the method:
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        Assert.Fail(" Search Display View count was different from what was expected");
    }
    if (sdv.VirtualID != expectedSdVirtualId)
    {
        Assert.Fail(" Search Display View virtual id was different from what was expected");
    }
    if (sdv.EntityType != expectedSdvEntityType)
    {
        Assert.Fail(" Search Display View entity type was different from what was expected");
    }
    return true;
}
Why not have a string/stringbuilder that holds all the fail messages, check for its length at the end of your code, and pass it into Assert.Fail? Just a suggestion :)
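That suggestion could look roughly like the sketch below: collect every failure message, then fail once at the end with all of them. The helper and its parameters are illustrative stand-ins for the checks in the question:

```csharp
using System.Text;

public static class ValidationHelper
{
    // Hypothetical sketch: instead of stopping at the first Assert.Fail,
    // gather every failure message into one string.
    public static string Validate(int actualCount, int expectedCount,
                                  int actualId, int expectedId)
    {
        var failures = new StringBuilder();

        if (actualCount != expectedCount)
            failures.AppendLine($"Count was {actualCount}, expected {expectedCount}");
        if (actualId != expectedId)
            failures.AppendLine($"Virtual id was {actualId}, expected {expectedId}");

        // Empty result means every check passed; otherwise the caller does
        // a single Assert.Fail(result) containing all messages at once.
        return failures.ToString();
    }
}
```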
The NUnit test runner (assuming that's what you are using) is designed to break out of the test method as soon as anything fails.
So if you want every failure to show up, you need to break up your test into smaller, single assert ones. In general, you only want to be testing one thing per test anyways.
On a side note, using Assert.Fail like that isn't very semantically correct. Consider using the other built-in methods (like Assert.Equals) and only using Assert.Fail when the other methods are not sufficient.
None of my tests are dependent on any others that I'm running. Ideally
what I'd like is the ability to have my failure message to display
every failed message in one pass. Is such a thing possible?
It is possible only if you split your test into several smaller ones.
If you are afraid of code duplication, which usually exists when tests are complex, you can use setup methods. They are usually marked by attributes:
NUnit - SetUp,
MsTest - TestInitialize,
XUnit - constructor.
The following code shows how your test can be rewritten:
public class HowToUseAsserts
{
    int expectedSdvCount = 0;
    int expectedSdVirtualId = 0;
    string expectedSdvEntityType = "";
    EntityModelMultiIndexEntities context;

    public HowToUseAsserts()
    {
        context = new EntityModelMultiIndexEntities();
    }

    [Fact]
    public void Search_display_view_count_should_be_the_same_as_expected()
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    }

    [Fact]
    public void Search_display_view_virtual_id_should_be_the_same_as_expected()
    {
        context.VirtualID.Should().Be(expectedSdVirtualId);
    }

    [Fact]
    public void Search_display_view_entity_type_should_be_the_same_as_expected()
    {
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
So your test names could provide the same information as you would write as messages:
Right now, I've set up multiple Assert.Fail statements within my
method and when the test is failed, the message I type is displayed as
expected. However, if my method fails multiple times, it only shows
the first error. When I fix that, it is only then I discover the
second error.
This behavior is correct and many testing frameworks follow it.
I'd recommend you stop using Assert.Fail(), because it forces you to write specific messages for every failure. Common asserts provide good enough messages, so you can replace your code with the following lines:
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
Assert.Equal(expectedSdvCount, context.SearchDisplayViews.Count());
Assert.Equal(expectedSdVirtualId, context.VirtualID);
Assert.Equal(expectedSdvEntityType, context.EntityType);
But I'd recommend using should-frameworks like Fluent Assertions, which make your code more readable and provide better output.
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
context.VirtualID.Should().Be(expectedSdVirtualId);
context.EntityType.Should().Be(expectedSdvEntityType);

How to create a Rhino Mock that has different behavior each time a method is called

I am trying to figure out how to create a mock using rhino mocks that will call the original method the first time a specific method (BasicPublish) is called, throw an exception the second time, then call the original method for all remaining calls to the method.
The original method signature looks like this:
public virtual void BasicPublish(
string exchange, string routingKey, IBasicProperties basicProperties, byte[] body)
Is this possible?
Here is what I have tried that I thought would work. My test is now calling the original method. But after setting a breakpoint in my method and debugging my calls to BasicPublish, all the parameters are being passed through as null. Am I using "CallOriginalMethod(OriginalCallOptions.CreateExpectation)" incorrectly?
mockModel.Stub(a => a.BasicPublish(
        Arg<string>.Is.Anything, Arg<string>.Is.Anything,
        Arg<IBasicProperties>.Is.Anything, Arg<byte[]>.Is.Anything))
    .CallOriginalMethod(OriginalCallOptions.NoExpectation)
    .Repeat.Once();
mockModel.Stub(a => a.BasicPublish(
        Arg<string>.Is.Anything, Arg<string>.Is.Anything,
        Arg<IBasicProperties>.Is.Anything, Arg<byte[]>.Is.Anything))
    .Throw(new DivideByZeroException())
    .Repeat.Once();
mockModel.Stub(a => a.BasicPublish(
        Arg<string>.Is.Anything, Arg<string>.Is.Anything,
        Arg<IBasicProperties>.Is.Anything, Arg<byte[]>.Is.Anything))
    .CallOriginalMethod(OriginalCallOptions.NoExpectation)
    .Repeat.Any();
Additional context: this is to be used in a unit test (using NUnit) to test the behavior of a custom log4net RabbitMQ appender. The test case is as follows: given a live connection to a queue, when the connection becomes faulty (simulated by an exception on a call to BasicPublish(...)), then that specific log message is ignored and subsequent log messages are processed normally.
I wasn't able to find a way to do this with Rhino but realized I could just create a fake myself that would behave the way I expected. This solution turned out to be very readable with proper naming conventions.
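Such a hand-rolled fake might look like the sketch below. The IPublisher interface and FlakyPublisher name are assumptions for illustration; the real RabbitMQ client interface is larger:

```csharp
using System;

// Minimal stand-in for the interface being faked.
public interface IPublisher
{
    void BasicPublish(string exchange, string routingKey, object props, byte[] body);
}

// Hand-rolled fake: delegates to an inner publisher, but simulates a
// faulty connection by throwing on exactly the second call.
public class FlakyPublisher : IPublisher
{
    private readonly IPublisher inner;
    private int calls;

    public FlakyPublisher(IPublisher inner) => this.inner = inner;

    public void BasicPublish(string exchange, string routingKey, object props, byte[] body)
    {
        if (++calls == 2)
            throw new DivideByZeroException("Simulated connection failure");
        inner.BasicPublish(exchange, routingKey, props, body);
    }
}
```

With descriptive names like these, the test reads as a plain statement of the scenario: first publish succeeds, second fails, all later ones succeed again.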

Advice on generic try catch

This is not so much a problem as a request for feedback and thoughts. I have been considering an implementation for methods that have been tested thoroughly by our internal teams. I would like to write a generic exception-catching method and reporting service.
I realize this is not as easy as a "try-catch" block, but it allows for a uniform method of catching exceptions. Ideally I would like to execute a method, provide a failure callback, and log all the parameters from the calling method.
Generic Try-Execute.
public class ExceptionHelper
{
    public static T TryExecute<T, TArgs>(Func<TArgs, T> Method, Func<TArgs, T> FailureCallBack, TArgs Args)
    {
        try
        {
            return Method(Args);
        }
        catch (Exception ex)
        {
            StackTrace stackTrace = new StackTrace();
            string method = "Unknown Method";
            if (stackTrace != null && stackTrace.FrameCount > 0)
            {
                var methodInfo = stackTrace.GetFrame(1).GetMethod();
                if (methodInfo != null)
                    method = string.Join(".", methodInfo.ReflectedType.Namespace, methodInfo.ReflectedType.Name, methodInfo.Name);
            }

            List<string> aStr = new List<string>();
            foreach (var prop in typeof(TArgs).GetProperties().Where(x => x.CanRead && x.CanWrite))
            {
                object propVal = null;
                try
                {
                    propVal = prop.GetValue(Args, null);
                }
                catch
                {
                    propVal = string.Empty;
                }
                // Pass propVal directly; calling ToString() here would throw
                // if the property value is null.
                aStr.Add(string.Format("{0}:{1}", prop.Name, propVal));
            }

            string failureString = string.Format("The method '{0}' failed. {1}", method, string.Join(", ", aStr));
            //TODO: Log To Internal error system

            try
            {
                return FailureCallBack(Args);
            }
            catch
            {
                return default(T);
            }
        }
    }
}
Drawbacks I'm aware of:
Performance loss from using reflection.
MethodBase (methodInfo) may not be available due to optimization.
The try-catch around the error handler. Basically, I could use the TryExecute wrapper for the try-catch around the failure callback; however, that could result in a stack overflow.
Here would be a sample implementation
var model = new { ModelA = "A", ModelB = "B" };

return ExceptionHelper.TryExecute(
    (Model) =>
    {
        throw new Exception("Testing exception handler");
    },
    (Model) =>
    {
        return false;
    },
    model);
Thoughts and comments appreciated.
That's a lot of code to put in a catch, including two more try/catch blocks. Seems like a bit of overkill if you ask me, with a good amount of risk that a further exception can obscure the actual exception and that the error information would be lost.
Also, why return default(T)? Returning defaults or nulls as indications of a problem is usually pretty sloppy. If nothing else, it requires the same conditional to be wrapped around every call to the method to check for the return and respond to... some error that has gone somewhere else now.
Honestly, that usage example looks pretty messy, too. It looks like you'll end up obscuring the actual business logic with the error-trapping code. The entire codebase will look like a series of error traps, with actual business logic hidden somewhere in the entanglement of it. This takes valuable focus off of the actual intent of the application and puts something of background infrastructure importance (logging) at the forefront.
Simplify.
If an exception occurs within a method, you generally have two sensible options:
Catch (and meaningfully handle) the exception within the method.
Let the exception bubble up the stack to be caught elsewhere.
There's absolutely nothing wrong with an exception escaping the scope of the method in which it occurs. Indeed, exceptions are designed to do exactly that, carrying with them useful stack information about what happened and where. (And, if you add meaningful runtime context to the exception, it can also carry information about why.)
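The "add meaningful runtime context" point can be sketched as a catch-wrap-rethrow. The names below (OrderImporter and friends) are illustrative, not from the post:

```csharp
using System;

public static class OrderImporter // hypothetical example
{
    public static void Import(string orderId)
    {
        try
        {
            ParseAndSave(orderId);
        }
        catch (Exception ex)
        {
            // Add the "why" (runtime context) and rethrow; the original
            // exception and its stack trace survive as InnerException.
            throw new InvalidOperationException(
                $"Importing order '{orderId}' failed.", ex);
        }
    }

    private static void ParseAndSave(string orderId)
    {
        if (string.IsNullOrEmpty(orderId))
            throw new ArgumentException("Order id must be non-empty.");
        // ... real work would go here ...
    }
}
```

The caller still decides where the exception is ultimately handled; the method merely enriches it on the way up instead of swallowing it.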
In fact, the compiler even subtly hints at this. Take these two methods for example:
public int Sum(int first, int second)
{
    // TODO: Implement this method
}

public int Product(int first, int second)
{
    throw new NotImplementedException();
}
One of these methods will compile; one of them will not. The compiler error will state that not all code paths return a value in the former method. But why not the latter? Because throwing an exception is a perfectly acceptable exit strategy for a method. It's how the method gives up on what it's doing (the one thing it should be trying to do and nothing more) and lets the calling code deal with the problem.
The code should read in a way that clearly expresses the business concept being modeled. Error handling is an important infrastructure concept, but it's just that... infrastructure. The code should practically scream the business concept being modeled, clearly and succinctly. Infrastructure concerns shouldn't get in the way of that.
This is very rarely going to be useful.
It covers only cases where:
The method has a well-defined means of obtaining an appropriate return value in the face of failure.
You'd actually care to log that it happened.
Now, 2 is very common with exceptions of all sorts, but not where 1 is true too.
1 of course is rare, since in most cases if you could produce a reasonable return value for given parameters by means X you wouldn't be trying means Y first.
It also has a default behaviour of returning default(T) - so null or all zeros - if the fallback doesn't work.
This only works where your case 1 above has "something that just returns null as a result because we don't really care very much what this thing does", or where the called method never returns null, in which case you then test for null, which means that your real error-handling code happens there.
In all, what you've got here is a way in which exceptions that real code could trap must instead be detected by testing (and sometimes testing plus guesswork), and exceptions that would bring down a program at a clear place with nice debugging information will instead put it into a state where you don't know what's going on anywhere; at best, of the few dozen bugs that got logged before something managed to bring it down fully, one of them is probably the actual problem.
When you've a catch on some exception for a particular reason, by all means log the exception. Note that this is not so much to help find bugs (if that exception being raised there is a bug, you shouldn't be catching it there), but to cancel out the fact that having a catch there could hide bugs - i.e. to cancel out the very effect you are deliberately encouraging by putting catches all over the place. (E.g. you expect a regularly hit webservice to fail to connect on occasion, and you can go on for some hours with cached data - so you catch the failure and go on from cache - here you log because if there was a bug meaning you were never trying to hit the webservice correctly, you've just hidden it).
It's also reasonable to have some non-interactive (service or server) app log all exceptions that reach the top of the stack, because there's nobody there to note the exception.
But exceptions are not the enemy, they're the messenger. Don't shoot the messenger.