I use Theory with MemberData like this:
[Theory]
[MemberData(nameof(TestParams))]
public void FijewoShortcutTest(MapMode mapMode)
{
...
and when it works, it is all fine, but when it fails, xUnit still iterates over all the data I pass as parameters. In my case that is a fruitless exercise; I would like to stop short -- i.e. when the first set of parameters makes the test fail, skip the rest (because they will fail as well -- again, that is my case, not a general rule).
So how do I tell xUnit to stop a Theory on the first failure?
The point of a Theory is to have multiple independent tests running the same code on different data. If you only actually want one test, just use a Fact and iterate over the data you want to test within the method:
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Test code here
    }
}
That will mean you can't easily run the test for just one MapMode, though. Unless it takes a really long time to execute the tests for some reason, I'd just live with "if something is badly messed up, I get a lot of broken tests".
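If you go this route, including the current MapMode in an assertion message makes the first failing value obvious, which recovers some of what you lose by giving up per-case reporting. A minimal sketch, assuming a hypothetical RunShortcutCheck helper standing in for the real test body:

[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Stops at the first failing MapMode and names it in the failure output.
        Assert.True(RunShortcutCheck(mapMode), $"Shortcut check failed for MapMode '{mapMode}'");
    }
}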
Let me start by saying that I know about the DebuggerStepThroughAttribute and I'm using it in a number of places with much success.
However, I'm searching for a complementary solution that would work for my specific scenario, which I'll now illustrate...
Say I have a homegrown data-access framework. This framework comes with lots of unit tests which ensure that all my high-level data-access APIs are working as expected. In these tests, there is often a requirement to first seed some test-specific data into a one-off database, then execute the actual test on that data.
The thing is, I might rely on unit tests not just to give me a passive green/red indication about my code, but also to help me zero in on the source of an occasional regression. Given the way I've written the tests, it's easy to imagine that a small subset of them could sometimes give me grief, because the code that performs the test-data seeding and the actual test code both use the same framework APIs at lower levels.
So for example, if my debugging of a failed test happened to require that I place a breakpoint inside one such common method, the debugger would stop there a number of times (maybe an annoyingly large number of times!) before I'd get to the point I'm interested in (the actual test, not the seeding).
Leaving aside the fact that I could theoretically refactor everything and improve decoupling, I'm asking this:
Is there a general way to quickly and easily disable debugger breaking for a specific code block, including any sub-calls made from that block, when any of the executed lines could have a breakpoint associated with it?
The only solution I'm aware of is to use conditional breakpoints. I would need to set a certain globally accessible flag when entering the method I wanted to exclude and clear it when exiting. Any conditional breakpoints would then have to require that the flag not be set.
But this seems tedious, because breakpoints are often added, removed, then added again, etc. Given the rudimentary breakpoint-management support in Visual Studio, this quickly becomes really annoying.
Is there another way? Preferably by manipulating the debugger directly or indirectly, similarly to how the DebuggerStepThroughAttribute does it for a single method's scope?
EDIT:
Here's a contrived example of what I might have:
public class MyFramework
{
    public bool TryDoCommonWork(string s)
    {
        // Picture an actual breakpoint here instead of this line.
        // As it is, the debugger would stop here 3 times during the seeding
        // phase and then one final time during the test phase.
        Debugger.Break();

        if (s != null)
        {
            // Do the work.
            return true;
        }

        return false;
    }
}
[TestClass]
public class MyTests
{
    [TestMethod]
    public void Test()
    {
        var fw = new MyFramework();

        // Seeding stage of test.
        fw.TryDoCommonWork("1");
        fw.TryDoCommonWork("2");
        fw.TryDoCommonWork("3");

        // Test.
        Assert.IsTrue(fw.TryDoCommonWork("X"));
    }
}
What I'm really looking for is something roughly similar to this:
[TestClass]
public class MyTests
{
[TestMethod]
public void Test()
{
var fw = new MyFramework();
// Seeding stage of test with no debugger breaking.
using (Debugger.NoBreakingWhatsoever())
{
fw.TryDoCommonWork("1");
fw.TryDoCommonWork("2");
fw.TryDoCommonWork("3");
}
// Test with normal debugger breaking.
Assert.IsTrue(fw.TryDoCommonWork("X"));
}
}
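For completeness, the flag-based workaround I mentioned above can at least be packaged so the call site reads like this wished-for API. A rough sketch, under the assumption that every conditional breakpoint is given the condition !DebuggerSuppression.IsSuppressed (DebuggerSuppression is a hypothetical helper of mine, not a framework type):

using System;

public static class DebuggerSuppression
{
    // Globally visible flag; every conditional breakpoint checks
    // "!DebuggerSuppression.IsSuppressed" in its condition.
    [ThreadStatic]
    public static bool IsSuppressed;

    public static IDisposable NoBreaking()
    {
        IsSuppressed = true;
        return new Scope();
    }

    private sealed class Scope : IDisposable
    {
        public void Dispose() => IsSuppressed = false;
    }
}

The seeding stage then becomes using (DebuggerSuppression.NoBreaking()) { ... }, but the tedium remains: each breakpoint still needs the condition attached by hand, which is exactly what I'm hoping to avoid.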
Using Selenium in Visual Studio, with NUnit to organize my test cases.
I'm writing a test case that compares two serial numbers with an if statement, like this:
[Test]
public void CompareVariables()
{
    if (string.Equals(serialNumberInfo, serialNumberReport))
    {
        Console.WriteLine($"{serialNumberInfo} and {serialNumberReport} are a match! Proceed!");
    }
    else
    {
        Console.WriteLine($"{serialNumberInfo} and {serialNumberReport} don't match! Cancel test!");
        // Method for stopping the test missing!
    }
}
I want to be able to abort the rest of the test sequence if the serial numbers don't match.
Is there an "end/stop test" method or something similar I could put in the else section?
I think you have a couple of options.
1) Simply throw an exception (and fail the test)
Throwing an exception will fail a unit test. There are loads of different exception types, but the base is simply Exception. You can check the different types of exceptions available here. Where possible, try to pick the exception that most closely represents the error (so for bad arguments, for example, use ArgumentException or some derivative thereof).
Your test would then look something like this:
[Test]
public void CompareVariables()
{
    if (!string.Equals(serialNumberInfo, serialNumberReport))
        throw new Exception($"{serialNumberInfo} and {serialNumberReport} don't match! Cancel test!");

    // The rest of your tests (only run if serialNumberInfo and serialNumberReport are equal).
}
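Following the advice above about picking the closest exception type, the throw might instead look like this; InvalidOperationException is just one plausible reading of a serial-number mismatch:

if (!string.Equals(serialNumberInfo, serialNumberReport))
    throw new InvalidOperationException($"{serialNumberInfo} and {serialNumberReport} don't match! Cancel test!");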
2) Use an assertion (and fail the test)
Unit tests are usually supposed to assert that something desirable happened. If that thing didn't happen then an exception should be thrown (which is often handled for you by some assertion framework).
So you could flip the test to do this:
[Test]
public void CompareVariables()
{
    serialNumberInfo.ShouldBe(serialNumberReport);

    // The rest of your tests (only run if serialNumberInfo and serialNumberReport are equal).
}
This is done with Shouldly, but there are countless assertion frameworks, so pick your favourite. (MSTest has one built in, but I find it less readable; that is a personal preference.)
Note: only use an assertion when you want to explicitly make sure something should have happened, i.e. "this needs to be true for my test to pass", rather than "if this happened, then abort". That's hard to explain, so I hope it makes sense.
Exceptions are for when something went wrong; assertions are for when something should have gone right.
3) Leave the test (and pass the test)
If the test exits without an exception being thrown (either manually or via an assertion framework), then the test is considered a passing test. Therefore, if you wanted to treat this as a pass, you could simply return from the test.
[Test]
public void CompareVariables()
{
    if (string.Equals(serialNumberInfo, serialNumberReport))
    {
        Console.WriteLine($"{serialNumberInfo} and {serialNumberReport} are a match! Proceed!");
    }
    else
    {
        Console.WriteLine($"{serialNumberInfo} and {serialNumberReport} don't match! Cancel test!");
        return;
    }

    // The rest of your tests
}
This will mark the test as passing, but it means the rest of the operations in the test are not run. I would try not to do this, however, unless you really understand why you want it, because you could start passing tests without knowing why they passed (i.e. without asserting anything).
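As a middle ground (not covered above): if what you really want is "neither pass nor fail", NUnit also has Assert.Inconclusive(), which ends the test immediately and reports it as inconclusive rather than silently green:

[Test]
public void CompareVariables()
{
    if (!string.Equals(serialNumberInfo, serialNumberReport))
        Assert.Inconclusive($"{serialNumberInfo} and {serialNumberReport} don't match! Cancelling test.");

    // The rest of your tests (only run if the serial numbers are equal).
}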
I hope that helps
If you want to end the test early without failing it, simply use return.
[Test]
public void MyTest()
{
    // Do some stuff

    if (!shouldContinue)
    {
        return;
    }
}
I do this reasonably often given certain conditions may warrant additional assertions, and other conditions may not. Throwing an exception will fail the test. This will not fail it.
Edit: I just noticed that the other responder mentioned this at the end of their answer. So ignore me :)
I'm running some tests on my code at the moment. My main test method is used to verify some data, but within that check there is a lot of potential for it to fail at any one point.
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I type is displayed as expected. However, if my method fails in multiple places, it only shows the first error. Only when I fix that do I discover the second error.
None of my tests are dependent on any others that I'm running. Ideally, what I'd like is for my failure output to display every failed message in one pass. Is such a thing possible?
As per the comments, here is how I'm setting up a couple of my tests in the method:
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        Assert.Fail("Search Display View count was different from what was expected");
    }

    if (sdv.VirtualID != expectedSdVirtualId)
    {
        Assert.Fail("Search Display View virtual id was different from what was expected");
    }

    if (sdv.EntityType != expectedSdvEntityType)
    {
        Assert.Fail("Search Display View entity type was different from what was expected");
    }

    return true;
}
Why not have a string/StringBuilder that holds all the fail messages, check its length at the end of your code, and pass it into Assert.Fail? Just a suggestion :)
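A minimal sketch of that suggestion, applied to the method from the question (sdv and the expected* fields are assumed to come from the surrounding test setup, as in the original code):

private void ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    var failures = new StringBuilder();

    if (context.SearchDisplayViews.Count() != expectedSdvCount)
        failures.AppendLine("Search Display View count was different from what was expected");

    if (sdv.VirtualID != expectedSdVirtualId)
        failures.AppendLine("Search Display View virtual id was different from what was expected");

    if (sdv.EntityType != expectedSdvEntityType)
        failures.AppendLine("Search Display View entity type was different from what was expected");

    // Report every failure in one pass instead of stopping at the first one.
    if (failures.Length > 0)
        Assert.Fail(failures.ToString());
}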
The NUnit test runner (assuming that's what you are using) is designed to break out of the test method as soon as anything fails.
So if you want every failure to show up, you need to break up your test into smaller, single-assert ones. In general, you only want to be testing one thing per test anyway.
On a side note, using Assert.Fail like that isn't very semantically correct. Consider using the other built-in methods (like Assert.AreEqual) and only using Assert.Fail when the other methods are not sufficient.
None of my tests are dependent on any others that I'm running. Ideally, what I'd like is for my failure output to display every failed message in one pass. Is such a thing possible?
It is possible only if you split your test into several smaller ones.
If you are afraid of the code duplication that usually comes with complex tests, you can use setup methods. They are usually marked by attributes:
NUnit - SetUp,
MsTest - TestInitialize,
XUnit - constructor.
The following code shows how your test can be rewritten:
public class HowToUseAsserts
{
    int expectedSdvCount = 0;
    int expectedSdVirtualId = 0;
    string expectedSdvEntityType = "";

    EntityModel.MultiIndexEntities context;

    public HowToUseAsserts()
    {
        context = new EntityModel.MultiIndexEntities();
    }

    [Fact]
    public void Search_display_view_count_should_be_the_same_as_expected()
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    }

    [Fact]
    public void Search_display_view_virtual_id_should_be_the_same_as_expected()
    {
        context.VirtualID.Should().Be(expectedSdVirtualId);
    }

    [Fact]
    public void Search_display_view_entity_type_should_be_the_same_as_expected()
    {
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
So your test names can provide the same information as the messages you would otherwise write:
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I type is displayed as expected. However, if my method fails in multiple places, it only shows the first error. Only when I fix that do I discover the second error.
This behavior is correct and many testing frameworks follow it.
I'd also recommend you stop using Assert.Fail(), because it forces you to write specific messages for every failure. Common asserts provide good enough messages, so you can replace your code with the following lines:
// Act
var context = new EntityModel.MultiIndexEntities();

// Assert
Assert.Equal(expectedSdvCount, context.SearchDisplayViews.Count());
Assert.Equal(expectedSdVirtualId, context.VirtualID);
Assert.Equal(expectedSdvEntityType, context.EntityType);
But I'd recommend starting to use should-frameworks like Fluent Assertions, which make your code more readable and provide better output.
// Act
var context = new EntityModel.MultiIndexEntities();

// Assert
context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
context.VirtualID.Should().Be(expectedSdVirtualId);
context.EntityType.Should().Be(expectedSdvEntityType);
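On that note, Fluent Assertions also has an AssertionScope (in the FluentAssertions.Execution namespace), which directly addresses the original complaint: every failed assertion inside the scope is collected and reported together when the scope is disposed, instead of the test stopping at the first failure. A sketch, assuming a reasonably recent Fluent Assertions version:

using (new AssertionScope())
{
    context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    context.VirtualID.Should().Be(expectedSdVirtualId);
    context.EntityType.Should().Be(expectedSdvEntityType);
    // All failures above (if any) are reported together when the scope ends.
}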
I'm writing unit tests for a simple IsBoolean(x) function that tests whether a value is boolean. There are 16 different values I want to test.
Will I be burnt in hell, or mocked ruthlessly by the .NET programming community (which would be worse?), if I don't break them up into individual unit tests, but run them all together as follows:
[TestMethod]
public void IsBoolean_VariousValues_ReturnsCorrectly()
{
    // These should all be considered Boolean values.
    Assert.IsTrue(General.IsBoolean(true));
    Assert.IsTrue(General.IsBoolean(false));
    Assert.IsTrue(General.IsBoolean("true"));
    Assert.IsTrue(General.IsBoolean("false"));
    Assert.IsTrue(General.IsBoolean("tRuE"));
    Assert.IsTrue(General.IsBoolean("fAlSe"));
    Assert.IsTrue(General.IsBoolean(1));
    Assert.IsTrue(General.IsBoolean(0));
    Assert.IsTrue(General.IsBoolean(-1));

    // These should all be considered NOT boolean values.
    Assert.IsFalse(General.IsBoolean(null));
    Assert.IsFalse(General.IsBoolean(""));
    Assert.IsFalse(General.IsBoolean("asdf"));
    Assert.IsFalse(General.IsBoolean(DateTime.MaxValue));
    Assert.IsFalse(General.IsBoolean(2));
    Assert.IsFalse(General.IsBoolean(-2));
    Assert.IsFalse(General.IsBoolean(int.MaxValue));
}
I ask this because the "best practice" I keep reading about would demand I do the following:
[TestMethod]
public void IsBoolean_TrueValue_ReturnsTrue()
{
    // Arrange
    var value = true;

    // Act
    var returnValue = General.IsBoolean(value);

    // Assert
    Assert.IsTrue(returnValue);
}

[TestMethod]
public void IsBoolean_FalseValue_ReturnsTrue()
{
    // Arrange
    var value = false;

    // Act
    var returnValue = General.IsBoolean(value);

    // Assert
    Assert.IsTrue(returnValue);
}
//Fell asleep at this point
For the 50+ functions and 500+ values I'll be testing against this seems like a total waste of time.... but it's best practice!!!!!
-Brendan
I would not worry about it. This sort of thing isn't the point. JB Rainsberger talked about this briefly in his talk Integration Tests are a Scam. He said something like, "If you have never forced yourself to use one assert per test, I recommend you try it for a month. It will give you a new perspective on testing, and teach you when it matters to have one assert per test, and when it doesn't." IMO, this falls into the doesn't-matter category.
Incidentally, if you use NUnit, you can use the TestCaseAttribute, which is a little nicer:
[TestCase(true)]
[TestCase("tRuE")]
[TestCase(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.True);
}

[TestCase("-3.14")]
[TestCase("something else")]
[TestCase(7)]
public void IsBoolean_InvalidBoolRepresentations_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}
EDIT: I wrote the tests in a slightly different way that I think communicates intent a little better.
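One caveat I'll add: attribute arguments must be compile-time constants, so values from the original test like DateTime.MaxValue can't be passed through TestCase directly. NUnit's TestCaseSource attribute handles those; a sketch:

private static readonly object[] NonBooleanValues =
{
    null,
    "",
    "asdf",
    DateTime.MaxValue,
    2,
};

[TestCaseSource(nameof(NonBooleanValues))]
public void IsBoolean_NonBooleanValue_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}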
Although I agree it's best practice to separate the values in order to more easily identify the error, I think one still has to use common sense and treat such rules as guidelines rather than absolutes. You want to minimize assertion counts in a unit test, but what's generally most important is to ensure a single concept per test.
In your specific case, given the simplicity of the function, I think that the one unit test you provided is fine. It's easy to read, simple, and clear. It also tests the function thoroughly and if ever it were to break somewhere down the line, you would be able to quickly identify the source and debug it.
As an extra note, in order to maintain good unit tests, you'll want to always keep them up to date and treat them with the same care as you do the actual production code. That's in many ways the greatest challenge. Probably the best reason to do Test Driven Development is how it actually allows you to program faster in the long run because you stop worrying about breaking the code that exists.
It's best practice to split each of the values you want to test into separate unit tests. Each unit test should be named specifically for the value you're passing and the expected result. If you changed code and broke just one of your tests, that test alone would fail and the other 15 would pass. This buys you the ability to know instantly what you broke, without having to debug the one unit test and find out which of the asserts failed.
Hope this helps.
I can't comment on "Best Practice" because there is no such thing.
I agree with what Ayende Rahien says in his blog:
At the end, it boils down to the fact that I don't consider tests to be, by themselves, a value to the product. Their only value is their binary ability to tell me whether the product is okay or not. Spending a lot of extra time on the tests distracts from creating real value: shippable software.
If you put them all in one test and this test fails "somewhere", then what do you do? Either your test framework will tell you exactly which line it failed on, or, failing that, you step through it with a debugger. The extra effort required because it's all in one function is negligible.
The extra value of knowing exactly which subset of tests failed in this particular instance is small, and overshadowed by the ponderous amount of code you had to write and maintain.
Think for a minute about the reasons for breaking them up into individual tests: it's to isolate different functionality and to accurately identify everything that went wrong when a test breaks. It looks like you might be testing two things, Boolean and Not Boolean, so consider two tests if your code follows two different paths. The bigger point, though, is that if none of the tests break, there are no errors to pinpoint.
If you keep running them, and later have one of these tests fail, that would be the time to refactor them into individual tests, and leave them that way.
I have the following test:
[Test]
public void VerifyThat_WhenProvidingAServiceOrderWithALinkedAccountGetSerivceProcessWithStatusReasonOfEndOfEntitlementToUpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders_TheProcessIsUpdatedWithAStatusReasonOfEndOfEntitlement()
{
    IFixture fixture = new Fixture()
        .Customize(new AutoMoqCustomization());

    Mock<ICrmService> crmService = new Mock<ICrmService>();
    fixture.Inject(crmService);

    var followupHandler = fixture.CreateAnonymous<FollowupForEndOfEntitlementHandler>();

    var accountGetService = fixture.Build<EndOfEntitlementAccountGetService>()
        .With(handler => handler.ServiceOrders, new HashedSet<EndOfEntitlementServiceOrder>
        {
            {
                fixture.Build<EndOfEntitlementServiceOrder>()
                    .With(order => order.AccountGetServiceProcess, fixture.Build<EndOfEntitlementAccountGetServiceProcess>()
                        .With(process => process.StatusReason, fixture.Build<StatusReason>()
                            .With(statusReason => statusReason.Id == MashlatReasonStatus.Worthiness)
                            .CreateAnonymous())
                        .CreateAnonymous())
                    .CreateAnonymous()
            }
        })
        .CreateAnonymous();

    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Never());
}
My problem is that it never fails on the first run, as TDD says it should.
What it should test is that whenever a process of a service order has a certain status value, no updates are performed.
Is this test checking what it should?
I'm struggling a bit to understand the question here...
Is your problem that this test passes on the first try?
If yes, that means one of two things:
your test has an error
you have already met this spec/requirement
Since the first has been ruled out, Green it is. Off you go to the next one on the list...
Somewhere down the line, I assume, you will implement more functionality that results in the expected method being called, i.e. when the status value is different, perform an update.
The fix for that test must ensure that both tests pass.
If not, give me more information to help me understand.
Following TDD methodology, we only write new tests for functionality that doesn't exist. If a test passes on the first run, it is important to understand why.
One of my favorite things about TDD is its subtle ability to challenge our assumptions, and knock our egos flat. The practice of "Calling your Shots" is not only a great way to work through tests, but it's also a lot of fun. I love when a test fails when I expect it to pass - many great learning opportunities come from this; Time after time, evidence of working software trumps developer ego.
When a test passes when I think it shouldn't, the next step is to make it fail.
For example, your test, which expects that something doesn't happen, is guaranteed to pass if the implementation is commented out. Tamper with the logic that you think you are implementing by commenting it out or by altering the conditions of the implementation and verify if you get the same results.
If, after doing this, you're confident that the functionality is correct, write another test that proves the opposite. Will Update get called with different state or inputs?
With both sets in place, you should be able to comment out that feature and have the ability to know in advance which test will be impacted. (8-ball, corner pocket)
I would also suggest that you add another assertion to the above test to ensure that the subject and functionality under test is actually being invoked.
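For illustration, the opposite test might look roughly like this, where SomeOtherStatus is a stand-in for whatever status value should trigger an update (the fixture setup is abbreviated to the part that changes):

[Test]
public void VerifyThat_WhenStatusReasonIsNotEndOfEntitlement_TheProcessIsUpdated()
{
    // ... same fixture and crmService arrangement as the original test,
    // but building the StatusReason with SomeOtherStatus instead ...

    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

    // The mirror image of the original verification.
    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Once());
}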
Change the Times.Never() to Times.AtLeastOnce() and you've got a good start for TDD.
Verifying that nothing happens in response to nothing is a fine test, but not the way to start TDD. First go with the simple specification, the naive operation the user could do (from your viewpoint, of course).
Once you've done some work, keep this test for later, for when it can fail.