Is it a good practice to use RowTest in a unit test - c#

NUnit and MbUnit have a RowTest attribute that allows you to send different sets of parameters into a single test.
[RowTest]
[Row(5, 10, 15)]
[Row(3.5, 2.7, 6.2)]
[Row(-5, 6, 1)]
public void AddTest(double firstNumber, double secondNumber, double result)
{
Assert.AreEqual(result, firstNumber + secondNumber);
}
I used to be a huge fan of this feature and used it everywhere. However, lately I'm not sure it's a good idea to use RowTest in unit tests. Here are my reasons:
A unit test must be very simple. If there's a bug, you don't want to spend a lot of time figuring out what your test actually tests. When you use multiple rows, each row gets a different set of parameters and tests something different.
Also, I'm using TestDriven.NET, which allows me to run my unit tests from my IDE, Visual Studio. With TestDriven.NET I cannot tell it to run a specific row; it will execute all the rows. Therefore, when I debug I have to comment out all the other rows and leave only the one I'm working with.
Here's an example of how I would write my tests today:
[Test]
public void Add_with_positive_whole_numbers()
{
Assert.AreEqual(15, 5 + 10);
}
[Test]
public void Add_with_one_decimal_number()
{
Assert.AreEqual(6.2, 3.5 + 2.7);
}
[Test]
public void Add_with_negative_number()
{
Assert.AreEqual(1, -5 + 6);
}
That said, I still occasionally use the RowTest attribute, but only when I believe it's not going to slow me down when I need to work on the test later.
Do you think it's a good idea to use this feature in a Unit test?

Yes. It's basically executing the same test over and over again with different inputs... saving you the trouble of repeating yourself for each distinct input combination.
Thus it upholds the 'once and only once' or DRY principle. If you need to update this test, you update just one test (vs. multiple tests).
Each Row should be a representative input from a distinct set - i.e. this input is different from all others w.r.t. this function's behavior.
RowTest was actually a much-requested feature for NUnit, having originated in MbUnit. I think Schlapsi wrote it as an NUnit extension which then got promoted to standard-distribution status. The NUnit GUI also groups all RowTests under one node and shows which input failed/passed, which is cool.
The minor disadvantage of the 'need to debug' is something I personally can live with. After all, it's just commenting out a number of Row attributes temporarily (most of the time I can eyeball the function once I find that scenario X failed and fix it without needing a step-through), or conversely copying the test out and passing it the fixed (problematic) inputs temporarily, as sketched below.
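A throwaway sketch of that 'copy the test out' trick, using the AddTest example from the question and assuming the (-5, 6, 1) row is the one being debugged:
[Test]
public void AddTest_Debugging_NegativeRow()
{
    // Temporary copy of the failing row with its inputs hard-coded,
    // so it can be run and stepped through on its own.
    double firstNumber = -5, secondNumber = 6, expected = 1;
    Assert.AreEqual(expected, firstNumber + secondNumber);
}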

Related

How to stop XUnit Theory on the first fail?

I use Theory with MemberData like this:
[Theory]
[MemberData(nameof(TestParams))]
public void FijewoShortcutTest(MapMode mapMode)
{
...
and when it works, it is all fine, but when it fails xUnit iterates over all the data I pass as parameters. In my case this is a fruitless effort; I would like to stop short -- i.e. when the first set of parameters makes the test fail, skip the rest (because they will fail as well -- again, that is my case, not a general rule).
So how do I tell xUnit to stop a Theory on the first failure?
The point of a Theory is to have multiple independent tests running the same code over different data. If you actually only want one test, just use a Fact and iterate over the data you want to test within the method:
[Fact]
public void FijewoShortcutTest()
{
foreach (MapMode mapMode in TestParams)
{
// Test code here
}
}
That does mean you can't easily run the test for just one MapMode, though. Unless it takes a really long time to execute the tests for some reason, I'd just live with "if something is badly messed up, I get a lot of broken tests".
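If you do collapse the Theory into a Fact like this, one way to keep some of the diagnostic value is to include the current MapMode in the assertion message, so a failure still tells you which input broke. A rough sketch, assuming a hypothetical IsShortcutValid helper standing in for the real test body:
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Hypothetical check in place of the real test body; the message
        // reports which MapMode caused the failure.
        Assert.True(IsShortcutValid(mapMode), $"Shortcut test failed for MapMode {mapMode}");
    }
}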

Refactoring large methods in NUnit tests

How do I manage large tests? I'm testing a web application, and one of its features is making a new order, where the user has to go through a couple of forms before the order is created.
I can write a Selenium test in C# that tests the entire flow of making a new order. But that test would turn out rather large.
The simplified flow looks like this:
Select 1 or more customers for the order
Select 1 or more products associated with the selected customers
Add some metadata about the order, such as name, who has to complete it, date, comments, etc.
There are a few subforms where the user has to search for customers and for products.
Now I can write one (large) test that walks through the entire primary flow. But that test could easily result in a method with 100+ lines.
And I also want to test certain alternative flows, which would result in a method that could easily be 80% the same as the normal flow method.
However, I know you shouldn't write tests that depend on each other. So there's my dilemma. My code will look something like this:
[Test]
public void NormalFlow()
{
//Execute the first two steps
//Around 100 lines
//Execute the third step normally
//around 50 lines
}
[Test]
public void AlternativeFlow()
{
//Execute the first two steps
//Around 100 lines
//Execute the third step, but follow alternative flow
//around 50 lines
}
There's a lot of duplicate code, but I can't just start at the third step, so I've got to walk through the first two steps. I can't separate those first two steps as a separate test, because that would make my tests dependent on each other.
What should I do? How do I avoid duplicating all of my code without creating dependent tests?
Now I can write one (large) test that walks through the entire primary flow. But that test could easily result in a method with 100+ lines.
And I also want to test certain alternative flows, which would result in a method that could easily be 80% the same as the normal flow method.
Just because you're writing a test that does several things doesn't mean you have to put it all in a single method. Refactoring your test code to make sure it is of an appropriate quality is an important part of the development process.
I know you shouldn't write tests that depend on each other.
Whilst this is true, I think you may be taking it a bit literally. Based on your description of the system, this would be an example of two tests that depend on each other:
Test One
Create a new order for customer XXX.
Test Two
Add product YYY to an open order for customer XXX.
Test Two is dependent on Test One because, if Test One hasn't executed or has failed, Test Two will also fail and it may not be obvious why.
This is different from two related tests that aren't dependent on each other. So, the alternative to the above would be:
Test One Create a new order for customer XXX.
Test Two Create a new order for customer ZZZ and add product YYY to the order.
Each test case is self contained and can be run in isolation. As you've said, this is essentially because Test Two is performing a lot of the same processing that Test One is. This is ok, but it doesn't mean that all of the code for Test Two has to be in a single method. If you were writing production code, you would probably look at your code, identify duplication and refactor it out either into different methods or classes that could be reused. If this makes your test code easier to read, then it's absolutely the right thing to do.
So from your example code you might have something like this:
[Test]
public void NormalFlow()
{
var sessionDetails = Logon(customerCredentials);
var openOrder = CreateOrder(sessionDetails);
AddProductToOrder(openOrder, productDetails);
AddMetaDataToOrder(openOrder, metaData);
}
[Test]
public void AlternativeFlow()
{
var sessionDetails = Logon(customerCredentials);
var openOrder = CreateOrder(sessionDetails);
AddProductToOrder(openOrder, productDetails);
AddMetaDataToOrder(openOrder, alternateFlowMetaData);
}
The shared/duplicate code is pushed into the shared methods. Just because the code is shared, doesn't mean the tests are dependent.
As @Sriram Sakthivel has said in the comments, another thing you can do if you have the same code at the start of every test (for example, logging on) is to put that code into a method marked with the SetUp attribute. Remember, the goal is to make your test code easy to write, understand, and maintain.
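A minimal sketch of that idea, reusing the hypothetical helpers from the example above (Logon, customerCredentials and the SessionDetails type are assumptions, not a real API):
private SessionDetails _sessionDetails;

[SetUp]
public void LogonBeforeEachTest()
{
    // Runs before every [Test] in this fixture, so the individual tests
    // no longer need to repeat the logon step.
    _sessionDetails = Logon(customerCredentials);
}

[Test]
public void NormalFlow()
{
    var openOrder = CreateOrder(_sessionDetails);
    AddProductToOrder(openOrder, productDetails);
    AddMetaDataToOrder(openOrder, metaData);
}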

Listing test results in VS2010 that DON'T include a keyword

Is there any way I can filter test results by specifying a keyword that should NOT appear?
Context:
I've written some C# classes and methods, but have not implemented those methods yet (I made them throw a NotImplementedException so that they clearly indicate this). I've also written some test cases for those functions, but they currently fail because the methods throw the NotImplementedException. This is OK and I expect this for now.
I want to ignore these tests for now and look at other test results that are more meaningful, so I was trying to figure out how I can list results that do not have the "NotImplementedException". However, I can only list the results that do have that keyword, not those that don't. Is there any way I can list the results that don't? Using some wildcards or something?
I see a lot of information about the new Test Explorer in VS2012, but that's not a feature in 2010, which is what I'm using.
You can sort of cheat to pass these tests, if you want to, by marking that the test expects an exception to be thrown, which makes it pass.
[TestMethod]
[ExpectedException(typeof(NotImplementedException))]
public void NotYetImplementedMethod()
{
....
}
Alternatively you can create categories for your tests. This way you can choose which tests to run in the Test explorer, if you assign a category to most of your tests.
[TestMethod]
[TestCategory("NotImplementedNotTested")]
public void NotYetImplementedMethod()
{
....
}
Last but not least, the simplest solution is [Ignore]. This will skip the tests altogether.
[TestMethod]
[Ignore]
public void NotYetImplementedMethod()
{
....
}
Reference:
http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-1
http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-2
How to create unit tests which runs only when manually specified?
I also written some test cases for those functions
If your tests are linked to Test Case work items on TFS, you could simply set the Test Case's State to Design. Then, in your Test Plans, exclude all test cases that are in the Design state.
If they are not linked to actual Test Case work items (say, a batch of unit tests), I believe the best solution is the Ignore attribute (as @Serv already mentioned), because I don't think you want to run tests that are not implemented yet, or waste time figuring out how to exclude them from the test results.

When is it OK to group similar unit tests?

I'm writing unit tests for a simple IsBoolean(x) function to test if a value is boolean. There are 16 different values I want to test.
Will I be burnt in hell, or mocked ruthlessly by the .NET programming community (which would be worse?), if I don't break them up into individual unit tests, and run them together as follows:
[TestMethod]
public void IsBoolean_VariousValues_ReturnsCorrectly()
{
//These should all be considered Boolean values
Assert.IsTrue(General.IsBoolean(true));
Assert.IsTrue(General.IsBoolean(false));
Assert.IsTrue(General.IsBoolean("true"));
Assert.IsTrue(General.IsBoolean("false"));
Assert.IsTrue(General.IsBoolean("tRuE"));
Assert.IsTrue(General.IsBoolean("fAlSe"));
Assert.IsTrue(General.IsBoolean(1));
Assert.IsTrue(General.IsBoolean(0));
Assert.IsTrue(General.IsBoolean(-1));
//These should all be considered NOT boolean values
Assert.IsFalse(General.IsBoolean(null));
Assert.IsFalse(General.IsBoolean(""));
Assert.IsFalse(General.IsBoolean("asdf"));
Assert.IsFalse(General.IsBoolean(DateTime.MaxValue));
Assert.IsFalse(General.IsBoolean(2));
Assert.IsFalse(General.IsBoolean(-2));
Assert.IsFalse(General.IsBoolean(int.MaxValue));
}
I ask this because "best practice" I keep reading about would demand I do the following:
[TestMethod]
public void IsBoolean_TrueValue_ReturnsTrue()
{
//Arrange
var value = true;
//Act
var returnValue = General.IsBoolean(value);
//Assert
Assert.IsTrue(returnValue);
}
[TestMethod]
public void IsBoolean_FalseValue_ReturnsTrue()
{
//Arrange
var value = false;
//Act
var returnValue = General.IsBoolean(value);
//Assert
Assert.IsTrue(returnValue);
}
//Fell asleep at this point
For the 50+ functions and 500+ values I'll be testing against this seems like a total waste of time.... but it's best practice!!!!!
-Brendan
I would not worry about it. This sort of thing isn't the point. JB Rainsberger talked about this briefly in his talk Integration Tests are a Scam. He said something like, "If you have never forced yourself to use one assert per test, I recommend you try it for a month. It will give you a new perspective on testing, and teach you when it matters to have one assert per test, and when it doesn't". IMO, this falls into the doesn't-matter category.
Incidentally, if you use nunit, you can use the TestCaseAttribute, which is a little nicer:
[TestCase(true)]
[TestCase("tRuE")]
[TestCase(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
Assert.That(BooleanService.IsBoolean(candidate), Is.True);
}
[TestCase("-3.14")]
[TestCase("something else")]
[TestCase(7)]
public void IsBoolean_InvalidBoolRepresentations_ReturnsFalse(object candidate)
{
Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}
EDIT: I wrote the tests in a slightly different way that I think communicates intent a little better.
Although I agree it's best practice to separate the values in order to identify errors more easily, I think one still has to use common sense and treat such rules as guidelines rather than absolutes. You want to minimize the assertion count in a unit test, but what's generally most important is to ensure a single concept per test.
In your specific case, given the simplicity of the function, I think that the one unit test you provided is fine. It's easy to read, simple, and clear. It also tests the function thoroughly and if ever it were to break somewhere down the line, you would be able to quickly identify the source and debug it.
As an extra note, in order to maintain good unit tests, you'll want to always keep them up to date and treat them with the same care as you do the actual production code. That's in many ways the greatest challenge. Probably the best reason to do Test Driven Development is how it actually allows you to program faster in the long run because you stop worrying about breaking the code that exists.
It's best practice to split each of the values you want to test into separate unit tests, each named after the value you're passing and the expected result. If you were changing code and broke just one case, that test alone would fail and the other 15 would pass. This lets you instantly know what you broke without having to debug the single big test to find out which of the asserts failed.
Hope this helps.
I can't comment on "Best Practice" because there is no such thing.
I agree with what Ayende Rahien says in his blog:
At the end, it boils down to the fact that I don't consider tests to be, by themselves, a value to the product. Their only value is their binary ability to tell me whether the product is okay or not. Spending a lot of extra time on the tests distracts from creating real value, shippable software.
If you put them all in one test and this test fails "somewhere", then what do you do? Either your test framework will tell you exactly which line it failed on, or, failing that, you step through it with a debugger. The extra effort required because it's all in one function is negligible.
The extra value of knowing exactly which subset of tests failed in this particular instance is small, and overshadowed by the ponderous amount of code you had to write and maintain.
Think for a minute about the reasons for breaking them up into individual tests. It's to isolate different functionality and to accurately identify all the things that went wrong when a test breaks. It looks like you might be testing two things: Boolean and Not Boolean, so consider two tests if your code follows two different paths. The bigger point, though, is that if none of the tests break, there are no errors to pinpoint.
If you keep running them, and later have one of these tests fail, that would be the time to refactor them into individual tests, and leave them that way.

Unit test passes when in debug but fails when run

A search method returns any matching Articles and the most recent Non-matching articles up to a specified number.
Prior to being returned, the IsMatch property of the matching articles is set to true as follows:
articles = matchingArticles.Select(c => { c.IsMatch = true; return c; }).ToList();
In a test of this method,
[Test]
public void SearchForArticle1Returns1MatchingArticleFirstInTheList()
{
using (var session = _sessionFactory.OpenSession())
{
var maxResults = 10;
var searchPhrase = "Article1";
IArticleRepository articleRepository = new ArticleRepository(session);
var articles = articleRepository.GetSearchResultSet(searchPhrase, maxResults);
Assert.AreEqual(10, articles.Count);
Assert.AreEqual(1, articles.Where(a => a.Title.Contains(searchPhrase)).Count());
var article = articles[0];
Assert.IsTrue(article.Title.Contains(searchPhrase));
Assert.IsTrue(article.IsMatch);
}
}
All assertions pass when the test is run in debug, however the final assertion fails when the test is run in release:
Expected: True
But was: False
In the app itself the response is correct.
Any ideas as to why this is happening?
Edit:
I figured out what the problem is. It's essentially a race condition. When I am setting up the tests, I am dropping the db table, recreating it and populating it with the test data. Since the search relies on Full Text search, I am creating a text index on the relevant columns and setting it to auto populate. When this is run in debug, there appears to be sufficient time to populate the text index and the search query returns matches. When I run the test I don't think the index has been populated in time, no matches are returned and the test fails. It's similar to issues with datetimes. If I put a delay between creating the catalog and running the test the test passes.
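One way to make this less timing-sensitive than a fixed delay is to poll until the full-text catalog actually returns rows (or a timeout expires) before running the assertions. A rough sketch, assuming a hypothetical CountSearchResults helper that runs the full-text query against the test table:
private void WaitForFullTextIndex(string searchPhrase, TimeSpan timeout)
{
    // Poll the (hypothetical) CountSearchResults helper until the index
    // has been populated, or fail the test if the timeout expires.
    var deadline = DateTime.UtcNow + timeout;
    while (DateTime.UtcNow < deadline)
    {
        if (CountSearchResults(searchPhrase) > 0)
            return;
        Thread.Sleep(250); // short pause between polls
    }
    Assert.Fail("Full-text index was not populated within " + timeout);
}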
Pones, you have since clarified that the unit test fails when not debugging.
At this stage it could be anything, but you should continue to run the unit test without debugging and insert the following statement at a point where you know (or think you know) the condition is true:
if (condition)
    Debugger.Launch();
This will do the obvious and allow you to zero in on what's going wrong. One place I suggest starting is the IsMatch property.
Another common place you can run into issues like this is using DateTime's. If your unit test is running 'too fast' then it may break an assumption you had.
Obviously the problem will be different for other users, but I just hit it, and figured my solution may help some. Basically when you are running in debug mode, you are running a single test only. When you are running in run mode, you are running multiple tests in addition to the one you are having a problem with.
In my situation the problem was those other tests writing to a global list that I was not explicitly clearing in my test setup. I fixed the issue by clearing the list at the beginning of the test.
My advice to see if this is the type of problem you are facing would be to disable all other tests and only 'run' the test you have an issue with. If it works when run by itself, but not with the others, you'll know you have some dependency between tests.
Another tip is to use Console.WriteLine("test") statements in the test. That's actually how I found that my list had items left over from another test.
Try printing out the actual results that you are comparing against the expected values in both debug and normal runs.
In my case, I created entities (JBA) in the test method. In debug mode the generated ids were 1, 2 and 3, but in normal running mode they were different. That caused my hard-coded values to make the test fail, so I changed the test to get the id from the entity instead of hard-coding it, as sketched below.
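For example, a tiny sketch of that change, where entity and repository are hypothetical stand-ins for the objects in the real test:
// Instead of asserting against hard-coded ids like 1, 2, 3...
var loaded = repository.GetById(entity.Id); // use the id that was actually generated
Assert.AreEqual(entity.Id, loaded.Id);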
Hope this helps.
