Using NUnit Multiple Assert when Assert statements are in different methods - c#

I am running automated BDD steps for my UI tests, using an NUnit assertion in each step (i.e. the Then and And steps).
The NUnit assertions are confined to each method. This means that if an assertion in a method fails, the remaining steps won't be run.
I was thinking of using NUnit's multiple-assert support, but this requires all the asserts to be together. Any ideas?
BDD Steps
Then I am shown results for("foo")
And the page count is(3)
I am using the LightBDD Library https://github.com/LightBDD/LightBDD
// Then step
private void ThenIAmShownResultsFor(string expectedResults)
{
    // "actual" is captured from the UI earlier in the scenario
    Assert.AreEqual(expectedResults, actual);
}

// And step
private void AndThePageCountIs(int expectedResults)
{
    Assert.AreEqual(expectedResults, actual);
}
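For context, NUnit's Assert.Multiple (NUnit 3.6+) reports every failure raised inside a single delegate, which is why the asserts normally have to sit together. A minimal sketch, with the captured values assumed:
Assert.Multiple(() =>
{
    // both assertions run and are reported together, even if the first fails
    Assert.AreEqual("foo", actualResults);   // hypothetical captured result
    Assert.AreEqual(3, actualPageCount);     // hypothetical captured count
});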

See this article. Tests that are reliant on the result of another test should mock those other tests or methods. Each test should be completely decoupled from any other test. You should never, ever, ever make one test dependent on the results of another; if a test relies on the results of another, you need to mock the response from the other test.
The code, assuming _foo is a mock of the class that exposes the result, might look like this:
// And step
private void AndThePageCountIs(int expectedResults)
{
    // Moq's Setup(...) configures the mock; the value is read back
    // through the mocked object rather than from Setup's return value
    _foo.Setup(x => x.ThenIAmShownResultsFor()).Returns(expectedResults);
    var actual = _foo.Object.ThenIAmShownResultsFor();
    Assert.AreEqual(expectedResults, actual);
}

Related

C# test dependencies

I am very aware of, and agree with, the opinion that tests should not be dependent on one another.
However in this instance I feel it would be beneficial.
The situation is the system under test has a step by step process that needs to be followed (there is no way to jump to a step without going through the previous ones).
In an ideal world we would get the devs to add an API to allow us to do this, but given the constraints this will not be done.
Currently the tests being done are all end to end making failing tests difficult to analyse at times.
My question is then: Is there a clean way I can break down these end to end tests into smaller tests, and impose some dependencies on them?
I'm aware that TestNG can do this using its dependsOnMethods annotation attribute. Is there a similar concept for C#?
From the way you describe your tests, either:
The way the developers have developed the code is flawed, in that to test, say, step 3, steps 1 and 2 must be invoked first, rather than step 3 being testable in isolation. If the problem is with the way the developers designed the system, suggest they fix it.
You are performing integration tests and want to test the results of invoking several steps. In that case, you do not want to use a unit test tool, you need an integration test tool. See this answer to another question for advice on such tools and their pitfalls.
A couple of concepts might be useful here:
Scenario-Driven Tests
One Assert Per Test
The basic idea here is to separate the scenario you are executing from the assertion you are making (i.e. what you are testing). This is really just BDD, so it may make sense to use a BDD Framework to help you, although it's not necessary. With NUnit, you might write something like this
[TestFixture]
public class ExampleTests
{
    [SetUp]
    public void ExecuteScenario()
    {
        GrabBucket();
        AddApple();
        AddOrange();
        AddBanana();
    }

    [Test]
    public void ThereShouldBeOneApple()
    {
        Assert.AreEqual(1, Count("Apple"));
    }

    [Test]
    public void ThereShouldBeOneOrange()
    {
        Assert.AreEqual(1, Count("Orange"));
    }

    [Test]
    public void ThereShouldBeOneBanana()
    {
        Assert.AreEqual(1, Count("Banana"));
    }

    [Test]
    public void ThereShouldBeNoPomegranates()
    {
        Assert.AreEqual(0, Count("Pomegranate"));
    }

    private void GrabBucket() { /* do stuff */ }
    private void AddApple() { /* do stuff */ }
    private void AddOrange() { /* do stuff */ }
    private void AddBanana() { /* do stuff */ }

    private int Count(string fruitType)
    {
        // Query the application state
        return 0;
    }
}
I realize this doesn't answer your question as stated - this isn't breaking the larger integration test down into smaller units - but it may help you solve the dependency problem; here, all the related tests are depending on execution of a single scenario, not of previously-executed tests.
I agree with almost all the comments to date. The solution I ended up going with, albeit not the cleanest or most 'SOLID', was to use NUnit as the framework, using the category attribute to order the tests.
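For anyone attempting the same with current NUnit, a minimal sketch using NUnit 3's OrderAttribute (the step methods here are hypothetical); note that Order only sequences the tests within a fixture, it does not skip later tests when an earlier one fails:
[TestFixture]
public class StepwiseProcessTests
{
    [Test, Order(1)]
    public void Step1_StartProcess() { /* drive the UI for step 1 */ }

    [Test, Order(2)]
    public void Step2_EnterDetails() { /* relies on step 1 having run */ }

    [Test, Order(3)]
    public void Step3_Confirm() { /* relies on steps 1 and 2 */ }
}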

DeploymentItem and TestCleanup conflict in unit tests?

I have an application that has many unit tests in many classes. Many of the tests have DeploymentItem attributes to provide required test data:
[TestMethod]
[DeploymentItem(@"UnitTesting\testdata1.xml", "mytestdata")]
public void Test1()
{
    /* test */
}

[TestMethod]
[DeploymentItem(@"UnitTesting\testdata2.xml", "mytestdata")]
public void Test2()
{
    /* test */
}
When tests are run individually, they pass. When all are run at once (for example, when I select "Run all tests in the current context"), some tests fail, because the DeploymentItems left behind by other tests cause them to grab the wrong data. (Or a test incorrectly uses the files meant for another test that hasn't run yet.)
I discovered the [TestCleanup] and [ClassCleanup] attributes, which seem like they would help. I added this:
// requires using System.IO
[TestCleanup]
public void CleanUp()
{
    if (Directory.Exists("mytestdata"))
        Directory.Delete("mytestdata", true);
}
The trouble is, this runs after every test method, and it seems that it will delete DeploymentItems for tests that have not run yet. [ClassCleanup] would prevent this, but unfortunately, it would not run often enough to prevent the original issue.
From the MSDN documentation, it seems that DeploymentItem only guarantees that the files will be there before the test executes, but it is not more specific than that. I think I am seeing the following problem:
Deployment Item for test executes
(other stuff happens?)
Test cleanup from previous test executes
Next test executes
Test fails because files are gone
Does anyone know the execution order of the different test attributes? I've been searching but I haven't found much.
I have thought about having each deployment item use its own, unique folder for data, but this becomes difficult as there are hundreds of tests to go through.
The order of the test attributes is as follows:
Methods marked with the AssemblyInitializeAttribute.
Methods marked with the ClassInitializeAttribute.
Methods marked with the TestInitializeAttribute.
Methods marked with the TestMethodAttribute.
The cleanup attributes (TestCleanupAttribute, ClassCleanupAttribute, AssemblyCleanupAttribute) mirror this order in reverse after the tests run, as sketched below.
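A minimal sketch of the full MSTest lifecycle, with the execution order in comments:
[TestClass]
public class LifecycleExample
{
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context) { /* 1: once per assembly */ }

    [ClassInitialize]
    public static void ClassInit(TestContext context) { /* 2: once per class */ }

    [TestInitialize]
    public void TestInit() { /* 3: before every test */ }

    [TestMethod]
    public void SomeTest() { /* 4: the test itself */ }

    [TestCleanup]
    public void TestTeardown() { /* 5: after every test */ }

    [ClassCleanup]
    public static void ClassTeardown() { /* 6: once per class */ }

    [AssemblyCleanup]
    public static void AssemblyTeardown() { /* 7: once per assembly */ }
}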
Part of the problem is that Visual Studio runs tests in a non-deterministic order (by default, though this can be changed) and runs multiple tests at a time. This means that you cannot safely delete the folder after each test.
In general, if you can avoid going to disk at all in unit tests, you will be much better off; you don't want anything besides code that can break your tests.
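As an illustration of keeping test data off the disk entirely, a sketch assuming the data is small enough to inline (the XML payload here is hypothetical):
// requires: using System.Xml; using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestMethod]
public void ParsesTestDataWithoutTouchingDisk()
{
    const string xml = "<root><item id=\"1\" /></root>";  // hypothetical payload
    var doc = new XmlDocument();
    doc.LoadXml(xml);
    Assert.AreEqual(1, doc.SelectNodes("//item").Count);
}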
I had a similar problem: in a few tests I needed to delete a deployed item, and all tests passed when run individually but failed when run in a playlist. My solution is ugly but simple: use a different folder for every test.
For example:
[TestMethod]
[DeploymentItem("Resources\\DSC06247.JPG", "D1")]
public void TestImageUploadWithRemoval()
{
    // Arrange
    myDeployedImagePath = Path.Combine(TestContext.DeploymentDirectory, "D1", "DSC06247.JPG");
    // Act ...
}

[TestMethod]
[DeploymentItem("Resources\\DSC06247.JPG", "D2")]
public void TestImageUploadWithoutRemoval()
{
    // Arrange
    myDeployedImagePath = Path.Combine(TestContext.DeploymentDirectory, "D2", "DSC06247.JPG");
    // Act...
}

Does this negative unit testing make sense

I have tests like the one below: negative unit testing.
Does this test make sense? Or is it better to test only the expected exceptional scenarios?
[Test]
public void Get_Root_Units_By_Non_Existing_TemplateId()
{
    // ARRANGE
    ITemplateUnitDataProvider provider = new TemplateUnitDataProvider(_connectionString);
    int templateId = -1;

    // ACT
    var units = provider.GetRootUnits(templateId);

    // ASSERT
    Assert.IsNotNull(units);
    Assert.IsEmpty(units);
}
For me this test makes sense: you are checking that the SUT (System Under Test) returns an empty collection when it finds no records matching the input parameter.
In my opinion, unit tests are used to check your method's correctness. Handling incorrect input correctly is also something you want to test for.
Unit tests are meant to give you a sense of security about your code, so handling invalid input is certainly something I would test.
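For the genuinely exceptional cases, the counterpart is an exception-path test. A sketch, assuming (this is not from the question) that the provider validates its connection string:
[Test]
public void Constructor_With_Null_ConnectionString_Throws()
{
    // hypothetical: assumes the provider guards against a null connection string
    Assert.Throws<ArgumentNullException>(() => new TemplateUnitDataProvider((string)null));
}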
One good reason to write failure tests is that any unit test is better than no unit test.
Writing failure tests also forces you to know your system well, i.e. what it should fail on; otherwise there are scenarios that you are not aware of.

Need some information in VS unit Test Assert.Inconclusive?

I am working on some unit test projects in VS 2008 in C#. I created one simple small method to test:
public int addNumber(int a, int b)
{
    return a + b;
}
Then I created a unit test method as below:
[TestMethod()]
public void addNumberTest()
{
    Mathematical target = new Mathematical(); // TODO: Initialize to an appropriate value
    int a = 4; // TODO: Initialize to an appropriate value
    int b = 2; // TODO: Initialize to an appropriate value
    int expected = 0; // TODO: Initialize to an appropriate value
    int actual;
    actual = target.addNumber(a, b);
    Assert.AreEqual(expected, actual);
    Assert.Inconclusive("Verify the correctness of this test method.");
}
But when I try to run the unit test project, I receive an Inconclusive message. My questions are:
What exactly is Inconclusive, and when does it come into the picture?
What do I need to do to make my unit test pass?
You need to decide what the criteria are for a unit test to be considered passed. There isn't a blanket answer to what makes a unit test pass; the specifications ultimately dictate what constitutes a passing unit test.
If the method you are testing is indeed just adding two numbers, then Assert.AreEqual(expected, actual) is probably enough for this particular unit test, once expected is initialized to the right value. You may also want to tack on another assertion such as Assert.IsTrue(actual > 0) for positive inputs.
You'll want to test the method again with other values, like negatives, zeros, and really large numbers.
You won't need Assert.Inconclusive for your unit tests of the addNumber method; that assertion is more useful when dealing with things like objects and threads. Calling Assert.Inconclusive like you have will always mark the test as inconclusive (neither passed nor failed) and report the string passed into it.
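A sketch of the stub with the TODO placeholders filled in and the Inconclusive call removed (Mathematical is the class from the question):
[TestMethod()]
public void addNumberTest()
{
    Mathematical target = new Mathematical();
    int expected = 6;                      // 4 + 2
    int actual = target.addNumber(4, 2);
    Assert.AreEqual(expected, actual);     // passes once expected matches the real sum
}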

Mock Verify/VerifyAll before or after Assertion

I have been using the following code pattern while writing my tests:
public void TestMethod_Condition_Output()
{
    // Arrange ----------------
    Mock<X> temp = new Mock<X>();
    temp.Setup(...);

    // Act --------------------
    classinstance.TestMethod();

    // Assert -----------------
    temp.VerifyAll();
    Assert.AreNotEqual(...);
}
I have been doing the VerifyAll() before performing assertions. But lately, in some online examples, I have seen people doing the assertions first and then VerifyAll, if any. I feel that my way is the correct one, unless I am missing something.
Could you please alert me if I am missing anything?
In my opinion, the verify should come after the asserts. I want the asserts close to the invocation of the method under test, as they document what the method does. The verifications of the mock invocations detail how the class uses its dependencies; this is less important to tie directly to the method itself.
In a sense the mocking of the dependencies becomes a wrapper around the actual test itself. This makes the test more understandable (to me, anyway, YMMV). My tests then follow this pattern:
Arrange
  Mock
  Set up expectations for dependencies
  Set up expected results
  Create class under test
Act
  Invoke method under test
Assert
  Assert actual results match expected results
  Verify that expectations were met
I don't know that I would be pedantic about it, but this is the order that makes the most sense to me.
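A minimal Moq sketch of that ordering (the interface and values here are hypothetical):
// Arrange
var repository = new Mock<IOrderRepository>();        // hypothetical dependency
repository.Setup(r => r.GetOrderCount()).Returns(3);
var service = new OrderService(repository.Object);    // hypothetical class under test

// Act
int count = service.CountOrders();

// Assert: actual results first...
Assert.AreEqual(3, count);
// ...then verify that the expectations on the mock were met
repository.Verify(r => r.GetOrderCount(), Times.Once);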
In AAA-style testing I do not use VerifyAll, but rather verify explicitly that methods were called, as part of the unit of test. Within the Arrange area I only set up methods that need to return a value.
Using Rhino Mocks as an example...
// Arrange
mockedInterface.Stub(x => x.SomeMethod1()).Returns(2);
...

// Assert
mockedInterface.AssertWasCalled(x => x.SomeMethod1());
mockedInterface.AssertWasCalled(x => x.SomeMethod2());
Assert.AreEqual(...); // standard NUnit assertions

I do not need to set up the expected call to SomeMethod2() if it does not return anything.
With loose mocks there is no real need to call VerifyAll, as calls to other methods will not fail the test (unless a return value is needed, in which case a setup is required in the Arrange section).
The number of assertions should be kept to a minimum (create more tests if it gets too large), and their order should not really matter either.
