I am writing unit tests using NUnit for a C# project.
I am trying to run a single test multiple times with different data using the TestCaseSource attribute.
I am doing this elsewhere without any problems, but now I am finding that the first time I run my tests, the code passes. The next time, it doesn't. Using some Console.WriteLine statements, I can see that the test data is different each time.
The method used to generate the data is internal to the test class, is not static and generates all required dependencies from scratch for each test.
--
I have a fake class which holds a queue of values to return when a given function is called. A new instance is created for each test.
However, if the test exhausts the queue the first time it is run, then the next time it is run, no data is found. Surely this should be regenerated every time?
--
It is as if NUnit is not calling the method specified by the TestCaseSource attribute every time the test is run - only when the project is first loaded.
Is this expected? Is there a workaround?
EDIT:
Ok, here is a very basic example, below:
[TestFixture]
public class Tests
{
    public interface IEntry
    {
        object Read();
    }

    [TestCaseSource("TestData")]
    public void Test(Mock<IEntry> entry)
    {
        object o = entry.Object.Read();
        object o2 = entry.Object.Read();
    }

    public System.Collections.IEnumerable TestData()
    {
        var entry = new Mock<IEntry>();
        int call = 0;
        entry.Setup(x => x.Read()).Returns(() =>
        {
            Console.WriteLine(call);
            return null;
        }).Callback(() =>
        {
            call++;
        });
        yield return new TestCaseData(entry);
    }
}
If you watch the test output in NUnit, it should always display 0 followed by 1. Instead, the counter keeps incrementing across runs: the second run shows 2 and 3, the third run 4 and 5, and so on.
If you move the TestData code into Test, then the correct values are returned every time.
I assume you're using the NUnit GUI runner?
What you're seeing is (I believe; I can't easily find confirmation in the NUnit source) an optimization of the test runner. Rather than re-create the values provided by the TestCaseSource attribute on every test run, it only does so when the assembly under test changes.
If you change your code to remove the Moq dependency, it's a little clearer:
[TestFixture]
public class SampleTests
{
    [TestCaseSource("TestData")]
    public void Test(CallTracker callTracker)
    {
        callTracker.Call++;
        callTracker.Call++;
    }

    public IEnumerable TestData()
    {
        yield return new TestCaseData(new CallTracker());
    }

    public class CallTracker
    {
        int call;

        public int Call
        {
            get
            {
                return call;
            }
            set
            {
                call = value;
                Console.WriteLine(call);
            }
        }
    }
}
This yields the same behavior as your code. CallTracker is created anew whenever the TestCaseSource is evaluated, but since the call count keeps going up, the test runner must be reusing the same instance (for what I assume are performance reasons).
ReSharper's test runner in Visual Studio does not exhibit this behavior; it always shows 1 and 2 for repeated runs without recompiling. This is probably why it's slower to start running tests than the NUnit GUI runner. Similarly, the NUnit console runner does not exhibit this behavior, since it always starts cold.
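If you need the source data to be rebuilt on every run regardless of the runner's caching, one possible workaround (a sketch, not part of the answer above) is to have the source yield a factory delegate and build the mock inside the test body itself:

[TestCaseSource("TestData")]
public void Test(Func<Mock<IEntry>> entryFactory)
{
    // The mock (and its call counter) is created here, on every execution,
    // so it no longer matters whether the runner caches the TestCaseData.
    Mock<IEntry> entry = entryFactory();
    object o = entry.Object.Read();
    object o2 = entry.Object.Read();
}

public System.Collections.IEnumerable TestData()
{
    yield return new TestCaseData(new Func<Mock<IEntry>>(() =>
    {
        var entry = new Mock<IEntry>();
        int call = 0;
        entry.Setup(x => x.Read())
             .Returns(() => { Console.WriteLine(call); return null; })
             .Callback(() => call++);
        return entry;
    }));
}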
Related
I use Theory with MemberData like this:
[Theory]
[MemberData(nameof(TestParams))]
public void FijewoShortcutTest(MapMode mapMode)
{
...
and when it works, it is all fine, but when it fails, xUnit iterates over all the data I pass as parameters. In my case that is a fruitless effort; I would like to stop short -- i.e. when the first set of parameters makes the test fail, skip the rest (because they will fail as well -- again, that is my case, not a general rule).
So how do I tell xUnit to stop a Theory on the first failure?
The point of a Theory is to have multiple independent tests running the same code on different data. If you only actually want one test, just use a Fact and iterate over the data you want to test within the method:
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Test code here
    }
}
That will mean you can't easily run the test for just one MapMode, though. Unless it takes a really long time to execute the tests for some reason, I'd just live with "if something is badly messed up, I get a lot of broken tests".
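For illustration, a sketch of what the loop body might do, following the shape of the snippet above (RunShortcut is a hypothetical stand-in for whatever the real test does):

[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Hypothetical helper standing in for the real test body.
        bool ok = RunShortcut(mapMode);

        // The first failing assertion throws, which ends the test and
        // reports which MapMode broke; the remaining values are skipped.
        Assert.True(ok, $"Shortcut failed for MapMode {mapMode}");
    }
}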
I'm a beginner in WebDriver and C#. I want to use a variable from the first test in other tests; how do I do that? I got to this point with some examples, but it does not work. I can see that the first test gets the login right, but when I start the second test and try to use SendKeys, I find that loginName is null. (The code is a short version, only to give you an idea of what I'm trying to do.)
[TestFixture]
public class TestClass
{
    private IWebDriver driver;
    private StringBuilder verificationErrors;
    private string baseURL;
    private bool acceptNextAlert = true;
    static public String loginName;
    static public String loginPassword;

    [SetUp]
    public void SetupTest()...

    [TearDown]
    public void TeardownTest()...

    [Test]
    public void GetLoginAndPassword()
    {
        loginName = driver.FindElement(By.XPath("...")).Text;
        loginPassword = driver.FindElement(By.XPath("...")).Text;
        Console.WriteLine(loginName);
    }

    [Test]
    public void Test1()
    {
        driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(loginName);
        driver.FindElement(By.Id("Password")).SendKeys(loginPassword);
    }
}
You cannot (and should not) send variables between tests. Test methods are independent from one another... and should actually Assert() something.
Your first method, GetLoginAndPassword(), isn't a test method per se but a utility method. If you use the Selenium PageObject pattern, this is probably a method of your PageObject class that you can run at the beginning of your actual Test1() method.
The problem is that the methods marked with TestAttribute do not necessarily run sequentially in the same order you implemented them. Thus it might be possible that Test1 runs long before GetLoginAndPassword(). You have to either call that method once from within the constructor or during test initialization, or before every test run.
[Test]
public void Test1()
{
    GetLoginAndPassword();
    driver.FindElement(By.Id("UserNameOrEmail")).SendKeys(loginName);
    driver.FindElement(By.Id("Password")).SendKeys(loginPassword);
}
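Alternatively, a sketch of the test-initialization variant mentioned above, reusing the [SetUp] method that already exists in the question:

[SetUp]
public void SetupTest()
{
    // ... existing driver/baseURL setup from the question ...

    // Populate the credentials before every test, so the order in which
    // the [Test] methods run no longer matters.
    GetLoginAndPassword();
}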
GetLoginAndPassword probably isn't even a method to test but a method used by your tests (unless you actually have a method within your system under test called GetLoginAndPassword). However, since there are no asserts at all, your tests are somewhat odd.
The purpose of unit testing is to test if a specific unit (meaning a group of closely related classes) works as specified. It is not meant to test if your complete elaborate program works as specified.
Test-driven design has the advantage that you are more aware of what each function should do. Each function is supposed to transform a precondition into a specified postcondition, regardless of what you did before or after calling the function.
If your tests assume that other tests are run before your test is run, then you won't test the use case that the other functions are not called, or only some of these other required functions are called.
This leads to the conclusion that each test method should be able to be run independently. Each test should set up the precondition, call the function and check if the postcondition is met.
But what if my function A only works correctly if another function B is called first?
In that case the specification of A ought to describe what happens if B was called before A was called, as well as what would happen if A was called without calling B first.
If your unit test would first test B and then A with the prerequisite that B was called, you would not test whether A would react according to specification without calling B.
Example.
Suppose we have a divider class that will divide any number by a given denominator, which can be set using a property.
public class Divider
{
    // decimal division throws DivideByZeroException when the denominator is zero,
    // which is the behavior the specification below relies on.
    public decimal Denominator { get; set; }

    public decimal Divide(decimal numerator)
    {
        return numerator / this.Denominator;
    }
}
It is obvious that in normal usage one ought to set property Denominator before calling Divide:
Divider divider = new Divider() { Denominator = 3.14m };
Console.WriteLine(divider.Divide(10));
Your specification ought to describe what happens if Divide is called without setting Denominator to a non-zero value. The description would read something like:
If method Divide is called with a parameter value X and the value of Denominator is a non-zero Y, then the return value is X/Y. If the value of Denominator is zero, then a System.DivideByZeroException is thrown.
You should create at least two tests: one for the use case where Denominator is set to a proper non-zero value, and one for the use case where Denominator is not set at all. And if you are very thorough: a test for the use case where Denominator is first set to a non-zero value and then to zero.
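For illustration only, a minimal NUnit sketch of those tests against the Divider class above might look like this:

[TestFixture]
public class DividerTests
{
    [Test]
    public void Divide_WithNonZeroDenominator_ReturnsQuotient()
    {
        var divider = new Divider { Denominator = 4m };

        Assert.AreEqual(2.5m, divider.Divide(10m));
    }

    [Test]
    public void Divide_WithoutSettingDenominator_ThrowsDivideByZeroException()
    {
        var divider = new Divider(); // Denominator defaults to zero

        Assert.Throws<System.DivideByZeroException>(() => divider.Divide(10m));
    }

    [Test]
    public void Divide_AfterResettingDenominatorToZero_ThrowsDivideByZeroException()
    {
        var divider = new Divider { Denominator = 4m };
        divider.Denominator = 0m;

        Assert.Throws<System.DivideByZeroException>(() => divider.Divide(10m));
    }
}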
How to programmatically tell NUnit to repeat a test?
Background:
I'm running NUnit from within my C# code, using a SimpleNameFilter and a RemoteTestRunner. My application reads a csv file, TestList.csv, that specifies what tests to run. Up to that point everything works ok.
Problem:
The problem is when I put the same test name two times in my TestList file. In that case, my application correctly reads and loads the SimpleNameFilter with two instances of the test name. This filter is then passed to the RemoteTestRunner. Then, NUnit executes the test only once. It seems that when NUnit sees the second instance of a test it has already run, it ignores it.
How can I override such behavior? I'd like to have NUnit run the same test name two times or more as specified in my TestList.csv file.
Thank you,
Joe
http://www.nunit.org/index.php?p=testCase&r=2.5
TestCaseAttribute serves the dual purpose of marking a method with
parameters as a test method and providing inline data to be used when
invoking that method. Here is an example of a test being run three
times, with three different sets of data:
[TestCase(12,3, Result=4)]
[TestCase(12,2, Result=6)]
[TestCase(12,4, Result=3)]
public int DivideTest(int n, int d)
{
    return( n / d );
}
Running an identical test twice should have the same result. An individual test can either pass or fail. If you have tests that work sometimes and fail at other times, then it feels like the wrong thing is happening, which is why NUnit doesn't support this out of the box. I imagine it would also cause problems when reporting the results of the test run: does it say that test X passed, or failed, if both happened?
The closest thing you're going to get in NUnit is something like the TestCaseSource attribute (which you already seem to know about). You can use TestCaseSource to specify a method, which can, in turn, read from a file. So you could, for example, have a file "cases.txt" which looks like this:
Test1,1,2,3
Test2,wibble,wobble,wet
Test1,2,3,4
And then use this from your tests like so:
[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c) {
}

[Test]
[TestCaseSource("Test2Source")]
public void Test2(string a, string b, string c) {
}

public IEnumerable Test1Source() {
    return GetCases("Test1");
}

public IEnumerable Test2Source() {
    return GetCases("Test2");
}

public IEnumerable GetCases(string testName) {
    var cases = new List<IEnumerable>();
    var lines = File.ReadAllLines(@"cases.txt").Where(x => x.StartsWith(testName));
    foreach (var line in lines) {
        var args = line.Split(',');
        var currentcase = new List<object>();
        for (var i = 1; i < args.Count(); i++) {
            currentcase.Add(args[i]);
        }
        cases.Add(currentcase.ToArray());
    }
    return cases;
}
This is obviously a very basic example that results in Test1 being called twice and Test2 being called once, with the arguments from the text file. However, this is again only going to work if the arguments passed to the test are different, since NUnit uses the arguments to create a unique test name. You could work around this by having the test source generate a unique number for each case and pass it to the test as an extra argument that the test simply ignores.
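A rough sketch of that unique-argument workaround (assuming the GetCases helper above and a using directive for System.Linq):

public IEnumerable Test1Source() {
    int sequence = 0;
    foreach (object[] args in GetCases("Test1")) {
        // Append a counter so NUnit sees every case as distinct,
        // even when the "real" arguments are identical.
        yield return args.Concat(new object[] { sequence++ }).ToArray();
    }
}

[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c, int ignoredSequence) {
    // ignoredSequence exists only to make duplicate cases unique.
}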
An alternative would be to run NUnit from a script that invokes it over and over again, once for each line of the file, although I imagine this may cause you other issues when you're consolidating the reporting from the multiple runs.
I'm running some tests on my code at the moment. My main test method verifies some data, but within that check there are many points at which it could fail.
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I type is displayed as expected. However, if my method fails multiple times, it only shows the first error. Only when I fix that do I discover the second error.
None of my tests are dependent on any others that I'm running. Ideally what I'd like is the ability to have my failure output display every failed message in one pass. Is such a thing possible?
As per the comments, here is how I'm setting up a couple of my checks in the method:
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        Assert.Fail(" Search Display View count was different from what was expected");
    }
    if (sdv.VirtualID != expectedSdVirtualId)
    {
        Assert.Fail(" Search Display View virtual id was different from what was expected");
    }
    if (sdv.EntityType != expectedSdvEntityType)
    {
        Assert.Fail(" Search Display View entity type was different from what was expected");
    }
    return true;
}
Why not have a string/StringBuilder that holds all the fail messages, check its length at the end of your code, and pass it to Assert.Fail? Just a suggestion :)
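A quick sketch of that idea, reusing the fields from the question (sdv and the expected* values are assumed to exist as in the original method, and System.Text.StringBuilder is used for the accumulator):

private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    var errors = new StringBuilder();

    if (context.SearchDisplayViews.Count() != expectedSdvCount)
        errors.AppendLine("Search Display View count was different from what was expected");
    if (sdv.VirtualID != expectedSdVirtualId)
        errors.AppendLine("Search Display View virtual id was different from what was expected");
    if (sdv.EntityType != expectedSdvEntityType)
        errors.AppendLine("Search Display View entity type was different from what was expected");

    // Fail once, reporting every problem found in this pass.
    if (errors.Length > 0)
        Assert.Fail(errors.ToString());

    return true;
}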
The NUnit test runner (assuming that's what you are using) is designed to break out of the test method as soon as anything fails.
So if you want every failure to show up, you need to break up your test into smaller, single-assert ones. In general, you only want to be testing one thing per test anyway.
On a side note, using Assert.Fail like that isn't very semantically correct. Consider using the other built-in methods (like Assert.AreEqual) and only use Assert.Fail when the other methods are not sufficient.
None of my tests are dependent on any others that I'm running. Ideally
what I'd like is the ability to have my failure message to display
every failed message in one pass. Is such a thing possible?
It is possible only if you split your test into several smaller ones.
If you are afraid of the code duplication that usually exists when tests are complex, you can use setup methods. They are usually marked by attributes:
NUnit - SetUp,
MSTest - TestInitialize,
xUnit - constructor.
The following code shows how your test can be rewritten:
public class HowToUseAsserts
{
    int expectedSdvCount = 0;
    int expectedSdVirtualId = 0;
    string expectedSdvEntityType = "";

    EntityModelMultiIndexEntities context;

    public HowToUseAsserts()
    {
        context = new EntityModelMultiIndexEntities();
    }

    [Fact]
    public void Search_display_view_count_should_be_the_same_as_expected()
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    }

    [Fact]
    public void Search_display_view_virtual_id_should_be_the_same_as_expected()
    {
        context.VirtualID.Should().Be(expectedSdVirtualId);
    }

    [Fact]
    public void Search_display_view_entity_type_should_be_the_same_as_expected()
    {
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
So your test names could provide the same information that you would otherwise write in failure messages:
Right now, I've set up multiple Assert.Fail statements within my
method and when the test is failed, the message I type is displayed as
expected. However, if my method fails multiple times, it only shows
the first error. When I fix that, it is only then I discover the
second error.
This behavior is correct and many testing frameworks follow it.
I'd recommend you stop using Assert.Fail(), because it forces you to write specific messages for every failure. Common asserts provide good enough messages, so you can replace your code with the following lines:
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
Assert.Equal(expectedSdvCount, context.SearchDisplayViews.Count());
Assert.Equal(expectedSdVirtualId, context.VirtualID);
Assert.Equal(expectedSdvEntityType, context.EntityType);
But I'd recommend starting to use should-style frameworks like Fluent Assertions, which make your code more readable and provide better output.
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
context.VirtualID.Should().Be(expectedSdVirtualId);
context.EntityType.Should().Be(expectedSdvEntityType);
I'm using MSTEST inside Visual Studio 2008. How can I have each unit test method in a certain test class act as if it were the first test to run so that all global state is reset before running each test? I do not want to explicitly clean up the world using TestInitialize, ClassInitialize, AssemblyInitialize, etc. For example:
[TestClass]
public class MyClassTests
{
    [TestMethod]
    public void Test1()
    {
        // The "Instance" property creates a new instance of "SomeSingleton"
        // if it hasn't been created before.
        var i1 = SomeSingleton.Instance;
        ...
    }

    [TestMethod]
    public void Test2()
    {
        // When I select "Test1" and "Test2" to run, I'd like Test2
        // to have a new AppDomain feel so that the static variable inside
        // of "SomeSingleton" is reset (it was previously set in Test1) on
        // the call to ".Instance"
        var i2 = SomeSingleton.Instance;
        // some code
    }
}
Although a similar question appeared on this topic, it only clarified that tests do not run in parallel. I realize that tests run serially, but there doesn't seem to be a way to explicitly force a new AppDomain for each method (or something equivalent to clear all state).
Ideally, I'd like to specify this behavior for only a small subset of my unit tests so that I don't have to pay the penalty of a new AppDomain creation for tests that don't care about global state (the vast majority of my tests).
In the end, I wrote a helper that used AppDomain.CreateDomain and then used reflection to call the unit test under a different AppDomain. It provides the isolation I needed.
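The original helper isn't shown; a minimal sketch of the general approach (illustrative names, classic .NET Framework only, since AppDomain.CreateDomain is not available on .NET Core) might look like this:

public static class AppDomainIsolation
{
    // Runs a static, void, parameterless method in a brand-new AppDomain so that
    // any static/singleton state it touches is thrown away when the domain unloads.
    public static void Run(CrossAppDomainDelegate testBody)
    {
        AppDomain domain = AppDomain.CreateDomain(
            "IsolatedTestDomain",
            AppDomain.CurrentDomain.Evidence,
            AppDomain.CurrentDomain.SetupInformation);
        try
        {
            domain.DoCallBack(testBody);
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}

A test that needs isolation would then delegate its body to a static method, e.g. AppDomainIsolation.Run(Test2Body), so only the tests that care about global state pay the cost of the extra AppDomain.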
This post on MSDN's forums shows how to handle the situation if you only have a few statics that need to be reset. It mentions some options (e.g. using reflection and PrivateType).
I continue to welcome any further ideas, especially if I'm missing something obvious about MSTEST.
Add a helper in your tests that uses reflection to delete the singleton instance (you can add a reset method to the singleton as well, but I would be concerned about its use). Something like:
public static class SingletonHelper {
    public static void CleanDALFactory()
    {
        typeof(DalFactory)
            .GetField("_instance", BindingFlags.Static | BindingFlags.NonPublic)
            .SetValue(null, null);
    }
}
Call this in your TestInitialize method. [I know this is "cleaning up the world", but you only have to write the method once per singleton in a helper, it's very trivial, and it gives you explicit control.]
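For example (a sketch, assuming the DalFactory singleton above):

[TestInitialize]
public void ResetGlobalState()
{
    // Reset the singleton before every test so each test starts "cold".
    SingletonHelper.CleanDALFactory();
}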
I think you are looking for the TestInitialize attribute and the TestCleanup attribute. Here is an MSDN blog post showing the execution order.
We had a similar issue arise with our MSTests. We handled it by calling a function at the beginning and end of the specific tests that needed it.
We are storing a test expiration date in our app configuration. Three tests needed this date to fall into a specific range to determine the appropriate values. The way our application is set up, the configuration values would only be reset if there was not a value assigned in session. So, we created two new private static functions - one to explicitly set the configuration value to a specified date and one to clear that date from session after the test runs. In our three tests, we called these two functions. When the next test runs, the application sees an empty value for the date and refetches it from the configuration file.
I'm not sure if that's helpful, but that was how we worked around our similar issue.