This question already has answers here:
How to run a test many times with data read from .csv file (data driving)
(3 answers)
Closed 6 years ago.
I have a desktop application with 200 test cases that differ only in their input parameters.
At the moment I am recording each test case separately with its own input parameters.
Is there any way to copy the code and change only the parameters, so that the code stays the same for all test cases and only the input parameters vary?
There are a few things to address here. Firstly, you can run the tests using a data-driven approach as described in the link above.
More important, in my opinion anyway, is how you write your tests so that they can be data driven, and what exactly you are testing that requires so many combinations.
When writing tests, it is important to have reusable code to test. I would recommend looking at something like Code First Scaffolding or Coded UI Page Modeling (I wrote the page modeling stuff). With these approaches, your test code is far more maintainable and flexible (easier to change by hand). This would allow for extremely simple data-driven tests.
public void WhenPerformingCalculation_ThenResultIsCorrect() {
    // imagine a calculator with two numbers and a sign
    var testResult =
        modelUnderTest.LeftSideNumber.SetValue(3)  // set first number
            .Operator.SetValue("*")                // set sign
            .RightSideNumber.SetValue(10)          // set right number
            .Evaluate.Click()                      // press evaluate button
            .Result;                               // get the result

    Assert.AreEqual(30, testResult);
}
becomes
public class CalculationParameters
{
    public double LeftNumber { get; set; }
    public string Operator { get; set; }
    public double RightNumber { get; set; }
    public double Result { get; set; }

    public override string ToString() { return $"{LeftNumber} {Operator} {RightNumber} = {Result}"; }
}

public void WhenPerformingCalculation_ThenResultIsCorrect() {
    ICollection<CalculationParameters> parameters = getParameters();
    List<Exception> exceptions = new List<Exception>();

    foreach (CalculationParameters parameter in parameters)
    {
        try
        {
            var testResult =
                modelUnderTest.LeftSideNumber.SetValue(parameter.LeftNumber)  // set first number
                    .Operator.SetValue(parameter.Operator)                    // set sign
                    .RightSideNumber.SetValue(parameter.RightNumber)          // set right number
                    .Evaluate.Click()                                         // press evaluate button
                    .Result;                                                  // get the result

            Assert.AreEqual(parameter.Result, testResult);
        }
        catch (Exception e)
        {
            exceptions.Add(new Exception($"Failed for parameters: {parameter}", e));
        }
    }

    if (exceptions.Any())
    {
        throw new AggregateException(exceptions);
    }
}
Secondly, why do you need to test so many combinations of input/output in a given test? If you are testing something like "Given the login page, when supplying invalid credentials, then a warning is shown to the user", how many invalid inputs do you really need to test? There would be a second test for valid credentials, and no data driving is necessary.
I would caution you to be careful that you are not testing, through the UI, things that should be unit tests. It sounds like you are testing different combinations of inputs to see if the UI generates the correct output, which should probably be a unit test of your underlying system. When testing the UI, it is typically sufficient to test that the bindings to your view models are correct, not that calculations or other server logic are performed accurately.
My example shows what I would NOT test client side unless that calculator only exists client side (no server-side validation or logic for the calculation). Even in that case, I would probably use a JavaScript test runner to test the view model powering the calculator instead of using Coded UI for this test.
Would you be able to provide some examples of the combinations of input/output you are testing?
You can pass the arguments to the application on the command line with a batch script, or you can create a function that passes the requested parameters.
In the application's entry point you can use
static void Main(string[] args)
where the string array will contain the arguments passed on the command line.
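For illustration only, here is a minimal sketch of that idea; the parameter layout and the RunCalculatorTest helper are hypothetical:
class Program
{
    // Read the test inputs from the command line instead of hard-coding
    // them into each recorded test, e.g.:  MyTests.exe 3 * 10
    static void Main(string[] args)
    {
        string left  = args.Length > 0 ? args[0] : "0";
        string op    = args.Length > 1 ? args[1] : "+";
        string right = args.Length > 2 ? args[2] : "0";

        RunCalculatorTest(left, op, right);
    }

    static void RunCalculatorTest(string left, string op, string right)
    {
        // Single recorded test body; only the parameters change per run.
    }
}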
Let me start by saying that I know about the DebuggerStepThroughAttribute and I'm using it in a number of places with much success.
However, I'm searching for a complementary solution that would work for my specific scenario, which I'll now illustrate...
Say I have a homegrown data-access framework. This framework comes with lots of unit tests which ensure that all my high-level data-access APIs are working as expected. In these tests, there is often a requirement to first seed some test-specific data to a one-off database, then execute the actual test on that data.
The thing is, I might rely on unit tests not just to give me a passive green/red indication about my code, but also to help me zero in on the source of an occasional regression. Given the way I've written the tests, it's easy to imagine that a small subset of them could sometimes give me grief, because the code that seeds the test data and the actual test code both use the same framework APIs at lower levels.
So, for example, if debugging a failed test happened to require that I place a breakpoint inside one such common method, the debugger would stop there a number of times (maybe an annoyingly large number of times!) before I'd get to the point I'm interested in (the actual test, not the seeding).
Leaving aside the fact that I could theoretically refactor everything and improve decoupling, I'm asking this:
Is there a general way to quickly and easily disable debugger breaking for a specific code block, including any sub-calls that might be made from that block, when any of the executed lines could have a breakpoint associated?
The only solution that I'm aware of is to use conditional breakpoints. I would need to set a certain globally accessible flag when entering the method that I wanted to exclude and clear it when exiting. Any conditional breakpoints would then have to require that the flag is not set.
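For illustration, a sketch of that workaround (the flag and the SeedTestData method are placeholder names):
public static class DebugFlags
{
    // Globally accessible flag that every conditional breakpoint checks.
    public static bool SuppressBreakpoints;
}

public void SeedTestData()
{
    DebugFlags.SuppressBreakpoints = true;
    try
    {
        // ... seeding calls that go through the same low-level framework APIs ...
    }
    finally
    {
        DebugFlags.SuppressBreakpoints = false;
    }
}

// Each conditional breakpoint in the shared code then uses the condition:
//     !DebugFlags.SuppressBreakpoints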
But this seems tedious, because breakpoints are often added, removed, then added again, etc. Given the rudimentary breakpoint management support in Visual Studio this quickly becomes really annoying.
Is there another way? Preferably by manipulating the debugger directly or indirectly, similarly to how the DebuggerStepThroughAttribute does it for single method scope?
EDIT:
Here's a contrived example of what I might have:
public class MyFramework
{
    public bool TryDoCommonWork(string s)
    {
        // Picture an actual breakpoint here instead of this line.
        // As it is, the debugger would stop here 3 times during the seeding
        // phase and then one final time during the test phase.
        Debugger.Break();

        if (s != null)
        {
            // Do the work.
            return true;
        }
        return false;
    }
}
[TestClass]
public class MyTests
{
    [TestMethod]
    public void Test()
    {
        var fw = new MyFramework();

        // Seeding stage of test.
        fw.TryDoCommonWork("1");
        fw.TryDoCommonWork("2");
        fw.TryDoCommonWork("3");

        // Test.
        Assert.IsTrue(fw.TryDoCommonWork("X"));
    }
}
What I'm really looking for is something roughly similar to this:
[TestClass]
public class MyTests
{
    [TestMethod]
    public void Test()
    {
        var fw = new MyFramework();

        // Seeding stage of test with no debugger breaking.
        using (Debugger.NoBreakingWhatsoever())
        {
            fw.TryDoCommonWork("1");
            fw.TryDoCommonWork("2");
            fw.TryDoCommonWork("3");
        }

        // Test with normal debugger breaking.
        Assert.IsTrue(fw.TryDoCommonWork("X"));
    }
}
How to programmatically tell NUnit to repeat a test?
Background:
I'm running NUnit from within my C# code, using a SimpleNameFilter and a RemoteTestRunner. My application reads a csv file, TestList.csv, that specifies what tests to run. Up to that point everything works ok.
Problem:
The problem is when I put the same test name two times in my TestList file. In that case, my application correctly reads and loads the SimpleNameFilter with two instances of the test name. This filter is then passed to the RemoteTestRunner. Then, NUnit executes the test only once. It seems that when NUnit sees the second instance of a test it has already run, it ignores it.
How can I override such behavior? I'd like to have NUnit run the same test name two times or more as specified in my TestList.csv file.
Thank you,
Joe
http://www.nunit.org/index.php?p=testCase&r=2.5
TestCaseAttribute serves the dual purpose of marking a method with
parameters as a test method and providing inline data to be used when
invoking that method. Here is an example of a test being run three
times, with three different sets of data:
[TestCase(12, 3, Result = 4)]
[TestCase(12, 2, Result = 6)]
[TestCase(12, 4, Result = 3)]
public int DivideTest(int n, int d)
{
    return n / d;
}
Running an identical test twice should produce the same result. An individual test can either pass or fail. If you have tests that sometimes pass and sometimes fail, it feels like the wrong thing is happening, which is why NUnit doesn't support this out of the box. I imagine it would also cause problems when reporting the results of the test run: does it say that test X passed, or failed, if both happened?
The closest thing you're going to get in NUnit is something like the TestCaseSource attribute (which you already seem to know about). You can use TestCaseSource to specify a method, which can in turn read from a file. So you could, for example, have a file "cases.txt" which looks like this:
Test1,1,2,3
Test2,wibble,wobble,wet
Test1,2,3,4
And then use this from your tests like so:
[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c) {
}

[Test]
[TestCaseSource("Test2Source")]
public void Test2(string a, string b, string c) {
}

public IEnumerable Test1Source() {
    return GetCases("Test1");
}

public IEnumerable Test2Source() {
    return GetCases("Test2");
}

public IEnumerable GetCases(string testName) {
    var cases = new List<IEnumerable>();
    var lines = File.ReadAllLines(@"cases.txt").Where(x => x.StartsWith(testName));
    foreach (var line in lines) {
        var args = line.Split(',');
        var currentcase = new List<object>();
        // args[0] is the test name; the remaining entries are the test arguments.
        for (var i = 1; i < args.Length; i++) {
            currentcase.Add(args[i]);
        }
        cases.Add(currentcase.ToArray());
    }
    return cases;
}
This is obviously a very basic example that results in Test1 being called twice and Test2 being called once, with the arguments from the text file. However, this is again only going to work if the arguments passed to the test are different, since NUnit uses the arguments to create a unique test name. You could work around this by having the test source generate a unique number for each method call and pass it to the test as an extra argument that the test simply ignores (see the sketch below).
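For example, a sketch of that workaround; the extra runIndex parameter exists only to make each generated test name unique, and the test simply ignores it:
public IEnumerable GetCases(string testName) {
    var cases = new List<IEnumerable>();
    var lines = File.ReadAllLines(@"cases.txt").Where(x => x.StartsWith(testName));
    var index = 0;
    foreach (var line in lines) {
        var args = line.Split(',');
        var currentcase = new List<object>();
        for (var i = 1; i < args.Length; i++) {
            currentcase.Add(args[i]);
        }
        // Extra argument whose only purpose is to make the test name unique.
        currentcase.Add(index++);
        cases.Add(currentcase.ToArray());
    }
    return cases;
}

[Test]
[TestCaseSource("Test1Source")]
public void Test1(string a, string b, string c, int runIndex) {
    // runIndex is ignored; it only exists to keep duplicate cases distinct.
}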
An alternative would be to run NUnit from a script that calls it over and over again, once for each line of the file, although I imagine this may cause you other issues when consolidating the reporting from the multiple runs.
I'm running some tests on my code at the moment. My main test method is used to verify some data, but within that check there are many points at which it could fail.
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I typed is displayed as expected. However, if my method fails multiple times, it only shows the first error. Only when I fix that do I discover the second error.
None of my tests are dependent on any others that I'm running. Ideally what I'd like is for my failure message to display every failed message in one pass. Is such a thing possible?
As per the comments, here is how I'm setting up a couple of my tests in the method:
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        Assert.Fail(" Search Display View count was different from what was expected");
    }
    if (sdv.VirtualID != expectedSdVirtualId)
    {
        Assert.Fail(" Search Display View virtual id was different from what was expected");
    }
    if (sdv.EntityType != expectedSdvEntityType)
    {
        Assert.Fail(" Search Display View entity type was different from what was expected");
    }
    return true;
}
Why not have a string/stringbuilder that holds all the fail messages, check for its length at the end of your code, and pass it into Assert.Fail? Just a suggestion :)
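A rough sketch of that suggestion, applied to the checks from the question (it assumes a using System.Text; directive for StringBuilder):
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    var failures = new StringBuilder();

    if (context.SearchDisplayViews.Count() != expectedSdvCount)
        failures.AppendLine("Search Display View count was different from what was expected");
    if (sdv.VirtualID != expectedSdVirtualId)
        failures.AppendLine("Search Display View virtual id was different from what was expected");
    if (sdv.EntityType != expectedSdvEntityType)
        failures.AppendLine("Search Display View entity type was different from what was expected");

    // Fail once, reporting every collected message in a single pass.
    if (failures.Length > 0)
        Assert.Fail(failures.ToString());

    return true;
}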
The NUnit test runner (assuming that's what you are using) is designed to break out of the test method as soon as anything fails.
So if you want every failure to show up, you need to break up your test into smaller, single-assert ones. In general, you only want to be testing one thing per test anyway.
On a side note, using Assert.Fail like that isn't very semantically correct. Consider using the other built-in methods (like Assert.AreEqual) and only use Assert.Fail when the other methods are not sufficient.
None of my tests are dependent on any others that I'm running. Ideally
what I'd like is the ability to have my failure message to display
every failed message in one pass. Is such a thing possible?
It is possible only if you split your test into several smaller ones.
If you are afraid of code duplication, which usually exists when tests are complex, you can use setup methods. They are usually marked by attributes:
NUnit - SetUp,
MsTest - TestInitialize,
XUnit - constructor.
The following code shows how your test can be rewritten:
public class HowToUseAsserts
{
    int expectedSdvCount = 0;
    int expectedSdVirtualId = 0;
    string expectedSdvEntityType = "";

    EntityModelMultiIndexEntities context;

    public HowToUseAsserts()
    {
        context = new EntityModelMultiIndexEntities();
    }

    [Fact]
    public void Search_display_view_count_should_be_the_same_as_expected()
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    }

    [Fact]
    public void Search_display_view_virtual_id_should_be_the_same_as_expected()
    {
        context.VirtualID.Should().Be(expectedSdVirtualId);
    }

    [Fact]
    public void Search_display_view_entity_type_should_be_the_same_as_expected()
    {
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
So your test names could provide the same information as you would write as messages:
Right now, I've set up multiple Assert.Fail statements within my
method and when the test is failed, the message I type is displayed as
expected. However, if my method fails multiple times, it only shows
the first error. When I fix that, it is only then I discover the
second error.
This behavior is correct and many testing frameworks follow it.
I'd recommend that you stop using Assert.Fail(), because it forces you to write a specific message for every failure. The common asserts provide good enough messages, so you can replace your code with the following lines:
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
Assert.Equal(expectedSdvCount, context.SearchDisplayViews.Count());
Assert.Equal(expectedSdVirtualId, context.VirtualID);
Assert.Equal(expectedSdvEntityType, context.EntityType);
But I'd recommend starting to use should-frameworks like Fluent Assertions, which make your code more readable and provide better output.
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
context.VirtualID.Should().Be(expectedSdVirtualId);
context.EntityType.Should().Be(expectedSdvEntityType);
I am trying to test a method with unit testing, but I also want to make sure that internally it was executed in the expected way. Here is the method, simplified; it returns a value from a database but also saves it to the cache, so if it's requested again within 5 seconds, the value will be retrieved from the cache and not the DB.
public static string GetValue()
{
    var cache = HttpRuntime.Cache;
    string value = (string)cache["test"];

    // If value is null then it was never in cache or it expired.
    if (value == null)
    {
        // Imagine here complex code that retrieves and sets "value".
        value = "OK";

        // Add it to cache to retrieve it faster if requested again within 5 sec.
        cache.Add("test", value, null, DateTime.Now + TimeSpan.FromSeconds(5),
            System.Web.Caching.Cache.NoSlidingExpiration, CacheItemPriority.NotRemovable, null);

        Debug.WriteLine("From DB");
    }
    else
    {
        // Value was in cache, so it's ready to return.
        Debug.WriteLine("From Cache");
    }

    return value;
}
This basically returns the value "OK" from a hypothetical database. However, because it uses absolute expiration of HttpRuntime.Cache, if the value is requested again within 5 seconds, it's returned from the Cache and not from the DB.
Now my question is how to write a TestMethod that not only verifies that it returns OK, but also that the caching logic is working correctly.
Notice that depending on whether it used the DB or the Cache to get the value, a corresponding debug line is added to debug output.
So the testing method should like something like this:
[TestMethod]
public void GetValue_OK()
{
    Assert.IsTrue(Helpers.GetValue() == "OK");

    Thread.Sleep(4000);
    Assert.IsTrue(Helpers.GetValue() == "OK");
    // Assert that it wrote "From Cache".
    Assert.IsTrue(LastDebugLine().Contains("Cache"));

    Thread.Sleep(2000);
    Assert.IsTrue(Helpers.GetValue() == "OK");
    // Assert that it wrote "From DB", because theoretically over 5 seconds passed,
    // so the entry has expired and the routine loaded it again from the DB.
    Assert.IsTrue(LastDebugLine().Contains("DB"));
}

string LastDebugLine()
{
    // Imagine a string named LastDebugOutput that contains the
    // last line of output from Debug.WriteLine.
    // The code below somehow retrieves this line.
    return String.Empty;
}
In the test method, I verify not only the correct output but also whether the value was retrieved from the cache or the DB. I run the method three times, inserting delays between the calls, because I also want to test whether the value will be retrieved from the DB or the cache.
The way I thought this could happen was with a method I call LastDebugLine() that retrieves the last debug output from the tested method. By reading its contents, the test method knows the internals of the tested method and can compare them with the expected result.
Now my question has two parts:
1) What is the correct way to test all of this? Is my idea of using the debug output and checking it in the unit test correct? I could be very wrong here about the general concept, so maybe this can be done in a better way.
2) If however my concept is correct, what exactly should be the code in LastDebugLine() to get the last line from Debug.WriteLine?
Even if I am correct, there is still a problem, as unit tests might run multithreaded, so reading the debug output like this might produce unexpected results.
How to test this method correctly?
There are a couple of ways to approach this. To do the whole thing as an integration test, you could:
create a test which calls the method to put the value in the DB/cache,
then have the test delete the value from the DB,
then call the method again.
If the value comes from the cache, you'll get the right value; if the method calls the DB, the test will fail.
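A rough sketch of that flow; DeleteTestValueFromDb is a hypothetical helper that removes the row directly from the test database:
[TestMethod]
public void GetValue_ReturnsCachedValueWithinFiveSeconds()
{
    // First call hits the DB and primes the cache.
    Assert.AreEqual("OK", Helpers.GetValue());

    // Remove the value behind the method's back.
    DeleteTestValueFromDb();

    // If this still returns "OK", the value came from the cache;
    // if the method went back to the DB, the assertion fails.
    Assert.AreEqual("OK", Helpers.GetValue());
}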
As an alternative, you could isolate the components which access the DB and the cache and inject the objects which do this into your class under test. You could then provide mocks in your tests to assert that the objects are called in the right order.
This will be difficult in your current situation because you have a static method, so injecting the dependency is hard. But if you change your GetValue method so that it is not static, you can pass in an object that wraps HttpRuntime.Cache and implements a similar interface, then pass a mock of that object in your tests and validate that the rest of the code is called when the value doesn't exist in the cache, and isn't called when it does.
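A minimal sketch of that refactoring; the ICache interface and the ValueProvider class are hypothetical names:
public interface ICache
{
    object Get(string key);
    void Add(string key, object value, TimeSpan absoluteExpiration);
}

public class ValueProvider
{
    private readonly ICache _cache;

    public ValueProvider(ICache cache)
    {
        _cache = cache;
    }

    public string GetValue()
    {
        var value = (string)_cache.Get("test");
        if (value == null)
        {
            value = "OK"; // imagine the DB call here
            _cache.Add("test", value, TimeSpan.FromSeconds(5));
        }
        return value;
    }
}

// In a test, pass a mock ICache and verify that Get and Add are called
// as expected when the value is, or is not, already in the cache.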
If you can't change your model so that GetValue is no longer static, then you could look at using the MS Fakes framework or one of the other commercial products which can mock static methods.
I am a newbie to unit testing - I have only done basic assert tests using plain test methods (in my last module I created about 50 of those).
I am currently reading a book on unit testing, and one of the many examples in the book has me creating a new class for each single test. Below is one of the example classes created just for one test case. My question is: is it ever necessary to do this? Or when should one apply this approach, and when is it not necessary?
public class and_saving_an_invalid_item_type : when_working_with_the_item_type_repository
{
    private Exception _result;

    protected override void Establish_context()
    {
        base.Establish_context();
        _session.Setup(s => s.Save(null)).Throws(new ArgumentNullException());
    }

    protected override void Because_of()
    {
        try
        {
            _itemTypeRepository.Save(null);
        }
        catch (Exception exception)
        {
            _result = exception;
        }
    }

    [Test]
    public void then_an_argument_null_exception_should_be_raised()
    {
        _result.ShouldBeInstanceOfType(typeof(ArgumentNullException));
    }
}
Do you need to create a new class for each individual test? I would say no, you certainly do not. I don't know why the book is saying that, or if they are just doing it to help illustrate their examples.
To answer your question, I'd recommend using a class for each group of tests... but it's really a bit more complex than that, because how you define "group" varies and depends on what you're doing at the time.
In my experience, a set of tests is really logically structured like a document, which can contain one or more set of tests, grouped (and sometimes nested) together by some common aspect. A natural grouping for testing Object-Oriented code is to group by class, and then by method.
Here's an example:
tests for class 1
    tests for method 1
        primary behaviour of method 1
        alternate behaviour of method 1
    tests for method 2
        primary behaviour of method 2
        alternate behaviour of method 2
Unfortunately, in C# or java (or similar languages), you've only got two levels of structure to work with (as opposed to the 3 or 4 you really actually want), and so you have to hack things to fit.
The common way this is done is to use a class to group together sets of tests, and don't group anything at the method level, as like this:
class TestsForClass1 {
    void Test_method1_primary()
    void Test_method1_alternate()
    void Test_method2_primary()
    void Test_method2_alternate()
}
If both your method 1 and method 2 have identical setup/teardown, then this is fine, but sometimes they don't, leading to this breakdown:
class TestsForClass1_method1 {
    void Test_primary()
    void Test_alternate()
}

class TestsForClass1_method2 {
    void Test_primary()
    void Test_alternate()
}
If you have more complex requirements (let's say you have 10 tests for method_1, the first 5 have setup requirement X, the next 5 have different setup requirements), then people usually end up just making more and more class names like this:
class TestsForClass1_method1_withRequirementX { ... }
class TestsForClass1_method1_withRequirementY { ... }
This sucks, but hey - square peg, round hole, etc.
Personally, I'm a fan of using lambda functions inside methods to give you a third level of grouping. NSpec shows one way that this can be done... we have an in-house test framework which is slightly different; it reads a bit like this:
class TestsForClass1 {
    void TestsForMethod1() {
        It.Should("perform its primary function", () => {
            // ....
        });

        It.Should("perform its alternate function", () => {
            // ....
        });
    }
}
This has some downsides (if the first It statement fails, the others don't run), but I consider this tradeoff worth it.
-- The question originally read: "is it ever really necessary to create an object for each single test I want to carry out?". The answer to that is (mostly) yes, as per this explanation.
Generally, unit tests involve the interaction of two parts
The object under test. Usually this is an instance of a class or a function you've written
The environment. Usually this is whatever parameters you've passed to your function, and whatever other dependencies the object may have a reference to.
In order for unit tests to be reliable, both of these parts need to be "fresh" for each test, to ensure that the state of the system is sane and reliable.
If the thing under test is not refreshed for each test, then one test may alter the object's internal state and cause the next test to wrongly fail.
If the environment is not refreshed for each test, then one test may alter the environment (e.g. set some variable in an external database or something), which may cause the next test to wrongly fail.
There are obviously many situations where this is not the case - You might for example have a pure mathematical function that only takes integers as parameters and doesn't touch any external state, and then you may not want to bother re-creating the object under test or the test environment... but generally, most things in any Object-Oriented system will need refreshing, so this is why it is "standard practice" to do so.
I'm not quite able to follow your example, but ideally any test case should be able to run independently of any other - independently from anything else, really.