C# test dependencies

I am very aware of, and agree with, the opinion that tests should not be dependent on one another.
However, in this instance I feel it would be beneficial.
The situation is that the system under test has a step-by-step process that must be followed (there is no way to jump to a step without going through the previous ones).
In an ideal world we would get the devs to add an API to allow us to do this, but given the constraints this will not be done.
Currently the tests being done are all end to end, which makes failing tests difficult to analyse at times.
My question is then: is there a clean way I can break these end-to-end tests down into smaller tests and impose some dependencies on them?
I'm aware that TestNG can do this using its dependsOnMethods annotation attribute; is there a similar concept for C#?

From the way you describe your tests, either:
The way the developers have built the code is flawed, in that to test step 3, say, steps 1 and 2 must be invoked first, rather than step 3 being testable in isolation. If the problem is with the way the developers designed the system, suggest they fix it.
You are performing integration tests and want to test the results of invoking several steps. In that case, you do not want to use a unit test tool, you need an integration test tool. See this answer to another question for advice on such tools and their pitfalls.

A couple of concepts might be useful here:
Scenario-Driven Tests
One Assert Per Test
The basic idea here is to separate the scenario you are executing from the assertion you are making (i.e. what you are testing). This is really just BDD, so it may make sense to use a BDD Framework to help you, although it's not necessary. With NUnit, you might write something like this
[TestFixture]
public class ExampleTests
{
    [SetUp]
    public void ExecuteScenario()
    {
        GrabBucket();
        AddApple();
        AddOrange();
        AddBanana();
    }

    [Test]
    public void ThereShouldBeOneApple()
    {
        Assert.AreEqual(1, Count("Apple"));
    }

    [Test]
    public void ThereShouldBeOneOrange()
    {
        Assert.AreEqual(1, Count("Orange"));
    }

    [Test]
    public void ThereShouldBeOneBanana()
    {
        Assert.AreEqual(1, Count("Banana"));
    }

    [Test]
    public void ThereShouldBeNoPomegranates()
    {
        Assert.AreEqual(0, Count("Pomegranate"));
    }

    private void GrabBucket() { /* do stuff */ }
    private void AddApple() { /* do stuff */ }
    private void AddOrange() { /* do stuff */ }
    private void AddBanana() { /* do stuff */ }

    private int Count(string fruitType)
    {
        // Query the application state
        return 0;
    }
}
I realize this doesn't answer your question as stated - this isn't breaking the larger integration test down into smaller units - but it may help you solve the dependency problem: here, all the related tests depend on the execution of a single scenario, not on previously executed tests.

I agree with almost all the comments to date. The solution I ended up going with, albeit not the cleanest or most 'SOLID', was to use NUnit as the framework, using the Category attribute to order the tests.
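For reference, here is a minimal sketch (the method names are invented) of one way to express that kind of ordering in NUnit 3: the Order attribute runs the tests of a fixture in a defined sequence, and a Category can be used to pick the whole group out in the runner.

using NUnit.Framework;

[TestFixture]
[Category("CheckoutFlow")] // hypothetical category used to select this group from the runner
public class CheckoutFlowTests
{
    [Test, Order(1)]
    public void Step1_StartProcess() { /* drive the system through step 1 and assert on it */ }

    [Test, Order(2)]
    public void Step2_EnterDetails() { /* relies on step 1 having already run */ }

    [Test, Order(3)]
    public void Step3_Confirm() { /* relies on steps 1 and 2 having already run */ }
}

Note that Order only controls the sequence; a later step still runs even if an earlier one failed, so it is worth making each step fail loudly (or guard on shared state) to keep the later failures meaningful.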

Working example for running Selenium tests in parallel within the same class using the NUnit FixtureLifeCycle attribute with the InstancePerTestCase value

Starting from NUnit 3.13 (I'm trying 3.13.1), a new attribute was introduced for TestFixture isolation when running tests in parallel within the same class.
Has anybody managed to use [Parallelizable(ParallelScope.All)] + [FixtureLifeCycle(LifeCycle.SingleInstance)] and run Webdriver tests in parallel within the same class?
After activating this feature I started to get unpredictable errors, like I used to have without the new attribute. Looks like the Fixture is not isolated.
NOTE: everything works fine when running WebDriver test classes in parallel.
The WebDriver is initialized in a [SetUp] method; the TestFixture base class looks like the following:
[SetUp]
protected void Initialize()
{
    //InitializeWebDriver();
    Driver = new MyDriver();
}

[TearDown]
public void TestFixtureTearDown()
{
    try
    {
        //...
    }
    finally
    {
        Driver.Quit();
        Driver = null;
    }
}
Tests look like this:
[TestFixture]
[Parallelizable(ParallelScope.All)]
[FixtureLifeCycle(LifeCycle.SingleInstance)]
public class TestClassA : TestBase
{
    [Test]
    public void TestA1()
    {
    }
}
The mistake in the code was very obvious (I used SingleInstance instead of InstancePerTestCase).
I created a template project with 2 classes of 3 tests each.
All 6 may be executed simultaneously without any failures.
https://github.com/andrewlaser/TestParallelNUnit
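For anyone comparing against their own setup, a minimal sketch of the corrected combination (the second test is invented just to show the parallelism):

[TestFixture]
[Parallelizable(ParallelScope.All)]
[FixtureLifeCycle(LifeCycle.InstancePerTestCase)] // a fresh TestClassA (and TestBase) instance per test case
public class TestClassA : TestBase
{
    [Test]
    public void TestA1()
    {
        // Each test case now gets its own fixture instance, and therefore its own Driver
        // created in the base class [SetUp] and quit in [TearDown].
    }

    [Test]
    public void TestA2()
    {
        // Runs in parallel with TestA1 without sharing any instance fields.
    }
}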
The general philosophy of NUnit attributes around parallel execution is that they are intended to tell NUnit that it's safe to run your class in parallel. Using them doesn't make it safe... that's up to you.
The new attribute makes it easier for you to do that but doesn't guarantee anything. It does protect you from a certain kind of error - where two parallel test case instances make incompatible changes to the same member of the test class. But that's a very small part of all the ways your tests can fail when run in parallel.
Putting it another way... your fixture is now safe, but the things your fixture may refer to - drivers, files, remote services - are in no way protected. If your fixtures share anything external, that's a source of failure.
Unfortunately, you haven't given enough information for me to point out what's specifically wrong here. For example, you haven't shown how or where the Driver property is declared. With more info on your part, I'll be glad to update my answer.
Going back to my initial point, your use of the attributes is no more than a promise you are making to NUnit... something like "I have made sure that it's safe to run this test in parallel." In your case, you're making the even bigger promise that all the tests in your fixture are safe to run in parallel. That's not something I would do right out of the box. I'd start with just two tests, which I think can safely run together and I'd expand from there. Obviously, it's almost always safe to run one test in parallel. :-)

How can I test whether a private method of a class is called or not with Rhino Mocks?

I am quite new to C# and also to Rhino Mocks. I searched and found topics similar to my question but couldn't find a proper solution.
I am trying to work out whether a private method is called or not in my unit test. I am using Rhino Mocks and have read a lot about it; some sources just say to change the method's access specifier from private to public, but I cannot change the source code. I tried linking the source file into my test project, but that doesn't change anything.
public void calculateItems()
{
    var result = new Result(fileName, ip, localPath, remotePath);
    calculateItems(result, nameOfString);
}

private void calculateItems(Result result, string nameOfString)
As you can see from the code above, I have two methods with exactly the same name, calculateItems: the public one has no parameters and the private one has two. I am trying to understand whether, when I call the public one in my unit test, the private one is called.
private CalculateClass sut;
private Result result;

[SetUp]
public void Setup()
{
    result = MockRepository.GenerateStub<Result>();
    sut = new CalculateClass();
}

[TearDown]
public void TearDown()
{
}

[Test]
public void test()
{
    sut.Stub(stub => stub.calculateItems(Arg<Result>.Is.Anything, Arg<string>.Is.Anything));
    sut.calculateItems();
    sut.AssertWasCalled(stub => stub.calculateItems(Arg<Result>.Is.Anything, Arg<string>.Is.Anything));
}
In my unit test I get an error which says "No overload for method calculateItems takes two arguments". Is there a way to test this without changing the source code?
You're testing the wrong thing. Private methods are private. They are of no concern to consuming code, and unit tests are consuming code like any other.
In your tests you test and validate the outward facing functionality of the component. Its inner implementation details aren't relevant to the tests. All the tests care about is whether the invoked operation produces the expected results.
So the question you must ask yourself is... What are the expected results when invoking this operation?:
calculateItems()
It doesn't return anything, so what does it do? What state does it modify in some way? That is what your test needs to observe, not the implementation details but the observable result. (And if the operation has no observable result, then there's no difference between "passed" or "failed" so there's nothing to test.)
We can't see the details of your code, but it's possible that the observable result is coupled to another component entirely. If that's the case then that other component is a dependency for this operation and the goal of the unit test is to mock that dependency so the operation can be tested independently of the dependency. The component may then need to be modified so that a dependency is provided rather than internally controlled. (This is referred to as the Dependency Inversion Principle.)
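To make that last paragraph concrete, here is a hedged sketch (every type name is invented, since the real code isn't shown) of pushing the observable effect behind an injected dependency and asserting against a Rhino Mocks mock of it through the public method:

using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical collaborator: whatever the private method's work ultimately ends up touching.
public interface IItemStore
{
    void Save(string name);
}

public class Calculator
{
    private readonly IItemStore _store;

    public Calculator(IItemStore store)
    {
        _store = store;
    }

    public void CalculateItems()
    {
        // ... the real calculation would happen here ...
        _store.Save("result"); // the observable effect the test can assert on
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void CalculateItems_saves_a_result()
    {
        var store = MockRepository.GenerateMock<IItemStore>();
        var sut = new Calculator(store);

        sut.CalculateItems();

        store.AssertWasCalled(s => s.Save(Arg<string>.Is.Anything));
    }
}

The test never mentions any private method; it only cares that calling the public operation produces the expected effect at the boundary.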
Also of note...
but I can not change the source code
That's a separate problem entirely. If you truly can't change the source code, then the value of these tests is drastically reduced and possibly eliminated entirely. If a test fails, what can you do about it? Nothing. Because you can't change the code. So what are you testing?
Keep in mind that it's not only possible but unfortunately very common for programmers to write code which can't be meaningfully unit tested. If this code was provided to you by someone else and you are forbidden to change it for some non-technical reason, then it will be the responsibility of that someone else to correct the code. "Correcting" may include "making it possible to meaningfully unit test". (Or, honestly, they should be unit testing it. Not you.)
If your public method calls your private one, then the same thing will happen in your tests. Tests are nothing more than code that can be run and debugged, so you can try it and see what happens.
Private methods can't be tested directly, but they can be tested via their public callers, which is what you are doing, so it's all good. Whether it's a good idea to have a setup like this is a different story entirely, but I'm not going into that now.
Now, let's discuss what you are actually testing.
Unit tests should not have deep knowledge of the code they test. The reason is that you should have inputs and outputs and you shouldn't care what happens in between.
If you refactor the code and eliminate the private method, then your test would break, even if the inputs and outputs of your public method remain the same. That's not a good position to be in; this is what we call brittle tests.
So add your functional tests around the public method, verify that you get what you expect, and don't worry about whether it calls your private method or not.
When you say you need to know whether your private methods are called, this can have two different interpretations:
You want to ensure that the private method is called within one particular test, making it a success criterion for that very test.
You want to know if the private method is called at all, by any of your test cases. You might be interested in this because you want to be sure if the private method is covered by your test suite, or as you said, just to form an understanding of what is actually going on in your code.
Regarding the second interpretation: if you want to understand what is going on in the code, a good approach is to use a debugger and just step through the code to see which function is called. As I am not a C# expert, I cannot recommend a specific debugging tool, but finding recommendations on the web should not be difficult. This approach fulfils your requirement of not changing the source code.
Another possibility, in particular if you are interested in whether your private function is covered by the tests, is to use a test coverage tool for C#. The coverage tool will show you whether or not the private method was called, again without requiring any changes to the source code.
Regarding the first interpretation of your question: if you want to test that some private function is called as part of your test's success criterion, you should preferably do this with tests that use the public API. In these tests you should then be able to judge whether the private function was called from the effect it has on the test result.
And, in contrast to other opinions, you should test the implementation. The primary goal of unit testing is to find the bugs in the code, and different implementations have different bugs. This is why people also use coverage tools: to see whether they have covered the code of their implementation. And coverage is not enough; you also need to check boundary cases of expressions and so on. Certainly, having maintainable tests that do not break unnecessarily under refactoring is a good goal (which is why testing through the public API is typically a good approach - but not always), but it is secondary to the goal of finding all the bugs.

How to verify the number of calls to a method of 'this' service

I'm using the NUnit framework with Moq for testing. I've got a problem with verifying how many times a private method of this class has been called. For a mock object it's enough to call Verify() with a Times parameter, but my method is part of this class. I tried mocking the current service (the SUT), but that probably isn't the best idea and it doesn't work properly.
SUT:
public object Post(Operations.Campaign.Merge request)
{
    List<CampaignIdWithNumberOfAds> campaignList = new List<CampaignIdWithNumberOfAds>();
    for (int i = 0; i < request.CampaignIdsToMerge.Count; i++)
    {
        if (this.CampaignRepository.Exist(request.CampaignIdsToMerge[i]))
        {
            campaignList.Add(new CampaignIdWithNumberOfAds()
            {
                CampaignId = request.CampaignIdsToMerge[i],
                NumberOfAdvertisement = this.CampaignRepository.GetNumberOfAdvertisementsInCampaign(request.CampaignIdsToMerge[i])
            });
        }
    }
    if (campaignList.Count > 1)
    {
        campaignList = campaignList.OrderByDescending(p => (p == null) ? -1 : p.NumberOfAdvertisement).ToList();
        List<CampaignIdWithNumberOfAds> campaignsToMerge = campaignList.Skip(1).ToList();
        CampaignIdWithNumberOfAds chosenCampaign = campaignList.FirstOrDefault<CampaignIdWithNumberOfAds>();
        uint chosenCampaignId = chosenCampaign.CampaignId;
        foreach (var campaignToMerge in campaignsToMerge)
        {
            this.MergeCampaigns(chosenCampaignId, campaignToMerge.CampaignId);
        }
    }
    return true;
}
Test:
[Test]
public void MergeCampaignsPost_ValidMergeCampaignsRequest_ExecuteMergeCampaignsMethodAppropriateNumberOfTimes()
{
    // Arrange
    var mockCampaignService = new Mock<Toucan.Api.Services.CampaignService>();
    var request = Mother.GetValidMergeCampaignsRequest_WithDifferentNumbersOfAdvertisement();
    mockCampaignService.Setup(x => x.MergeCampaigns(It.IsAny<uint>(), It.IsAny<uint>()));

    // Act
    var response = this.Service.Post(request);

    // Assert
    mockCampaignService.Verify(x => x.MergeCampaigns(It.IsAny<uint>(), It.IsAny<uint>()), Times.Exactly(request.CampaignIdsToMerge.Count - 1));
}
I'm afraid I won't give you a solution here so much as some guidance. There are many different strategies for unit testing, and different people would suggest different solutions. Basically, in my opinion you could change the way you are testing your code (you might agree or disagree with the points below, but please take them into consideration).
Unit test should be independent from the implementation
Easy as it sounds, it is very hard to stick to this approach. Private methods are your implementation of solving the problem. The typical pitfall for a developer writing a unit test for their own code is that you know how your code works and mirror that knowledge in the unit test. What if the implementation changes, but the public method still fulfils the requested contract? You hardly ever want to tie your unit test directly to a private method. This is related to the following...
Test should check the output result of the method
Which basically means: do not check how many times something is executed if you don't have to. I am not sure what your MergeCampaigns method is doing, but it would be better to check the result of the operation instead of how many times it is executed.
Don't overdo your unit tests - keep it maintainable
Try to test each functional scenario you can imagine with as simple and as independent a test as possible. Don't go too deep into checking whether something is called. Otherwise you will get 100% coverage at the start, but you will curse every time you change a thing in the service, because it will make half of your tests fail (even though the service still does its job, just in a different way than originally designed). You will then spend time rewriting unit tests that give you no gain in terms of creating a bulletproof solution.
It is very easy to start writing unit tests and keep the coverage green, it starts to get very tricky if you want to write good unit tests. There are many valuable resources to help with that. Good luck!
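One further note on the test as posted: mockCampaignService is set up and verified, but the request is sent through this.Service, so the mock never sees a single call. A hedged sketch of the more conventional arrangement, in which everything not shown in the question (the repository interface, constructor injection, the request initializer) is an assumption, is to mock the repository the service depends on, drive the real service through its public Post method, and assert on what reaches that boundary and on the returned result:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[Test]
public void Post_WithThreeExistingCampaigns_QueriesEveryCampaignThroughTheRepository()
{
    // Assumed: this.CampaignRepository is an ICampaignRepository that can be injected.
    var repository = new Mock<ICampaignRepository>();
    repository.Setup(r => r.Exist(It.IsAny<uint>())).Returns(true);
    repository.Setup(r => r.GetNumberOfAdvertisementsInCampaign(It.IsAny<uint>()))
              .Returns((uint id) => (int)id); // assumed int return; gives each campaign a distinct ad count

    // Assumed constructor injection; the real wiring of CampaignService isn't shown.
    var sut = new CampaignService(repository.Object);
    var request = new Operations.Campaign.Merge { CampaignIdsToMerge = new List<uint> { 1, 2, 3 } };

    var response = sut.Post(request);

    // Assert on the result and on what crosses the repository boundary,
    // not on the service's own MergeCampaigns method.
    Assert.AreEqual(true, response);
    repository.Verify(r => r.Exist(It.IsAny<uint>()), Times.Exactly(3));
}

If the merge itself has to be observed, the same idea applies: move that behaviour behind an injected collaborator and verify the collaborator, rather than partially mocking the class under test.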

understanding some unit testing practices

I am a newbie to unit testing - I have only done basic assert tests using plain test methods (in my last module, I created about 50 of those).
I am currently reading a book on unit testing, and one of the many examples in the book has me creating a new class for each single test. Below is one of the example classes, created just for one test case. My question is: is it ever necessary to do this? When should one apply this approach, and when is it not necessary?
public class and_saving_an_invalid_item_type : when_working_with_the_item_type_repository
{
    private Exception _result;

    protected override void Establish_context()
    {
        base.Establish_context();
        _session.Setup(s => s.Save(null)).Throws(new ArgumentNullException());
    }

    protected override void Because_of()
    {
        try
        {
            _itemTypeRepository.Save(null);
        }
        catch (Exception exception)
        {
            _result = exception;
        }
    }

    [Test]
    public void then_an_argument_null_exception_should_be_raised()
    {
        _result.ShouldBeInstanceOfType(typeof(ArgumentNullException));
    }
}
Do you need to create a new class for each individual test? I would say no, you certainly do not. I don't know why the book is saying that, or if they are just doing it to help illustrate their examples.
To answer your question, I'd recommend using a class for each group of tests... but it's really a bit more complex than that, because how you define "group" varies depending on what you're doing at the time.
In my experience, a set of tests is really logically structured like a document, which can contain one or more sets of tests, grouped (and sometimes nested) together by some common aspect. A natural grouping for testing object-oriented code is to group by class, and then by method.
Here's an example
tests for class 1
    tests for method 1
        primary behaviour of method 1
        alternate behaviour of method 1
    tests for method 2
        primary behaviour of method 2
        alternate behaviour of method 2
Unfortunately, in C# or Java (or similar languages), you've only got two levels of structure to work with (as opposed to the 3 or 4 you actually want), and so you have to hack things to fit.
The common way this is done is to use a class to group together sets of tests, and don't group anything at the method level, as like this:
class TestsForClass1 {
    void Test_method1_primary()
    void Test_method1_alternate()
    void Test_method2_primary()
    void Test_method2_alternate()
}
If both your method 1 and method 2 all have identical setup/teardown, then this is fine, but sometimes they don't, leading to this breakdown:
class TestsForClass1_method1 {
    void Test_primary()
    void Test_alternate()
}
class TestsForClass1_method2 {
    void Test_primary()
    void Test_alternate()
}
If you have more complex requirements (let's say you have 10 tests for method_1, the first 5 have setup requirement X, the next 5 have different setup requirements), then people usually end up just making more and more class names like this:
class TestsForClass1_method1_withRequirementX { ... }
class TestsForClass1_method1_withRequirementY { ... }
This sucks, but hey - square peg, round hole, etc.
Personally, I'm a fan of using lambda functions inside methods to give you a third level of grouping. NSpec shows one way this can be done... we have an in-house test framework which is slightly different; it reads a bit like this:
class TestsForClass1 {
    void TestsForMethod1() {
        It.Should("perform its primary function", () => {
            // ....
        });
        It.Should("perform its alternate function", () => {
            // ....
        });
    }
}
This has some downsides (if the first It statement fails, the others don't run), but I consider the tradeoff worth it.
-- The question originally read: "is it ever really necessary to create an object for each single test I want to carry out?". The answer to that is (mostly) yes, as per this explanation.
Generally, unit tests involve the interaction of two parts
The object under test. Usually this is an instance of a class or a function you've written
The environment. Usually this is whatever parameters you've passed to your function, and whatever other dependencies the object may have a reference to.
In order for unit tests to be reliable, both of these parts need to be "fresh" for each test, to ensure that the state of the system is sane and reliable.
If the thing under test is not refreshed for each test, then one test may alter the object's internal state and cause the next test to wrongly fail.
If the environment is not refreshed for each test, then one test may alter the environment (e.g. set some variable in an external database), which may cause the next test to wrongly fail.
There are obviously many situations where this is not the case - You might for example have a pure mathematical function that only takes integers as parameters and doesn't touch any external state, and then you may not want to bother re-creating the object under test or the test environment... but generally, most things in any Object-Oriented system will need refreshing, so this is why it is "standard practice" to do so.
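As a tiny illustration of that refresh pattern in NUnit (the basket is just a stand-in for whatever object and environment your tests need):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    private List<string> _basket; // stands in for the object under test

    [SetUp]
    public void CreateFreshBasket()
    {
        // Re-created before every test, so no test sees another test's leftovers.
        _basket = new List<string>();
    }

    [Test]
    public void NewBasketIsEmpty()
    {
        Assert.AreEqual(0, _basket.Count);
    }

    [Test]
    public void AddingAnItemIncreasesTheCount()
    {
        _basket.Add("Apple");
        Assert.AreEqual(1, _basket.Count);
    }
}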
I'm not quite able to follow your example, but ideally any test case should be able to run independently of any other - independently from anything else, really.

Is it a good practice to use RowTest in a unit test

NUnit and MbUnit have a RowTest attribute that allows you to send different sets of parameters into a single test.
[RowTest]
[Row(5, 10, 15)]
[Row(3.5, 2.7, 6.2)]
[Row(-5, 6, 1)]
public void AddTest(double firstNumber, double secondNumber, double result)
{
    Assert.AreEqual(result, firstNumber + secondNumber);
}
I used to be a huge fan of this feature and used it everywhere. However, lately I'm not sure it's a very good idea to use RowTest in unit tests. Here are my reasons:
A unit test must be very simple. If there's a bug, you don't want to spend a lot of time figuring out what your test tests. When you use multiple rows, each row has a different set of parameters and tests something different.
Also, I'm using TestDriven.NET, which allows me to run my unit tests from my IDE, Visual Studio. With TestDriven.NET I cannot instruct it to run a specific row; it will execute all the rows. Therefore, when I debug I have to comment out all the other rows and leave only the one I'm working with.
Here's an example how would write my tests today:
[Test]
public void Add_with_positive_whole_numbers()
{
    Assert.AreEqual(15, 5 + 10);
}

[Test]
public void Add_with_one_decimal_number()
{
    Assert.AreEqual(6.2, 3.5 + 2.7);
}

[Test]
public void Add_with_negative_number()
{
    Assert.AreEqual(1, -5 + 6);
}
That said, I still occasionally use the RowTest attribute, but only when I believe it's not going to slow me down when I need to work on the test later.
Do you think it's a good idea to use this feature in a unit test?
Yes. It's basically executing the same test over and over again with different inputs, saving you the trouble of repeating yourself for each distinct input combination.
It thus upholds the 'once and only once' or DRY principle: if you need to update this test, you update just one test instead of several.
Each Row should be a representative input from a distinct set - i.e. this input is different from all others w.r.t. this function's behavior.
RowTest was actually a much-requested feature for NUnit, having originated in MbUnit... I think Schlapsi wrote it as an NUnit extension, which then got promoted to standard-distribution status. The NUnit GUI also groups all RowTests under one node and shows which inputs passed or failed, which is cool.
The minor disadvantage of needing to debug is something I can personally live with. It amounts to temporarily commenting out a number of Row attributes (most of the time I can eyeball the function once I see which scenario failed and fix it without needing to step through), or just copying the test out and passing it the problematic inputs directly for a while.
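For what it's worth, current NUnit ships the same idea as the built-in TestCase attribute; a minimal sketch of the example above in that form (the tolerance argument is only there to sidestep floating-point noise):

using NUnit.Framework;

public class AddTests
{
    [TestCase(5, 10, 15)]
    [TestCase(3.5, 2.7, 6.2)]
    [TestCase(-5, 6, 1)]
    public void Add(double firstNumber, double secondNumber, double result)
    {
        Assert.AreEqual(result, firstNumber + secondNumber, 1e-9);
    }
}

Most modern runners also let you run a single TestCase on its own, which addresses the debugging complaint above.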
