How to verify the number of calls to a method of 'this' service - C#

I'm using the NUnit framework with Moq for testing. I have a problem verifying how many times a private method of this class has been called. With a mock object it's enough to call Verify() with a Times parameter, but my method is part of this class. I tried mocking the current service (the SUT), but that probably isn't the best idea and it doesn't work properly.
SUT:
public object Post(Operations.Campaign.Merge request)
{
    List<CampaignIdWithNumberOfAds> campaignList = new List<CampaignIdWithNumberOfAds>();
    for (int i = 0; i < request.CampaignIdsToMerge.Count; i++)
    {
        if (this.CampaignRepository.Exist(request.CampaignIdsToMerge[i]))
        {
            campaignList.Add(new CampaignIdWithNumberOfAds()
            {
                CampaignId = request.CampaignIdsToMerge[i],
                NumberOfAdvertisement = this.CampaignRepository.GetNumberOfAdvertisementsInCampaign(request.CampaignIdsToMerge[i])
            });
        }
    }
    if (campaignList.Count > 1)
    {
        campaignList = campaignList.OrderByDescending(p => (p == null) ? -1 : p.NumberOfAdvertisement).ToList();
        List<CampaignIdWithNumberOfAds> campaignsToMerge = campaignList.Skip(1).ToList();
        CampaignIdWithNumberOfAds chosenCampaign = campaignList.FirstOrDefault<CampaignIdWithNumberOfAds>();
        uint chosenCampaignId = chosenCampaign.CampaignId;
        foreach (var campaignToMerge in campaignsToMerge)
        {
            this.MergeCampaigns(chosenCampaignId, campaignToMerge.CampaignId);
        }
    }
    return true;
}
Test:
[Test]
public void MergeCampaignsPost_ValidMergeCampaignsRequest_ExecuteMergeCampaignsMethodAppropriateNumberOfTimes()
{
    // Arrange
    var mockCampaignService = new Mock<Toucan.Api.Services.CampaignService>();
    var request = Mother.GetValidMergeCampaignsRequest_WithDifferentNumbersOfAdvertisement();
    mockCampaignService.Setup(x => x.MergeCampaigns(It.IsAny<uint>(), It.IsAny<uint>()));

    // Act
    var response = this.Service.Post(request);

    // Assert
    mockCampaignService.Verify(x => x.MergeCampaigns(It.IsAny<uint>(), It.IsAny<uint>()), Times.Exactly(request.CampaignIdsToMerge.Count - 1));
}

I'm afraid I won't give you a ready-made solution here; instead I'd rather offer some guidance. There are many different strategies for unit testing, and different people will suggest different solutions. Basically, in my opinion you could change the way you are testing your code (you might agree or disagree with the points below, but please take them into consideration).
Unit test should be independent from the implementation
Easy as it sounds, it is very hard to stick to this approach. Private methods are your implementation of solving the problem. The typical pitfall for a developer writing unit tests for his own code is that you know how your code works and mirror that knowledge in the test. What if the implementation changes, but the public method still fulfills the requested contract? You hardly ever want to couple your unit test directly to a private method. This is related to the following...
Test should check the output result of the method
Which basically means: do not check how many times something is executed if you don't have to. I am not sure what your MergeCampaigns method is doing, but it would be better to check the result of the operation instead of how many times it is executed.
Don't overdo your unit tests - keep it maintainable
Try to test each functional scenario you can imagine with a test that is as simple and as independent as possible. Don't go too deep into checking whether something is called. Otherwise you will get 100% coverage at the start, but you will curse every time you change a thing in your service, because half of your tests will fail (even though the service still does its job, just in a different way than originally designed). You will end up spending time rewriting unit tests that give you no gain in terms of creating a bulletproof solution.
It is very easy to start writing unit tests and keep the coverage green; it gets much trickier when you want to write good unit tests. There are many valuable resources to help with that. Good luck!
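To make that concrete, here is a rough sketch of how Post could be tested through its observable collaborators instead of by mocking the SUT itself. It assumes CampaignService takes an ICampaignRepository (so a Moq mock can be injected) and that MergeCampaigns ultimately calls some repository method such as Merge; the constructor, the request initializer and the Merge call are guesses to illustrate the idea, so adjust them to the real API:
[Test]
public void Post_TwoExistingCampaigns_MergesSmallerCampaignIntoLarger()
{
    // Arrange: both campaigns exist, campaign 1 has more advertisements than campaign 2.
    var repository = new Mock<ICampaignRepository>();
    repository.Setup(r => r.Exist(It.IsAny<uint>())).Returns(true);
    repository.Setup(r => r.GetNumberOfAdvertisementsInCampaign(1u)).Returns(10);
    repository.Setup(r => r.GetNumberOfAdvertisementsInCampaign(2u)).Returns(3);
    var service = new CampaignService(repository.Object);                        // hypothetical constructor
    var request = new Operations.Campaign.Merge { CampaignIdsToMerge = new List<uint> { 1, 2 } };

    // Act
    service.Post(request);

    // Assert on the observable effect: campaign 2 is merged into campaign 1.
    repository.Verify(r => r.Merge(1u, 2u), Times.Once());                       // hypothetical repository method
}
The point is that the assertion targets an injected dependency, not a private member of the class under test.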


Do Guard Clauses Alone Require Unit Testing?

TLDR
Does this method require Unit Testing? If your answer is Yes, please ensure you understand my thought process by reading the whole Question.
public void UpdateChildSomethings(int parentId, string newVal, int bulkSize) {
    var skip = 0;
    List<Child> children = null;
    while ((children = _.GetChildrenFromDB(parentId, skip, bulkSize)).Count > 0) {
        var alteredChildren = AlterChildren(children, newVal); // Note: AlterChildren is fully tested separately.
        _.BulkUpdateDB(alteredChildren);
        skip += bulkSize;
    }
}
Foreword
First off, I am a heavy unit tester. I do it often, and I do it well. But given my experience, I have formed opinions and may need somebody to put me in my place, or provide me with documentation that supports or opposes me.
Opening Disclaimer: If I have an obviously tested method (Like Alter and AlterChildren below), and they have Guard Clauses in them, I'm probably going to end up testing the Guard Clauses, if for nothing more than 100% coverage in those tests. But apart from that...
The Question
Let's begin my question with this method:
public void UpdateSomething(int id, string newVal) {
    var actualSomething = _.GetFromDB(id);
    var alteredSomething = Alter(actualSomething, newVal);
    _.UpdateDB(id, alteredSomething);
}
Does this method require unit testing? For multiple reasons, I would personally say no, at least not at this time, especially if Alter() is abundantly tested. The acts of getting from the DB and updating the DB have no value to unit test, and would be mocked anyway.
Assuming you follow my mindset and agree that the method shouldn't be tested, what about this method?
public void UpdateSomething(int id, string newVal) {
    var actualSomething = _.GetFromDB(id);
    if (actualSomething == null) return;
    var alteredSomething = Alter(actualSomething, newVal);
    _.UpdateDB(id, alteredSomething);
}
I added a "Guard Clause". This is not business logic or calculation. It is code which determines the flow of code and early return. If I were to Unit Test this, I would essentially be testing the result of GetFromDB, and therefore be Testing a Mock. As far as I am concerned, Testing a Mock is not a useful test.
More Complex
But assuming you STILL follow my mindset and agree that Guard Clauses based on External Data are a waste to Unit Test, what about this method?
public void UpdateChildSomethings(int parentId, string newVal, int bulkSize) {
    var skip = 0;
    List<Child> children = null;
    while ((children = _.GetChildrenFromDB(parentId, skip, bulkSize)).Count > 0) {
        var alteredChildren = AlterChildren(children, newVal);
        _.BulkUpdateDB(alteredChildren);
        skip += bulkSize;
    }
}
For clarity, I'll refactor this to break down the while clause:
/// Uses parentId to retrieve applicable children in chunks of bulkSize.
/// children are processed separately.
/// Passes processed children to the DB to be updated.
public void UpdateChildSomethings(int parentId, string newVal, int bulkSize) {
var skip = 0;
List<Child> children = null;
while (true) {
children = _.GetChildrenFromDB(parentId, skip, bulkSize);
if (children.Count == 0) break;
var alteredChildren = AlterChildren(children, newValue);
_.BulkUpdateDB(alteredChildren);
skip += bulkSize;
}
}
At first glance this looks complex enough to test, but what are you testing? Once again, assuming that AlterChildren() is abundantly tested, the only thing left to test is the result of GetChildrenFromDB(), which is mocked; once again, testing a mock. The only line here doing anything is skip += bulkSize. What would you be testing there, the += operator? I still don't see the point.
So, that is my most complex example, should it be Unit Tested?
The code in question here does not seem to contain any business logic. I think your point is: should this be tested even though it does not contain business logic and is fairly trivial?
There is nothing wrong with testing "mechanics" (as opposed to business logic). There is no reason you can only test business logic. UpdateSomething provides a service to other parts of the application. You have an interest in that service being performed correctly.
I do not quite see the difference between "guard clauses" and any other logic. It's behavior that is relevant to the functioning of the application.
You question whether logic based on external data is to be tested. I do not see this as a criterion either.
These things make it more likely that a test should be written: The code is easy to test; bugs have a high cost; quality is important for this piece of code; the test will not cause additional maintenance work; the test does not require much change to production code.
Act according to concrete criteria like that.
Update:
Can you update your answer, or comment, about whether or not you'd test the simplest code example I gave under "Let's begin my question with this method", and why/not?
Well, I can't say, because I don't know how valuable this test would be to you. If you have other tests that implicitly exercise this, then I'd tend not to test it, I guess. I'm personally not keen on writing tests for trivial things, but I guess that's a matter of personal experience. Really, I feel that the criteria you proposed in the question have no intrinsic bearing on the decision at all. The decision should be made according to the criteria I set forth, which means that I lack the knowledge to come to a decision.
In my career I have found time and time again that programming by rules does not work. Programming is like chess - it is infinitely complex. No set of rules can adequately make decisions for you. Rather, develop a mental toolbox of heuristics and patterns to guide you in the concrete case. In the end you must decide based on the concrete case as a whole, not based on a rigid rule. That's why I said "these things make tests more likely", not "you should test when...".
That's why rules such as "test getters and setters" or "do not test getters and setters" are simply false! Sometimes you test them, sometimes you don't.
Your code is a combination of computations and interactions. The value of unit testing such code often appears questionable a) due to the effort of mocking and b) due to the resulting dependency of the unit tests on implementation details. One approach that often helps is to modify the code so that the computations and interactions are separated into different functions/methods. Functions containing computations are then unit tested with no (or at least reduced) need for mocking, and interaction-dominated functions are tested with integration tests.
In your example, a possible separation between computations and interactions could look as shown below:
public int BulkUpdateChildren(int parentId, string newVal, int skip, int bulkSize) {
    List<Child> children = _.GetChildrenFromDB(parentId, skip, bulkSize);
    if (children.Count > 0) {
        var alteredChildren = AlterChildren(children, newVal);
        _.BulkUpdateDB(alteredChildren);
    }
    return children.Count;
}
public void UpdateChildSomethings(int parentId, string newVal, int bulkSize) {
    var skip = 0;
    int updated;
    do {
        updated = BulkUpdateChildren(parentId, newVal, skip, bulkSize);
        skip += updated;
    } while (updated == bulkSize);
}
The new method BulkUpdateChildren contains all the interactions with the dependencies; this is best covered by integration tests. What remains within UpdateChildSomethings is computation dominated, and testing it only requires mocking one single method within the SUT itself, namely the call to BulkUpdateChildren. Unit testing UpdateChildSomethings is therefore easier and can focus on whether the value of skip is updated properly in all possible cases and whether the loop terminates as expected.
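A rough sketch of such a test, using Moq's partial-mocking (CallBase) support; it assumes BulkUpdateChildren is made virtual and that the class (here called ChildUpdater) has a constructor Moq can call, neither of which comes from the original code:
[Test]
public void UpdateChildSomethings_StopsAfterTheFirstPartialBatch()
{
    // ChildUpdater is a stand-in name for the class under test; BulkUpdateChildren is assumed virtual.
    var updater = new Mock<ChildUpdater> { CallBase = true };
    updater.SetupSequence(u => u.BulkUpdateChildren(1, "new", It.IsAny<int>(), 10))
        .Returns(10)  // full batch: the loop should continue
        .Returns(3);  // partial batch: the loop should stop

    updater.Object.UpdateChildSomethings(1, "new", 10);

    updater.Verify(u => u.BulkUpdateChildren(1, "new", It.IsAny<int>(), 10), Times.Exactly(2));
}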
There is only very little logic left within BulkUpdateChildren, namely the check whether any children were found, and the 'computation' of the return value. Maybe the check for zero children is even unnecessary (except perhaps for performance reasons), if the methods AlterChildren and BulkUpdateDB can deal with empty lists. When leaving this check out, the code consists almost only of interactions:
public int BulkUpdateChildren(int parentId, string newVal, int skip, int bulkSize) {
    List<Child> children = _.GetChildrenFromDB(parentId, skip, bulkSize);
    var alteredChildren = AlterChildren(children, newVal);
    _.BulkUpdateDB(alteredChildren);
    return children.Count;
}

Can there be multiple asserts per one test?

I started looking at FsCheck yesterday, and I am trying to write a simple test that any instance of DiscountAmount always has a negative value. My question is: is it OK to have multiple asserts within one test? For example, here I am saying that the amount from which the discount amount was created, plus the discount amount, should be 0. But I also say that the discount amount should be less than 0. Should this be two tests or one?
public class DiscountAmountTests
{
    [Property()]
    public void value_of_created_discountAmount_should_be_negative()
    {
        Arb.Register<AmountArbitrary>();
        Prop.ForAll<Amount>(
            v =>
            {
                var sut = new DiscountAmount(v);
                var expectedResult = 0;
                var result = v + sut;
                result.Should().Be(expectedResult);
                sut.Value.Should().BeLessThan(0);
            })
            .QuickCheckThrowOnFailure();
    }

    public class AmountArbitrary
    {
        public static Arbitrary<Amount> Amounts()
        {
            return Arb.Generate<decimal>().Where(x => x > 0)
                .Select(x => new Amount(x))
                .ToArbitrary();
        }
    }
}
I would say this is really up to you; there are arguments pro and con. On the one hand, setup is sometimes expensive (be it in terms of programmer work to get the system into a particular state, or real compute cost, e.g. an expensive query against the DB), and then in my opinion it's worth making tests more coarsely grained.
The trade-off is that it's typically less clear what the problem is when a coarse-grained test fails.
In comparison with unit tests, FsCheck tests have a bit more setup cost in terms of argument generation, so it is attractive to make them more coarse-grained than unit tests. Also note that FsCheck has methods like Label, And, and Or to combine different properties while sharing the argument generation, and they still let you see which part of your test failed, somewhat offsetting that downside.
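For example, the two assertions from the question could be combined into one property with two labelled parts that share the generated Amount. A rough sketch against FsCheck's C# extensions (ToProperty, Label, And); exact extension names can differ between FsCheck versions, and the boolean conditions simply restate the question's assertions against its Amount/DiscountAmount types:
[Property]
public Property Created_discountAmount_is_negative_and_cancels_the_amount()
{
    Arb.Register<AmountArbitrary>();
    return Prop.ForAll<Amount>(v =>
    {
        var sut = new DiscountAmount(v);
        // The same two checks as in the question's test; adjust to however these types compare to zero.
        bool sumIsZero = (v + sut).Equals(0);
        bool valueIsNegative = sut.Value < 0;
        // Each part is labelled, so a failing run reports which half of the property broke.
        return sumIsZero.ToProperty().Label("amount + discount is zero")
            .And(valueIsNegative.ToProperty().Label("discount value is negative"));
    });
}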

Unit testing retrieval methods - redundant?

I have the following method in my service layer
public ModuleResponse GetModules(ModuleRequest request)
{
    var response = new ModuleResponse(request.RequestId);
    try
    {
        response.Modules = Mapper.ToDataTransferObjects(ModuleDao.GetModules());
        return response;
    }
    catch (Exception ex)
    {
        Log.Error(ex);
        response.Acknowledge = AcknowledgeType.Failure;
        response.Message = "An error occurred.";
        return response;
    }
}
And I have a unit test written in xUnit like this:
[Fact]
public void GetModulesTest()
{
    //Arrange
    var mockModuleDao = Mock.Create<IModuleDao>();
    var mockLog = Mock.Create<ILog>();
    var mockAuditDao = Mock.Create<IAuditDao>();
    var moduleList = new List<ModuleItem>
    {
        new ModuleItem {Id = 100, Category = "User Accounts", Feature = "Users"},
        new ModuleItem {Id = 101, Category = "User Accounts", Feature = "Roles Permissions"}
    };
    mockModuleDao.Arrange(dao => dao.GetModules()).Returns(moduleList);
    IUserManagementService userService = new UserManagementService(mockModuleDao, mockLog, mockAuditDao);
    var request = new ModuleRequest().Prepare();

    //Act
    var actualResponse = userService.GetModules(request);

    //Assert
    Assert.Equal(AcknowledgeType.Success, actualResponse.Acknowledge);
    Assert.Equal(2, actualResponse.Modules.Count);
}
Now I have a whole other bunch of retrieval methods in my code similar to the one above.
Is testing such methods redundant? I mean, they are almost a sure-pass test, unless I mess up the logic of my mapping or something.
Also, when testing retrieval methods, what is it that I should be testing for? In my scenario above, I have two assert statements: one to check that the response is a success, and a second to check the count of the list.
Is this sufficient? Or how can this be improved to enhance the value of such a unit test?
As always, whether or not a test like that is valuable depends on your motivation for testing.
Is this piece of code mission-critical?
What is the cost if that code fails?
How easily can you address errors, should they occur?
The higher the cost of failure, the more important it is to test a piece of code.
The GetModules method does at least four things:
It returns the modules from the DAO.
It maps the modules from the DAO into the desired return types.
It returns an error message if something goes wrong.
It logs any errors that may occur.
The GetModulesTest covers only one of these four responsibilities, which means that three other tests are still required to fully cover the GetModules method.
Writing small-grained unit tests is valuable, because it enables you to decompose a complex piece of production code into a set of simple, easy-to-understand unit tests. Sometimes these unit tests become almost inanely simple, to the point where you'll begin to doubt their value, but the value isn't in any single unit test; it's in the accumulation of simple tests which, together, specify how the entire system ought to work.
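For instance, the error-handling responsibility could get its own small test. A hedged sketch in the same style as the question's test; the Arrange/Throws/Assert/Occurs helpers mirror the mocking syntax used above and may need adjusting to the actual library's API:
[Fact]
public void GetModules_WhenDaoThrows_ReturnsFailureAndLogsTheError()
{
    //Arrange
    var mockModuleDao = Mock.Create<IModuleDao>();
    var mockLog = Mock.Create<ILog>();
    var mockAuditDao = Mock.Create<IAuditDao>();
    mockModuleDao.Arrange(dao => dao.GetModules()).Throws<InvalidOperationException>();
    IUserManagementService userService = new UserManagementService(mockModuleDao, mockLog, mockAuditDao);
    var request = new ModuleRequest().Prepare();

    //Act
    var actualResponse = userService.GetModules(request);

    //Assert: the failure is reported to the caller and the exception is logged exactly once.
    Assert.Equal(AcknowledgeType.Failure, actualResponse.Acknowledge);
    Assert.Equal("An error occurred.", actualResponse.Message);
    mockLog.Assert(log => log.Error(Arg.IsAny<Exception>()), Occurs.Once());
}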
Now I have a whole other bunch of retrieval methods in my code similar to the one above.
Really? Don't they feel a little... repetitive?
I think Lilshieste made a very appropriate point, that one intrinsic value of unit tests is that they highlight maintainability issues like this. You might say they make code smells more pungent.
Mark Seemann identified four individual responsibilities for this one method you showed us. The Single Responsibility Principle would dictate that you should only have one.
You could conceivably turn this method (and all its kin) into something more like this:
public ModuleResponse GetModules(ModuleRequest request)
{
    return _responder.CreateMappedDtoResponse(
        request,
        ModuleDao.GetModules,
        modules => new ModuleResponse { Modules = modules });
}
Now, at this point, I think you could make a decent argument against unit-testing this method. You'd pretty much be testing the implementation of this method, rather than its behavior. Your unit test would be testing that you call a given method with given arguments, and that's it!
But even if you decided to be a purist and unit test this, there's really only one unit test you could conceivably write, as opposed to the four you would have needed to fully cover this method before. Then you write the appropriate unit tests for the CreateMappedDtoResponse method (and whatever methods it delegates parts of its work to), and you've got a DRY, well-tested system with a fraction of the number of tests. And if you change a common responsibility like your exception-logging strategy, you can change it in one place, change one unit test, and be done.
So even if your unit tests never catch a bug for you, being a purist helped you to avoid a maintainability issue that would have forced you to write just as much extra code in the first place, and be likely to re-write just as much code later on. Of course, this only happens if you know to listen to your unit tests and change your design accordingly.
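To make the shape of that refactoring a little more concrete, the shared responder could look roughly like the sketch below. The method name comes from the example above; the generic signature, the Response base members (RequestId, Acknowledge, Message) and the MapToDtos helper are assumptions inferred from the question's code, not a known API:
public class DtoResponder
{
    public TResponse CreateMappedDtoResponse<TEntity, TDto, TResponse>(
        Request request,
        Func<IEnumerable<TEntity>> getEntities,
        Func<List<TDto>, TResponse> createResponse)
        where TResponse : Response
    {
        try
        {
            // Retrieve, map and wrap; MapToDtos plays the role of Mapper.ToDataTransferObjects above.
            var dtos = MapToDtos<TEntity, TDto>(getEntities());
            var response = createResponse(dtos);
            response.RequestId = request.RequestId;
            return response;
        }
        catch (Exception ex)
        {
            // The logging and failure-acknowledgement responsibilities now live in exactly one place.
            Log.Error(ex);
            var response = createResponse(new List<TDto>());
            response.RequestId = request.RequestId;
            response.Acknowledge = AcknowledgeType.Failure;
            response.Message = "An error occurred.";
            return response;
        }
    }
}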

When is it OK to group similar unit tests?

I'm writing unit tests for a simple IsBoolean(x) function that tests whether a value is boolean. There are 16 different values I want to test.
Will I be burnt in hell, or mocked ruthlessly by the .NET programming community (which would be worse?), if I don't break them up into individual unit tests, and run them together as follows:
[TestMethod]
public void IsBoolean_VariousValues_ReturnsCorrectly()
{
    //These should all be considered Boolean values
    Assert.IsTrue(General.IsBoolean(true));
    Assert.IsTrue(General.IsBoolean(false));
    Assert.IsTrue(General.IsBoolean("true"));
    Assert.IsTrue(General.IsBoolean("false"));
    Assert.IsTrue(General.IsBoolean("tRuE"));
    Assert.IsTrue(General.IsBoolean("fAlSe"));
    Assert.IsTrue(General.IsBoolean(1));
    Assert.IsTrue(General.IsBoolean(0));
    Assert.IsTrue(General.IsBoolean(-1));

    //These should all be considered NOT boolean values
    Assert.IsFalse(General.IsBoolean(null));
    Assert.IsFalse(General.IsBoolean(""));
    Assert.IsFalse(General.IsBoolean("asdf"));
    Assert.IsFalse(General.IsBoolean(DateTime.MaxValue));
    Assert.IsFalse(General.IsBoolean(2));
    Assert.IsFalse(General.IsBoolean(-2));
    Assert.IsFalse(General.IsBoolean(int.MaxValue));
}
I ask this because "best practice" I keep reading about would demand I do the following:
[TestMethod]
public void IsBoolean_TrueValue_ReturnsTrue()
{
    //Arrange
    var value = true;

    //Act
    var returnValue = General.IsBoolean(value);

    //Assert
    Assert.IsTrue(returnValue);
}

[TestMethod]
public void IsBoolean_FalseValue_ReturnsTrue()
{
    //Arrange
    var value = false;

    //Act
    var returnValue = General.IsBoolean(value);

    //Assert
    Assert.IsTrue(returnValue);
}
//Fell asleep at this point
For the 50+ functions and 500+ values I'll be testing against, this seems like a total waste of time... but it's best practice!!!!!
-Brendan
I would not worry about it. This sort of thing isn't the point. JB Rainsberger talked about this briefly in his talk Integration Tests Are a Scam. He said something like, "If you have never forced yourself to use one assert per test, I recommend you try it for a month. It will give you a new perspective on testing, and teach you when it matters to have one assert per test, and when it doesn't." IMO, this falls into the "doesn't matter" category.
Incidentally, if you use NUnit, you can use TestCaseAttribute, which is a little nicer:
[TestCase(true)]
[TestCase("tRuE")]
[TestCase(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.True);
}

[TestCase("-3.14")]
[TestCase("something else")]
[TestCase(7)]
public void IsBoolean_InvalidBoolRepresentations_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}
EDIT: wrote the tests in a slightly different way that I think communicates the intent a little better.
Although I agree it's best practice to separate the values in order to identify errors more easily, I think one still has to use common sense and treat such rules as guidelines rather than absolutes. You want to minimize the number of assertions in a unit test, but what generally matters most is to ensure there is a single concept per test.
In your specific case, given the simplicity of the function, I think that the one unit test you provided is fine. It's easy to read, simple, and clear. It also tests the function thoroughly and if ever it were to break somewhere down the line, you would be able to quickly identify the source and debug it.
As an extra note: in order to maintain good unit tests, you'll want to keep them up to date and treat them with the same care as the actual production code. That's in many ways the greatest challenge. Probably the best reason to do Test Driven Development is that it actually lets you program faster in the long run, because you stop worrying about breaking existing code.
It's best practice to split each of the values you want to test into separate unit tests. Each unit test should be named after the value you're passing and the expected result. If you changed code and broke just one of your cases, that test alone would fail and the other 15 would pass. This buys you the ability to know instantly what you broke, without having to debug the one big unit test and find out which of the asserts failed.
Hope this helps.
I can't comment on "Best Practice" because there is no such thing.
I agree with what Ayende Rahien says in his blog:
At the end, it boils down to the fact that I don't consider tests to be, by themselves, a value to the product. Their only value is their binary ability to tell me whether the product is okay or not. Spending a lot of extra time on the tests distracts from creating real value: shippable software.
If you put them all in one test and this test fails "somewhere", then what do you do? Either your test framework will tell you exactly which line it failed on, or, failing that, you step through it with a debugger. The extra effort required because it's all in one function is negligible.
The extra value of knowing exactly which subset of tests failed in this particular instance is small, and overshadowed by the ponderous amount of code you had to write and maintain.
Think for a minute about the reasons for breaking them up into individual tests: to isolate different pieces of functionality and to accurately identify everything that went wrong when a test breaks. It looks like you might be testing two things, Boolean and not Boolean, so consider two tests if your code follows two different paths. The bigger point, though, is that if none of the tests break, there are no errors to pinpoint.
If you keep running them, and later have one of these tests fail, that would be the time to refactor them into individual tests, and leave them that way.

How to test callbacks with NUnit

Is there any special support for testing callbacks with NUnit, or some kind of "best practice" that is better than my solution below?
I just started writing the tests and methods, so I still have full control; however, I think it might become annoying if there are better ways to test callbacks thoroughly, especially as complexity increases. So this is a simple example of how I am testing right now:
The method under test uses a delegate that calls a callback function, for instance as soon as a new XML element is discovered in a stream. For testing purposes I pass the NewElementCallback method to the delegate, and when the callback is invoked I store the contents of its arguments in some properties of the test class. These properties are then used for the assertions (and are, of course, reset in the test setup).
[Test]
public void NewElement()
{
    String xmlString = @"<elem></elem>";
    this.xml.InputStream = new StringReader(xmlString);
    this.xml.NewElement += this.NewElementCallback;
    this.xml.Start();
    Assert.AreEqual("elem", this.elementName);
    Assert.AreEqual(0, this.elementDepth);
}
private void NewElementCallback(string elementName, int elementDepth)
{
    this.elementName = elementName;
    this.elementDepth = elementDepth;
}
You could avoid the need for private fields if you use a lambda expression, that's how I usually do this.
[Test]
public void NewElement()
{
    String xmlString = @"<elem></elem>";
    string elementName = null;
    int elementDepth = -1;
    this.xml.InputStream = new StringReader(xmlString);
    this.xml.NewElement += (name, depth) => { elementName = name; elementDepth = depth; };
    this.xml.Start();
    Assert.AreEqual("elem", elementName);
    Assert.AreEqual(0, elementDepth);
}
It makes your tests more cohesive, and having fields on a test class is always asking for disaster!
There isn't anything special in NUnit for this that I know of. I test these things the same way you do. I do tend to put the callback method and the state it stores on another class. I think it makes it a bit cleaner, but it isn't fundamentally different.
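For illustration, that separate class can be a tiny recorder the test inspects afterwards. A sketch; the recorder type and its member names are made up for the example, while the xml object is the one from the question:
private sealed class NewElementRecorder
{
    public string ElementName { get; private set; }
    public int ElementDepth { get; private set; }
    public int CallCount { get; private set; }

    public void Handle(string elementName, int elementDepth)
    {
        this.ElementName = elementName;
        this.ElementDepth = elementDepth;
        this.CallCount++;
    }
}

[Test]
public void NewElement_ReportsElementNameAndDepthExactlyOnce()
{
    var recorder = new NewElementRecorder();
    this.xml.InputStream = new StringReader(@"<elem></elem>");
    this.xml.NewElement += recorder.Handle;

    this.xml.Start();

    Assert.AreEqual(1, recorder.CallCount);
    Assert.AreEqual("elem", recorder.ElementName);
    Assert.AreEqual(0, recorder.ElementDepth);
}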
From your example I can't tell exactly what you're trying to do. NUnit doesn't provide any specific way to test this kind of thing, but this link should give you some ideas on how to start unit testing asynchronous code: Unit Testing Asynchronous code
