I find the concept of partitioning the statements of my unit tests as suggested in the AAA pattern useful.
I tend to add heading comments so that the tests look like this:
// Arrange
int a = 1;
int b = 2;
// Act
int c = a + b;
// Assert
Assert.AreEqual(3, c);
But I am curious, is it normal to always include these header comments?
...or is this something which I should avoid?
int a = 1;
int b = 2;
int c = a + b;
Assert.AreEqual(3, c);
That doesn't seem to add much value once the basic premise is understood. Since you mention C#, I suggest taking a look at The Art of Unit Testing for examples. Naming a unit test correctly is more important IMHO than arrange/act/assert comments within it. As the book points out, when a test fails, if it is named well you can often deduce the cause of a regression directly if you know what changes were made recently.
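For example, the book's UnitOfWork_StateUnderTest_ExpectedBehavior naming convention makes a failing test readable straight from the test runner. The class and method below are made up purely for illustration:
[Test]
public void Add_TwoPositiveNumbers_ReturnsTheirSum()
{
    var calculator = new Calculator();   // hypothetical class under test
    int result = calculator.Add(1, 2);
    Assert.AreEqual(3, result);
}
If Add_TwoPositiveNumbers_ReturnsTheirSum shows up red after a change, the name alone already tells you which behaviour regressed.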
I've gotten a lot of value out of doing this. To me it looks cleaner, it makes it immediately clear which parts of the test are doing what, and it somewhat enforces the pattern. So no, I don't think you need to avoid it.
If your tests are getting really complicated that's a separate issue. Even a six line test can benefit from those comments. If you have no assert section because you're checking that an exception is thrown, then obviously don't include the assert comment.
I'm always thankful to have those in code that I'm reviewing, particularly for integration tests. I feel it saves me time.
Hello, I am a newbie to TDD-style programming in C# and am struggling a lot to get it right. Could you please let me know if I am doing this the right way? I have followed a lot of tutorials but haven't succeeded. I get the theory of it, but when it comes to putting it into practice I always fail.
I have this repository for practising TDD: https://github.com/dev-test-tdd/AlgorithmPractice/. I have started writing all the algorithms from scratch to understand TDD. For example, I have this simple method to check whether a given string is a palindrome or not.
Here is my test
[Test]
public void IsPalindrome3Test()
{
    var sourceString = "civic";
    var result = Program.IsPalindrome3(sourceString);
    Assert.AreEqual(true, result);
}
and the function
public static bool IsPalindrome3(string source)
{
    int min = 0;
    int max = source.Length - 1;
    while (true)
    {
        if (min > max)
        {
            return true;
        }
        char a = source[min];
        char b = source[max];
        if (char.ToLower(a) != char.ToLower(b))
        {
            return false;
        }
        min++;
        max--;
    }
}
Am I right here in how I'm writing the test? Please let me know if the approach taken is right. Any pointers for that matter would be great!
This isn't really TDD you're talking about. This is just a unit test. TDD refers specifically to the process of writing your tests before your code. You start out with the most trivial case, see the test fail, make it pass in the simplest way possible and then you impose some new assumptions by writing more tests. The point is that as your tests become more specific and cover more edge cases, the code becomes more generic.
There are many ways to do this and people prefer different levels of granularity. One version would be something like:
// Single character is always a palindrome
Assert.True(IsPalindrome("a"));
Which would prompt us to write the simplest possible code to make this pass
bool IsPalindrome(string input)
{
    return true;
}
This code isn't "correct" though (although it's correct for all things we are testing for at the moment). We need more tests!
// Two non-equal characters are not a palindrome
Assert.False(IsPalindrome("ab"));
leading to
bool IsPalindrome(string input)
{
    return input.Length == 1;
}
And so forth. Stepping through the whole process of implementing the full algorithm takes too long for an SO answer; I just want to show that it's an iterative process with short feedback loops, where you constantly impose stronger and stronger assertions about how the code should work and then let the algorithm grow. There are plenty of videos on YouTube about this, and books and blogs as well. Go check them out!
Last but not least, it's also important that, once our tests are passing, we make sure to "clean up" the code too. Making the code pass in the simplest way possible often leads to some ugly repetition and the like. While the tests are passing we can refactor, staying confident that our code still holds up to the assertions we made. It's important not to add more functionality while refactoring, though, because then that functionality isn't written test-first, which is the whole point of the endeavour.
I'm using the NUnit framework with Moq for testing. I've got a problem with verifying how many times a private method of this class has been called. With a mock object it's enough to call Verify() with a Times parameter, but the method I want to verify is part of the class under test. I was trying to mock the current service (the SUT), but that probably isn't the best idea and it doesn't work properly.
SUT:
public object Post(Operations.Campaign.Merge request)
{
    List<CampaignIdWithNumberOfAds> campaignList = new List<CampaignIdWithNumberOfAds>();
    for (int i = 0; i < request.CampaignIdsToMerge.Count; i++)
    {
        if (this.CampaignRepository.Exist(request.CampaignIdsToMerge[i]))
        {
            campaignList.Add(new CampaignIdWithNumberOfAds()
            {
                CampaignId = request.CampaignIdsToMerge[i],
                NumberOfAdvertisement = this.CampaignRepository.GetNumberOfAdvertisementsInCampaign(request.CampaignIdsToMerge[i])
            });
        }
    }
    if (campaignList.Count > 1)
    {
        campaignList = campaignList.OrderByDescending(p => (p == null) ? -1 : p.NumberOfAdvertisement).ToList();
        List<CampaignIdWithNumberOfAds> campaignsToMerge = campaignList.Skip(1).ToList();
        CampaignIdWithNumberOfAds chosenCampaign = campaignList.FirstOrDefault<CampaignIdWithNumberOfAds>();
        uint chosenCampaignId = chosenCampaign.CampaignId;
        foreach (var campaignToMerge in campaignsToMerge)
        {
            this.MergeCampaigns(chosenCampaignId, campaignToMerge.CampaignId);
        }
    }
    return true;
}
Test:
[Test]
public void MergeCampaignsPost_ValidMergeCampaignsRequest_ExecuteMergeCampaignsMethodAppropriateNumberOfTimes()
{
    // Arrange
    var mockCampaignService = new Mock<Toucan.Api.Services.CampaignService>();
    var request = Mother.GetValidMergeCampaignsRequest_WithDifferentNumbersOfAdvertisement();
    mockCampaignService.Setup(x => x.MergeCampaigns(It.IsAny<uint>(), It.IsAny<uint>()));

    // Act
    var response = this.Service.Post(request);

    // Assert
    mockCampaignService.Verify(x => x.MergeCampaigns(It.IsAny<uint>(), It.IsAny<uint>()), Times.Exactly(request.CampaignIdsToMerge.Count - 1));
}
I'm afraid I won't give you a solution here; instead, I'd like to offer some guidance. There are many different strategies for unit testing, and different people will suggest different solutions. Basically, in my opinion, you could change the way you are testing your code (you might agree or disagree with the points below, but please take them into consideration).
Unit test should be independent from the implementation
Easy as it sounds, it is very hard to stick to this approach. Private methods are your implementation of solving the problem. The typical pitfall for developers writing unit tests for their own code is that they know how the code works and mirror that knowledge in the unit test. What if the implementation changes, but the public method still fulfills the requested contract? You hardly ever want to couple your unit test directly to a private method. This is related to the following...
Test should check the output result of the method
Which basically means: do not check how many times something is executed if you don't have to. I am not sure what your MergeCampaigns method is doing, but it would be better to check the result of the operation instead of how many times it is executed.
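As a rough sketch of a result-focused test in that spirit (the repository interface name, the service constructor and the shape of the Merge request are guesses on my part, since they aren't shown above):
[Test]
public void Post_TwoExistingCampaigns_ReturnsSuccessfulResult()
{
    // Arrange: mock the dependency (the repository), never the SUT itself
    var repository = new Mock<ICampaignRepository>();                       // assumed interface
    repository.Setup(r => r.Exist(It.IsAny<uint>())).Returns(true);
    repository.Setup(r => r.GetNumberOfAdvertisementsInCampaign(1u)).Returns(10);
    repository.Setup(r => r.GetNumberOfAdvertisementsInCampaign(2u)).Returns(3);

    var service = new CampaignService(repository.Object);                   // assumed constructor
    var request = new Operations.Campaign.Merge
    {
        CampaignIdsToMerge = new List<uint> { 1u, 2u }                      // assumed property type
    };

    // Act
    var result = service.Post(request);

    // Assert on what the caller can observe, not on internal calls
    Assert.AreEqual(true, result);
}
Admittedly Post as written always returns true, which is itself a hint that the response could carry more meaningful information (for example the id of the surviving campaign), so that a test has something real to assert on.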
Don't overdo your unit tests - keep it maintainable
Try to test each functional scenario you can imagine with as simple and as independent a test as possible. Don't go too deep into checking whether something is called. Otherwise you will get 100% coverage at the start, but you will curse every time you change a thing in your service, because half of your tests will fail (even though the service still does its job, just in a different way than originally designed). You will then spend time rewriting unit tests that give you no real gain in terms of creating a bulletproof solution.
It is very easy to start writing unit tests and keep the coverage green; it gets very tricky when you want to write good unit tests. There are many valuable resources to help with that. Good luck!
I have the following method in my service layer
public ModuleResponse GetModules(ModuleRequest request)
{
    var response = new ModuleResponse(request.RequestId);
    try
    {
        response.Modules = Mapper.ToDataTransferObjects(ModuleDao.GetModules());
        return response;
    }
    catch (Exception ex)
    {
        Log.Error(ex);
        response.Acknowledge = AcknowledgeType.Failure;
        response.Message = "An error occured.";
        return response;
    }
}
And I have a unit test written in xUnit like this:
[Fact]
public void GetModulesTest()
{
    // Arrange
    var mockModuleDao = Mock.Create<IModuleDao>();
    var mockLog = Mock.Create<ILog>();
    var mockAuditDao = Mock.Create<IAuditDao>();

    var moduleList = new List<ModuleItem>
    {
        new ModuleItem {Id = 100, Category = "User Accounts", Feature = "Users"},
        new ModuleItem {Id = 101, Category = "User Accounts", Feature = "Roles Permissions"}
    };
    mockModuleDao.Arrange(dao => dao.GetModules()).Returns(moduleList);

    IUserManagementService userService = new UserManagementService(mockModuleDao, mockLog, mockAuditDao);
    var request = new ModuleRequest().Prepare();

    // Act
    var actualResponse = userService.GetModules(request);

    // Assert
    Assert.Equal(AcknowledgeType.Success, actualResponse.Acknowledge);
    Assert.Equal(2, actualResponse.Modules.Count);
}
Now I have a whole other bunch of retrieval methods in my code similar to the one above.
Is testing such methods redundant? I mean, they are almost a sure-pass test, unless I mess up the logic of my mapping or something.
Also, when testing retrieval methods, what is it that I should be testing for? In my scenario above, I have two assert statements: one to check that the response is a success, and a second to check the count of the list.
Is this sufficient? Or how can it be further improved to enhance the value of such a unit test?
As always, whether or not a test like that is valuable depends on your motivation for testing.
Is this piece of code mission-critical?
What is the cost if that code fails?
How easily can you address errors, should they occur?
The higher the cost of failure, the more important it is to test a piece of code.
The GetModules method does at least four things:
It returns the modules from the DAO.
It maps the modules from the DAO into the desired return types.
It returns an error message if something goes wrong.
It logs any errors that may occur.
The GetModulesTest test exercises only one of these four responsibilities, which means that three other tests are still required to fully cover the GetModules method.
Writing small-grained unit tests is valuable, because it enables you to decompose a complex piece of production code into a set of simple, easy-to-understand unit tests. Sometimes these unit tests become almost inanely simple, to the point where you'll begin to doubt their value, but the value isn't in a single unit test - it's in the accumulation of simple tests, which, together, specify how the entire system ought to work.
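For example, the error-handling responsibility could get a test of its own, sketched here in the same style as the test above (it assumes the mocking library offers a Throws arrangement; adjust to whatever your framework provides):
[Fact]
public void GetModules_WhenDaoThrows_ReturnsFailureResponse()
{
    // Arrange: make the DAO fail
    var mockModuleDao = Mock.Create<IModuleDao>();
    var mockLog = Mock.Create<ILog>();
    var mockAuditDao = Mock.Create<IAuditDao>();
    mockModuleDao.Arrange(dao => dao.GetModules()).Throws(new Exception("DAO failure"));

    IUserManagementService userService = new UserManagementService(mockModuleDao, mockLog, mockAuditDao);
    var request = new ModuleRequest().Prepare();

    // Act
    var actualResponse = userService.GetModules(request);

    // Assert: the failure is reported through the response rather than an unhandled exception
    Assert.Equal(AcknowledgeType.Failure, actualResponse.Acknowledge);
    Assert.Equal("An error occured.", actualResponse.Message);
}
The remaining responsibilities (the mapping itself, and the fact that the error gets logged) would get the same treatment.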
Now I have a whole other bunch of retrieval methods in my code similar to the one above.
Really? Don't they feel a little... repetitive?
I think Lilshieste made a very appropriate point, that one intrinsic value of unit tests is that they highlight maintainability issues like this. You might say they make code smells more pungent.
Mark Seemann identified four individual responsibilities for this one method you showed us. The Single Responsibility Principle would dictate that you should only have one.
You could conceivably turn this method (and all its kin) into something more like this:
public ModuleResponse GetModules(ModuleRequest request)
{
    return _responder.CreateMappedDtoResponse(
        request,
        ModuleDao.GetModules,
        modules => new ModuleResponse { Modules = modules });
}
Now, at this point, I think you could make a decent argument against unit-testing this method. You'd pretty much be testing the implementation of this method, rather than its behavior. Your unit test would be testing that you call a given method with given arguments, and that's it!
But even if you decided to be a purist and unit test this, there's really only one unit test that you could conceivably write, as opposed to the four that you would have needed to fully cover this method before. Then you write the appropriate unit tests for the CreateMappedDtoResponse method (and whatever methods it may delegate parts of its work to), and you've got a DRY, well-tested system with a fraction of the number of tests. And if you change a common responsibility like your exception-logging strategy, you can change it in one place, change one unit test, and be done.
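If it helps to picture it, below is one very rough shape such a responder could take. Everything here is hypothetical, and the signature is simplified compared to the CreateMappedDtoResponse call above, but the intent is the same: pull the shared logging and error-wrapping out of the individual methods.
// Hypothetical shared base type so the responder can set failure details generically.
public abstract class ResponseBase
{
    public AcknowledgeType Acknowledge { get; set; }
    public string Message { get; set; }
}

public class Responder
{
    private readonly ILog _log;

    public Responder(ILog log)
    {
        _log = log;
    }

    // Owns the try/log/fail plumbing once; each retrieval method only says how
    // to populate its already-constructed response.
    public TResponse Create<TResponse>(TResponse response, Action<TResponse> populate)
        where TResponse : ResponseBase
    {
        try
        {
            populate(response);
        }
        catch (Exception ex)
        {
            _log.Error(ex);
            response.Acknowledge = AcknowledgeType.Failure;
            response.Message = "An error occured.";
        }
        return response;
    }
}

// With this simplified shape, GetModules could collapse to something like:
// return _responder.Create(new ModuleResponse(request.RequestId),
//     r => r.Modules = Mapper.ToDataTransferObjects(ModuleDao.GetModules()));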
So even if your unit tests never catch a bug for you, being a purist helped you to avoid a maintainability issue that would have forced you to write just as much extra code in the first place, and be likely to re-write just as much code later on. Of course, this only happens if you know to listen to your unit tests and change your design accordingly.
I'm writing unit tests for a simple IsBoolean(x) function to test if a value is boolean. There's 16 different values I want to test.
Will I be burnt in hell, or mocked ruthlessly by the .NET programming community (which would be worse?), if I don't break them up into individual unit tests, and run them together as follows:
[TestMethod]
public void IsBoolean_VariousValues_ReturnsCorrectly()
{
    // These should all be considered Boolean values
    Assert.IsTrue(General.IsBoolean(true));
    Assert.IsTrue(General.IsBoolean(false));
    Assert.IsTrue(General.IsBoolean("true"));
    Assert.IsTrue(General.IsBoolean("false"));
    Assert.IsTrue(General.IsBoolean("tRuE"));
    Assert.IsTrue(General.IsBoolean("fAlSe"));
    Assert.IsTrue(General.IsBoolean(1));
    Assert.IsTrue(General.IsBoolean(0));
    Assert.IsTrue(General.IsBoolean(-1));

    // These should all be considered NOT boolean values
    Assert.IsFalse(General.IsBoolean(null));
    Assert.IsFalse(General.IsBoolean(""));
    Assert.IsFalse(General.IsBoolean("asdf"));
    Assert.IsFalse(General.IsBoolean(DateTime.MaxValue));
    Assert.IsFalse(General.IsBoolean(2));
    Assert.IsFalse(General.IsBoolean(-2));
    Assert.IsFalse(General.IsBoolean(int.MaxValue));
}
I ask this because "best practice" I keep reading about would demand I do the following:
[TestMethod]
public void IsBoolean_TrueValue_ReturnsTrue()
{
    // Arrange
    var value = true;

    // Act
    var returnValue = General.IsBoolean(value);

    // Assert
    Assert.IsTrue(returnValue);
}

[TestMethod]
public void IsBoolean_FalseValue_ReturnsTrue()
{
    // Arrange
    var value = false;

    // Act
    var returnValue = General.IsBoolean(value);

    // Assert
    Assert.IsTrue(returnValue);
}
//Fell asleep at this point
For the 50+ functions and 500+ values I'll be testing against this seems like a total waste of time.... but it's best practice!!!!!
-Brendan
I would not worry about it. This sort of thing isn't the point. JB Rainsberger talked about this briefly in his talk Integration Tests are a Scam. He said something like, "If you have never forced yourself to use one assert per test, I recommend you try it for a month. It will give you a new perspective on tests, and teach you when it matters to have one assert per test, and when it doesn't". IMO, this falls into the "doesn't matter" category.
Incidentally, if you use NUnit, you can use the TestCaseAttribute, which is a little nicer:
[TestCase(true)]
[TestCase("tRuE")]
[TestCase(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.True);
}

[TestCase("-3.14")]
[TestCase("something else")]
[TestCase(7)]
public void IsBoolean_InvalidBoolRepresentations_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}
EDIT: wrote the tests in a slightly different way, that I think communicates intent a little better.
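One caveat worth adding: attribute arguments must be compile-time constants, so values such as DateTime.MaxValue from your original test can't be placed in a TestCase attribute. NUnit's TestCaseSource handles those; the member names below are made up for the sketch:
private static readonly TestCaseData[] NonBooleanValues =
{
    new TestCaseData((object)null),        // cast so null is treated as a single argument
    new TestCaseData(""),
    new TestCaseData("asdf"),
    new TestCaseData(DateTime.MaxValue),   // not a compile-time constant
    new TestCaseData(2),
    new TestCaseData(-2),
    new TestCaseData(int.MaxValue)
};

[TestCaseSource("NonBooleanValues")]
public void IsBoolean_NonBooleanValues_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}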
Although I agree it's best practice to separate the values in order to more easily identify the error, I think one still has to use common sense and treat such rules as guidelines rather than absolutes. You want to minimize the assertion count in a unit test, but what generally matters most is to ensure a single concept per test.
In your specific case, given the simplicity of the function, I think that the one unit test you provided is fine. It's easy to read, simple, and clear. It also tests the function thoroughly and if ever it were to break somewhere down the line, you would be able to quickly identify the source and debug it.
As an extra note, in order to maintain good unit tests, you'll want to always keep them up to date and treat them with the same care as you do the actual production code. That's in many ways the greatest challenge. Probably the best reason to do Test Driven Development is how it actually allows you to program faster in the long run because you stop worrying about breaking the code that exists.
It's best practice to split each of the values you want to test into separate unit tests. Each unit test should be named specifically to the value you're passing and the expected result. If you were changing code and broke just one of your tests, then that test alone would fail and the other 15 would pass. This buys you the ability to instantly know what you broke without then having to debug the one unit test and find out which of the Asserts failed.
Hope this helps.
I can't comment on "Best Practice" because there is no such thing.
I agree with what Ayende Rahien says in his blog:
At the end, it boils down to the fact that I don't consider tests to be, by themselves, a value to the product. Their only value is their binary ability to tell me whether the product is okay or not. Spending a lot of extra time on the tests distracts from creating real value, shippable software.
If you put them all in one test and this test fails "somewhere", then what do you do? Either your test framework will tell you exactly which line it failed on, or, failing that, you step through it with a debugger. The extra effort required because it's all in one function is negligible.
The extra value of knowing exactly which subset of tests failed in this particular instance is small, and overshadowed by the ponderous amount of code you had to write and maintain.
Think for a minute about the reasons for breaking them up into individual tests. It's to isolate different functionality and to accurately identify everything that went wrong when a test breaks. It looks like you might be testing two things: Boolean and Not Boolean, so consider two tests if your code follows two different paths. The bigger point, though, is that if none of the tests break, there are no errors to pinpoint.
If you keep running them, and later have one of these tests fail, that would be the time to refactor them into individual tests, and leave them that way.
I'm trying to write a unit test for a class that generates distinct strings. My initial reaction was the following:
public void GeneratedStringsShouldBeDistinct()
{
    UniqueStringCreator stringCreator = new UniqueStringCreator();
    HashSet<string> generatedStrings = new HashSet<string>();
    string str;
    for (int i = 0; i < 10000; i++)
    {
        str = stringCreator.GetNext();
        if (!generatedStrings.Add(str))
        {
            Assert.Fail("Generated {0} twice", str);
        }
    }
}
I liked this approach because I knew the underlying algorithm wasn't using any randomness, so I'm not in a situation where it might fail one time but succeed the next - but the algorithm could be swapped out underneath by someone in the future. OTOH, testing any randomized algorithm would carry that kind of inconsistency, so why not do it this way?
Should I just get 2 elements out and check distinctness (using a 0/1/Many philosophy)?
Any other opinions or suggestions?
I would keep using your approach; it's probably the most reliable option.
By the way, you don't need the if statement:
Assert.IsTrue(generatedStrings.Add(str), "Generated {0} twice", str);
If I wanted to test code that relied on random input, I would try to stub out the random generation (say, ITestableRandomGenerator) so that it could be mocked for testing. You can then inject different 'random' sequences that appropriately trigger the different execution pathways of your code under test and guarantee the necessary code coverage.
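As a sketch of that idea (the ITestableRandomGenerator name comes from the paragraph above; the rest of the shape is assumed), the randomness source becomes an injectable dependency and the test supplies a fixed, repeatable sequence:
public interface ITestableRandomGenerator
{
    int Next(int maxValue);
}

// Production implementation simply delegates to System.Random.
public class SystemRandomGenerator : ITestableRandomGenerator
{
    private readonly Random _random = new Random();

    public int Next(int maxValue)
    {
        return _random.Next(maxValue);
    }
}

// Test double that replays a predetermined sequence, so every run is identical.
public class FixedSequenceGenerator : ITestableRandomGenerator
{
    private readonly Queue<int> _values;

    public FixedSequenceGenerator(params int[] values)
    {
        _values = new Queue<int>(values);
    }

    public int Next(int maxValue)
    {
        return _values.Dequeue();
    }
}
A UniqueStringCreator that accepted an ITestableRandomGenerator in its constructor could then be driven through specific, repeatable paths in each test.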
The particular test you've shown is basically a black box test, as you're just generating outputs and verifying that it works for at least N cycles. Since the code does not have any inputs, this is a reasonable test, though it might be better if you know what different conditions may impact your algorithm so that you can test those particular inputs. This may mean somehow running the algorithm with different 'seed' values, choosing the seeds so that it will exercise the code in different ways.
If you passed the algorithm into UniqueStringCreator's constructor, you could use a stub object in your unit testing to generate pseudo-random (predictable) data. See also the strategy pattern.
If there's some kind of check inside the class, you can always separate the part which checks for distinctness from the part that generates the strings.
Then you mock out the checker, and test the behaviour in each of the two contexts; the one in which the checker thinks a string has been created, and the one in which it doesn't.
You may find similar ways to split up the responsibilities, whatever the underlying implementation logic.
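A rough sketch of that split, with every name invented for illustration (Moq and NUnit are used here just for concreteness):
public interface IDistinctnessChecker
{
    bool IsNew(string candidate);
}

public class UniqueStringCreator
{
    private readonly IDistinctnessChecker _checker;

    public UniqueStringCreator(IDistinctnessChecker checker)
    {
        _checker = checker;
    }

    public string GetNext()
    {
        // Keep generating candidates until the checker accepts one as new.
        string candidate;
        do
        {
            candidate = Guid.NewGuid().ToString("N");
        } while (!_checker.IsNew(candidate));
        return candidate;
    }
}

[Test]
public void GetNext_RetriesUntilCheckerAcceptsACandidate()
{
    var checker = new Mock<IDistinctnessChecker>();
    checker.SetupSequence(c => c.IsNew(It.IsAny<string>()))
           .Returns(false)   // first candidate rejected
           .Returns(true);   // second candidate accepted

    var creator = new UniqueStringCreator(checker.Object);

    Assert.IsNotNull(creator.GetNext());
    checker.Verify(c => c.IsNew(It.IsAny<string>()), Times.Exactly(2));
}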
Otherwise, I agree with SLaks - stick with what you have. The main reason for having tests is so that the code stays easy to change, so as long as people can read it and think, "Oh, that's what it's doing!" you're probably good.