Are multiple asserts wrong in this test? - C#

I am trying to test a method called Login that, when the user and password parameters are correct, sets the value of two session variables and three cookies and finally returns true.
I've been reading several posts about unit testing, but none of them made my case fully clear to me. I've read that there should be just a single assert per unit test, although you can use more than one as long as you are testing a single "logical concept".
The Login method is correct only if it sets every session variable and cookie correctly and returns the expected value, so I'm not sure whether it's OK to check all these values at once (that would mean six asserts in one unit test, which feels a bit dirty) or whether I should check the value of each session variable and cookie separately, in different tests.
[TestMethod()]
public void SuccessfulLoginTest()
{
    // Arrange.
    String username = "foo";
    String password = "correct password";
    Boolean remember = true;

    // Act.
    Boolean actual = Login(username, password, remember);

    // Assert.
    Assert.IsTrue(actual);
    Assert.AreEqual("foo", HttpContext.Current.Session["Username"]);
    Assert.AreEqual(1, HttpContext.Current.Session["Group"]);
    Assert.AreEqual("foo", HttpContext.Current.Response.Cookies["Username"].Value);
    Assert.AreEqual("en", HttpContext.Current.Response.Cookies["Lang1"].Value);
    Assert.AreEqual("es", HttpContext.Current.Response.Cookies["Lang2"].Value);
}

Looks fine to me too. I'm not sure where you got the notion that there should only be one Assert per unit test. That sounds like an "ivory tower" rule that is just plain silly IMO. If your method sets a bunch of variables given a particular input then you should check all those variables given that input. Writing six different unit tests (and associated setup code) seems incredibly inefficient.
But then I tend to lean towards pragmatism over academic "correctness" when it comes to writing software.

You've already noticed correctly that the one-assert rule concerns conceptual assertions, not bare calls to the Assert methods. A nice and common trick, which reduces confusion when one logical assert is composed of many assertions and also makes the test more readable, is wrapping the asserts in a utility method. In your case it could look similar to this:
void AssertSessionIsValid(string username, int group, ...)
{
    Assert.AreEqual(username, HttpContext.Current.Session["Username"]);
    Assert.AreEqual(group, HttpContext.Current.Session["Group"]);
    ...
}
There are many frameworks that help increase the readability of your tests, such as Shouldly.
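For example, a rough sketch of the question's assertions written with Shouldly (assuming its ShouldBe/ShouldBeTrue extensions):

actual.ShouldBeTrue();
HttpContext.Current.Session["Username"].ShouldBe("foo");
HttpContext.Current.Session["Group"].ShouldBe(1);
HttpContext.Current.Response.Cookies["Username"].Value.ShouldBe("foo");

When an assertion fails, Shouldly reports the failing expression itself, which helps keep multi-assert tests readable.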

I'd suggest asserting everything you care about working, and no more than that. Too few asserts and your test will ignore clearly broken functionality; too many and your test will be brittle and break unnecessarily when some irrelevant detail changes.

Combine your conditions into a boolean variable; if all conditions are fulfilled, the variable will be true.
Assert this boolean variable:
bool areAllConditionsFulfilled = condition1 && conditionCheckingSessionVariablesOne && conditionCheckingSessionVariablesTwo;
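A fuller sketch of that idea, using the values from the question (the Equals calls are just one way to compare the session objects; note that a combined flag trades per-assert failure messages for a single checkpoint, which is the trade-off other answers warn about):

bool areAllConditionsFulfilled =
    actual
    && Equals("foo", HttpContext.Current.Session["Username"])
    && Equals(1, HttpContext.Current.Session["Group"])
    && HttpContext.Current.Response.Cookies["Username"].Value == "foo";

Assert.IsTrue(areAllConditionsFulfilled, "Login did not set all of the expected session/cookie state.");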
Also, I would suggest having individual test cases testing each of the session variables, asserting that each is correct.
[Test]
public void TestUserNameSessionVariable()
{
    // Login code.
    Assert.AreEqual("foo", HttpContext.Current.Session["Username"]);
}

Related

Unit testing retrieval methods - redundant?

I have the following method in my service layer
public ModuleResponse GetModules(ModuleRequest request)
{
    var response = new ModuleResponse(request.RequestId);
    try
    {
        response.Modules = Mapper.ToDataTransferObjects(ModuleDao.GetModules());
        return response;
    }
    catch (Exception ex)
    {
        Log.Error(ex);
        response.Acknowledge = AcknowledgeType.Failure;
        response.Message = "An error occurred.";
        return response;
    }
}
And I have a unit test written in xUnit like this:
[Fact]
public void GetModulesTest()
{
    // Arrange
    var mockModuleDao = Mock.Create<IModuleDao>();
    var mockLog = Mock.Create<ILog>();
    var mockAuditDao = Mock.Create<IAuditDao>();

    var moduleList = new List<ModuleItem>
    {
        new ModuleItem { Id = 100, Category = "User Accounts", Feature = "Users" },
        new ModuleItem { Id = 101, Category = "User Accounts", Feature = "Roles Permissions" }
    };

    mockModuleDao.Arrange(dao => dao.GetModules()).Returns(moduleList);

    IUserManagementService userService = new UserManagementService(mockModuleDao, mockLog, mockAuditDao);
    var request = new ModuleRequest().Prepare();

    // Act
    var actualResponse = userService.GetModules(request);

    // Assert
    Assert.Equal(AcknowledgeType.Success, actualResponse.Acknowledge);
    Assert.Equal(2, actualResponse.Modules.Count);
}
Now I have a whole other bunch of retrieval methods in my code similar to the one above.
Is testing such methods redundant? I mean, they are almost sure-pass tests, unless I mess up the logic of my mapping or something.
Also, when testing retrieval methods, what is it that I should be testing for? In my scenario above, I have 2 assert statements, 1 to check if the response is a success, and the 2nd is to check the count of the list.
Is this sufficient? Or how can this be further improved to enhance the value of such a unit test?
As always, whether or not a test like that is valuable depends on your motivation for testing.
Is this piece of code mission-critical?
What is the cost if that code fails?
How easily can you address errors, should they occur?
The higher the cost of failure, the more important it is to test a piece of code.
The GetModules method does at least four things:
It returns the modules from the DAO.
It maps the modules from the DAO into the desired return types.
It returns an error message if something goes wrong.
It logs any errors that may occur.
The GetModulesTest covers only one of these four responsibilities, which means that three other tests are still required to fully cover the GetModules method.
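As a hedged sketch of what one of the missing tests could look like - the Arrange/Assert/Occurs helpers here assume Telerik JustMock, which the Mock.Create syntax in the test above suggests:

[Fact]
public void GetModules_WhenDaoThrows_ReturnsFailureAndLogsError()
{
    // Arrange: make the DAO blow up so the catch block is exercised.
    var mockModuleDao = Mock.Create<IModuleDao>();
    var mockLog = Mock.Create<ILog>();
    var mockAuditDao = Mock.Create<IAuditDao>();
    mockModuleDao.Arrange(dao => dao.GetModules()).Throws<InvalidOperationException>();

    IUserManagementService userService = new UserManagementService(mockModuleDao, mockLog, mockAuditDao);
    var request = new ModuleRequest().Prepare();

    // Act
    var response = userService.GetModules(request);

    // Assert: the failure acknowledgement and the logging are really two separate
    // responsibilities; splitting them into two tests would be closer to one concept per test.
    Assert.Equal(AcknowledgeType.Failure, response.Acknowledge);
    mockLog.Assert(log => log.Error(Arg.IsAny<Exception>()), Occurs.Once());
}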
Writing small-grained unit tests is valuable, because it enables you to decompose a complex piece of production code into a set of simple, easy-to-understand unit tests. Sometimes these unit tests become almost insanely simple, to the point where you'll begin to doubt their value - but the value isn't in a single unit test, it's in the accumulation of simple tests which, together, specify how the entire system ought to work.
Now I have a whole other bunch of retrieval methods in my code similar to the one above.
Really? Don't they feel a little... repetitive?
I think Lilshieste made a very appropriate point, that one intrinsic value of unit tests is that they highlight maintainability issues like this. You might say they make code smells more pungent.
Mark Seemann identified four individual responsibilities for this one method you showed us. The Single Responsibility Principle would dictate that you should only have one.
You could conceivably turn this method (and all its kin) into something more like this:
public ModuleResponse GetModules(ModuleRequest request)
{
    return _responder.CreateMappedDtoResponse(
        request,
        ModuleDao.GetModules,
        modules => new ModuleResponse { Modules = modules });
}
Now, at this point, I think you could make a decent argument against unit-testing this method. You'd pretty much be testing the implementation of this method, rather than its behavior. Your unit test would be testing that you call a given method with given arguments, and that's it!
But even if you decided to be a purist and unit test this, there's really only one unit test that you could conceivably write, as opposed to the four that you would have needed to fully cover this method before. Then you write the appropriate unit tests for the CreateMappedDtoResponse methods (and whatever methods it may delegate parts of its work to), and you've got a DRY, well-tested system with a fraction of the number of tests. And if you change a common responsibility like your exception-logging strategy, you can change it in one place, change one unit test, and be done.
So even if your unit tests never catch a bug for you, being a purist helped you to avoid a maintainability issue that would have forced you to write just as much extra code in the first place, and be likely to re-write just as much code later on. Of course, this only happens if you know to listen to your unit tests and change your design accordingly.
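For concreteness, a minimal sketch of what such a shared responder might look like. Every name here (ServiceResponder, the Request/Response base classes and their properties) is an assumption extrapolated from the question's code, not an existing API:

public class ServiceResponder
{
    private readonly ILog _log;

    public ServiceResponder(ILog log)
    {
        _log = log;
    }

    public TResponse CreateMappedDtoResponse<TEntity, TDto, TResponse>(
        Request request,
        Func<IEnumerable<TEntity>> fetchEntities,
        Func<List<TDto>, TResponse> buildResponse)
        where TResponse : Response, new()
    {
        try
        {
            // Stands in for the question's Mapper.ToDataTransferObjects call.
            List<TDto> dtos = Mapper.ToDataTransferObjects<TEntity, TDto>(fetchEntities());
            TResponse response = buildResponse(dtos);
            response.RequestId = request.RequestId;
            return response;
        }
        catch (Exception ex)
        {
            // Logging and error acknowledgement now live in exactly one place.
            _log.Error(ex);
            return new TResponse
            {
                RequestId = request.RequestId,
                Acknowledge = AcknowledgeType.Failure,
                Message = "An error occurred."
            };
        }
    }
}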

When is it OK to group similar unit tests?

I'm writing unit tests for a simple IsBoolean(x) function to test whether a value is boolean. There are 16 different values I want to test.
Will I be burned in hell, or mocked ruthlessly by the .NET programming community (which would be worse?), if I don't break them up into individual unit tests, and instead run them together as follows:
[TestMethod]
public void IsBoolean_VariousValues_ReturnsCorrectly()
{
    // These should all be considered Boolean values.
    Assert.IsTrue(General.IsBoolean(true));
    Assert.IsTrue(General.IsBoolean(false));
    Assert.IsTrue(General.IsBoolean("true"));
    Assert.IsTrue(General.IsBoolean("false"));
    Assert.IsTrue(General.IsBoolean("tRuE"));
    Assert.IsTrue(General.IsBoolean("fAlSe"));
    Assert.IsTrue(General.IsBoolean(1));
    Assert.IsTrue(General.IsBoolean(0));
    Assert.IsTrue(General.IsBoolean(-1));

    // These should all be considered NOT Boolean values.
    Assert.IsFalse(General.IsBoolean(null));
    Assert.IsFalse(General.IsBoolean(""));
    Assert.IsFalse(General.IsBoolean("asdf"));
    Assert.IsFalse(General.IsBoolean(DateTime.MaxValue));
    Assert.IsFalse(General.IsBoolean(2));
    Assert.IsFalse(General.IsBoolean(-2));
    Assert.IsFalse(General.IsBoolean(int.MaxValue));
}
I ask this because "best practice" I keep reading about would demand I do the following:
[TestMethod]
public void IsBoolean_TrueValue_ReturnsTrue()
{
    //Arrange
    var value = true;

    //Act
    var returnValue = General.IsBoolean(value);

    //Assert
    Assert.IsTrue(returnValue);
}

[TestMethod]
public void IsBoolean_FalseValue_ReturnsTrue()
{
    //Arrange
    var value = false;

    //Act
    var returnValue = General.IsBoolean(value);

    //Assert
    Assert.IsTrue(returnValue);
}
//Fell asleep at this point
For the 50+ functions and 500+ values I'll be testing against this seems like a total waste of time.... but it's best practice!!!!!
-Brendan
I would not worry about it. This sort of thing isn't the point. JB Rainsberger talked about this briefly in his talk Integration Tests Are a Scam. He said something like, "If you have never forced yourself to use one assert per test, I recommend you try it for a month. It will give you a new perspective on testing, and teach you when it matters to have one assert per test, and when it doesn't." IMO, this falls into the doesn't-matter category.
Incidentally, if you use nunit, you can use the TestCaseAttribute, which is a little nicer:
[TestCase(true)]
[TestCase("tRuE")]
[TestCase(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.True);
}

[TestCase("-3.14")]
[TestCase("something else")]
[TestCase(7)]
public void IsBoolean_InvalidBoolRepresentations_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}
EDIT: wrote the tests in a slightly different way, that I think communicates intent a little better.
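If you're on MSTest rather than NUnit, MSTest v2's DataRow attribute gives the same shape - a sketch, assuming the General.IsBoolean signature from the question:

[DataTestMethod]
[DataRow(true)]
[DataRow("tRuE")]
[DataRow(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
    Assert.IsTrue(General.IsBoolean(candidate));
}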
Although I agree that it's best practice to separate the values in order to identify errors more easily, I think one still has to use common sense and treat such rules as guidelines rather than absolutes. You want to minimize assertion counts in a unit test, but what's generally most important is to ensure a single concept per test.
In your specific case, given the simplicity of the function, I think that the one unit test you provided is fine. It's easy to read, simple, and clear. It also tests the function thoroughly and if ever it were to break somewhere down the line, you would be able to quickly identify the source and debug it.
As an extra note, in order to maintain good unit tests, you'll want to always keep them up to date and treat them with the same care as you do the actual production code. That's in many ways the greatest challenge. Probably the best reason to do Test Driven Development is how it actually allows you to program faster in the long run because you stop worrying about breaking the code that exists.
It's best practice to split each of the values you want to test into separate unit tests. Each unit test should be named specifically to the value you're passing and the expected result. If you were changing code and broke just one of your tests, then that test alone would fail and the other 15 would pass. This buys you the ability to instantly know what you broke without then having to debug the one unit test and find out which of the Asserts failed.
Hope this helps.
I can't comment on "best practice" because there is no such thing.
I agree with what Ayende Rahien says in his blog:

At the end, it boils down to the fact that I don't consider tests to be, by themselves, a value to the product. Their only value is their binary ability to tell me whether the product is okay or not. Spending a lot of extra time on the tests distracts from creating real value: shippable software.
If you put them all in one test and this test fails "somewhere", then what do you do? Either your test framework will tell you exactly which line it failed on, or, failing that, you step through it with a debugger. The extra effort required because it's all in one function is negligible.
The extra value of knowing exactly which subset of tests failed in this particular instance is small, and overshadowed by the ponderous amount of code you had to write and maintain.
Think for a minute about the reasons for breaking them up into individual tests: it's to isolate different functionality and to accurately identify everything that went wrong when a test breaks. It looks like you might be testing two things - Boolean and not Boolean - so consider two tests if your code follows two different paths. The bigger point, though, is that if none of the tests break, there are no errors to pinpoint.
If you keep running them, and later have one of these tests fail, that would be the time to refactor them into individual tests, and leave them that way.

How to keep my unit tests DRY when mocking doesn't work?

Edit:
It seems that by trying to provide some solutions to my own problem I blurred the whole problem, so I'm modifying the question a little bit.
Suppose I have this class:
public class ProtocolMessage : IMessage
{
    public IHeader GetProtocolHeader(string name)
    {
        // Do some logic here, including returning null
        // and throwing exceptions in some cases.
        return header;
    }

    public string GetProtocolHeaderValue(string name)
    {
        IHeader header = GetProtocolHeader(name);
        // Do some logic here, including returning null
        // and throwing exceptions in some cases.
        return value;
    }
}
It actually isn't important what's going on inside these methods. What is important is that I have multiple unit tests covering the GetProtocolHeader method in all situations (returning a correct header, null, or an exception), and now I'm writing unit tests for GetProtocolHeaderValue.
If GetProtocolHeaderValue depended on an external dependency, I would be able to mock and inject it (I'm using Moq + NUnit). Then my unit test would just verify the expectation that the external dependency was called and returned the expected value. The external dependency would be tested by its own unit tests and I would be done. But how do I proceed correctly in this example, where the method being called is not an external dependency?
Clarification of the problem:
I believe my test suite for GetProtocolHeaderValue must test the situations where GetProtocolHeader returns a header, null, or an exception. So the main question is: should I write tests where GetProtocolHeader is really executed (some tests will be duplicated, because they will exercise the same code as the tests for GetProtocolHeader itself), or should I use the mocking approach described by @adrift and @Eric Nicholson, where I do not run the real GetProtocolHeader but just configure the mock to return a header, null, or an exception when this method is called?
In the call to GetProtocolHeaderValue, do you actually need to know whether or not it called GetProtocolHeader?
Surely it is enough to know that it is getting the correct value from the correct header. How it actually got it is irrelevant to the unit test.
You are testing units of functionality, the unit of functionality of GetProtocolHeaderValue is whether it returns the expected value, given a header name.
It is true that you may wish to guard against inappropriate caching or cross-contamination or fetching the value from a different header, but I don't think that testing that it has called GetProtocolHeader is the best way to do this. You can infer that it somehow fetched the right header from the fact that it returned the expected value for the header.
As long as you craft your tests and test data in such a way as to ensure that duplicate headers don't mask errors, then all should be well.
EDIT for updated question:
If GetProtocolHeader works quickly, reliably and is idempotent, then I still believe that there is no need to mock it. A shortfall in any of those three aspects is (IMO) the principal reason for mocking.
If (as I suspect from the question title), the reason you wish to mock it is that the preamble required to set up an appropriate state to return a real value is too verbose, and you'd rather not repeat it across the two tests, why not do it in the setup phase?
One of the roles performed by good unit tests is documentation.
If someone wishes to know how to use your class, they can examine the tests, and possibly copy and alter the test code to fit their purpose. This becomes difficult if the real idiom of usage has been obscured by the creation and injection of mocks.
Mocks can obscure potential bugs.
Let's say that GetProtocolHeader throws an exception if name is empty. You create a mock accordingly, and ensure that GetProtocolHeaderValue handles that exception appropriately. Later, you decide that GetProtocolHeader should return null for an empty name. If you forget to update your mock, GetProtocolHeaderValue("") will now behave differently in real life vs. the test suite.
Mocking might present an advantage if the mock is less verbose than the setup, but give the above points due consideration first.
Though you give three different GetProtocolHeader responses (header, null or exception) that GetProtocolHeaderValue needs to test, I imagine that the first one is likely to be "a range of headers". (e.g. What does it do with a header that is present, but empty? How does it treat leading and trailing whitespace? What about non-ASCII chars? Numbers?). If the setup for all of these is exceptionally verbose, it might be better to mock.
I often use a partial mock (in Rhino Mocks) or the equivalent (like CallsBaseMethod in FakeItEasy) to mock the actual class I'm testing. You can then make GetProtocolHeader virtual and mock your calls to it. You could argue that this violates the single responsibility principle, but it's still clearly very cohesive code.
Alternatively you could make a method like
internal static string GetProtocolHeaderValue(string name, IHeader header)
and test that processing independently. The public GetProtocolHeaderValue method wouldn't have any/many tests.
Edit: In this particular case, I'd also consider adding GetValue() as an extension method to IHeader. That would be very easy to read, and you could still do the null checking.
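A hedged sketch of that extension-method idea - IHeader's members are assumed here, since the original code doesn't show them:

public static class HeaderExtensions
{
    // Hypothetical: assumes IHeader exposes a Value property.
    public static string GetValue(this IHeader header)
    {
        if (header == null)
        {
            return null; // or throw, depending on the contract you want
        }

        return header.Value;
    }
}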
I'm probably missing something, but given the code listed it seems to me that you don't need to worry about whether it's called or not.
Two possibilities exist:
That the GetProtocolHeader() method needs to be public, in which case you write the set of tests that tell you whether it works as expected or not.
That it's an implementation detail and doesn't need to be public, except insofar as you want to be able to test it directly - but in that case all you really care about is the set of tests that tell you whether GetProtocolHeaderValue() works as required.
In either case you are testing the exposed functionality, and at the end of the day that's all that matters. If it were a dependency then, yes, you might worry about whether it was called; but if it's not, then surely it's an implementation detail and not relevant?
With Moq, you can use CallBase to do the equivalent of a partial mock in Rhino Mocks.
In your example, change GetProtocolHeader to virtual, then create a mock of ProtocolMessage, setting CallBase = true. You can then setup the GetProtocolHeader method as you wish, and have your base class functionality of GetProtocolHeaderValue called.
See the Customizing Mock Behavior section of the moq quickstart for more details.
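A minimal sketch of that approach, assuming GetProtocolHeader has been made virtual:

// Moq partial mock: unconfigured members fall through to the real implementation.
var message = new Mock<ProtocolMessage> { CallBase = true };
message.Setup(m => m.GetProtocolHeader("X-Custom"))
       .Returns((IHeader)null);

// GetProtocolHeaderValue runs for real; only GetProtocolHeader is faked.
string value = message.Object.GetProtocolHeaderValue("X-Custom");

// Assert whatever your null-header contract dictates, e.g.:
Assert.That(value, Is.Null);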
Why not simply change GetProtocolHeaderValue(string name) so that it calls two methods, the second one accepting an IHeader? That way, you can test all of the "do some logic" part in a separate test, via the RetrieveHeaderValue method, without having to worry about mocks. Something like:
public string GetProtocolHeaderValue(string name)
{
    IHeader header = GetProtocolHeader(name);
    return RetrieveHeaderValue(header);
}
Now you can test both parts of GetProtocolHeaderValue fairly easily. You still have the same problem testing that method itself, but the amount of logic in it has been reduced to a minimum.
Following the same line of thinking, these methods could be extracted into an IHeaderParser interface, and the GetProtocol methods would take an IHeaderParser, which would be trivial to test/mock.
public string GetProtocolHeaderValue(string name, IHeaderParser parser)
{
    IHeader header = parser.GetProtocolHeader(name);
    return parser.HeaderValue(header);
}
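With the interface in place, a test could look roughly like this (a sketch using Moq + NUnit, as the question mentions; the header name and value are arbitrary):

var header = new Mock<IHeader>();
var parser = new Mock<IHeaderParser>();
parser.Setup(p => p.GetProtocolHeader("Content-Type")).Returns(header.Object);
parser.Setup(p => p.HeaderValue(header.Object)).Returns("text/html");

var message = new ProtocolMessage();
string value = message.GetProtocolHeaderValue("Content-Type", parser.Object);

Assert.That(value, Is.EqualTo("text/html"));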
Try the simplest thing that might work.
If the real GetProtocolHeader() implementation is quick and easy to control (e.g. to simulate the header, null and exception cases), just use it.
If not (i.e. the real implementation is time-consuming, or you can't easily simulate the three cases), then look at redesigning so that those constraints are eased.
I refrain from using mocks unless absolutely required (e.g. file/network/external dependencies), but as you may know this is just a personal choice, not a rule. Ensure that the choice is worth the extra cognitive overhead (the drop in readability) of the test.
It's all a matter of opinion; pure TDD-ists will say no mocks, mockers will mock it all.
In my honest opinion there is something wrong with the code as written: GetProtocolHeader seems important enough not to be dismissed as an implementation detail, since you defined it as public.
The problem here lies within the second method, GetProtocolHeaderValue, which has no way to use an existing instance of IHeader.
I would suggest a GetValue(string name) method on the IHeader interface.

Test does not fail at first run

I have the following test:
[Test]
public void VerifyThat_WhenProvidingAServiceOrderWithALinkedAccountGetServiceProcessWithStatusReasonOfEndOfEntitlementToUpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders_TheProcessIsUpdatedWithAStatusReasonOfEndOfEntitlement()
{
    IFixture fixture = new Fixture()
        .Customize(new AutoMoqCustomization());

    Mock<ICrmService> crmService = new Mock<ICrmService>();
    fixture.Inject(crmService);

    var followupHandler = fixture.CreateAnonymous<FollowupForEndOfEntitlementHandler>();

    var accountGetService = fixture.Build<EndOfEntitlementAccountGetService>()
        .With(handler => handler.ServiceOrders, new HashedSet<EndOfEntitlementServiceOrder>
        {
            fixture.Build<EndOfEntitlementServiceOrder>()
                .With(order => order.AccountGetServiceProcess, fixture.Build<EndOfEntitlementAccountGetServiceProcess>()
                    .With(process => process.StatusReason, fixture.Build<StatusReason>()
                        .With(statusReason => statusReason.Id, MashlatReasonStatus.Worthiness)
                        .CreateAnonymous())
                    .CreateAnonymous())
                .CreateAnonymous()
        })
        .CreateAnonymous();

    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Never());
}
My problem is that it will never fail on the first run, like TDD specifies that it should.
What it should test is that whenever a process of a service order has a certain status reason value, no updates are performed.
Is this test checking what it should?
I'm struggling a bit to understand the question here...
Is your problem that this test passes on the first try?
If yes, that means one of two things
your test has an error
you have already met this spec/requirement
Since the first has been ruled out, Green it is. Off you go to the next one on the list..
Somewhere down the line I assume, you will implement more functionality that results in the expected method being called. i.e. when the status value is different, perform an update.
The fix for that test must ensure that both tests pass.
If not, give me more information to help me understand.
Following TDD methodology, we only write new tests for functionality that doesn't exist. If a test passes on the first run, it is important to understand why.
One of my favorite things about TDD is its subtle ability to challenge our assumptions, and knock our egos flat. The practice of "Calling your Shots" is not only a great way to work through tests, but it's also a lot of fun. I love when a test fails when I expect it to pass - many great learning opportunities come from this; Time after time, evidence of working software trumps developer ego.
When a test passes when I think it shouldn't, the next step is to make it fail.
For example, your test, which expects that something doesn't happen, is guaranteed to pass if the implementation is commented out. Tamper with the logic that you think you are implementing by commenting it out or by altering the conditions of the implementation and verify if you get the same results.
If after doing this, and you're confident that the functionality is correct, write another test that proves the opposite. Will Update get called with different state or inputs?
With both sets in place, you should be able to comment out that feature and have the ability to know in advance which test will be impacted. (8-ball, corner pocket)
I would also suggest that you add another assertion to the above test to ensure that the subject and functionality under test is actually being invoked.
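For instance, a hedged sketch of that complementary test - the arrangement would mirror the test above but with a StatusReason other than Worthiness, so that an update is expected:

// Same arrangement as the original test, except the StatusReason differs from Worthiness.
followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Once());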
Change the Times.Never() to Times.AtLeastOnce() and you've got a good start for TDD.
Trying to find nothing in nothing - well, that's a good test, but not the way to start TDD. First go with the simple specification, the naive operation the user could do (from your viewpoint, of course).
As you have already done some work here, keep this test for later, when it can fail.

Unit testing void methods?

What is the best way to unit test a method that doesn't return anything? Specifically in C#.
What I am really trying to test is a method that takes a log file and parses it for specific strings. The strings are then inserted into a database. Nothing that hasn't been done before but being VERY new to TDD I am wondering if it is possible to test this or is it something that doesn't really get tested.
If a method doesn't return anything, it's one of the following:
imperative - you're asking the object to do something to itself, e.g. change state (without expecting any confirmation; it's assumed that it will be done)
informational - just notifying someone that something happened (without expecting action or response)
Imperative methods - you can verify whether the task was actually performed, i.e. whether the state change actually took place. e.g.
void DeductFromBalance(decimal dAmount)
can be tested by verifying that the balance after this message is indeed less than the initial value by dAmount.
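For example, a sketch of such a state-based test - the Account type and its members are hypothetical:

[Test]
public void DeductFromBalance_ReducesBalanceByAmount()
{
    // Hypothetical Account type with an observable Balance property.
    var account = new Account(initialBalance: 100m);

    account.DeductFromBalance(30m);

    Assert.AreEqual(70m, account.Balance);
}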
Informational methods - these are rare as members of an object's public interface, and hence are not normally unit tested. However, if you must, you can verify that the handling triggered by a notification takes place. e.g.
void OnAccountDebit(decimal dAmount) // emails account holder with info
can be tested by verifying that the email is being sent.
Post more details about your actual method and people will be able to answer better.
Update: Your method is doing two things. I'd actually split it into two methods that can be independently tested.
string[] ExamineLogFileForX(string sFileName);
void InsertStringsIntoDatabase(string[] strings);
The string[] can be easily verified by providing the first method with a dummy file and the expected strings. The second one is slightly trickier: you can either use a mock (google or search Stack Overflow for mocking frameworks) to mimic the DB, or hit the actual DB and verify that the strings were inserted in the right location. Check this thread for some good books... I'd recommend Pragmatic Unit Testing if you're in a crunch.
In the code it would be used like
InsertStringsIntoDatabase(ExamineLogFileForX(@"c:\OMG.log"));
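As a sketch of a test for the first method - the LogParser class and the "contains ERROR" matching rule are assumptions for illustration:

[Test]
public void ExamineLogFileForX_ReturnsMatchingStrings()
{
    // Write a small temp file so the test is self-contained.
    var parser = new LogParser();
    string path = Path.GetTempFileName();
    File.WriteAllLines(path, new[] { "noise", "ERROR: disk full", "more noise" });

    try
    {
        string[] result = parser.ExamineLogFileForX(path);
        CollectionAssert.AreEqual(new[] { "ERROR: disk full" }, result);
    }
    finally
    {
        File.Delete(path);
    }
}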
Test its side-effects. This includes:
Does it throw any exceptions? (If it should, check that it does. If it shouldn't, try some corner cases which might if you're not careful - null arguments being the most obvious thing.)
Does it play nicely with its parameters? (If they're mutable, does it mutate them when it shouldn't and vice versa?)
Does it have the right effect on the state of the object/type you're calling it on?
Of course, there's a limit to how much you can test. You generally can't test with every possible input, for example. Test pragmatically - enough to give you confidence that your code is designed appropriately and implemented correctly, and enough to act as supplemental documentation for what a caller might expect.
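For the exception corner cases, something like NUnit's Assert.Throws makes the intent explicit (the processor object and its Process method are hypothetical):

// A null argument should be rejected rather than silently accepted.
Assert.Throws<ArgumentNullException>(() => processor.Process(null));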
As always: test what the method is supposed to do!
Should it change global state (uuh, code smell!) somewhere?
Should it call into an interface?
Should it throw an exception when called with the wrong parameters?
Should it throw no exception when called with the right parameters?
Should it ...?
Try this:
[TestMethod]
public void TestSomething()
{
    try
    {
        YourMethodCall();
        Assert.IsTrue(true);
    }
    catch
    {
        Assert.IsTrue(false);
    }
}
Void return types / subroutines are old news. I haven't written a void return type (unless I was being extremely lazy) in about 8 years (from the time of this answer, so just a bit before this question was asked).
Instead of a method like:
public void SendEmailToCustomer()
Make a method that follows Microsoft's int.TryParse() paradigm:
public bool TrySendEmailToCustomer()
Maybe there isn't any information your method needs to return for usage in the long-run, but returning the state of the method after it performs its job is a huge use to the caller.
Also, bool isn't the only state type. There are a number of times when a previously-made Subroutine could actually return three or more different states (Good, Normal, Bad, etc). In those cases, you'd just use
public StateEnum TrySendEmailToCustomer()
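A sketch of that multi-state variant - the Customer parameter and the mail-queue dependency are illustrative assumptions:

public enum StateEnum { Good, Normal, Bad }

public StateEnum TrySendEmailToCustomer(Customer customer)
{
    if (customer == null || string.IsNullOrEmpty(customer.Email))
        return StateEnum.Bad; // nothing to send to

    // _mailQueue is a hypothetical dependency, injected via the constructor.
    bool queued = _mailQueue.TryEnqueue(customer.Email);
    return queued ? StateEnum.Good : StateEnum.Normal;
}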
However, while the Try-Paradigm somewhat answers this question of how to test a void return, there are other considerations too. For example, during/after a "TDD" cycle, you would be "Refactoring" and notice you are doing two things with your method... thus breaking the Single Responsibility Principle. So that should be taken care of first. Second, you might have identified a dependency... you're touching "Persistent" data.
If you are doing the data access stuff in the method-in-question, you need to refactor into an n-tier'd or n-layer'd architecture. But we can assume that when you say "The strings are then inserted into a database", you actually mean you're calling a business logic layer or something. Ya, we'll assume that.
When your object is instantiated, you now understand that your object has dependencies. This is when you need to decide if you are going to do Dependency Injection on the Object, or on the Method. That means your Constructor or the method-in-question needs a new Parameter:
public <Constructor/MethodName> (IBusinessDataEtc otherLayerOrTierObject, string[] stuffToInsert)
Now that you can accept an interface of your business/data tier object, you can mock it out during Unit Tests and have no dependencies or fear of "Accidental" integration testing.
So in your live code, you pass in a REAL IBusinessDataEtc object. But in your Unit Testing, you pass in a MOCK IBusinessDataEtc object. In that Mock, you can include Non-Interface Properties like int XMethodWasCalledCount or something whose state(s) are updated when the interface methods are called.
So your Unit Test will go through your Method(s)-In-Question, perform whatever logic they have, and call one or two, or a selected set of methods in your IBusinessDataEtc object. When you do your Assertions at the end of your Unit Test you have a couple of things to test now.
The State of the "Subroutine" which is now a Try-Paradigm method.
The State of your Mock IBusinessDataEtc object.
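A hand-rolled mock along the lines the answer describes might look like this - IBusinessDataEtc and its Insert method are the answer's placeholders, not a real interface:

public class MockBusinessData : IBusinessDataEtc
{
    // Non-interface counter, inspected by the unit test's assertions.
    public int InsertWasCalledCount { get; private set; }

    public void Insert(string[] rows)
    {
        InsertWasCalledCount++;
    }
}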
For more information on Dependency Injection ideas on the Construction-level... as they pertain to Unit Testing... look into Builder design patterns. It adds one more interface and class for each current interface/class you have, but they are very tiny and provide HUGE functionality increases for better Unit-Testing.
You can even try it this way:
[TestMethod]
public void ReadFiles()
{
    try
    {
        Read();
        return; // indicates success
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}
It will have some effect on an object... query for the result of that effect. If it has no visible effect, it's not worth unit testing!
Presumably the method does something, and doesn't simply return?
Assuming this is the case, then:
If it modifies the state of its owner object, then you should test that the state changed correctly.
If it takes in some object as a parameter and modifies that object, then you should test that the object is correctly modified.
If it throws exceptions in certain cases, test that those exceptions are correctly thrown.
If its behaviour varies based on the state of its own object, or some other object, preset the state and test that the method behaves correctly (through one of the three test approaches above).
If you let us know what the method does, I could be more specific.
Use Rhino Mocks to set what calls, actions and exceptions might be expected, assuming you can mock or stub out parts of your method. It's hard to know without some specifics about the method, or even its context.
Depends on what it's doing. If it has parameters, pass in mocks that you could ask later on if they have been called with the right set of parameters.
Whatever instance you are using to call the void method, you can just use Verify.
For example:
In my case, _log is the instance and LogMessage is the method to be tested:
try
{
    this._log.Verify(x => x.LogMessage(Logger.WillisLogLevel.Info, Logger.WillisLogger.Usage, "Created the Student with name as"), "Failure");
}
catch (Exception ex)
{
    Assert.IsFalse(ex is Moq.MockException);
}
If Verify throws an exception due to a failure in the method, the test will fail.
