Test does not fail at first run - C#

I have the following test:
[Test]
public void VerifyThat_WhenProvidingAServiceOrderWithALinkedAccountGetServiceProcessWithStatusReasonOfEndOfEntitlementToUpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders_TheProcessIsUpdatedWithAStatusReasonOfEndOfEntitlement()
{
    IFixture fixture = new Fixture()
        .Customize(new AutoMoqCustomization());
    Mock<ICrmService> crmService = new Mock<ICrmService>();
    fixture.Inject(crmService);

    var followupHandler = fixture.CreateAnonymous<FollowupForEndOfEntitlementHandler>();
    var accountGetService = fixture.Build<EndOfEntitlementAccountGetService>()
        .With(handler => handler.ServiceOrders, new HashedSet<EndOfEntitlementServiceOrder>
        {
            fixture.Build<EndOfEntitlementServiceOrder>()
                .With(order => order.AccountGetServiceProcess, fixture.Build<EndOfEntitlementAccountGetServiceProcess>()
                    .With(process => process.StatusReason, fixture.Build<StatusReason>()
                        .With(statusReason => statusReason.Id, MashlatReasonStatus.Worthiness)
                        .CreateAnonymous())
                    .CreateAnonymous())
                .CreateAnonymous()
        })
        .CreateAnonymous();

    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Never());
}
My problem is that it never fails on the first run, as TDD says it should.
What it should verify is that whenever a process of a service order has a certain status value, no updates are performed.
Is this test checking what it should?

I'm struggling a bit to understand the question here...
Is your problem that this test passes on the first try?
If yes, that means one of two things:
your test has an error
you have already met this spec/requirement
Since the first has been ruled out, green it is. Off you go to the next one on the list.
Somewhere down the line, I assume, you will implement more functionality that results in the expected method being called, i.e. when the status value is different, an update is performed.
The fix for that newly failing test must ensure that both tests pass.
If not, give me more information to help me understand.

Following TDD methodology, we only write new tests for functionality that doesn't exist. If a test passes on the first run, it is important to understand why.
One of my favorite things about TDD is its subtle ability to challenge our assumptions and knock our egos flat. The practice of "calling your shots" is not only a great way to work through tests, it's also a lot of fun. I love it when a test fails when I expect it to pass: many great learning opportunities come from this. Time after time, evidence of working software trumps developer ego.
When a test passes when I think it shouldn't, the next step is to make it fail.
For example, your test, which expects that something doesn't happen, is guaranteed to pass if the implementation is commented out. Tamper with the logic that you think you are testing, by commenting it out or altering its conditions, and verify whether you get the same results.
If, after doing this, you're confident that the functionality is correct, write another test that proves the opposite. Will Update get called with different state or inputs?
With both sets in place, you should be able to comment out that feature and have the ability to know in advance which test will be impacted. (8-ball, corner pocket)
I would also suggest that you add another assertion to the above test to ensure that the subject and functionality under test is actually being invoked.
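The "opposite" test suggested above might look something like the following sketch. It assumes a hypothetical non-matching status value and a hypothetical builder helper (BuildServiceWithStatus), since the full setup code isn't shown in the question:

```csharp
[Test]
public void WhenStatusReasonIsNotEndOfEntitlement_TheProcessIsUpdated()
{
    IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
    Mock<ICrmService> crmService = new Mock<ICrmService>();
    fixture.Inject(crmService);

    var followupHandler = fixture.CreateAnonymous<FollowupForEndOfEntitlementHandler>();

    // Build a service whose process has a status that SHOULD trigger an update.
    // MashlatReasonStatus.SomethingElse is a placeholder for any non-matching status,
    // and BuildServiceWithStatus is a hypothetical helper wrapping the Build<> chain
    // from the original test.
    var accountGetService = BuildServiceWithStatus(fixture, MashlatReasonStatus.SomethingElse);

    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

    // The mirror-image assertion: now Update must be called.
    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.AtLeastOnce());
}
```

With both tests in place, commenting out the status check in the handler should break exactly one of them, which is the "calling your shots" ability described above.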

Change the Times.Never() to Times.AtLeastOnce() and you've got a good start for TDD.
Trying to find nothing in nothing, well, that's a good test, but not the way to start TDD. First go with the simple specification, the naive operation the user could do (from your viewpoint, of course).
Since you've already done this work, keep the test for later, for when it fails.

Related

How to stop XUnit Theory on the first fail?

I use Theory with MemberData like this:
[Theory]
[MemberData(nameof(TestParams))]
public void FijewoShortcutTest(MapMode mapMode)
{
...
and when it works, it is all fine, but when it fails, xUnit iterates over all the data I pass as parameters. In my case this is a fruitless effort; I would like to stop short, i.e. when the first set of parameters makes the test fail, skip the rest (because they will fail as well; again, this is my case, not a general rule).
So how to tell XUnit to stop Theory on the first fail?
The point of a Theory is to have multiple independent tests running the same code on different data. If you only actually want one test, just use a Fact and iterate over the data you want to test within the method:
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Test code here
    }
}
That does mean you can't easily run the test for just one MapMode, though. Unless the tests take a really long time to execute for some reason, I'd just live with "if something is badly messed up, I get a lot of broken tests".
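If you do go the Fact-plus-loop route, you can at least identify which value failed by putting it in the assertion message. This is a sketch; RunShortcutScenario stands in for whatever the real test body checks:

```csharp
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Stand-in for the real test body; returns whether the scenario passed.
        bool ok = RunShortcutScenario(mapMode);

        // The loop stops at the first failure, and the message names the
        // offending MapMode so you don't lose that information.
        Assert.True(ok, $"Shortcut test failed for MapMode: {mapMode}");
    }
}
```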

How can I test if a private method of a class is called or not with rhino mock?

I am quite new to C# and also to Rhino Mocks. I searched and found topics similar to my question but couldn't find a proper solution.
I am trying to find out whether a private method is called in my unit test. I am using Rhino Mocks and have read a lot about it; some sources just say to change the method's access specifier from private to public, but I cannot change the source code. I tried linking the source file into my test project, but that doesn't change anything.
public void calculateItems()
{
    var result = new Result(fileName, ip, localPath, remotePath);
    calculateItems(result, nameOfString);
}

private void calculateItems(Result result, string nameOfString)
As you can see from the code above, I have two methods with exactly the same name, calculateItems, but the public one takes no parameters and the private one takes two. I am trying to understand whether the private method is called when I call the public one in my unit test.
private CalculateClass sut;
private Result result;

[SetUp]
public void Setup()
{
    result = MockRepository.GenerateStub<Result>();
    sut = new CalculateClass();
}

[TearDown]
public void TearDown()
{
}

[Test]
public void test()
{
    sut.Stub(stub => stub.calculateItems(Arg<Result>.Is.Anything, Arg<string>.Is.Anything));
    sut.calculateItems();
    sut.AssertWasCalled(stub => stub.calculateItems(Arg<Result>.Is.Anything, Arg<string>.Is.Anything));
}
In my unit test, I get an error which says "No overload for method calculateItems takes two arguments". Is there a way to test this without any change to the source code?
You're testing the wrong thing. Private methods are private. They are of no concern to consuming code, and unit tests are consuming code like any other.
In your tests you test and validate the outward facing functionality of the component. Its inner implementation details aren't relevant to the tests. All the tests care about is whether the invoked operation produces the expected results.
So the question you must ask yourself is... What are the expected results when invoking this operation?:
calculateItems()
It doesn't return anything, so what does it do? What state does it modify in some way? That is what your test needs to observe, not the implementation details but the observable result. (And if the operation has no observable result, then there's no difference between "passed" or "failed" so there's nothing to test.)
We can't see the details of your code, but it's possible that the observable result is coupled to another component entirely. If that's the case then that other component is a dependency for this operation and the goal of the unit test is to mock that dependency so the operation can be tested independently of the dependency. The component may then need to be modified so that a dependency is provided rather than internally controlled. (This is referred to as the Dependency Inversion Principle.)
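As a sketch of that idea, suppose the observable result is that a Result gets handed to some collaborator. The names here (ISaveTarget, Save) are illustrative, not from the original code; the point is that the dependency is injected and the test observes the boundary, not the private call chain:

```csharp
// Hypothetical refactoring: CalculateClass receives its collaborator
// instead of creating and controlling it internally.
public class CalculateClass
{
    private readonly ISaveTarget _saveTarget;

    public CalculateClass(ISaveTarget saveTarget)
    {
        _saveTarget = saveTarget;
    }

    public void calculateItems()
    {
        // ...builds a Result and eventually hands it to the dependency.
        _saveTarget.Save(BuildResult());
    }
}

[Test]
public void CalculateItems_SavesAResult()
{
    // Mock the dependency, not the class under test.
    var saveTarget = MockRepository.GenerateMock<ISaveTarget>();
    var sut = new CalculateClass(saveTarget);

    sut.calculateItems();

    // Assert on the observable outcome at the boundary.
    saveTarget.AssertWasCalled(t => t.Save(Arg<Result>.Is.Anything));
}
```

Whether the public method routes through a private overload along the way is then an implementation detail the test no longer needs to know about.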
Also of note...
but I can not change the source code
That's a separate problem entirely. If you truly can't change the source code, then the value of these tests is drastically reduced and possibly eliminated entirely. If a test fails, what can you do about it? Nothing. Because you can't change the code. So what are you testing?
Keep in mind that it's not only possible but unfortunately very common for programmers to write code which can't be meaningfully unit tested. If this code was provided to you by someone else and you are forbidden to change it for some non-technical reason, then it will be the responsibility of that someone else to correct the code. "Correcting" may include "making it possible to meaningfully unit test". (Or, honestly, they should be unit testing it. Not you.)
If your public method calls your private one, then the same thing will happen in your tests. Tests are nothing more than code that can be run and debugged, so you can try that and see what happens.
Private methods can't be tested directly but they can be tested via their public callers which is what you are doing, so it's all good. Whether it's a good idea to have a setup like this well, that's a different story entirely but I am not going into that now.
Now, let's discuss what you are actually testing.
Unit tests should not have deep knowledge of the code they test. The reason is that you should have inputs and outputs and you shouldn't care what happens in between.
If you refactor the code and eliminate the private method then your test would break, even if your inputs and outputs to your public method remain the same. That's not a good position to be in, this is what we call brittle tests.
So add your functional tests around the public method, verify that you get what you expect, and don't worry about whether it calls your private method or not.
When you say you need to know whether your private methods are called, this can have two different interpretations:
You want to ensure that the private method is called within one particular test, making it a success criterion for that very test.
You want to know if the private method is called at all, by any of your test cases. You might be interested in this because you want to be sure if the private method is covered by your test suite, or as you said, just to form an understanding of what is actually going on in your code.
Regarding the second interpretation: if you want to understand what is going on in the code, a good approach is to use a debugger and just step through the code to see which function is called. As I am not a C# expert, I cannot recommend a specific debugging tool, but finding recommendations on the web should not be difficult. This approach fulfills your requirement of not changing the source code.
Another possibility, in particular if you are interested in whether your private function is covered by the tests, is to use a test coverage tool for C#. The coverage tool would show you whether or not the private method was called. Again, this would not require any changes to the source code.
Regarding the first interpretation of your question: if you want to make the calling of some private function part of your test's success criterion, you preferably do this with tests that use the public API. Then, in these tests, you should be able to judge whether the private function was called from the effect it has on the test result.
And, in contrast to other opinions, you should test the implementation. The primary goal of unit testing is to find the bugs in the code, and different implementations have different bugs. This is why people also use coverage tools: to see whether they have covered the code of their implementation. And coverage is not enough; you also need to check boundary cases of expressions, etc. Certainly, having maintainable tests, and tests that do not break unnecessarily under refactoring, are good goals (which is why testing through the public API is typically a good approach, though not always), but they are secondary compared to the goal of finding all the bugs.

Do we lie to ourselves when creating a mock for a unit test

I just started learning all about unit testing yesterday, and today I was reading about mocks, NSubstitute in particular.
The problem I have is that I don't get the philosophy and way of thinking behind it. For example, reading my book, I came to this:
[Test]
public void Returns_ByDefault_WorksForHardCodedArgument()
{
    IFileNameRules fakeRules = Substitute.For<IFileNameRules>();
    fakeRules.IsValidLogFileName(Arg.Any<String>())
        .Returns(true);

    Assert.IsTrue(fakeRules.IsValidLogFileName("anything.txt"));
}
OK, so first we make a fake object to represent the interface of the actual class, which has a real method that does some actual work. Then we call that method, but we also tell it to return true.
Then we assert to see if it is returning true? Well, we just told it one line before to return true! Now we test that it returns true, and we say OK, good, it passed?
I don't get it! To me it feels like this: a teacher tells a kid to answer "yes" to a question in order to pass the exam, then asks that question, the kid says "yes", and the exam is passed?
As per the comments on this question, this test is likely demonstrating how the mocking library works. For our test code we are extremely unlikely (partial mocks being a potential exception) to mock out the class we want to test. Instead we may want to mock out some things the code uses, in order to get more deterministic tests, or faster tests, or tests that simulate rare events, etc.
To your direct question: yes, I guess we are sort of lying to ourselves when we mock out dependencies for a test. We are saying "let's pretend that our dependency does X, then check that our code does Y". Now, it is possible the dependency never actually does X. To me, the aim of mocking is to start off with this fiction, then test our dependency as well and make sure it actually does do X, to the point where the fiction ends up matching reality.
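A more realistic use of the same library looks like this sketch: the substitute stands in for a dependency, and the assertion targets a real class under test. LogAnalyzer here is a made-up consumer of IFileNameRules, not from the book:

```csharp
[Test]
public void Analyze_ValidFileName_ReturnsTrue()
{
    // The fake is a *dependency*, not the class under test.
    IFileNameRules fakeRules = Substitute.For<IFileNameRules>();
    fakeRules.IsValidLogFileName(Arg.Any<string>()).Returns(true);

    // LogAnalyzer is the real production code being tested;
    // it consumes the fake through its constructor.
    var analyzer = new LogAnalyzer(fakeRules);

    Assert.IsTrue(analyzer.Analyze("anything.txt"));
}
```

Here the test no longer asserts on the mock's own canned answer; it checks how the real code behaves when its dependency returns a known value.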
The purpose of testing is to check every way a method could possibly behave. If you tell the method to return true and it returns false, obviously something is wrong with the method you wrote. Sometimes the most complex issues can be solved by finding simple mistakes in your code (in this case, checking that the method will actually return true when asked to). If it fails to do so, you done messed up.

When is it OK to group similar unit tests?

I'm writing unit tests for a simple IsBoolean(x) function that tests whether a value is boolean. There are 16 different values I want to test.
Will I be burnt in hell, or mocked ruthlessly by the .NET programming community (which would be worse?), if I don't break them up into individual unit tests, and instead run them together as follows:
[TestMethod]
public void IsBoolean_VariousValues_ReturnsCorrectly()
{
    //These should all be considered Boolean values
    Assert.IsTrue(General.IsBoolean(true));
    Assert.IsTrue(General.IsBoolean(false));
    Assert.IsTrue(General.IsBoolean("true"));
    Assert.IsTrue(General.IsBoolean("false"));
    Assert.IsTrue(General.IsBoolean("tRuE"));
    Assert.IsTrue(General.IsBoolean("fAlSe"));
    Assert.IsTrue(General.IsBoolean(1));
    Assert.IsTrue(General.IsBoolean(0));
    Assert.IsTrue(General.IsBoolean(-1));

    //These should all be considered NOT boolean values
    Assert.IsFalse(General.IsBoolean(null));
    Assert.IsFalse(General.IsBoolean(""));
    Assert.IsFalse(General.IsBoolean("asdf"));
    Assert.IsFalse(General.IsBoolean(DateTime.MaxValue));
    Assert.IsFalse(General.IsBoolean(2));
    Assert.IsFalse(General.IsBoolean(-2));
    Assert.IsFalse(General.IsBoolean(int.MaxValue));
}
I ask this because "best practice" I keep reading about would demand I do the following:
[TestMethod]
public void IsBoolean_TrueValue_ReturnsTrue()
{
    //Arrange
    var value = true;

    //Act
    var returnValue = General.IsBoolean(value);

    //Assert
    Assert.IsTrue(returnValue);
}

[TestMethod]
public void IsBoolean_FalseValue_ReturnsTrue()
{
    //Arrange
    var value = false;

    //Act
    var returnValue = General.IsBoolean(value);

    //Assert
    Assert.IsTrue(returnValue);
}
//Fell asleep at this point
For the 50+ functions and 500+ values I'll be testing against, this seems like a total waste of time... but it's best practice!!!!!
-Brendan
I would not worry about it. This sort of thing isn't the point. JB Rainsberger talked about this briefly in his talk Integration Tests Are a Scam. He said something like, "If you have never forced yourself to use one assert per test, I recommend you try it for a month. It will give you a new perspective on testing, and teach you when it matters to have one assert per test, and when it doesn't". IMO, this falls into the "doesn't matter" category.
Incidentally, if you use nunit, you can use the TestCaseAttribute, which is a little nicer:
[TestCase(true)]
[TestCase("tRuE")]
[TestCase(false)]
public void IsBoolean_ValidBoolRepresentations_ReturnsTrue(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.True);
}

[TestCase("-3.14")]
[TestCase("something else")]
[TestCase(7)]
public void IsBoolean_InvalidBoolRepresentations_ReturnsFalse(object candidate)
{
    Assert.That(BooleanService.IsBoolean(candidate), Is.False);
}
EDIT: wrote the tests in a slightly different way, that I think communicates intent a little better.
Although I agree it's best practice to separate the values in order to identify errors more easily, I think one still has to use common sense and follow such rules as guidelines rather than absolutes. You want to minimize the assertion count in a unit test, but what's generally most important is to ensure a single concept per test.
In your specific case, given the simplicity of the function, I think that the one unit test you provided is fine. It's easy to read, simple, and clear. It also tests the function thoroughly and if ever it were to break somewhere down the line, you would be able to quickly identify the source and debug it.
As an extra note, in order to maintain good unit tests, you'll want to always keep them up to date and treat them with the same care as you do the actual production code. That's in many ways the greatest challenge. Probably the best reason to do Test Driven Development is how it actually allows you to program faster in the long run because you stop worrying about breaking the code that exists.
It's best practice to split each of the values you want to test into separate unit tests. Each unit test should be named specifically to the value you're passing and the expected result. If you were changing code and broke just one of your tests, then that test alone would fail and the other 15 would pass. This buys you the ability to instantly know what you broke without then having to debug the one unit test and find out which of the Asserts failed.
Hope this helps.
I can't comment on "Best Practice" because there is no such thing.
I agree with what Ayende Rahien says in his blog:
At the end, it boils down to the fact that I don't consider tests to be, by themselves, a value to the product. Their only value is their binary ability to tell me whether the product is okay or not. Spending a lot of extra time on the tests distracts from creating real value, shippable software.
If you put them all in one test and this test fails "somewhere", then what do you do? Either your test framework will tell you exactly which line it failed on, or, failing that, you step through it with a debugger. The extra effort required because it's all in one function is negligible.
The extra value of knowing exactly which subset of tests failed in this particular instance is small, and overshadowed by the ponderous amount of code you had to write and maintain.
Think for a minute the reasons for breaking them up into individual tests. It's to isolate different functionality and to accurately identify all the things that went wrong when a test breaks. It looks like you might be testing two things: Boolean and Not Boolean, so consider two tests if your code follows two different paths. The bigger point, though, is that if none of the tests break, there are no errors to pinpoint.
If you keep running them, and later have one of these tests fail, that would be the time to refactor them into individual tests, and leave them that way.

Unit test passes when in debug but fails when run

A search method returns any matching Articles and the most recent Non-matching articles up to a specified number.
Prior to being returned, the IsMatch property of the matching articles is set to true as follows:
articles = matchingArticles.Select(c => { c.IsMatch = true; return c; }).ToList();
In a test of this method,
[Test]
public void SearchForArticle1Returns1MatchingArticleFirstInTheList()
{
    using (var session = _sessionFactory.OpenSession())
    {
        var maxResults = 10;
        var searchPhrase = "Article1";
        IArticleRepository articleRepository = new ArticleRepository(session);

        var articles = articleRepository.GetSearchResultSet(searchPhrase, maxResults);

        Assert.AreEqual(10, articles.Count);
        Assert.AreEqual(1, articles.Where(a => a.Title.Contains(searchPhrase)).Count());

        var article = articles[0];
        Assert.IsTrue(article.Title.Contains(searchPhrase));
        Assert.IsTrue(article.IsMatch);
    }
}
All assertions pass when the test is run in debug; however, the final assertion fails when the test is run in release:
Expected: True
But was: False
In the app itself the response is correct.
Any ideas as to why this is happening?
Edit:
I figured out what the problem is. It's essentially a race condition. When I set up the tests, I drop the db table, recreate it, and populate it with the test data. Since the search relies on full-text search, I create a text index on the relevant columns and set it to auto-populate. When the test is run in debug, there appears to be sufficient time to populate the text index, and the search query returns matches. When I run the test normally, I don't think the index has been populated in time, so no matches are returned and the test fails. It's similar to issues with DateTimes. If I put a delay between creating the catalog and running the test, the test passes.
Pones, you have since clarified that the unit test fails when not debugging.
At this stage it could be anything, but you should continue to run the unit test without debugging and insert the following statement with a condition you know (or think you know) is true:
if (condition)
    Debugger.Launch();
This will do the obvious and allow you to zero in on what's going wrong. One place I suggest is the IsMatch property (for starters).
Another common place you can run into issues like this is using DateTime's. If your unit test is running 'too fast' then it may break an assumption you had.
Obviously the problem will be different for other users, but I just hit it, and figured my solution may help some. Basically when you are running in debug mode, you are running a single test only. When you are running in run mode, you are running multiple tests in addition to the one you are having a problem with.
In my situation the problem was those other tests writing to a global list that I was not explicitly clearing in my test setup. I fixed the issue by clearing the list at the beginning of the test.
My advice, to see whether this is the type of problem you are facing, would be to disable all other tests and run only the test you have an issue with. If it works when run by itself, but not with the others, you'll know you have some dependency between tests.
Another tip is to use Console.WriteLine("test") statements in the test. That's actually how I found that my list had leftover items from another test.
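If shared state between tests turns out to be the culprit, resetting it in the setup method makes each test independent of run order. This is a sketch; GlobalCache is a stand-in for whatever global list or static collection your tests share:

```csharp
[SetUp]
public void Setup()
{
    // Reset shared state so items left behind by earlier tests
    // can't leak into this one when the full suite runs.
    GlobalCache.Items.Clear();
}
```

The same reset can go in [TearDown] instead, but clearing in setup also protects against tests that crashed before their own cleanup ran.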
Try printing out the actual results that you are comparing with the expected values, in both debug and normal runs.
In my case, I created entities (JBA) in the test method.
In debug mode, the generated ids were 1, 2 and 3, but in normal running mode they were different.
That caused my hard-coded values to make the test fail, so I changed the test to get the id from the entity instead of hard-coding it.
Hope this helps.
