Unit test passes when in debug but fails when run - c#

A search method returns any matching Articles and the most recent Non-matching articles up to a specified number.
Prior to being returned, the IsMatch property of the matching articles is set to true as follows:
articles = matchingArticles.Select(c => { c.IsMatch = true; return c; }).ToList();
In a test of this method,
[Test]
public void SearchForArticle1Returns1MatchingArticleFirstInTheList()
{
    using (var session = _sessionFactory.OpenSession())
    {
        var maxResults = 10;
        var searchPhrase = "Article1";
        IArticleRepository articleRepository = new ArticleRepository(session);

        var articles = articleRepository.GetSearchResultSet(searchPhrase, maxResults);

        Assert.AreEqual(10, articles.Count);
        Assert.AreEqual(1, articles.Where(a => a.Title.Contains(searchPhrase)).Count());

        var article = articles[0];
        Assert.IsTrue(article.Title.Contains(searchPhrase));
        Assert.IsTrue(article.IsMatch);
    }
}
All assertions pass when the test is run in debug; however, the final assertion fails when the test is run in release:
Expected: True
But was: False
In the app itself the response is correct.
Any ideas as to why this is happening?
Edit:
I figured out what the problem is. It's essentially a race condition. When I set up the tests, I drop the db table, recreate it, and populate it with the test data. Since the search relies on full-text search, I create a full-text index on the relevant columns and set it to auto-populate. When the test is run in debug, there appears to be sufficient time to populate the text index, and the search query returns matches. When the test is run normally, the index has not been populated in time, no matches are returned, and the test fails. It's similar to the issues with DateTimes. If I put a delay between creating the catalog and running the test, the test passes.
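For anyone hitting the same thing: instead of a fixed delay, the test setup can poll SQL Server until the catalog has finished populating. A minimal sketch, assuming a catalog named ArticlesCatalog (a placeholder) and an open SqlConnection:

// Poll until the full-text catalog reports an idle populate status (0),
// or give up after a timeout. Requires System.Data.SqlClient and System.Threading.
private static void WaitForFullTextPopulation(SqlConnection connection)
{
    var deadline = DateTime.UtcNow.AddSeconds(30);
    while (DateTime.UtcNow < deadline)
    {
        using (var cmd = new SqlCommand(
            "SELECT FULLTEXTCATALOGPROPERTY('ArticlesCatalog', 'PopulateStatus')", connection))
        {
            if ((int)cmd.ExecuteScalar() == 0)   // 0 = idle, population finished
                return;
        }
        Thread.Sleep(100);
    }
    throw new TimeoutException("Full-text catalog did not finish populating.");
}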

Pones, you have since clarified that the unit test fails when not debugging.
At this stage it could be anything; however, you should continue to run the unit test without debugging and insert the following statement somewhere, guarded by a condition you know (or think you know) is true:
if (condition)
    Debugger.Launch();
This will do the obvious and let you zone in on what's going wrong. One place I suggest (for starters) is the IsMatch property.
Another common source of issues like this is DateTimes. If your unit test runs 'too fast', it may break an assumption you had.
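To illustrate the DateTime point (a made-up sketch, not code from the question): an assertion like this can pass under the debugger, where wall-clock time elapses while you step, yet fail at full speed, where both reads can land on the same clock tick:

var before = DateTime.Now;
var entity = CreateEntity();   // hypothetical operation under test
var after = DateTime.Now;

// May fail in a normal run: DateTime.Now has limited resolution,
// so 'after' can equal 'before' when the code in between is fast.
Assert.IsTrue(after > before);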

Obviously the problem will be different for other users, but I just hit this, and figured my solution may help someone. Basically, when you run in debug mode, you are running a single test only; when you run in run mode, you are running multiple tests in addition to the one you are having a problem with.
In my situation the problem was those other tests writing to a global list that I was not explicitly clearing in my test setup. I fixed the issue by clearing the list at the beginning of the test, as sketched below.
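The fix looked roughly like this (nUnit shown; the GlobalCache name is illustrative, not from my actual code):

[SetUp]
public void Setup()
{
    // Other tests in the same run may have left entries behind,
    // so start this test from a known-empty state.
    GlobalCache.Items.Clear();
}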
My advice for checking whether this is the type of problem you are facing: disable all other tests and 'run' only the test you have an issue with. If it works when run by itself, but not with the others, you'll know you have a dependency between tests.
Another tip is to use Console.WriteLine("test") statements in the test. That's actually how I found my list had items left over from another test.

Try printing out the actual result that you are comparing with the expected value, in both a debug and a normal run.
In my case, I created entities (JBA) in the test method.
In debug mode the generated ids were 1, 2 and 3, but in normal running mode they were different.
That caused my hard-coded values to make the test fail, so I changed the assertions to get the id from the entity instead of hard-coding it.
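In code, the change looked roughly like this (names are illustrative, not my actual entities):

var entity = CreateJbaEntity();               // hypothetical helper that persists an entity
var loaded = repository.FindById(entity.Id);  // hypothetical repository call

// Fragile: Assert.AreEqual(1, loaded.Id); -- id 1 only shows up when the
// test runs in isolation, as it happened to in debug mode.
// Robust: compare against the id the entity was actually assigned.
Assert.AreEqual(entity.Id, loaded.Id);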
Hope this helps.

Related

How to stop XUnit Theory on the first fail?

I use Theory with MemberData like this:
[Theory]
[MemberData(nameof(TestParams))]
public void FijewoShortcutTest(MapMode mapMode)
{
    ...
and when it works, it is all fine, but when it fails, XUnit iterates over all the data I pass as parameters. In my case that is a fruitless exercise; I would like to stop short, i.e. when the first set of parameters makes the test fail, skip the rest (because they will fail as well -- again, that is my case, not a general rule).
So how do I tell XUnit to stop a Theory on the first failure?
The point of a Theory is to have multiple independent tests running the same code over different data. If you only actually want one test, just use a Fact and iterate over the data you want to test within the method:
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Test code here
    }
}
That will mean you can't easily run the test for just one MapMode, though. Unless it takes a really long time to execute the tests for some reason, I'd just live with "if something is badly messed up, I get a lot of broken tests".
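If you go the Fact route, one mitigation for the lost per-MapMode reporting (a sketch, not anything built into xUnit) is to wrap each iteration so the failure message names the value that broke:

foreach (MapMode mapMode in TestParams)
{
    try
    {
        // Test code here
    }
    catch (Exception e)
    {
        throw new Exception($"FijewoShortcutTest failed for MapMode '{mapMode}'", e);
    }
}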

Execute some code after a successful SQL Unit Test

We have an application with many SPROCs being developed and maintained by multiple developers, and we are trying to automate the process of tracking SPROC modifications and tests. We currently have a table in our database that is populated and modified by a trigger that fires when a SPROC is created, modified, or deleted. In this table there is a column that specifies whether the SPROC was tested and deemed a success by a unit test. We are using Visual Studio's Test Explorer and Unit Test designer to handle the SQL unit tests. We have them functioning fine, but we are trying to automate updating the database after a test succeeds. Is there some kind of event or something similar that is touched by every successful unit test? If not, is there at least something that can catch the results and allow some kind of additional logic after a(n) (un)successful execution?
Within the TestMethod itself, one of the objects returned is the SqlExecutionResult[] testResults object. Within this object is the hasError attribute, which on success is set to true. However, testResults doesn't seem to be populated on some errors and is then only ever null. Is there some method or something similar called by ALL unit tests upon completion that could look back at testResults to get confirmation of success? Something that can be used to catch the output from all unit tests?
We found the results using a couple of slightly adjacent posts.
It comes down to creating a base test class that has only a TestCleanup() method and references TestContext.CurrentTestOutcome.
The other test classes then use this as a base class and remove their own TestCleanup() methods. This allows any kind of extra work to be done based on the success or failure of a unit test. To avoid this manual work, you could probably create a template using the base class; however, at this time we are not putting in the effort to figure that out, since it's not a necessity.
These are the posts we referenced:
How to get test result status from MSTest?
In MSTest how to check if last test passed (in TestCleanup)
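A minimal sketch of that base class (MSTest; the body of the success branch is a placeholder for whatever update your tracking table needs):

[TestClass]
public abstract class SprocTestBase
{
    // MSTest injects the current test's context into this property.
    public TestContext TestContext { get; set; }

    [TestCleanup]
    public void BaseCleanup()
    {
        if (TestContext.CurrentTestOutcome == UnitTestOutcome.Passed)
        {
            // e.g. mark the SPROC as successfully tested in the tracking table
        }
    }
}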
This is something that runs for all executed tests, not just passing ones. You can create a stored procedure to reset and update that testing table/db as you need, and then use the [SetUp] method to ensure it runs before every test.
As a basic example, using nUnit, I have done:
private IMyRepo _repo;

[SetUp]
public void Init()
{
    _repo = new MyRepoImpl();
    // reset db to a known state for each test
    using (SqlConnection c = new SqlConnection(Settings.GetConnString()))
    {
        SqlCommand cmd = new SqlCommand
        {
            Connection = c,
            CommandText = "DbReset",
            CommandType = CommandType.StoredProcedure
        };
        c.Open();
        cmd.ExecuteNonQuery();
    }
}
You can set up the DbReset stored procedure used here to update that specific column.
But a major caveat: this will fire before every test, not just successful or failed ones, and I'm not sure there is a hook that triggers only for passed tests.
In nUnit, [TearDown] methods are guaranteed to fire as long as the [SetUp] method did not throw an exception, so you will have the same issue there. If there were a scenario where [TearDown] didn't fire when a test fails, that could have been a hacky way to solve your problem, but these methods are aimed at object creation and cleanup, so I highly doubt any testing suite skips them (even after a failed test, the developer still wants cleanup to take place).
Sorry I can't provide the fix for your exact scenario, but I hope this gets you closer to your answer.

Test does not fail at first run

I have the following test:
[Test]
public void VerifyThat_WhenProvidingAServiceOrderWithALinkedAccountGetSerivceProcessWithStatusReasonOfEndOfEntitlementToUpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders_TheProcessIsUpdatedWithAStatusReasonOfEndOfEntitlement()
{
    IFixture fixture = new Fixture()
        .Customize(new AutoMoqCustomization());
    Mock<ICrmService> crmService = new Mock<ICrmService>();
    fixture.Inject(crmService);

    var followupHandler = fixture.CreateAnonymous<FollowupForEndOfEntitlementHandler>();

    var accountGetService = fixture.Build<EndOfEntitlementAccountGetService>()
        .With(handler => handler.ServiceOrders, new HashedSet<EndOfEntitlementServiceOrder>
        {
            fixture.Build<EndOfEntitlementServiceOrder>()
                .With(order => order.AccountGetServiceProcess, fixture.Build<EndOfEntitlementAccountGetServiceProcess>()
                    .With(process => process.StatusReason, fixture.Build<StatusReason>()
                        .With(statusReason => statusReason.Id, MashlatReasonStatus.Worthiness)
                        .CreateAnonymous())
                    .CreateAnonymous())
                .CreateAnonymous()
        })
        .CreateAnonymous();

    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);

    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Never());
}
My problem is that it never fails on the first run, as TDD specifies it should.
What it should test is that whenever a process of a service order has a certain status value, no updates are performed.
Is this test checking what it should?
I'm struggling a bit to understand the question here...
Is your problem that this test passes on the first try?
If yes, that means one of two things:
your test has an error
you have already met this spec/requirement
Since the first has been ruled out, Green it is. Off you go to the next one on the list...
Somewhere down the line, I assume, you will implement more functionality that results in the expected method being called, i.e. when the status value is different, perform an update.
The fix for that test must ensure that both tests pass.
If not, give me more information to help me understand.
Following TDD methodology, we only write new tests for functionality that doesn't exist. If a test passes on the first run, it is important to understand why.
One of my favorite things about TDD is its subtle ability to challenge our assumptions and knock our egos flat. The practice of "calling your shots" is not only a great way to work through tests, but it's also a lot of fun. I love when a test fails when I expect it to pass: many great learning opportunities come from this, and time after time, evidence of working software trumps developer ego.
When a test passes when I think it shouldn't, the next step is to make it fail.
For example, your test, which expects that something doesn't happen, is guaranteed to pass if the implementation is commented out. Tamper with the logic that you think you are implementing by commenting it out or by altering the conditions of the implementation and verify if you get the same results.
If, after doing this, you're confident that the functionality is correct, write another test that proves the opposite. Will Update get called with different state or inputs?
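For example, a sketch of that opposite test, reusing the question's mocks (the arrange step that drives the process into an updatable status is elided):

[Test]
public void UpdatesTheProcess_WhenStatusReasonIsNotWorthiness()
{
    // ... same fixture and mock setup as above, but with a status
    // reason that should take the update path ...
    followupHandler.UpdateStatusAndStopReasonForAccountGetServiceProcessesAndServiceOrders(accountGetService);
    crmService.Verify(svc => svc.Update(It.IsAny<DynamicEntity>()), Times.Once());
}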
With both sets in place, you should be able to comment out that feature and have the ability to know in advance which test will be impacted. (8-ball, corner pocket)
I would also suggest that you add another assertion to the above test to ensure that the subject and functionality under test is actually being invoked.
Change the Times.Never() to Times.AtLeastOnce() and you've got a good start for TDD.
Trying to find nothing in nothing -- well, that's a good test, but not the way to start TDD. First go with the simple specification, the naive operation the user could do (from your viewpoint, of course).
As for the work you've already done, keep it for later, for when it fails.

Writing unit test for persistent data creation and deletion

When writing a test for persistently stored data, I come up with a test along the lines of:
[TestMethod]
public void DoCreateDeleteTest()
{
    PersistentDataStore pds = new PersistentDataStore();

    bool createSuccess = pds.Save("id", "payload");
    Assert.AreEqual(true, createSuccess);

    bool deleteSuccess = pds.Delete("id");
    Assert.AreEqual(true, deleteSuccess);
}
As long as everything works, this seems fine. The test has no prior dependencies and it cleans up after itself. The problem: when the .Save() method performs the save but returns false/failure, the assertion fires and the delete is never called, so the test doesn't clean up after itself.
After that, there is persisted data in the database with the name "id", and all future saves fail.
The only way I can think of to get around this is to do a precautionary delete before the save, but that seems like way too large a hack.
Put the delete in a method marked with the TestCleanup attribute (I assume you are using MSTest).
By the way, your test is also testing two different things: whether the save works and whether the delete works. Tests should only test one thing at a time.
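A minimal sketch of that cleanup method (MSTest; assumes the same PersistentDataStore and "id" key as the question):

[TestCleanup]
public void Cleanup()
{
    // Runs after every test, pass or fail, so the "id" record is
    // removed even when a failed assertion stops the test early.
    new PersistentDataStore().Delete("id");
}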
Wrap both within the one transaction? Do a delete in a catch?
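A sketch of the transaction idea, assuming PersistentDataStore enlists in ambient System.Transactions transactions (that enlistment is an assumption, not something the question states):

using (var scope = new TransactionScope())
{
    PersistentDataStore pds = new PersistentDataStore();
    Assert.IsTrue(pds.Save("id", "payload"));
    Assert.IsTrue(pds.Delete("id"));
    // Deliberately no scope.Complete(): the transaction rolls back on
    // Dispose, so nothing is left behind even if an assertion throws.
}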

Is it a good practice to use RowTest in a unit test

NUnit and MbUnit have a RowTest attribute that allows you to send different sets of parameters into a single test.
[RowTest]
[Row(5, 10, 15)]
[Row(3.5, 2.7, 6.2)]
[Row(-5, 6, 1)]
public void AddTest(double firstNumber, double secondNumber, double result)
{
    Assert.AreEqual(result, firstNumber + secondNumber);
}
I used to be a huge fan of this feature. I used it everywhere. However, lately I'm not sure it's a very good idea to use RowTest in unit tests. Here are my reasons:
A unit test must be very simple. If there's a bug, you don't want to spend a lot of time figuring out what your test tests. When you use multiple rows, each row has a different set of parameters and tests something different.
Also, I'm using TestDriven.NET, which allows me to run my unit tests from my IDE, Visual Studio. With TestDriven.NET I cannot instruct it to run a specific row; it will execute all the rows. Therefore, when I debug, I have to comment out all the other rows and leave only the one I'm working with.
Here's an example how would write my tests today:
[Test]
public void Add_with_positive_whole_numbers()
{
    Assert.AreEqual(15, 5 + 10);
}

[Test]
public void Add_with_one_decimal_number()
{
    Assert.AreEqual(6.2, 3.5 + 2.7);
}

[Test]
public void Add_with_negative_number()
{
    Assert.AreEqual(1, -5 + 6);
}
Having said that, I still occasionally use the RowTest attribute, but only when I believe it's not going to slow me down when I need to work on the test later.
Do you think it's a good idea to use this feature in a Unit test?
Yes. It's basically executing the same test over and over again with different inputs... saving you the trouble of repeating yourself for each distinct input combination.
Thus upholding the 'once and only once' or DRY principle. If you need to update this test, you just update one test (vs. multiple tests).
Each Row should be a representative input from a distinct set - i.e. this input is different from all others w.r.t. this function's behavior.
RowTest was actually a much-asked-for feature for NUnit, having originated in MbUnit... I think Schlapsi wrote it as an NUnit extension, which then got promoted to standard distribution status. The NUnit GUI also groups all RowTests under one node and shows which inputs failed/passed, which is cool.
The minor disadvantage of the 'need to debug' case is something I can personally live with. After all, it's a matter of temporarily commenting out a number of Row attributes (most of the time I can eyeball the function once I find that ScenarioX failed, and solve it without needing a step-through), or conversely just copying the test out and passing it the problematic inputs temporarily.
