C# unit test with TestCase() attribute doesn't clean database/cache? - c#

Weird thing, hope you can help.
[TestFixture]
public class TestClass
{
    [TestCase(Size.Big, Color.Blue)]
    [TestCase(Size.Big, Color.Red)]
    [TestCase(Size.Small, Color.Blue)]
    [TestCase(Size.Small, Color.Red)]
    public void TestChunkAndRun(Size a, Color b)
    {
        using (new TransactionScope())
        {
            try
            {
                // Data generation + test
            }
            finally
            {
                // Manually rolling back, disposing objects
            }
        }
    }
}
With this code, I am executing the unit test 4 times with different parameters. The unit test generates some data for the test itself. In the database, 'Size' is part of a unique index, so it has to be unique.
The problem is that (no matter what order the tests are executed in) the 3rd and 4th test cases ALWAYS fail due to a duplicate row in the database.
If I execute the tests one by one, separately, they pass. Only when I execute them as one group (no matter in which order) do the last 2 fail, even when I manually roll back the transaction.
The weird part is that the tables are indeed empty before each test. Somehow the data is being kept between the TestCases, so I get the duplicate error.
Any idea what's happening?
Additional question: what's the difference between selecting multiple tests and clicking 'Run All', versus running the tests one by one?

This post helped me with my issue.
There were some readonly fields that were initialized outside the methods. Upon moving the initialization to a [TestInitialize]/[SetUp] method, it worked like a charm.
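A minimal sketch of that fix (NUnit syntax; the type and helper names are hypothetical). NUnit creates one fixture instance and reuses it for every [TestCase] row, so a field initializer runs only once, while a [SetUp] method runs before every case:

```csharp
[TestFixture]
public class TestClass
{
    // Was: private readonly TestData _data = CreateTestData();
    // A field initializer runs once per fixture instance, and NUnit reuses
    // the same instance for every [TestCase] row -- state leaks between cases.
    private TestData _data;   // TestData / CreateTestData are hypothetical names

    [SetUp]
    public void Init()
    {
        // Runs before EVERY test case, so each [TestCase] starts fresh.
        _data = CreateTestData();
    }

    [TestCase(Size.Big, Color.Blue)]
    [TestCase(Size.Small, Color.Red)]
    public void TestChunkAndRun(Size a, Color b)
    {
        // Data generation + test using _data
    }
}
```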

Related

How to stop XUnit Theory on the first fail?

I use Theory with MemberData like this:
[Theory]
[MemberData(nameof(TestParams))]
public void FijewoShortcutTest(MapMode mapMode)
{
...
and when it works, it is all fine, but when it fails, xUnit iterates over all the data I pass as parameters. In my case this is a fruitless exercise; I would like to stop short -- i.e. when the first set of parameters makes the test fail, skip the rest (because they will fail as well -- again, that is my case, not a general rule).
So how to tell XUnit to stop Theory on the first fail?
The point of a Theory is to have multiple independent tests running the same code on different data. If you only actually want one test, just use a Fact and iterate over the data you want to test within the method:
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        // Test code here
    }
}
That will mean you can't easily run the test for just one MapMode, though. Unless it takes a really long time to execute the tests for some reason, I'd just live with "if something is badly messed up, I get a lot of broken tests".
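One refinement to the Fact-with-loop approach (a sketch; RunShortcut stands in for the real test body): include the current item in the failure message, so you still know which MapMode broke. Because Assert.True throws on the first failure, the loop also stops short, which is the behavior asked for:

```csharp
[Fact]
public void FijewoShortcutTest()
{
    foreach (MapMode mapMode in TestParams)
    {
        bool ok = RunShortcut(mapMode);   // hypothetical: the actual test logic

        // The message identifies which MapMode failed; the loop stops at the
        // first failure because Assert.True throws on a false condition.
        Assert.True(ok, $"Shortcut failed for MapMode {mapMode}");
    }
}
```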

Execute some code after a successful SQL Unit Test

We have an application that has many SPROCs being developed and maintained by multiple developers, and we are trying to automate the process of keeping track of modifying and testing the SPROCs. We currently have a table in our database that is populated and modified by a trigger that fires when a SPROC is created, modified, or deleted. In this table there is a column that specifies whether the SPROC was tested and deemed a success by a unit test. We are using Visual Studio's Test Explorer and Unit Test designer to handle the SQL unit tests. We have them functioning fine, but we are trying to automate updating the database after a test succeeds. Is there some kind of event or similar that is raised by every successful unit test? If not, is there at least something that can catch the results and allow some additional logic after a(n) (un)successful execution?
Within the TestMethod itself, one of the objects returned is the SqlExecutionResult[] testResults object. Within this object is the hasError attribute, which on success is set to true. However, it seems testResults isn't populated on some errors and is only ever null. Is there some method or similar that is called by ALL unit tests upon completion and might be able to look back at/use testResults to confirm success? Something that can be used to catch the output from all unit tests?
We found the solution using a couple of slightly adjacent posts.
It comes down to creating a base test class that contains only a TestCleanup() method, which references TestContext.CurrentTestOutcome.
The other test classes then use this as a base class and remove their own TestCleanup() methods. This allows any kind of extra work to be done based on the success or failure of a unit test. To avoid this extra work, you could probably create a template using the base class; however, at this time we are not putting in the effort to figure that out, since it's not a necessity.
These are the posts we referenced:
How to get test result status from MSTest?
In MSTest how to check if last test passed (in TestCleanup)
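A sketch of the base-class approach described above (MSTest; the helper name MarkTestedInDatabase is hypothetical). TestContext.CurrentTestOutcome is meaningful inside [TestCleanup], which MSTest runs after every test, passed or failed:

```csharp
public class SprocTestBase
{
    // MSTest injects this property automatically into each test class.
    public TestContext TestContext { get; set; }

    [TestCleanup]
    public void BaseCleanup()
    {
        // Runs after every test in every derived class.
        if (TestContext.CurrentTestOutcome == UnitTestOutcome.Passed)
        {
            // Hypothetical helper: update the tracking table to mark the
            // tested SPROC as a success, e.g. via a stored procedure.
            MarkTestedInDatabase(TestContext.TestName);
        }
    }

    private void MarkTestedInDatabase(string testName)
    {
        // ... execute the UPDATE against the tracking table ...
    }
}

// Other test classes inherit the cleanup instead of defining their own:
[TestClass]
public class MySprocTests : SprocTestBase
{
    [TestMethod]
    public void MySproc_ReturnsExpectedRows() { /* ... */ }
}
```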
This is something that runs for all tests executed, not specifically passing tests only. You can create a stored procedure that resets and updates that testing table/db as you need, and then use the [SetUp] method to ensure it runs before every test is executed.
As a basic example, using NUnit I have done:
private IMyRepo _repo;

[SetUp]
public void Init()
{
    _repo = new MyRepoImpl();

    // Reset db to a known state for each test
    using (SqlConnection c = new SqlConnection(Settings.GetConnString()))
    using (SqlCommand cmd = new SqlCommand
    {
        Connection = c,
        CommandText = "DbReset",
        CommandType = CommandType.StoredProcedure
    })
    {
        c.Open();
        cmd.ExecuteNonQuery();
    }
}
You can set up the DbReset stored procedure used here to update that specific column.
But a major caveat here is that this will fire before every test, successful or failed, and I'm not sure there is a specialization for something to trigger only for passed tests.
In NUnit, [TearDown] methods are guaranteed to fire as long as the [SetUp] method did not throw an exception, so you will have the same issue there. If there were a scenario where [TearDown] didn't fire when a test fails, that could have been a hacky approach to your problem, but these types of methods are typically aimed at object creation and cleanup, so I highly doubt any testing suite attempts this (as, even for a failed test, the developer would still want cleanup to take place).
Sorry I can't provide the fix to the exact scenario you have, but I hope this gets you closer to your answer
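For completeness: in NUnit 3 a [TearDown] method can inspect the outcome of the test that just ran via TestContext.CurrentContext.Result.Outcome, so the database update can be limited to passed tests. A sketch (the body of the passed-test branch is left hypothetical):

```csharp
[TearDown]
public void RecordOutcome()
{
    // NUnit 3: Outcome.Status is Passed, Failed, Skipped, etc.
    // (TestStatus lives in NUnit.Framework.Interfaces.)
    if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Passed)
    {
        // Hypothetical: mark this test's SPROC as successfully tested,
        // e.g. by executing an update stored procedure here.
    }
}
```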

Returning multiple assert messages in one test

I'm running some tests on my code at the moment. My main test method is used to verify some data, but within that check there is a lot of potential for it to fail at any one point.
Right now, I've set up multiple Assert.Fail statements within my method, and when the test fails, the message I type is displayed as expected. However, if my method fails multiple times, it only shows the first error. Only when I fix that do I discover the second error.
None of my tests are dependent on any others that I'm running. Ideally, what I'd like is for my failure output to display every failed message in one pass. Is such a thing possible?
As per the comments, here are how I'm setting up a couple of my tests in the method:
private bool ValidateTestOne(EntityModel.MultiIndexEntities context)
{
    if (context.SearchDisplayViews.Count() != expectedSdvCount)
    {
        Assert.Fail(" Search Display View count was different from what was expected");
    }
    if (sdv.VirtualID != expectedSdVirtualId)
    {
        Assert.Fail(" Search Display View virtual id was different from what was expected");
    }
    if (sdv.EntityType != expectedSdvEntityType)
    {
        Assert.Fail(" Search Display View entity type was different from what was expected");
    }
    return true;
}
Why not have a string/stringbuilder that holds all the fail messages, check for its length at the end of your code, and pass it into Assert.Fail? Just a suggestion :)
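That suggestion can be sketched with a plain list of failure messages (the helper name and parameters here are illustrative, not from the original code): collect every mismatch, then fail once with all of them.

```csharp
using System.Collections.Generic;

public static class ValidationHelper
{
    // Collects every failed check instead of stopping at the first one.
    // Returns an empty list when everything matches.
    public static List<string> CollectFailures(
        int actualCount, int expectedCount,
        int actualVirtualId, int expectedVirtualId)
    {
        var failures = new List<string>();
        if (actualCount != expectedCount)
            failures.Add($"Count was {actualCount}, expected {expectedCount}");
        if (actualVirtualId != expectedVirtualId)
            failures.Add($"VirtualID was {actualVirtualId}, expected {expectedVirtualId}");
        return failures;
    }
}

// In the test, fail once with every message:
//   var failures = ValidationHelper.CollectFailures(...);
//   if (failures.Count > 0)
//       Assert.Fail(string.Join(System.Environment.NewLine, failures));
```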
The NUnit test runner (assuming that's what you are using) is designed to break out of the test method as soon as anything fails.
So if you want every failure to show up, you need to break your test up into smaller, single-assert ones. In general, you only want to test one thing per test anyway.
On a side note, using Assert.Fail like that isn't very semantically correct. Consider using the other built-in methods (like Assert.AreEqual) and only using Assert.Fail when the other methods are not sufficient.
None of my tests are dependent on any others that I'm running. Ideally
what I'd like is the ability to have my failure message to display
every failed message in one pass. Is such a thing possible?
It is possible only if you split your test into several smaller ones.
If you are afraid of the code duplication that usually exists when tests are complex, you can use setup methods. They are usually marked with attributes:
NUnit - [SetUp],
MSTest - [TestInitialize],
xUnit - the constructor.
The following code shows how your test can be rewritten:
public class HowToUseAsserts
{
    int expectedSdvCount = 0;
    int expectedSdVirtualId = 0;
    string expectedSdvEntityType = "";
    EntityModelMultiIndexEntities context;

    public HowToUseAsserts()
    {
        context = new EntityModelMultiIndexEntities();
    }

    [Fact]
    public void Search_display_view_count_should_be_the_same_as_expected()
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
    }

    [Fact]
    public void Search_display_view_virtual_id_should_be_the_same_as_expected()
    {
        context.VirtualID.Should().Be(expectedSdVirtualId);
    }

    [Fact]
    public void Search_display_view_entity_type_should_be_the_same_as_expected()
    {
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
So your test names could provide the same information as you would write as messages:
Right now, I've set up multiple Assert.Fail statements within my
method and when the test is failed, the message I type is displayed as
expected. However, if my method fails multiple times, it only shows
the first error. When I fix that, it is only then I discover the
second error.
This behavior is correct and many testing frameworks follow it.
I'd recommend that you stop using Assert.Fail(), because it forces you to write a specific message for every failure. The common asserts provide good enough messages, so you can replace your code with the following lines:
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
Assert.Equal(expectedSdvCount, context.SearchDisplayViews.Count());
Assert.Equal(expectedSdVirtualId, context.VirtualID);
Assert.Equal(expectedSdvEntityType, context.EntityType);
But I'd recommend starting to use should-frameworks like Fluent Assertions, which make your code more readable and provide better output.
// Act
var context = new EntityModelMultiIndexEntities();
// Assert
context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
context.VirtualID.Should().Be(expectedSdVirtualId);
context.EntityType.Should().Be(expectedSdvEntityType);
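On that note, Fluent Assertions can also report all failures from a single test in one pass via its AssertionScope, which is exactly the "every failed message in one pass" behavior the question asks for. A sketch, reusing the types and fields from the example above:

```csharp
using FluentAssertions;
using FluentAssertions.Execution;

[Fact]
public void Search_display_view_should_match_expectations()
{
    var context = new EntityModelMultiIndexEntities();

    // All Should() failures inside the scope are collected and reported
    // together when the scope is disposed, instead of stopping at the first.
    using (new AssertionScope())
    {
        context.SearchDisplayViews.Should().HaveCount(expectedSdvCount);
        context.VirtualID.Should().Be(expectedSdVirtualId);
        context.EntityType.Should().Be(expectedSdvEntityType);
    }
}
```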

Unit test passes when in debug but fails when run

A search method returns any matching Articles and the most recent Non-matching articles up to a specified number.
Prior to being returned, the IsMatch property of the matching articles is set to true as follows:
articles = matchingArticles.Select(c => { c.IsMatch = true; return c; }).ToList();
In a test of this method,
[Test]
public void SearchForArticle1Returns1MatchingArticleFirstInTheList()
{
    using (var session = _sessionFactory.OpenSession())
    {
        var maxResults = 10;
        var searchPhrase = "Article1";
        IArticleRepository articleRepository = new ArticleRepository(session);
        var articles = articleRepository.GetSearchResultSet(searchPhrase, maxResults);
        Assert.AreEqual(10, articles.Count);
        Assert.AreEqual(1, articles.Where(a => a.Title.Contains(searchPhrase)).Count());
        var article = articles[0];
        Assert.IsTrue(article.Title.Contains(searchPhrase));
        Assert.IsTrue(article.IsMatch);
    }
}
All assertions pass when the test is run in debug, however the final assertion fails when the test is run in release:
Expected: True
But was: False
In the app itself the response is correct.
Any ideas as to why this is happening?
Edit:
I figured out what the problem is. It's essentially a race condition. When I set up the tests, I drop the db table, recreate it and populate it with the test data. Since the search relies on full-text search, I create a text index on the relevant columns and set it to auto-populate. When this is run in debug, there appears to be sufficient time to populate the text index, and the search query returns matches. When the test is run normally, I don't think the index has been populated in time, so no matches are returned and the test fails. It's similar to the issues with DateTimes. If I put a delay between creating the catalog and running the test, the test passes.
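Rather than a fixed delay, the test setup can poll SQL Server until the full-text catalog finishes populating. FULLTEXTCATALOGPROPERTY(..., 'PopulateStatus') returns 0 once the catalog is idle; the catalog name and connection handling below are illustrative assumptions:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class FullTextTestHelper
{
    public static void WaitForFullTextPopulation(string connString, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                // 'ArticlesCatalog' is a hypothetical catalog name.
                "SELECT FULLTEXTCATALOGPROPERTY('ArticlesCatalog', 'PopulateStatus')",
                conn))
            {
                // 0 = idle (population complete); anything else = still working.
                while ((int)cmd.ExecuteScalar() != 0)
                {
                    if (DateTime.UtcNow > deadline)
                        throw new TimeoutException("Full-text population did not finish in time.");
                    Thread.Sleep(100);
                }
            }
        }
    }
}
```

Calling this between creating the catalog and running the queries makes the test deterministic instead of timing-dependent.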
Pones, you have since clarified that the unit test fails when not debugging.
At this stage it could be anything; however, you should continue to run the unit test without debugging and insert the following statement somewhere you know (or think you know) is true:
if (condition)
    Debugger.Launch();
This will do the obvious and allow you to zone in on what's going wrong. One place I suggest is the IsMatch property (for starters).
Another common place you can run into issues like this is with DateTimes. If your unit test is running 'too fast', then it may break an assumption you had.
Obviously the problem will be different for other users, but I just hit it, and figured my solution may help some. Basically when you are running in debug mode, you are running a single test only. When you are running in run mode, you are running multiple tests in addition to the one you are having a problem with.
In my situation the problem was those other tests writing to a global list that I was not explicitly clearing in my test setup. I fixed the issue by clearing the list at the beginning of the test.
My advice to see if this is the type of problem you are facing would be to disable all other tests and only run the test you have an issue with. If it passes when run by itself, but not with the others, you'll know you have some dependency between tests.
Another tip is to use Console.WriteLine("test") statements in the test. That's actually how I found that my list had leftover items from another test.
Try printing out the actual results that you are comparing with the expected values, in both debug and normal runs.
In my case, I created entities (JBA) in the test method.
In debug mode, the generated ids were 1, 2 and 3, but in normal running mode they were different.
That caused my hard-coded values to make the test fail, so I changed them to get the id from the entity instead of hard-coding it.
Hope this helps.

Writing unit test for persistent data creation and deletion

When writing a test for persistently stored data, I come up with a test along the lines of:
[TestMethod]
public void DoCreateDeleteTest() {
PersistentDataStore pds = new PersistentDataStore();
bool createSuccess = pds.Save("id", "payload");
Assert.AreEqual(true, createSuccess);
bool deleteSuccess = pds.Delete("id");
Assert.AreEqual(true, deleteSuccess);
}
As long as everything works, this seems fine. The function has no prior dependencies, and it cleans up after itself. The problem is when the .Save() method performs the save but returns false/failure: the assertion fires, the delete is not called, and the test doesn't clean up after itself.
After this, there is persisted data in the database with the name "id", and all future saves fail.
The only way I can think of to get around it is to do a precautionary delete before the save, but that seems like way too large a hack.
Put the delete in a method marked with the TestCleanup attribute (I assume you are using MSTest).
By the way, your test is also testing two different things: whether the save works, and whether the delete works. Tests should only test one thing at a time.
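A sketch of the [TestCleanup] suggestion (MSTest): cleanup runs after every test, whether it passed or failed, so the leftover row is removed even when the Save assertion fires.

```csharp
[TestClass]
public class PersistentDataStoreTests
{
    [TestMethod]
    public void SaveSucceeds()
    {
        var pds = new PersistentDataStore();
        Assert.IsTrue(pds.Save("id", "payload"));
    }

    [TestCleanup]
    public void Cleanup()
    {
        // Runs after every test, pass or fail, so a failed Save assertion
        // no longer leaves the "id" row behind for the next run.
        new PersistentDataStore().Delete("id");
    }
}
```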
Wrap both within the one transaction? Do a delete in a catch?
