Unit test fails when running all, but passes when running individually - C#

I have about 12 unit tests for different scenarios, and I need to call one async method in these tests (sometimes multiple times in one test). When I do "Run all", 3 of them will always fail. If I run them one by one using "Run selected test", they will pass. The exception I'm getting in the output is this:
System.AppDomainUnloadedException: Attempted to access an unloaded
AppDomain. This can happen if the test(s) started a thread but did not
stop it. Make sure that all the threads started by the test(s) are
stopped before completion.
I can't really share the code, as it's quite big and I don't know where to start, so here is an example:
[TestMethod]
public async Task SampleTest()
{
    var someProvider = new SomeProvider();
    var result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    NetworkController.Disable();
    result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    NetworkController.Enable();
}
EDIT:
The other 2 failing methods set time to the future and to the past respectively.
[TestMethod]
public async Task SetTimeToFutureTest()
{
    var someProvider = new SomeProvider();
    var today = TimeProvider.UtcNow().Date;
    var result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    TimeProvider.SetDateTime(today.AddYears(1));
    var result2 = await someProvider.IsSomethingValid();
    Assert.IsTrue(result2 == SomeProvider.Status.Expired);
}
Where TimeProvider looks like this:
public static class TimeProvider
{
    /// <summary>
    /// Normally this is a pass-through to DateTime.UtcNow, but it can be
    /// overridden with SetDateTime(..) for testing or debugging.
    /// </summary>
    public static Func<DateTime> UtcNow = () => DateTime.UtcNow;

    /// <summary>
    /// Set the time to return when TimeProvider.UtcNow() is called.
    /// </summary>
    public static void SetDateTime(DateTime newDateTime)
    {
        UtcNow = () => newDateTime;
    }

    public static void ResetDateTime()
    {
        UtcNow = () => DateTime.UtcNow;
    }
}
EDIT 2:
[TestCleanup]
public void TestCleanup()
{
    TimeProvider.ResetDateTime();
}
Other methods are similar; I simulate time/date changes, etc.
I tried calling the method synchronously by using .Result instead of await, etc., but it didn't help. I've read a ton of material on the web about this but I'm still struggling.
Did anyone run into the same problem? Any tips will be highly appreciated.

I can't see what you're doing in your test initialization or cleanup, but it could be that, since all of your test methods run asynchronously, the test runner is not allowing all tasks to finish before performing cleanup.
Are the same few methods failing when you run all of the tests, or is it random? Are you sure you are doing unit testing and not integration testing? The class name "NetworkController" gives me the impression that you may be doing more of an integration test. If that is the case and you are using a common class, provider, service, or storage medium (database, file system), then interactions or state changes caused by one test method could affect another test method's result.

When running tests in async/await mode, you will incur some lag. It looks like all your processing happens in memory, so the tests probably pass on a one-by-one basis because the lag is minimal. When multiple tests run asynchronously, the lag is enough to make the time-based results diverge.
I've run into this before with NUnit tests run by NCrunch where a DateTime component is being tested. You can mitigate it by reducing the precision of your validation/expiration logic to match on seconds instead of milliseconds, as long as that is permissible within your acceptance criteria. I can't tell from your code what logic drives the validation status or expiration date, but I'm willing to bet the async lag is the root cause of the test failures when they run concurrently.
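If that turns out to be the cause, here is a minimal sketch of the kind of truncation I mean; the extension method and the expiry variable are my own illustration, not code from the question.

using System;

public static class DateTimeExtensions
{
    // Drop sub-second precision so small scheduling lag cannot flip a comparison.
    public static DateTime TruncateToSecond(this DateTime value) =>
        new DateTime(value.Ticks - (value.Ticks % TimeSpan.TicksPerSecond), value.Kind);
}

// e.g. inside the expiration check:
// if (TimeProvider.UtcNow().TruncateToSecond() >= expiry.TruncateToSecond()) { /* expired */ }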

Both tests shown use the same static TimeProvider, so interference between ResetDateTime in the cleanup and TimeProvider.SetDateTime(today.AddYears(1)) in a test is to be expected. The NetworkController also appears to be a static resource, and connecting/disconnecting it could interfere with your other tests.
You can solve the issues in several ways:
get rid of static resources and use instances instead
lock the tests such that only one test can run at a time
Aside from that, almost every test framework offers more than just Assert.IsTrue. Doesn't your framework offer an Assert.AreEqual? That improves readability. Also, with more than one Assert in a test, custom messages indicating which of the asserts failed (or that an Assert is a pre-condition, not the actual test) are recommended. A sketch combining both suggestions follows.
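This is a rough sketch only, assuming SomeProvider could be changed to take its time source as a constructor argument; the IClock interface and that constructor are assumptions on my part, not something in your code.

using System;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Assumed abstraction: each test gets its own clock instance, so tests
// cannot interfere with each other through static state.
public interface IClock { DateTime UtcNow { get; } }

public sealed class FixedClock : IClock
{
    public FixedClock(DateTime now) { UtcNow = now; }
    public DateTime UtcNow { get; }
}

[TestClass]
public class SomeProviderTests
{
    [TestMethod]
    public async Task IsSomethingValid_OneYearAhead_ReportsExpired()
    {
        var future = DateTime.UtcNow.Date.AddYears(1);
        var someProvider = new SomeProvider(new FixedClock(future)); // assumed constructor

        var result = await someProvider.IsSomethingValid();

        // Assert.AreEqual reports expected vs. actual, and the message says which check failed.
        Assert.AreEqual(SomeProvider.Status.Expired, result, "one year in the future should be expired");
    }
}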

Related

Fixing flaky NUnit tests

I want to fix flaky tests that are semi-sensitive to resource availability.
On the local machine, when you run all the tests they will likely all pass without an issue, especially when run one by one. However, there are about 10k+ tests, and ~60 of them have anywhere from a 0.01% to 20% chance of failing when run on TeamCity continuous integration build agents. The CI build plus running all the tests and the other steps takes 6m:40s to 18m:30s. The chance of resource congestion increases when there are multiple agents on the same server; high load on one basically impacts the others.
The tests that fail use multiple threads or Reactive Extensions observables observed on the thread pool. In the following example the pipeline runner extends DataflowEx.
[Test]
public void Cancel_CancelsAllProcessingAtOnce()
{
    var items1 = 0;
    var items2 = 0;
    var pipe = Pipeline.For<int>()
        .AddAction(x => { Thread.Sleep(10); items1++; }, 1)
        .AddAction(x => { Thread.Sleep(20); items2++; }, 1);
    var runner = PipelineRunner.For(pipe);

    Task.Run(() => runner.Run(Enumerable.Range(0, 100)));
    Thread.Sleep(100);
    runner.Cancel();

    var lastBlock = pipe.Blocks.Last().Blocks.Last();
    lastBlock.Completion.Wait();

    Assert.IsTrue(lastBlock.Completion.IsCompleted);
    items1.Should().BeLessThan(1000);
    items1.Should().BeGreaterThan(items2);
}
Under normal conditions this is fine, but when the agent is starved of resources, items1 and items2 will both be 0 and therefore equal, and there are various other invalid states. In the case of observables, it seems that usually about one subscriber of a stream like the following will fail to produce an answer if the test only awaits a fixed amount of time.
var reportStream = _subject.BuildFilteredStream(runner.ReportsRaw, reportTypes)
    .SubscribeOn(testScheduler.TaskPool)
    .ObserveOn(testScheduler.TaskPool);
I have tried running the test fixtures as non-parallelizable and in the STA apartment state:
[NonParallelizable]
[Apartment(ApartmentState.STA)]
I have also tried making the tests run in a deterministic order by specifying the Order attribute, using RequiresThread, setting higher timeouts, using the Retry attribute (even 10 retries will still fail...), and changing some old tests to stop using Thread.Sleep and blocking waits in favour of Task.Delay and awaiting the async work.
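As a hypothetical illustration of that last kind of change (a sketch of the pattern, not my original code): instead of sleeping for a fixed time and hoping the agent is fast enough, poll for a condition with a generous timeout.

using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class TestWait
{
    // Polls until the condition holds or the timeout elapses; tolerant of slow CI agents.
    public static async Task UntilAsync(Func<bool> condition, TimeSpan timeout)
    {
        var stopwatch = Stopwatch.StartNew();
        while (!condition())
        {
            if (stopwatch.Elapsed > timeout)
                throw new TimeoutException($"Condition not met within {timeout}.");
            await Task.Delay(25);
        }
    }
}

// e.g. in the test above, instead of Thread.Sleep(100):
// await TestWait.UntilAsync(() => items1 > 0, TimeSpan.FromSeconds(5));
// runner.Cancel();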
I want to keep the setup where there are multiple agents on the same server, and for the large majority of tests running them in parallel should remain possible. What I seem to need is the ability to run tests only when CPU utilization does not exceed 80%, or when allocating threads is possible, or something like that; however, this seemingly exceeds what I can do with NUnit.
What should I try?

Integration testing garbage data

I have set up integration testing using MSTest. My integration tests create fake data and insert it into the database (real dependencies). For every business object, I have a method like this, which creates a "Fake" and inserts it into the db:
public static EventAction Mock()
{
    EventAction action = Fixture.Build<EventAction>().Create();
    action.Add(false);
    AddCleanupAction(action.Delete);
    AppendLog("EventAction was created.");
    return action;
}
I clean up all the fakes in [AssemblyCleanup]:
public static void CleanupAllMockData()
{
    foreach (Action action in CleanUpActions)
    {
        try
        {
            action();
        }
        catch
        {
            AppendLog($"Failed to clean up {action.GetType()}. It is possible that it was already cleaned up by parent objects.");
        }
    }
}
Now I have a big problem. In my continuous integration environment (TeamCity) we have a separate database for testing, and it cleans itself after every test run, but in my local environment the integration tests point to my local database. If I cancel the test run for any reason, that leaves a bunch of garbage data in my local database, because CleanupAllMockData() never gets called.
What is the best way to handle this? I couldn't find a way to intercept the test cancellation in MSTest.
I see two options for solving your problem:
Clean up the mock data before each run, and only before the run.
Wrap each test in a database transaction which is never committed. I explain this option here.
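A minimal MSTest sketch of the second option, assuming your data-access code enlists in the ambient transaction; the field and method names are mine.

using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EventActionIntegrationTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void OpenTransaction()
    {
        // Everything the test inserts happens inside this ambient transaction.
        _scope = new TransactionScope(TransactionScopeOption.RequiresNew,
                                      TransactionScopeAsyncFlowOption.Enabled);
    }

    [TestCleanup]
    public void RollBackTransaction()
    {
        // Complete() is never called, so disposing rolls everything back;
        // a cancelled run leaves no garbage because nothing is ever committed.
        _scope.Dispose();
    }
}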

Writing msUnit tests for asynchronous procedures

If you call the Start() method of a MyClass object, the object will start sending data via the DataEvent.
class MyClass
{
    // Is raised every time new data arrives
    public event DataEventHandler DataEvent;

    // Starts the data delivery process
    public void StartDataDelivery()
    {
    }
}
How do I write a test for that functionality which can guarantee that the DataEvent will be invoked at least three times during a fixed time period?
I haven't written any asynchronous unit tests yet. How is that done, assuming that someone else needs to understand the test later?
MSTest hasn't had any serious updates for some time and I don't see that changing.
I'd strongly recommend moving to xUnit. It supports async tests (just return a Task from the test and await to your heart's content), and is used by many new Microsoft projects.
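A rough xUnit sketch of such a test, assuming StartDataDelivery() begins raising DataEvent on its own; the handler signature and the two-second window are assumptions on my part.

using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

public class MyClassTests
{
    [Fact]
    public async Task DataEvent_IsRaisedAtLeastThreeTimes_WithinTwoSeconds()
    {
        var myClass = new MyClass();
        var count = 0;
        var thirdEvent = new TaskCompletionSource<bool>();

        // Assumed handler signature; adapt it to the real DataEventHandler delegate.
        myClass.DataEvent += (sender, args) =>
        {
            if (Interlocked.Increment(ref count) >= 3)
                thirdEvent.TrySetResult(true);
        };

        myClass.StartDataDelivery();

        // Pass as soon as the third event arrives; fail if the deadline elapses first.
        var finished = await Task.WhenAny(thirdEvent.Task, Task.Delay(TimeSpan.FromSeconds(2)));
        Assert.Same(thirdEvent.Task, finished);
    }
}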

NUnit and testing in different threads

I'm testing an application. A [TearDown] method calls another method which sends a request to a server. That request is pretty slow, and the server is unable to handle more than 3 requests at the same time.
So I decided to use a semaphore.
[TestFixture]
public class TestBase
{
    private const int MaxThreadsCount = 3;
    private readonly Semaphore _semaphore = new Semaphore(MaxThreadsCount, MaxThreadsCount);

    [SetUp]
    public virtual void Setup()
    {
    }

    [TearDown]
    public void CleanUp()
    {
        //...some code
        new Thread(_ => SendRequestAsync("url/of/a/server", parameters)).Start();
    }

    private void SendRequestAsync(string url, NameValueCollection parameters)
    {
        _semaphore.WaitOne();
        string result = MyServerHelper.SendRequest(url, parameters);
        Assert.That(string.IsNullOrEmpty(result), Is.False, "SendRequest returned false");
    }

    [Test]
    public void Test01()
    {
        Assert.AreEqual(1, 1);
    }

    [Test]
    public void Test02()
    {
        Assert.AreEqual(1, 1);
    }

    [Test]
    public void Test03()
    {
        Assert.AreEqual(1, 1);
    }

    //...........................

    [Test]
    public void TestN()
    {
        Assert.AreEqual(1, 1);
    }
}
However, it seems it does not work properly. There are now no records in the server's log file, which means the server does not receive any requests.
1) What did I do wrong?
2) How do I initialize a semaphore:
private readonly Semaphore _semaphore = new Semaphore(MaxThreadsCount, MaxThreadsCount);
or
private readonly Semaphore _semaphore = new Semaphore(0, MaxThreadsCount);
1) What did I do wrong?
The test runner is probably ending the test process before the thread has finished (or even started). You can confirm this with something like Fiddler by checking whether there is any communication between the test and the server.
Is there a reason why you need to run it in a separate thread? Unless you are specifically testing threaded code, avoid it, because it just creates complexity. Call it as you would normally; that also means any exception or error thrown will be caught by the test runner and reported in the test results.
If a test is taking too long (and you cannot fix the root cause, such as the server speed), consider an IoC container like Autofac or reference counting to share the expensive resource between the tests that need it. Also consider running the tests in parallel.
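As a sketch of calling it as you would normally, using the names from the question (any assertion failure in SendRequestAsync is then reported against the test that has just run):

[TearDown]
public void CleanUp()
{
    //...some code
    // No extra thread: the runner waits for this call to finish, and within a
    // single sequential test run the semaphore is no longer needed.
    SendRequestAsync("url/of/a/server", parameters);
}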
2) How do I initialize a semaphore:
The first argument to the Semaphore constructor is the initial number of requests that can be granted concurrently (that is, the number of free slots, not the number of requests currently running). Since you want up to three concurrent requests from the start, new Semaphore(MaxThreadsCount, MaxThreadsCount) is the right initialization; with an initial count of 0, every WaitOne would block until something called Release. Note also that SendRequestAsync never calls Release, so even with the correct initial count the fourth request would wait forever.
Note that a semaphore may stop this test assembly from sending more than three requests at a time, but it will not help the server if tests from other runs hit it concurrently; you probably realise that already.
My understanding is that unit tests should test small units of functionality. You shouldn't need to create multiple threads to get your tests working. If you have slow external dependencies (like a network connection or database), you can define interfaces to abstract them away.
You should be able to test the behaviour you want without the threads. We can just assume that threads work; we're worried about the code you have. Presumably, somewhere in your code there is a counter that indicates the number of active connections or requests, or some other way of determining whether you can accept another connection.
You want to test what happens when a request comes in while you are already at the max.
So write a test that does that: set the counter to the max, call the open-connection code, and verify that it fails with the error you expect.
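Here is a hypothetical NUnit sketch of that test; ConnectionLimiter, its members, and the exception type are all invented for illustration.

using System;
using NUnit.Framework;

// Invented system under test: tracks active requests against a limit.
public class ConnectionLimiter
{
    private readonly int _max;
    public ConnectionLimiter(int max) { _max = max; }
    public int ActiveRequests { get; set; }

    public void OpenConnection()
    {
        if (ActiveRequests >= _max)
            throw new InvalidOperationException("Server is at capacity.");
        ActiveRequests++;
    }
}

[TestFixture]
public class ConnectionLimiterTests
{
    [Test]
    public void OpenConnection_WhenAtMax_Throws()
    {
        var limiter = new ConnectionLimiter(max: 3) { ActiveRequests = 3 };
        Assert.Throws<InvalidOperationException>(() => limiter.OpenConnection());
    }
}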

Why do my tests fail when run together, but pass individually?

When I write a test in Visual Studio, I check that it works by saving, building and then running the test in NUnit (right-click on the test, then run).
The test works, yay...
so I move on...
Now I have written another test, and it works because I have saved and tested it as above. But they don't work when they are run together.
Here are my two tests that work when run as individuals but fail when run together:
using System;
using NUnit.Framework;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium;

namespace Fixtures.Users.Page1
{
    [TestFixture]
    public class AdminNavigateToPage1 : SeleniumTestBase
    {
        [Test]
        public void AdminNavigateToPage1()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            NavigateTo<Page1>();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }

        [Test]
        public void AdminNavigateToPage1ViaMenu()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            Driver.FindElement(By.Id("menuitem1")).Click();
            Driver.FindElement(By.Id("submenuitem4")).Click();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }
    }
}
When the second test fails because they have been run together, NUnit presents this:
Sse.Bec.Web.Tests.Fixtures.ManageSitesAndUsers.ChangeOfPremises.AdminNavigateToChangeOfPremises.AdminNavigateToPageChangeOfPremisesViaMenu:
OpenQA.Selenium.NoSuchElementException : The element could not be found
And this line is highlighted:
var headerelement = Driver.FindElement(By.ClassName("header"));
Does anyone know why my code fails when run together, but passes when run alone?
Any answer would be greatly appreciated!
Such a situation normally occurs when the unit tests are using shared resources or data in some way.
It can also happen if your system under test has static fields or properties that are used to compute the output you are asserting on.
It can also happen if the system under test shares (static) dependencies.
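A tiny invented illustration of how shared static state makes tests pass alone but fail together:

using NUnit.Framework;

// Both tests pass when run individually; whichever runs second sees state
// left behind by the first, because Counter.Value is static and never reset.
public static class Counter
{
    public static int Value;
}

[TestFixture]
public class CounterTests
{
    [Test]
    public void Increment_FromZero_IsOne()
    {
        Counter.Value++;
        Assert.AreEqual(1, Counter.Value);
    }

    [Test]
    public void IncrementTwice_FromZero_IsTwo()
    {
        Counter.Value++;
        Counter.Value++;
        Assert.AreEqual(2, Counter.Value);
    }
}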
Two things you can try:
Put a breakpoint between the following two lines and see which page you are on when the second line is hit.
Introduce a slight delay between these two lines via Thread.Sleep.
Driver.FindElement(By.Id("submenuitem4")).Click();
var headerelement = Driver.FindElement(By.ClassName("header"));
If none of the answers above worked for you: I solved this issue by adding Thread.Sleep(1) before the assertion in the failing test...
It looks like some test synchronisation is missing somewhere... Please note that my tests were not order-dependent, and that I have no static members or external dependencies.
Look into TestFixtureSetUp, SetUp, TestFixtureTearDown and TearDown.
These attributes allow you to set up the test environment once, instead of once per test.
Without knowing how Selenium works, my bet is on Driver, which seems to be a static class, so the two tests are sharing state. One example of shared state is Driver.Url. Because the tests are run in parallel, there is a race condition on the state of this object.
That said, I do not have a solution for you :)
Are you sure that after running one of the tests the method
NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
is taking you back to where you should be? It would seem that the failure is due to an improper navigation handler (supposing that the header element is present and found in both tests).
I think you need to ensure that you can log on for the second test; this might fail because you are already logged on.
-> put the logon in a set-up method or (because it seems you are using the same user for both tests) even up in the fixture set-up
-> the logoff (if needed) might be put in the tear-down method
[SetUp]
public void LaunchTest()
{
    NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
}

[TearDown]
public void StopTest()
{
    // logoff
}

[Test]
public void Test1()
{...}

[Test]
public void Test2()
{...}
If there are delays in the DOM, then instead of a Thread.Sleep I recommend using WebDriverWait in combination with conditions. The sleep might work in 80% of cases and not in the others. The wait polls until the condition is met or the timeout is reached, which is more reliable and also more readable. Here is an example of how I usually approach this:
var webDriverWait = new WebDriverWait(webDriver, ..);
webDriverWait.Until(d => d.FindElement(By.CssSelector("..")).Displayed);
I realize this is an extremely old question, but I just ran into it today and none of the answers addressed my particular case.
I am using Selenium with NUnit for front-end automation tests.
In my case I was using [OneTimeSetUp] and [OneTimeTearDown] in my setup, trying to be more efficient.
This, however, has the problem of using shared resources, in my case the driver itself and the helper I use to validate/get elements.
Maybe a strange edge case, but it took me a few hours to figure it out.
