NUnit and testing in different threads - c#

I'm testing an application. The [TearDown] method calls another method which sends a request to a server. This request is pretty slow, and the server is unable to handle more than 3 requests at the same time.
So I decided to use a semaphore.
[TestFixture]
public class TestBase
{
    private const int MaxThreadsCount = 3;
    private readonly Semaphore _semaphore = new Semaphore(MaxThreadsCount, MaxThreadsCount);

    [SetUp]
    public virtual void Setup()
    {
    }

    [TearDown]
    public void CleanUp()
    {
        //...some code
        new Thread(_ => SendRequestAsync("url/of/a/server", parameters)).Start();
    }

    private void SendRequestAsync(string url, NameValueCollection parameters)
    {
        _semaphore.WaitOne();
        string result = MyServerHelper.SendRequest(url, parameters);
        Assert.That(string.IsNullOrEmpty(result), Is.False, "SendRequest returned false");
    }

    [Test]
    public void Test01()
    {
        Assert.AreEqual(1, 1);
    }

    [Test]
    public void Test02()
    {
        Assert.AreEqual(1, 1);
    }

    [Test]
    public void Test03()
    {
        Assert.AreEqual(1, 1);
    }

    //...........................

    [Test]
    public void TestN()
    {
        Assert.AreEqual(1, 1);
    }
}
However, it does not seem to work properly. There are no records in the server's log file, which means the server does not receive any requests.
1) What did I do wrong?
2) How do I initialize a semaphore:
private readonly Semaphore _semaphore = new Semaphore(MaxThreadsCount, MaxThreadsCount);
or
private readonly Semaphore _semaphore = new Semaphore(0, MaxThreadsCount);

1) What did I do wrong?
The test runner is probably ending the test process before the thread has finished (or even started). You can confirm this with something like Fiddler to check whether there is any communication between the test and the server.
Is there a reason why you need to run it in a separate thread? Unless you are specifically testing threaded code, avoid it, because it just adds complexity. Call the method as you normally would; that also means any exception or assertion failure will be caught by the test runner and reported in the test results.
If a test is taking too long (and you cannot fix the root cause, like the server speed), consider an IoC container like Autofac, or reference counting, to share the expensive resource between the tests that need it. Running the tests in parallel is also worth considering.
2) How do I initialize a semaphore:
The first argument to the Semaphore constructor is the initial number of requests that can be granted concurrently, i.e. how many WaitOne calls will succeed immediately. Since you want up to three requests in flight at once, your first form (new Semaphore(MaxThreadsCount, MaxThreadsCount)) is the one you want; initializing it with 0 would make every WaitOne block until something calls Release. Also remember to Release the semaphore (ideally in a finally block) when each request completes, otherwise the slots are never returned.
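For illustration, here is a minimal sketch of that throttling pattern using the types from your question (SendRequestThrottled is just a renamed version of your method; the try/finally Release is the part your original code is missing):

private const int MaxThreadsCount = 3;
private readonly Semaphore _semaphore = new Semaphore(MaxThreadsCount, MaxThreadsCount);

private void SendRequestThrottled(string url, NameValueCollection parameters)
{
    _semaphore.WaitOne();              // blocks when three requests are already in flight
    try
    {
        string result = MyServerHelper.SendRequest(url, parameters);
        Assert.That(string.IsNullOrEmpty(result), Is.False, "SendRequest returned an empty result");
    }
    finally
    {
        _semaphore.Release();          // return the slot even if the request or the assertion throws
    }
}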
Note that the semaphore may stop this test process from sending more than three requests at once, but it will not help the server if multiple test runs execute concurrently; you probably realise that already.

My understanding is that unit tests should be testing small units of functionality. You shouldn't need to create multiple threads to get your tests working. If you have slow external dependencies (like a network connection or database), you can define interfaces to abstract them away.
You should be able to test the behaviour you want without the threads. We can assume that threads work; what we care about is your code. Presumably, somewhere in your code you have a counter that tracks the number of active connections or requests, or some other way of determining whether you can accept another connection.
You want to test what happens when a request comes in while you are already at the maximum.
So write a test that does exactly that: set the counter to the maximum, call the code that opens a connection, and verify that it fails with the error you expect.
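For example, a rough sketch of such a test (ConnectionManager, its ActiveConnections/MaxConnections properties and ConnectionLimitReachedException are hypothetical names standing in for whatever your code actually uses):

[Test]
public void OpenConnection_WhenAtMaxConnections_Throws()
{
    // Arrange: put the manager into the "already at the limit" state.
    var manager = new ConnectionManager(maxConnections: 3);
    manager.ActiveConnections = manager.MaxConnections;

    // Act + Assert: opening one more connection should fail with the expected error.
    Assert.Throws<ConnectionLimitReachedException>(() => manager.OpenConnection());
}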

Related

Library working when called from Console app but not from NUnit test

I'm trying to evaluate a workflow library for a project I'm working on. The lib can be found on GitHub: workflow-core.
To start things off I was trying to build a simple workflow that just writes some text to a file. The curious thing is that the workflow works fine when called from a console application project, but when I use the same code in an NUnit test and run it, it doesn't write anything to the file.
I'm a little lost here and don't even know which details are important for you guys to help me figure this out, but maybe this might be relevant:
The workflow-core library is built on .NET Standard 2.0
The NUnit project and the console project both use .NET Framework 4.7.2
The workflow-core lib uses all kinds of Tasks (as in Task Parallel Library) stuff
The workflow-core lib is built with dependency injection using the Microsoft.Extensions.DependencyInjection library
And here is the relevant code:
First the workflow class:
public class HelloWorldWorkflow : IWorkflow
{
    public string Id => nameof(HelloWorldWorkflow);
    public int Version => 1;

    public void Build(IWorkflowBuilder<object> builder)
    {
        builder.StartWith((context) =>
        {
            File.WriteAllText(@"C:\Test\test.txt", "Test line worked!");
            return ExecutionResult.Next();
        });
    }
}
The calling code from the console app (working):
class Program
{
    static void Main(string[] args)
    {
        var serviceCollection = new ServiceCollection();
        serviceCollection.AddLogging((config) => config.AddConsole());
        serviceCollection.AddWorkflow();
        serviceCollection.AddTransient<LogStep>();
        var serviceProvider = serviceCollection.BuildServiceProvider();

        var host = serviceProvider.GetService<IWorkflowHost>();
        host.RegisterWorkflow<HelloWorldWorkflow>();
        host.Start();
        host.StartWorkflow(nameof(HelloWorldWorkflow));

        Console.WriteLine("Done");
        Console.ReadLine();
        host.Stop();
    }
}
And the code from my test project (not working):
[TestFixture]
public class ExplorationTests
{
    private ServiceProvider _serviceProvider;
    private IWorkflowHost _host;

    [OneTimeSetUp]
    public void Init()
    {
        var serviceCollection = new ServiceCollection();
        serviceCollection.AddLogging((config) => config.AddConsole());
        serviceCollection.AddWorkflow();
        serviceCollection.AddTransient<LogStep>();
        serviceCollection.AddTransient<HelloWorldWorkflow>();
        _serviceProvider = serviceCollection.BuildServiceProvider();

        _host = _serviceProvider.GetService<IWorkflowHost>();
        _host.RegisterWorkflow<HelloWorldWorkflow>();
        _host.Start();
    }

    [Test]
    public void Test()
    {
        _host.StartWorkflow(nameof(HelloWorldWorkflow));
    }

    [OneTimeTearDown]
    public void TearDown()
    {
        _host.Stop();
    }
}
I'd be glad for any clues on how to figure this out.
Execution of workflows is asynchronous, so you have to wait for some kind of event that signals completion.
Otherwise your test teardown will stop the host before the workflow has had a chance to do anything.
The first version of this answer suggested adding .Wait() or one of its overloads (which let you specify a maximum duration to wait) to the result of StartWorkflow, to block the test until the workflow has completed.
Unfortunately that's wrong: StartWorkflow returns a Task that yields only the ID of the workflow instance, and when that task completes your workflow probably hasn't done anything meaningful yet.
There is a feature request on GitHub asking for the desired feature: Wait for workflow to finish
Until that request is resolved, you can help yourself by creating a ManualResetEvent (or AutoResetEvent), putting it somewhere your final workflow step can access, and calling .Set() on it there. Your test should then wait by calling .WaitOne() on it (which blocks).
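For illustration, a minimal sketch of that approach using the fixture from the question (the WorkflowSignals holder class and the signalling inside the step are assumptions, not something workflow-core provides):

// Requires using System.Threading;
public static class WorkflowSignals
{
    // A shared event the test can wait on; reset it before each run.
    public static readonly ManualResetEvent Completed = new ManualResetEvent(false);
}

public class HelloWorldWorkflow : IWorkflow
{
    public string Id => nameof(HelloWorldWorkflow);
    public int Version => 1;

    public void Build(IWorkflowBuilder<object> builder)
    {
        builder.StartWith(context =>
        {
            File.WriteAllText(@"C:\Test\test.txt", "Test line worked!");
            WorkflowSignals.Completed.Set();   // tell the test we are done
            return ExecutionResult.Next();
        });
    }
}

[Test]
public void Test()
{
    WorkflowSignals.Completed.Reset();
    _host.StartWorkflow(nameof(HelloWorldWorkflow));

    // Block until the workflow signals completion, or fail after 10 seconds.
    Assert.IsTrue(WorkflowSignals.Completed.WaitOne(TimeSpan.FromSeconds(10)),
        "Workflow did not complete within the timeout.");
}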
A cruder (and inefficient) alternative is simply waiting a fixed duration: Thread.Sleep(2000) waits two seconds. Be aware that even then your workflow may not have completed, due to the asynchronous nature of the workflow executor.
Can you try making serviceCollection a class member of ExplorationTests? Otherwise the code looks OK.
It looks like your test starts a task running in the host and then exits without waiting for completion. The one-time teardown then runs immediately and stops the host.
You should not be ending the test without waiting for the task to complete.

Unit test fails when running all, but passes when running individually

I have about 12 unit tests for different scenarios, and I need to call one async method in these tests (sometimes multiple times in one test). When I do "Run all", 3 of them will always fail. If I run them one by one using "Run selected test", they will pass. The exception I'm getting in the output is this:
System.AppDomainUnloadedException: Attempted to access an unloaded
AppDomain. This can happen if the test(s) started a thread but did not
stop it. Make sure that all the threads started by the test(s) are
stopped before completion.
I can't really share the code, as it's quite big and I don't know where to start, so here is an example:
[TestMethod]
public async Task SampleTest()
{
    var someProvider = new SomeProvider();
    var result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);

    NetworkController.Disable();
    result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    NetworkController.Enable();
}
EDIT:
The other 2 failing methods set the time to the future and to the past, respectively.
[TestMethod]
public async Task SetTimeToFutureTest()
{
    var someProvider = new SomeProvider();
    var today = TimeProvider.UtcNow().Date;
    var result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);

    TimeProvider.SetDateTime(today.AddYears(1));
    var result2 = await someProvider.IsSomethingValid();
    Assert.IsTrue(result2 == SomeProvider.Status.Expired);
}
Where TimeProvider looks like this:
public static class TimeProvider
{
    /// <summary>
    /// Normally this is a pass-through to DateTime.UtcNow, but it can be overridden
    /// with SetDateTime(..) for testing or debugging.
    /// </summary>
    public static Func<DateTime> UtcNow = () => DateTime.UtcNow;

    /// <summary>
    /// Set the time to return when TimeProvider.UtcNow() is called.
    /// </summary>
    public static void SetDateTime(DateTime newDateTime)
    {
        UtcNow = () => newDateTime;
    }

    public static void ResetDateTime()
    {
        UtcNow = () => DateTime.UtcNow;
    }
}
EDIT 2:
[TestCleanup]
public void TestCleanup()
{
    TimeProvider.ResetDateTime();
}
Other methods are similar; they simulate time/date changes, etc.
I tried calling the method synchronously by getting .Result from it, etc., but it didn't help. I've read a ton of material on the web about this but I'm still struggling.
Has anyone run into the same problem? Any tips would be highly appreciated.
I can't see what you're doing with your test initialization or cleanup, but it could be that, since all of your test methods are attempting to run asynchronously, the test runner is not allowing all tasks to finish before performing cleanup.
Are the same few methods failing when you run all of the tests or is it random? Are you sure you are doing unit testing and not integration testing? The class "NetworkController" gives me the impression that you may be doing more of an integration test. If that were the case and you are using a common class, provider, service, or storage medium (database, file system) then interactions or state changes caused by one method could affect another test method's efficacy.
When running tests in async/await mode, you will incur some lag. It looks like all your processing is happening in memory. They're probably passing on a one-by-one basis because the lag is minimal; when running multiple tests asynchronously, the lag is enough to cause differences in the time results.
I've run into this before with NUnit tests run by NCrunch where a DateTime component is being tested. You can mitigate this by reducing the precision of your validation/expiration logic to match to the second instead of the millisecond, as long as that is permissible within your acceptance criteria. I can't tell from your code what logic drives the validation status or expiration date, but I'm willing to bet the async lag is the root cause of the test failures when run concurrently.
Both tests shown use the same static TimeProvider, so interference between methods like ResetDateTime in the cleanup and TimeProvider.SetDateTime(today.AddYears(1)) in a test is to be expected. The NetworkController also seems to be a static resource, and connecting/disconnecting it could interfere with your tests.
You can solve the issues in several ways:
get rid of static resources, use instances instead
lock the tests such that only one test can be run at a time
Aside from that, almost every test framework offers more than just Assert.IsTrue. Doesn't your framework offer an Assert.AreEqual? That improves readability. Also, with more than one Assert in a test, custom messages indicating which of the asserts failed (or that an Assert is a precondition rather than the actual test) are recommended.
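A rough sketch of the "instances instead of statics" suggestion; ITimeProvider, FakeTimeProvider and the simplified SomeProvider below are hypothetical shapes, not your actual types:

public interface ITimeProvider
{
    DateTime UtcNow { get; }
}

// Production implementation forwards to the system clock.
public class SystemTimeProvider : ITimeProvider
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test double: returns whatever the test sets, with no shared static state.
public class FakeTimeProvider : ITimeProvider
{
    public DateTime UtcNow { get; set; }
}

// Hypothetical provider that takes the clock as a constructor dependency
// instead of reading a static TimeProvider.
public class SomeProvider
{
    public enum Status { Valid, Expired }

    private readonly ITimeProvider _time;
    private readonly DateTime _expiresAt;

    public SomeProvider(ITimeProvider time, DateTime expiresAt)
    {
        _time = time;
        _expiresAt = expiresAt;
    }

    public Task<Status> IsSomethingValid() =>
        Task.FromResult(_time.UtcNow <= _expiresAt ? Status.Valid : Status.Expired);
}

[TestMethod]
public async Task SetTimeToFutureTest()
{
    var clock = new FakeTimeProvider { UtcNow = new DateTime(2024, 1, 1) };
    var someProvider = new SomeProvider(clock, expiresAt: new DateTime(2024, 6, 1));

    Assert.AreEqual(SomeProvider.Status.Valid, await someProvider.IsSomethingValid());

    clock.UtcNow = clock.UtcNow.AddYears(1);   // only this test instance sees the change
    Assert.AreEqual(SomeProvider.Status.Expired, await someProvider.IsSomethingValid());
}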

Writing msUnit tests for asynchronous procedures

If you call the Start() method of a MyClass object, the object will start sending data via the DataEvent.
class MyClass
{
    // Raised every time new data arrives.
    public event DataEventHandler DataEvent;

    // Starts the data delivery process.
    public void StartDataDelivery()
    {
    }
}
How do I write a test for that functionality so I can guarantee that the DataEvent will be invoked at least three times during a fixed time period?
I haven't done any asynchronous unit tests yet. How is that done, assuming that someone else needs to understand the test later?
MSTest hasn't had any serious updates for some time and I don't see that changing.
I'd strongly recommend moving to xUnit. It supports async tests (just return a Task from the test and await to your heart's content), and is used by many new Microsoft projects.
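As an illustration, a sketch of how such a test could look in xUnit, assuming DataEventHandler follows the usual (sender, args) pattern and that "three events within five seconds" is an acceptable stand-in for your fixed time period:

public class MyClassTests
{
    [Fact]
    public async Task RaisesDataEventAtLeastThreeTimesWithinTimePeriod()
    {
        var sut = new MyClass();
        var receivedThree = new TaskCompletionSource<bool>();
        int count = 0;

        sut.DataEvent += (sender, args) =>
        {
            // Complete the task once the third event arrives.
            if (Interlocked.Increment(ref count) == 3)
                receivedThree.TrySetResult(true);
        };

        sut.StartDataDelivery();

        // Fail if the third event has not arrived within the fixed period.
        var finished = await Task.WhenAny(receivedThree.Task, Task.Delay(TimeSpan.FromSeconds(5)));
        Assert.True(finished == receivedThree.Task,
            "DataEvent was raised fewer than 3 times within 5 seconds.");
    }
}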

How can I isolate unit tests by class?

I have a number of 'unit tests' (they're really integration tests) in several classes that access a shared resource, and I want each test class to only acquire the resource once (for performance reasons).
However, I'm getting issues when I release the resource in [ClassCleanup], because that doesn't run until all tests have completed.
Here's a simplified example:
using Microsoft.VisualStudio.TestTools.UnitTesting;

static class State
{
    public static string Thing;
}

[TestClass]
public class ClassA
{
    [ClassInitialize]
    public static void Initialize(TestContext ctx)
    {
        State.Thing = "Hello, World!";
    }

    [ClassCleanup]
    public static void Cleanup()
    {
        State.Thing = null;
    }

    [TestMethod]
    public void TestA()
    {
        Assert.IsNotNull(State.Thing); // Verify we have a good state
    }
}

[TestClass]
public class ClassB
{
    [TestMethod]
    public void TestB()
    {
        Assert.IsNull(State.Thing); // Verify we have an uninitialized state
        // Initialize state, do stuff with it
    }
}
On my machine at least, TestB fails because it runs before ClassA has been cleaned up.
I read ClassCleanup May Run Later Than You Think, but that doesn't explain any way to change the behaviour. And I realize I shouldn't depend on test ordering, but it's too expensive to re-acquire the resource for every test, so I want to group them by class.
How can I fix this? Is there a way to force each test class to run as a unit, and make the cleanup occur immediately?
Although ClassCleanup might be unreliable in terms of when it is run, ClassInitialize is not; why not give each test class that relies on this shared resource a ClassInitialize that cleans up the shared resource of the previous test class (if any), right before acquiring the resource itself?
The only time the shared resource isn't released is for the last test class, but you could handle that with the ClassCleanup because then it doesn't matter anymore when it is run (since there won't be any more test classes following it).
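Sketched against the simplified example above, ClassB's ClassInitialize would look roughly like this (the null assignment stands in for whatever actually releases the shared resource):

[TestClass]
public class ClassB
{
    [ClassInitialize]
    public static void Initialize(TestContext ctx)
    {
        // Release whatever the previous test class left behind, then acquire our own state.
        State.Thing = null;
        // ...acquire the resource for ClassB here if it needs one...
    }

    [TestMethod]
    public void TestB()
    {
        Assert.IsNull(State.Thing); // Verify we have an uninitialized state
    }
}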
Is there any reason that you cannot make this resource shared by the entire assembly all at once? Make it an internal or public static variable in one of the test classes (or a separate designated class), and refer to it directly from all the test classes. The initialization of the resource will take place in the [AssemblyInitialize] method and the cleanup in the [AssemblyCleanup] method.
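In the simplified example, that would look roughly like this (SharedResourceSetup is just a hypothetical name for the designated class):

[TestClass]
public class SharedResourceSetup
{
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext ctx)
    {
        State.Thing = "Hello, World!";   // acquire the expensive resource once per test run
    }

    [AssemblyCleanup]
    public static void AssemblyCleanup()
    {
        State.Thing = null;              // release it after all test classes have run
    }
}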

How to unit test client network code?

I'm working on a piece of networking code which listens to a TCP connection, parses the incoming data and raises the appropriate event. Naturally, to avoid blocking the rest of the application, the listening and parsing are performed in a background worker. When trying to unit test this code I run into the problem that, seeing as the network code has more work to do than the unit test, the unit test completes before the adapter has a chance to raise the event and so the test fails.
Adapter class:
public class NetworkAdapter : NetworkAdapterBase // NetworkAdapterBase is just an abstract base class with event definitions and protected Raise... methods.
{
    //Fields removed for brevity.

    public NetworkAdapter(TcpClient tcpClient)
    {
        _tcpConnection = tcpClient;

        //Hook up event handlers for background worker.
        NetworkWorker.DoWork += NetworkWorker_DoWork;

        if (IsConnected)
        {
            //Start up background worker.
            NetworkWorker.RunWorkerAsync();
        }
    }

    private void NetworkWorker_DoWork(object sender, DoWorkEventArgs e)
    {
        while (IsConnected)
        {
            //Listen for incoming data, parse, raise events...
        }
    }
}
Attempted test code:
[TestMethod]
public void _processes_network_data()
{
    bool newConfigurationReceived = false;

    var adapter = new NetworkAdapter(TestClient); //TestClient is just a TcpClient that is set up in a [TestInitialize] method.
    adapter.ConfigurationDataReceived += (sender, config) =>
    {
        newConfigurationReceived = true;
    };

    //Send fake byte packets to TestClient.

    Assert.IsTrue(newConfigurationReceived, "Results: Event not raised.");
}
How should I go about trying to test this sort of thing?
Thanks,
James
Well, first, this is not a strict "unit test"; your test depends upon layers of architecture that have side effects, in this case transmitting network packets. This is more of an integration test.
That said, your unit test could sleep for a certain number of millis, as Tony said. You could also see if you can get a handle to the background worker, and Join on it, which will cause your unit test to wait as long as it takes for the background worker to finish.
You could wait for some timeout period, then run the assertion, like this:
//Send fake byte packets to TestClient
Thread.Sleep(TIMEOUT);
Assert.IsTrue(newConfigurationReceived, "Results: Event not raised.");
Where TIMEOUT is the number of milliseconds you want to wait.
You could use some timeout, but as always: what duration should the timeout be so you can be sure your test will always pass, while still not slowing your tests down too much?
I would simply test the parsing code separately. This is probably where you're going to have the most bugs, and where you most need unit tests. And it's simple to test!
As for the code that listens on a socket... well, you could have bugs there too, but if it simply dispatches data to a function/class, I'm not sure you really need to test it. And if you want to be really thorough, how are you going to unit test that your class behaves well if, for example, the connection is lost between the client and the server?
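For example, a sketch of testing the parsing in isolation; ConfigurationParser and its Parse method are hypothetical stand-ins for whatever your adapter uses internally:

[TestMethod]
public void Parse_configuration_packet_returns_configuration()
{
    // The same fake bytes the integration test would have pushed over TCP.
    byte[] fakePacket = { 0x01, 0x02, 0x03 };

    var parser = new ConfigurationParser();
    var config = parser.Parse(fakePacket);

    Assert.IsNotNull(config);
}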
In our unit tests, we use .NET 4's parallelization library. You can say:
Parallel.Invoke(() => Dosomething(arguments), () => DosomethingElse(arguments));
And the framework will take care of spawning these actions as different threads, executing them in a number of threads ideal to the particular processes you're working on, and then joining them so that the next instruction doesn't execute until they've all finished.
However, it looks like you may not have direct access to the thread. Instead, you want to wait until the given callback method gets called. You can use an AutoResetEvent or a ManualResetEvent to accomplish this.
See Unit testing asynchronous function
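A rough sketch of that idea applied to the original test, with an arbitrary five-second timeout:

[TestMethod]
public void _processes_network_data()
{
    var eventRaised = new AutoResetEvent(false);

    var adapter = new NetworkAdapter(TestClient);
    adapter.ConfigurationDataReceived += (sender, config) => eventRaised.Set();

    //Send fake byte packets to TestClient.

    // Wait until the adapter raises the event, or fail after 5 seconds.
    Assert.IsTrue(eventRaised.WaitOne(TimeSpan.FromSeconds(5)), "Results: Event not raised.");
}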
