Integration testing garbage data - C#

I have set up integration testing using MSTest. My integration tests create fake data and insert it into the database (real dependencies). For every business object, I have a method like this, which creates a "Fake" and inserts it into the db:
public static EventAction Mock()
{
    EventAction action = Fixture.Build&lt;EventAction&gt;().Create();
    action.Add(false);
    AddCleanupAction(action.Delete);
    AppendLog("EventAction was created.");
    return action;
}
I clean up all the fakes in [AssemblyCleanup]:
public static void CleanupAllMockData()
{
    foreach (Action action in CleanUpActions)
    {
        try
        {
            action();
        }
        catch
        {
            AppendLog($"Failed to clean up {action.GetType()}. It is possible that it was already cleaned up by parent objects.");
        }
    }
}
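(For reference, CleanUpActions, AddCleanupAction and AppendLog are helpers that are not shown above; a rough sketch of how they might look, with the details assumed, would be:)
// Sketch only: these members are assumed, since the question does not show them.
// They would live in the same static test-data helper class as Mock().
private static readonly List&lt;Action&gt; CleanUpActions = new List&lt;Action&gt;();

public static void AddCleanupAction(Action cleanupAction)
{
    // Remember how to delete this fake so CleanupAllMockData can undo it later.
    CleanUpActions.Add(cleanupAction);
}

private static void AppendLog(string message)
{
    // Placeholder: assumed to write to the test run's log output.
    System.Diagnostics.Trace.WriteLine(message);
}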
Now I have a big problem. In my continuous integration environment (TeamCity), we have a separate database for testing, and it cleans itself after every test run, but in my local environment the integration tests point to my local database. Now, if I cancel the test run for any reason, a bunch of garbage data is left in my local database, because CleanupAllMockData() never gets called.
What is the best way to handle this? I couldn't find a way to intercept the test cancellation in MSTest.

I see two options for solving your problem:
Clean up the mock data before each test run starts, and only before the start.
Wrap each test in a database transaction which is never committed; I explain this option here. A sketch follows below.
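For the second option, a minimal sketch (assuming MSTest and an ambient TransactionScope per test; adjust the isolation level and timeout to your setup) could look like this:
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EventActionIntegrationTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void BeginTransaction()
    {
        // Connections opened inside the scope enlist in the ambient transaction.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void RollbackTransaction()
    {
        // Complete() is never called, so disposing the scope rolls back
        // everything the test wrote to the database.
        _scope.Dispose();
    }
}
Even if a run is cancelled, data written inside an uncommitted transaction is rolled back by the database once the connection goes away, so no garbage is left in your local database.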

Related

Unit test fails when running all, but passes when running individually

I have about 12 unit tests for different scenarios, and I need to call one async method in these tests (sometimes multiple times in one test). When I do "Run all", 3 of them will always fail. If I run them one by one using "Run selected test", they will pass. The exception I'm getting in the output is this:
System.AppDomainUnloadedException: Attempted to access an unloaded
AppDomain. This can happen if the test(s) started a thread but did not
stop it. Make sure that all the threads started by the test(s) are
stopped before completion.
I can't really share the code, as it's quite big and I don't know where to start, so here is an example:
[TestMethod]
public async Task SampleTest()
{
    var someProvider = new SomeProvider();
    var result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    NetworkController.Disable();
    result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    NetworkController.Enable();
}
EDIT:
The other 2 failing methods set time to the future and to the past respectively.
[TestMethod]
public async Task SetTimeToFutureTest()
{
    var someProvider = new SomeProvider();
    var today = TimeProvider.UtcNow().Date;
    var result = await someProvider.IsSomethingValid();
    Assert.IsTrue(result == SomeProvider.Status.Valid);
    TimeProvider.SetDateTime(today.AddYears(1));
    var result2 = await someProvider.IsSomethingValid();
    Assert.IsTrue(result2 == SomeProvider.Status.Expired);
}
Where TimeProvider looks like this:
public static class TimeProvider
{
    /// &lt;summary&gt;
    /// Normally this is a pass-through to DateTime.UtcNow, but it can be
    /// overridden with SetDateTime(..) for testing or debugging.
    /// &lt;/summary&gt;
    public static Func&lt;DateTime&gt; UtcNow = () =&gt; DateTime.UtcNow;

    /// &lt;summary&gt;
    /// Set the time to return when TimeProvider.UtcNow() is called.
    /// &lt;/summary&gt;
    public static void SetDateTime(DateTime newDateTime)
    {
        UtcNow = () =&gt; newDateTime;
    }

    public static void ResetDateTime()
    {
        UtcNow = () =&gt; DateTime.UtcNow;
    }
}
EDIT 2:
[TestCleanup]
public void TestCleanup()
{
    TimeProvider.ResetDateTime();
}
The other methods are similar; they simulate time/date changes, etc.
I tried calling the method synchronously by taking .Result from it, etc., but it didn't help. I've read a ton of material on the web about this but I'm still struggling.
Did anyone run into the same problem? Any tips will be highly appreciated.
I can't see what you're doing with your test initialization or cleanup but it could be that since all of your test methods are attempting to run asynchronously, the test runner is not allowing all tasks to finish before performing cleanup.
Are the same few methods failing when you run all of the tests or is it random? Are you sure you are doing unit testing and not integration testing? The class "NetworkController" gives me the impression that you may be doing more of an integration test. If that were the case and you are using a common class, provider, service, or storage medium (database, file system) then interactions or state changes caused by one method could affect another test method's efficacy.
When running tests in async/await mode, you will incur some lag. It looks like all your processing is happening in memory. They're probably passing on a one-by-one basis because the lag time is minimal. When running multiple tests in async mode, the lag time is sufficient to cause differences in the timing results.
I've run into this before with NUnit tests run by NCrunch where a DateTime component is being tested. You can mitigate this by reducing the precision of your validation/expiration logic to match to the second instead of the millisecond, as long as this is permissible within your acceptance criteria. I can't tell from your code what logic drives the validation status or expiration date, but I'm willing to bet the async lag is the root cause of the test failures when run concurrently.
Both tests shown use the same static TimeProvider, so interference between methods like ResetDateTime in the cleanup and TimeProvider.SetDateTime(today.AddYears(1)) in a test is to be expected. The NetworkController also appears to be a static resource, and connecting/disconnecting it could interfere with your tests.
You can solve the issues in several ways:
get rid of static resources and use instances instead (a sketch follows after this answer)
lock the tests such that only one test can be run at a time
Aside from that, almost every test framework offers more than just Assert.IsTrue. Doesn't your framework offer an Assert.AreEqual? That improves readability. Also, with more than one Assert in a test, custom messages indicating which assert failed (or that an assert checks a pre-condition rather than the actual behavior under test) are recommended.
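To illustrate the first suggestion, here is a rough sketch of an instance-based clock; the interface and the constructor wiring are invented for the example, not taken from the question:
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

public sealed class SystemClock : IClock
{
    public DateTime UtcNow =&gt; DateTime.UtcNow;
}

public sealed class FakeClock : IClock
{
    // Tests set this to whatever instant they need.
    public DateTime UtcNow { get; set; } = DateTime.UtcNow;
}

// The provider takes its clock as a constructor dependency, so each test can
// use its own FakeClock and tests no longer interfere through shared statics.
public class SomeProvider
{
    private readonly IClock _clock;

    public SomeProvider(IClock clock)
    {
        _clock = clock;
    }

    // ... validity checks read _clock.UtcNow instead of TimeProvider.UtcNow() ...
}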

Load testing Visual Studio, start up script / setup

I was wondering if it is possible to have a start-up script run before any load tests? For example, to seed some data or clear anything down prior to the tests executing.
In my instance I have a mixed bag of designer and coded tests. To put it simply, I have:
Two coded tests
A designer-created web test which points to these coded tests
A load test which runs the designer web test
I have tried adding a class and decorating it with the [TestInitialize()] and [ClassInitialize()] attributes, but this code doesn't seem to get run.
Some basic code to show this in practice (see below). Is there a way of doing this whereby I can have something run only once, before the test run?
[TestClass]
public class Setup : WebTest
{
    [TestInitialize()]
    public static void Hello()
    {
        // Run some code
    }

    public override IEnumerator&lt;WebTestRequest&gt; GetRequestEnumerator()
    {
        return null;
    }
}
I should probably also mention that I have added these attributes to my coded tests and they get ignored. I have come across a workaround, which is to create a plugin.
EDIT
Having done a little more browsing around I found this article on SO which shows how to implement a LoadTestPlugin.
Visual Studio provides a way of running a script before and also after a test run. They are intended for use in deploying data for a test and cleaning up after a test. The scripts are specified on the "Setup and cleanup" page in the ".testsettings" file.
A load test plugin can contain code to run before and after any test cases are executed, and also at various stages during test execution. The interface works by raising events at various points during the execution of a load test; user code can be called when these events occur. The LoadTestStarting event is raised before any test cases run (see the sketch below). See here for more info.
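A plugin along these lines, shown here as a sketch (the seeding and cleanup methods are placeholders), subscribes to LoadTestStarting so that code runs once before any test cases execute:
using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class SeedDataPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Raised once, before any scenarios start executing.
        loadTest.LoadTestStarting += (sender, e) =&gt; SeedData();

        // Raised once, after the load test has finished.
        loadTest.LoadTestFinished += (sender, e) =&gt; CleanUpData();
    }

    private static void SeedData()
    {
        // Placeholder: seed or clear down whatever your tests need here.
    }

    private static void CleanUpData()
    {
        // Placeholder: remove anything the run created.
    }
}
After building the plugin, you attach it to the load test through the load test's properties in the load test editor.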
If you are willing to use NUnit, you have SetUp/TearDown for per-test scope and TestFixtureSetUp/TestFixtureTearDown to do something similar for a class (TestFixture).
Maybe a bit of a hack, but you can place your code inside the static constructor of your test class as it will automatically run exactly once before the first instance is created or any static members are referenced:
[TestClass]
public class Setup : WebTest
{
    static Setup()
    {
        // prepare data for test
    }

    public override IEnumerator&lt;WebTestRequest&gt; GetRequestEnumerator()
    {
        return null;
    }
}

Writing MSTest tests for asynchronous procedures

If you call the StartDataDelivery() method of a MyClass object, the object will start sending data via the DataEvent.
class MyClass
{
    // Is called every time new data arrives
    public event DataEventHandler DataEvent;

    // Starts the data process
    public void StartDataDelivery()
    {
    }
}
How do I write a test for that functionality, given that I can guarantee the DataEvent will be invoked at least three times during a fixed time period?
I haven't written any asynchronous unit tests yet. How is that done, assuming that someone else needs to understand the test later?
MSTest hasn't had any serious updates for some time and I don't see that changing.
I'd strongly recommend moving to xUnit. It supports async tests (just return a Task from the test and await to your heart's content), and is used by many new Microsoft projects.
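As a sketch of such a test (assuming xUnit, and assuming DataEventHandler has the usual (sender, args) shape; the five-second window is an arbitrary stand-in for your fixed time period):
using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

public class MyClassTests
{
    [Fact]
    public async Task StartDataDelivery_RaisesDataEventAtLeastThreeTimes()
    {
        var sut = new MyClass();
        var thirdEventReceived = new TaskCompletionSource&lt;bool&gt;();
        int invocations = 0;

        // Signal the test as soon as the third event arrives.
        sut.DataEvent += (sender, e) =&gt;
        {
            if (Interlocked.Increment(ref invocations) &gt;= 3)
                thirdEventReceived.TrySetResult(true);
        };

        sut.StartDataDelivery();

        // Wait for the third event or for the fixed period to elapse.
        var completed = await Task.WhenAny(
            thirdEventReceived.Task,
            Task.Delay(TimeSpan.FromSeconds(5)));

        Assert.True(completed == thirdEventReceived.Task,
            "DataEvent was not raised at least three times within the time period.");
    }
}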

How do I write a unit test that relies on file system events?

I have the following code that I'd like to test:
public class DirectoryProcessor
{
    public string DirectoryPath { get; set; }

    private FileSystemWatcher watcher;

    public event EventHandler&lt;SourceEventArgs&gt; SourceFileChanged;

    protected virtual void OnSourceFileChanged(SourceEventArgs e)
    {
        EventHandler&lt;SourceEventArgs&gt; handler = SourceFileChanged;
        if (handler != null)
        {
            handler(this, e);
        }
    }

    public DirectoryProcessor(string directoryPath)
    {
        this.DirectoryPath = directoryPath;
        this.watcher = new FileSystemWatcher(directoryPath);
        this.watcher.Created += new FileSystemEventHandler(Created);
    }

    void Created(object sender, FileSystemEventArgs e)
    {
        // process the newly created file
        // then raise my own event indicating that processing is done
        OnSourceFileChanged(new SourceEventArgs(e.Name));
    }
}
Basically, I want to write an NUnit test that will do the following:
Create a directory
Set up a DirectoryProcessor
Write some files to the directory (via File.WriteAllText())
Check that DirectoryProcessor.SourceFileChanged has fired once for each file added in step 3.
I tried doing this and adding Thread.Sleep() after step 3, but it's hard to get the timeout correct. It correctly processes the first file I write to the directory, but not the second (and that's with the timeout set to 60s). Even if I could get it working this way, it seems like a terrible way to write the test.
Does anyone have a good solution to this problem?
Typically, you are concerned with testing the interaction with the file system and there is no need to test the framework classes and methods that actually perform the operations.
If you introduce a layer of abstraction into your classes, you can then mock the file system in your unit tests to verify that the interactions are correct without actually manipulating the file system.
Outside of testing, the "real" implementation calls into those framework methods to get the work done.
Yes, in theory you'll need to integration test that "real" implementation, but it should in practice be low-risk, not subject to much change, and verifiable through a few minutes of manual testing. If you use an open source file system wrapper, it may include those tests for your peace of mind.
See How do you mock out the file system in C# for unit testing?
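As a rough illustration of that abstraction (the interface and wrapper names are invented for the example), the watcher can be hidden behind an interface that a test double can drive without touching the disk:
using System;
using System.IO;

public interface IFileWatcher : IDisposable
{
    event EventHandler&lt;FileSystemEventArgs&gt; Created;
}

// Production implementation: a thin wrapper over FileSystemWatcher.
public sealed class FileWatcher : IFileWatcher
{
    private readonly FileSystemWatcher _watcher;

    public event EventHandler&lt;FileSystemEventArgs&gt; Created;

    public FileWatcher(string directoryPath)
    {
        _watcher = new FileSystemWatcher(directoryPath);
        _watcher.Created += (s, e) =&gt; Created?.Invoke(this, e);
        _watcher.EnableRaisingEvents = true;
    }

    public void Dispose() =&gt; _watcher.Dispose();
}
DirectoryProcessor would then take an IFileWatcher instead of creating the FileSystemWatcher itself, and a unit test can supply a fake that raises Created on demand, so SourceFileChanged can be verified synchronously.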
If you are looking to test another object that uses this class, my answer is not relevant.
When I write unit tests for operations like this, I prefer using a ManualResetEvent.
The unit test will look something like:
...
directoryProcessor.SourceFileChanged += onChanged;
manualResetEvent.Reset();
File.WriteAllText(...);
var actual = manualResetEvent.WaitOne(MaxTimeout);
...
where manualResetEvent is the ManualResetEvent and MaxTimeout is some TimeSpan (my advice: always use a timeout).
Now we are missing the onChanged handler:
private void onChanged(object sender, SourceEventArgs e)
{
    manualResetEvent.Set();
}
I hope this is helpful
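Building on that idea, a sketch for the original scenario, one event per file, could use a CountdownEvent sized to the number of files written (NUnit is assumed, as in the question; the timeout is an arbitrary choice):
using System;
using System.IO;
using System.Threading;
using NUnit.Framework;

public class DirectoryProcessorTests
{
    [Test]
    public void SourceFileChanged_FiresOncePerCreatedFile()
    {
        var directory = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(directory);

        using (var countdown = new CountdownEvent(2))
        {
            var processor = new DirectoryProcessor(directory);
            processor.SourceFileChanged += (sender, e) =&gt;
            {
                // Guard against the watcher raising more events than expected.
                if (!countdown.IsSet)
                    countdown.Signal();
            };

            File.WriteAllText(Path.Combine(directory, "first.txt"), "a");
            File.WriteAllText(Path.Combine(directory, "second.txt"), "b");

            // Wait until both events arrive or the timeout expires.
            var allEventsReceived = countdown.Wait(TimeSpan.FromSeconds(10));

            Assert.IsTrue(allEventsReceived, "Expected SourceFileChanged once per file.");
        }

        Directory.Delete(directory, true);
    }
}
Note that FileSystemWatcher only raises events once EnableRaisingEvents is set to true, which does not appear in the constructor shown in the question.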

Is it a BAD idea to have a static connection and transaction for a Unit Test Fixture?

I plan to create private static variables for a SqlConnection and a SqlTransaction, which I plan to create in a method marked with [ClassInitialize()] and then dispose of in a method marked with [ClassCleanup].
What I want to achieve is to share the connection and transaction across all the tests and then roll everything back at the end of the last unit test run.
Like below.
Is this a BAD idea? Should I worry about thread safety?
[ClassInitialize()]
public static void DataManagerTestInitialize(TestContext testContext)
{
    // Create Connection for Test Fixture
    _connection = new SqlConnection(ConnectionString);
    // Open Connection for Test Fixture
    _connection.Open();
    // Open Transaction for Test Fixture
    _transaction = _connection.BeginTransaction();
}

[ClassCleanup]
public static void CleanUp()
{
    if (_transaction != null)
        _transaction.Rollback();
    if (_connection.State != ConnectionState.Closed)
        _connection.Close();
}
This is a bad idea. Connections are meant to be opened, used, then closed. The same goes for transactions. Besides, your tests should be independent of each other, and sharing a connection/transaction violates this principle (see the per-test sketch below).
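If you do want to hit a real database, a per-test pattern keeps the tests independent. A sketch (assuming MSTest, reusing the ConnectionString from the question) opens the connection and transaction in [TestInitialize] and rolls everything back in [TestCleanup]:
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DataManagerTests
{
    private SqlConnection _connection;
    private SqlTransaction _transaction;

    [TestInitialize]
    public void OpenConnection()
    {
        _connection = new SqlConnection(ConnectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    [TestCleanup]
    public void RollbackAndClose()
    {
        // Each test rolls back its own work, so tests cannot affect one another.
        _transaction?.Rollback();
        _transaction?.Dispose();
        _connection?.Dispose();
    }

    // Commands created in a test must be enlisted explicitly:
    // command.Transaction = _transaction;
}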
You should also worry about locks in the database if someone is debugging a specific unit test. If the database you use for tests is the same database your development occurs on, this can be very frustrating: a developer working on a feature may see really bad performance, or even timeouts, because of locks the tests have taken on the database.
If you don't want to change your database (which you don't when unit testing), you should be able to mock or replace the code that hits the database. There are several ways to achieve this, but my favorite is to use Dependency Injection (a sketch follows below). This makes your application a lot easier to maintain, as it forces you to think carefully about what methods are exposed by the various parts of your application. Plus, the abstraction will make it easier to refactor.
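As a small illustration of the dependency-injection suggestion (all of the names here are invented for the example), the code under test depends on an abstraction that a unit test can replace with an in-memory fake:
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    void Save(Order order);
}

// Production implementation would use SqlConnection/SqlCommand to hit the database.
public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order)
    {
        // real ADO.NET code goes here
    }
}

// Test double: records calls in memory and never touches a database.
public class FakeOrderRepository : IOrderRepository
{
    public List&lt;Order&gt; Saved { get; } = new List&lt;Order&gt;();

    public void Save(Order order) =&gt; Saved.Add(order);
}
The class you actually want to test receives IOrderRepository through its constructor; unit tests hand it a FakeOrderRepository and assert on Saved, so no connection or transaction is needed at all.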
