Load testing Visual Studio, start-up script / setup - C#

I was wondering if it was possible to have a start-up script before running any load tests? For example, perhaps to seed some data or clear anything down prior to the tests executing.
In my instance I have a mixed bag of designer-created and coded tests. To put it simply, I have:
Two coded tests
A designer-created web test which points to these coded tests
A load test which runs the designer-created web test
I have tried adding a class and decorating it with the attributes [TestInitialize()], [ClassInitialize()], but this code doesn't seem to get run.
Some basic code to show this in practice (see below). Is there a way of doing this whereby I can have something run only once before the test run?
[TestClass]
public class Setup : WebTest
{
    [TestInitialize()]
    public static void Hello()
    {
        // Run some code
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}
I should probably also mention that I have added these attributes on my coded tests too, and they get ignored. I have come across a workaround, which is to create a plugin.
EDIT
Having done a little more browsing around I found this article on SO which shows how to implement a LoadTestPlugin.

Visual Studio provides a way of running a script before and also after a test run. These scripts are intended for deploying data for a test and cleaning up after a test. The scripts are specified on the "Setup and cleanup" page of the ".testsettings" file.
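For illustration, a minimal ".testsettings" fragment for those scripts might look like this (the script file names are hypothetical):
<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Scripts setupScript="SeedData.cmd" cleanupScript="ClearDown.cmd" />
</TestSettings>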
A load test plugin can contain code that runs before and after any test cases execute, and also at various stages during test execution. The mechanism is event based: events are raised at various points during the execution of a load test, and user code can be hooked up to run when these events occur. The LoadTestStarting event is raised before any test cases run. See here for more info.
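As a minimal sketch of such a plugin (assuming a reference to the Visual Studio load testing framework assembly; the seeding logic is a placeholder):
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class SetupLoadTestPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // LoadTestStarting is raised once, before any test cases run.
        loadTest.LoadTestStarting += (sender, e) =>
        {
            // Seed or clear down data here.
        };
    }
}
The plugin is then attached to the load test via the load test's properties in the designer.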

If you are willing to use NUnit, you have SetUp/TearDown for a per-test scope and TestFixtureSetUp/TestFixtureTearDown to do something similar for a class (TestFixture).
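For illustration, a minimal NUnit sketch (in NUnit 3 the fixture-level attributes were renamed OneTimeSetUp/OneTimeTearDown):
using NUnit.Framework;

[TestFixture]
public class MyTests
{
    [TestFixtureSetUp]
    public void BeforeAllTests()
    {
        // Runs once, before the first test in this fixture.
    }

    [SetUp]
    public void BeforeEachTest()
    {
        // Runs before every test.
    }

    [TestFixtureTearDown]
    public void AfterAllTests()
    {
        // Runs once, after the last test in this fixture.
    }
}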

Maybe a bit of a hack, but you can place your code inside the static constructor of your test class as it will automatically run exactly once before the first instance is created or any static members are referenced:
[TestClass]
public class Setup : WebTest
{
    static Setup()
    {
        // prepare data for test
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        return null;
    }
}

Related

C# How to hit a method in debug

Probably not a good title, and this question might be stupid, but I really have no idea how to do this.
I've created a project that connects to a DB with EF Core. I've created my DbContext class and my Repository class that does the actual querying.
Then I created another class; maybe I created it wrong, but I made it similar to a controller of an MVC project. This is my class:
public class ExtractCustomersToBeMarked
{
    private ICampaignRepository _repository;
    private ILogger<ExtractCustomersToBeMarked> _logger;

    public ExtractCustomersToBeMarked(ILogger<ExtractCustomersToBeMarked> logger, ICampaignRepository repository)
    {
        _repository = repository;
        _logger = logger;
    }

    public async Task ExtractCustomers()
    {
        IEnumerable<CampaignKnownCustomers> result = _repository.getCustomersGUID();
        // ... I want to debug this code
    }
}
Now I want to debug my program. I tried calling the method from my Main program, but with no success, as I don't know what to pass to the constructor (MVC does that on its own). This is what I tried:
public static void Main(string[] args)
{
    var p = new Program();
    p.Run().Wait();
}

private async Task Run()
{
    ExtractCustomersToBeMarked ectm = new ExtractCustomersToBeMarked(?, ?);
    await ectm.ExtractCustomers();
}
I know it doesn't look good; this structure is simply for debugging. If there's another way, I'm open to hearing about it.
How can I hit the debugger on ExtractCustomers?
I guess you have implemented a CampaignRepository class. Provided that you want to use this implementation when debugging the ExtractCustomers() method, you should simply create an instance of it and pass it to the constructor:
static void Main(string[] args)
{
    ExtractCustomersToBeMarked ectm = new ExtractCustomersToBeMarked(null, new CampaignRepository());
    ectm.ExtractCustomers().Wait();
}
The same goes for the logger. Create an instance of a class that implements the ILogger<ExtractCustomersToBeMarked> if you use the logger in the method that you want to debug. Don't forget to pass it to the constructor of the ExtractCustomersToBeMarked class when you create an instance of it.
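If you'd rather not write a logger implementation by hand, a sketch using the no-op NullLogger from the Microsoft.Extensions.Logging.Abstractions package (assuming it is referenced) avoids passing null:
using Microsoft.Extensions.Logging.Abstractions;

static void Main(string[] args)
{
    // NullLogger<T>.Instance is a built-in ILogger<T> that discards all output.
    var ectm = new ExtractCustomersToBeMarked(
        NullLogger<ExtractCustomersToBeMarked>.Instance,
        new CampaignRepository());
    ectm.ExtractCustomers().Wait();
}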
You may want to implement an integration test instead of modifying the Main() method of the application itself, but that's another story.
Summary: I'd suggest writing unit tests against the code you want to debug.
Detail: When you have unit tests in place, you can run each individual test under Debug, allowing you to step through the code exactly as you do when debugging the application, but with the benefit that you get to define the scope and context of the test; for example, you can test isolated parts of your system independently of others.
Once a test is in place, it can shorten the code+debug iteration cycle, making you more productive. It achieves that because the test fires you straight into the relevant code with the test scenario already set up, rather than the whole application having to start up and you navigating through to the relevant parts each time to get into the scenario you want.
Another benefit of these tests (if they are written the right way) is that you can make them run as part of an automated build which is triggered whenever someone changes some code. This means that if anyone breaks that code, the test will fail, and the build will fail.
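As a hedged sketch of such a test against the asker's class (assuming NUnit and Moq are available; the types come from the question):
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging.Abstractions;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ExtractCustomersToBeMarkedTests
{
    [Test]
    public async Task ExtractCustomers_QueriesTheRepository()
    {
        // Fake the repository so no real database is needed.
        var repository = new Mock<ICampaignRepository>();
        repository.Setup(r => r.getCustomersGUID())
                  .Returns(new List<CampaignKnownCustomers>());

        var sut = new ExtractCustomersToBeMarked(
            NullLogger<ExtractCustomersToBeMarked>.Instance,
            repository.Object);

        // Set a breakpoint inside ExtractCustomers and debug this test.
        await sut.ExtractCustomers();

        repository.Verify(r => r.getCustomersGUID(), Times.Once);
    }
}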

How to use ApprovalTests on TeamCity?

I am using Approval Tests. On my dev machine I am happy with DiffReporter that starts TortoiseDiff when my test results differ from approved:
[UseReporter(typeof(DiffReporter))]
public class MyApprovalTests
{ ... }
However, when the same tests run on TeamCity and the results differ, the tests fail with the following error:
System.Exception : Unable to launch: tortoisemerge.exe with arguments ...
Error Message: The system cannot find the file specified
---- System.ComponentModel.Win32Exception : The system cannot find the file
specified
Obviously it cannot find tortoisemerge.exe, and that is fine because it is not installed on the build agent. But what if it gets installed? Then for each failure another instance of tortoisemerge.exe will start and nobody will close it. Eventually tons of tortoisemerge.exe instances will kill our servers :)
So the question is: how should the tests be decorated to run TortoiseDiff on a local machine
and just report errors on the build server? I am aware of #if DEBUG [UseReporter(typeof(DiffReporter))] but would prefer another solution if possible.
There are a couple of solutions to the question of Reporters and CI. I will list them all, then point to a better solution, which is not quite enabled yet.
Use the AppConfigReporter. This allows you to set the reporter in your AppConfig, and you can use the QuietReporter for CI.
There is a video here, which covers many other reporters as well; the AppConfigReporter appears at 6:00.
This has the advantage of separate configs, and you can decorate at the assembly level, but it has the disadvantage that if you override at the class/method level, you still have the issue.
Create your own (2) reporters. It is worth noting that if you use a reporter, it will get called regardless of whether it works in the current environment. IEnvironmentAwareReporter allows for composite reporters, but will not prevent a direct call to the reporter.
Most likely you will need two reporters: one which does nothing (like a quiet reporter) and only works on your CI server or when called by TeamCity (call it the TeamCityReporter), and one which is a multi-reporter that calls the TeamCityReporter if it is working and otherwise defers to your normal local reporter.
Use a FrontLoadedReporter (not quite ready). This is how ApprovalTests currently uses NCrunch. It runs the above method in front of whatever is loaded in your UseReporter attribute. I have been meaning to add an assembly-level attribute for configuring this, but haven't yet (sorry). I will try to add this very soon.
Hope this helps.
Llewellyn
I recently ran into this problem myself.
Borrowing from xUnit and how it deals with TeamCity logging, I came up with a TeamCityReporter based on the NCrunch reporter.
public class TeamCityReporter : IEnvironmentAwareReporter, IApprovalFailureReporter
{
    public static readonly TeamCityReporter INSTANCE = new TeamCityReporter();

    public void Report(string approved, string received) { }

    public bool IsWorkingInThisEnvironment(string forFile)
    {
        return Environment.GetEnvironmentVariable("TEAMCITY_PROJECT_NAME") != null;
    }
}
And so I could combine it with the NCrunch reporter:
public class TeamCityOrNCrunchReporter : FirstWorkingReporter
{
    public static readonly TeamCityOrNCrunchReporter INSTANCE =
        new TeamCityOrNCrunchReporter();

    public TeamCityOrNCrunchReporter()
        : base(NCrunchReporter.INSTANCE,
               TeamCityReporter.INSTANCE) { }
}
[assembly: FrontLoadedReporter(typeof(TeamCityOrNCrunchReporter))]
I just came up with one small idea.
You can implement your own reporter; let's call it DebugReporter:
public class DebugReporter<T> : IEnvironmentAwareReporter where T : IApprovalFailureReporter, new()
{
    private readonly T _reporter;
    public static readonly DebugReporter<T> INSTANCE = new DebugReporter<T>();

    public DebugReporter()
    {
        _reporter = new T();
    }

    public void Report(string approved, string received)
    {
        if (IsWorkingInThisEnvironment(received))
        {
            _reporter.Report(approved, received);
        }
    }

    // IEnvironmentAwareReporter declares this with a string parameter,
    // so it is included here to keep the interface implemented.
    public bool IsWorkingInThisEnvironment(string forFile)
    {
#if DEBUG
        return true;
#else
        return false;
#endif
    }
}
Example of usage:
[UseReporter(typeof(DebugReporter<FileLauncherReporter>))]
public class SomeTests
{
    [Test]
    public void test()
    {
        Approvals.Verify("Hello");
    }
}
If the test is failing, it will still be red, but the reporter will not come up.
The IEnvironmentAwareReporter is specially defined for that, but unfortunately whatever I return there, it still calls the Report() method. So I put the IsWorkingInThisEnvironment() call inside, which is a little hackish, but works :)
Hope that Llewellyn can explain why it acts like that. (bug?)
I'm using CC.NET and I do have TortoiseSVN installed on the server.
I reconfigured my build server to allow the CC.NET service to interact with the desktop. When I did that, TortoiseMerge launched. So I think what's happening is that Approvals tries to launch the tool, but it can't, because CC.NET is running as a service and the operating system prevents that behavior by default. If TeamCity runs as a service, you should be fine, but you might want to test.

Why do my tests fail when run together, but pass individually?

When I write a test in Visual Studio, I check that it works by saving, building and then running the test in NUnit (right-click on the test, then run).
The test works, yay...
so I move on...
Now I have written another test, and it works as I have saved and tested it like above. But they don't work when they are run together.
Here are my two tests that work when run as individuals but fail when run together:
using System;
using NUnit.Framework;
using OpenQA.Selenium.Support.UI;
using OpenQA.Selenium;

namespace Fixtures.Users.Page1
{
    // Note: the class is named with a "Tests" suffix so it does not clash
    // with the test method of the same name (C# forbids a member having
    // the same name as its enclosing type).
    [TestFixture]
    public class AdminNavigateToPage1Tests : SeleniumTestBase
    {
        [Test]
        public void AdminNavigateToPage1()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            NavigateTo<Page1>();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }

        [Test]
        public void AdminNavigateToPage1ViaMenu()
        {
            NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
            Driver.FindElement(By.Id("menuitem1")).Click();
            Driver.FindElement(By.Id("submenuitem4")).Click();
            var headerelement = Driver.FindElement(By.ClassName("header"));
            Assert.That(headerelement.Text, Is.EqualTo("Page Title"));
            Assert.That(Driver.Url, Is.EqualTo("http://localhost/Page Title"));
        }
    }
}
When the second test fails because they have been run together,
NUnit presents this:
Sse.Bec.Web.Tests.Fixtures.ManageSitesAndUsers.ChangeOfPremises.AdminNavigateToChangeOfPremises.AdminNavigateToPageChangeOfPremisesViaMenu:
OpenQA.Selenium.NoSuchElementException : The element could not be found
And this line is highlighted:
var headerelement = Driver.FindElement(By.ClassName("header"));
Does anyone know why my code fails when run together, but passes when run alone?
Any answer would be greatly appreciated!
Such a situation normally occurs when the unit tests are using shared resources/data in some way.
It can also happen if your system under test has static fields/properties which are leveraged to compute the output on which you are asserting.
It can also happen if the system under test is sharing (static) dependencies.
Two things you can try:
1. Put a breakpoint between the following two lines, and see which page you are on when the second line is hit.
2. Introduce a slight delay between these two lines via Thread.Sleep.
Driver.FindElement(By.Id("submenuitem4")).Click();
var headerelement = Driver.FindElement(By.ClassName("header"));
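For illustration, a sketch of that suggested delay (the 500 ms value is arbitrary, and an explicit wait, as a later answer shows, is usually more robust):
Driver.FindElement(By.Id("submenuitem4")).Click();
Thread.Sleep(500); // crude pause to let the page render the header element
var headerelement = Driver.FindElement(By.ClassName("header"));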
If none of the answers above worked for you, I solved this issue by adding Thread.Sleep(1) before the assertion in the failing test...
It looks like test synchronization is missing somewhere. Please note that my tests were not order-dependent, and I didn't have any static members or external dependencies.
Look into TestFixtureSetUp, SetUp, TestFixtureTearDown and TearDown.
These attributes allow you to set up the test environment once, instead of once per test.
Without knowing how Selenium works, my bet is on Driver, which seems to be a static class, so the two tests are sharing state. One example of shared state is Driver.Url. Because the tests are run in parallel, there is a race condition to set the state of this object.
That said, I do not have a solution for you :)
Are you sure that after running one of the tests the method
NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
is taking you back to where you should be? It would seem that the failure is due to an improper navigation handler (supposing that the header element is present and found in both tests).
I think you need to ensure that you can log on for the second test; this might fail because you are logged on already.
-> Put the logon in a SetUp method or (because it seems you are using the same user for both tests) even up in the fixture setup.
-> The logoff (if needed) might be put in the TearDown method.
[SetUp]
public void LaunchTest()
{
    NavigateTo<LogonPage>().LogonAsCustomerAdministrator();
}

[TearDown]
public void StopTest()
{
    // logoff
}

[Test]
public void Test1()
{ ... }

[Test]
public void Test2()
{ ... }
If there are delays in the DOM, then instead of a Thread.Sleep I recommend using WebDriverWait in combination with conditions. The sleep might work in 80% of cases and fail in the rest; the wait polls until the condition is met or a timeout is reached, which is more reliable and also more readable. Here is an example of how I usually approach this:
var webDriverWait = new WebDriverWait(webDriver, TimeSpan.FromSeconds(10)); // timeout value is illustrative
webDriverWait.Until(d => d.FindElement(By.CssSelector("..")).Displayed);
I realize this is an extremely old question, but I just ran into it today and none of the answers addressed my particular case.
I am using Selenium with NUnit for front-end automation tests.
In my case I was using [OneTimeSetUp] and [OneTimeTearDown] in my startup, trying to be more efficient.
This, however, has the problem of using shared resources, in my case the driver itself and the helper I use to validate/get elements.
Maybe a strange edge case, but it took me a few hours to figure it out.

Execute unit tests serially (rather than in parallel)

I am attempting to unit test a WCF host management engine that I have written. The engine basically creates ServiceHost instances on the fly based on configuration. This allows us to dynamically reconfigure which services are available without having to bring all of them down and restart them whenever a new service is added or an old one is removed.
I have run into a difficulty in unit testing this host management engine, however, due to the way ServiceHost works. If a ServiceHost has already been created, opened, and not yet closed for a particular endpoint, another ServiceHost for the same endpoint cannot be created, resulting in an exception. Because modern unit testing platforms parallelize their test execution, I have no effective way to unit test this piece of code.
I have used xUnit.NET, hoping that because of its extensibility, I could find a way to force it to run the tests serially. However, I have not had any luck. I am hoping that someone here on SO has encountered a similar issue and knows how to get unit tests to run serially.
NOTE: ServiceHost is a WCF class, written by Microsoft. I don't have the ability to change its behavior. Hosting each service endpoint only once is also the proper behavior... however, it is not particularly conducive to unit testing.
Each test class is a unique test collection, and the tests under it will run in sequence, so if you put all of your tests in the same collection then they will run sequentially.
In xUnit you can make the following changes to achieve this:
The following will run in parallel:
namespace IntegrationTests
{
    public class Class1
    {
        [Fact]
        public void Test1()
        {
            Console.WriteLine("Test1 called");
        }

        [Fact]
        public void Test2()
        {
            Console.WriteLine("Test2 called");
        }
    }

    public class Class2
    {
        [Fact]
        public void Test3()
        {
            Console.WriteLine("Test3 called");
        }

        [Fact]
        public void Test4()
        {
            Console.WriteLine("Test4 called");
        }
    }
}
To make them run sequentially, you just need to put both test classes under the same collection:
namespace IntegrationTests
{
    [Collection("Sequential")]
    public class Class1
    {
        [Fact]
        public void Test1()
        {
            Console.WriteLine("Test1 called");
        }

        [Fact]
        public void Test2()
        {
            Console.WriteLine("Test2 called");
        }
    }

    [Collection("Sequential")]
    public class Class2
    {
        [Fact]
        public void Test3()
        {
            Console.WriteLine("Test3 called");
        }

        [Fact]
        public void Test4()
        {
            Console.WriteLine("Test4 called");
        }
    }
}
For more info you can refer to this link
Important: This answer applies to .NET Framework. For dotnet core, see Dimitry's answer regarding xunit.runner.json.
All good unit tests should be 100% isolated. Using shared state (e.g. depending on a static property that is modified by each test) is regarded as bad practice.
Having said that, your question about running xUnit tests in sequence does have an answer! I encountered exactly the same issue because my system uses a static service locator (which is less than ideal).
By default xUnit 2.x runs all tests in parallel. This can be modified per-assembly by defining the CollectionBehavior in your AssemblyInfo.cs in your test project.
For per-assembly separation use:
using Xunit;
[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
or for no parallelization at all use:
[assembly: CollectionBehavior(DisableTestParallelization = true)]
The latter is probably the one you want. More information about parallelisation and configuration can be found on the xUnit documentation.
For .NET Core projects, create xunit.runner.json with:
{
  "parallelizeAssembly": false,
  "parallelizeTestCollections": false
}
Also, your csproj should contain
<ItemGroup>
  <None Update="xunit.runner.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
For old .NET Core projects, your project.json should contain
"buildOptions": {
  "copyToOutput": {
    "include": [ "xunit.runner.json" ]
  }
}
For .NET Core projects, you can configure xUnit with an xunit.runner.json file, as documented at https://xunit.net/docs/configuration-files.
The setting you need to change to stop parallel test execution is parallelizeTestCollections, which defaults to true:
Set this to true if the assembly is willing to run tests inside this assembly in parallel against each other. ... Set this to false to disable all parallelization within this test assembly.
JSON schema type: boolean
Default value: true
So a minimal xunit.runner.json for this purpose looks like:
{
  "parallelizeTestCollections": false
}
As noted in the docs, remember to include this file in your build, either by:
Setting Copy to Output Directory to Copy if newer in the file's Properties in Visual Studio, or
Adding
<Content Include=".\xunit.runner.json">
  <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>
to your .csproj file, or
Adding
"buildOptions": {
"copyToOutput": {
"include": [ "xunit.runner.json" ]
}
}
to your project.json file
depending upon your project type.
Finally, in addition to the above, if you're using Visual Studio then make sure that you haven't accidentally clicked the Run Tests In Parallel button, which will cause tests to run in parallel even if you've turned off parallelisation in xunit.runner.json. Microsoft's UI designers have cunningly made this button unlabelled, hard to notice, and about a centimetre away from the "Run All" button in Test Explorer, just to maximise the chance that you'll hit it by mistake and have no idea why your tests are suddenly failing.
This is an old question, but I wanted to write up a solution for people newly searching for this, like me :)
Note: I use this method in .NET Core WebUI integration tests with xUnit version 2.4.1.
Create an empty class named NonParallelCollectionDefinitionClass and then give this class the CollectionDefinition attribute as below. (The important part is the DisableParallelization = true setting.)
using Xunit;

namespace WebUI.IntegrationTests.Common
{
    [CollectionDefinition("Non-Parallel Collection", DisableParallelization = true)]
    public class NonParallelCollectionDefinitionClass
    {
    }
}
Then add the Collection attribute to any class which you don't want to run in parallel, as below. (The important part is the name of the collection; it must be the same as the name used in the CollectionDefinition.)
namespace WebUI.IntegrationTests.Controllers.Users
{
    [Collection("Non-Parallel Collection")]
    public class ChangePassword : IClassFixture<CustomWebApplicationFactory<Startup>>
    ...
When we do this, the other parallel tests run first. After that, the tests which have the Collection("Non-Parallel Collection") attribute run.
You can use a playlist:
right-click on the test method -> Add to playlist -> New playlist
You can then specify the execution order. The default is the order in which you add them to the playlist, but you can change the playlist file as you want.
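For reference, a ".playlist" file is plain XML listing fully qualified test names; a minimal sketch (the names are hypothetical):
<Playlist Version="1.0">
  <Add Test="MyTests.MyFixture.Test1" />
  <Add Test="MyTests.MyFixture.Test2" />
</Playlist>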
I don't know the details, but it sounds like you might be trying to do integration testing rather than unit testing. If you could isolate the dependency on ServiceHost, that would likely make your testing easier (and faster). So (for instance) you might test the following independently:
Configuration reading class
ServiceHost factory (possibly as an integration test)
Engine class that takes an IServiceHostFactory and an IConfiguration
Tools that would help include isolation (mocking) frameworks and (optionally) IoC container frameworks. See:
http://www.mockobjects.com/
http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
Maybe you can use Advanced Unit Testing. It allows you to define the sequence in which the tests run. You may have to create a new .cs file to host those tests.
Here's how you can bend the test methods to work in the sequence you want:
[Test]
[Sequence(16)]
[Requires("POConstructor")]
[Requires("WorkOrderConstructor")]
public void ClosePO()
{
    po.Close();

    // one charge slip should be added to both work orders
    Assertion.Assert(wo1.ChargeSlipCount == 1,
        "First work order: ChargeSlipCount not 1.");
    Assertion.Assert(wo2.ChargeSlipCount == 1,
        "Second work order: ChargeSlipCount not 1.");
    ...
}
Do let me know whether it works.
None of the suggested answers so far worked for me. I have a .NET Core app with xUnit 2.4.1.
I achieved the desired behavior with a workaround, by putting a lock in each unit test instead. In my case, I didn't care about running order, just that the tests were sequential.
public class TestClass
{
    // xUnit creates a new instance of the test class for each test,
    // so the lock must be taken on a shared static object; "lock (this)"
    // would lock a different instance per test and not serialize anything.
    private static readonly object _lock = new object();

    [Fact]
    void Test1()
    {
        lock (_lock)
        {
            //Test Code
        }
    }

    [Fact]
    void Test2()
    {
        lock (_lock)
        {
            //Test Code
        }
    }
}
For me, in a .NET Core console application, when I wanted to run test methods (not classes) synchronously, the only solution that worked was the one described in this blog:
xUnit: Control the Test Execution Order
I added the attribute [Collection("Sequential")] to a base class:
namespace IntegrationTests
{
    [Collection("Sequential")]
    public class SequentialTest : IDisposable
    ...

    public class TestClass1 : SequentialTest
    {
        ...
    }

    public class TestClass2 : SequentialTest
    {
        ...
    }
}

Run a single unit test from a fixture of many in MbUnit

Is there any way to add an attribute to a [Test] method in a [TestFixture] so that only that method runs? This would be similar to the way the [CurrentFixture] attribute can be used to run only a single fixture. I ask because sometimes when I test the model I want to profile the SQL being executed, and I only want to focus on a single test. Currently I have to comment out all the other tests in the fixture.
Updated:
The code I'm using to initiate the test follows, I'm really looking for a solution I can weave into this code.
public static void Run(bool currentFixturesOnly)
{
    using (AutoRunner auto = new AutoRunner())
    {
        if (currentFixturesOnly)
        {
            // for processing [CurrentFixture]s only
            auto.Domain.Filter = FixtureFilters.Current;
        }
        auto.Verbose = true;
        auto.Run();
        auto.ReportToHtml();
    }
}
If you use a test runner like TestDriven.Net, ReSharper or Icarus then you can select the specific test to run and just run that. If you're using the command-line tools, consider using a filter.
eg.
Gallio.Echo MyTestAssembly.dll /f:Name:TheNameOfTheParticularTestIWantToRun
