I am trying to get unit tests to run successfully for my MVC 4 Web Application project.
When I run the test classes individually, all of the tests pass. When I run all tests in the solution, only 2 of the 9 pass. If I click Debug Checked Tests they all pass, and when I then hit Run again they also all pass.
The problem also shows up when I check the project into TFS. I have set up continuous integration: the project builds, runs the tests, and fails on exactly the same tests.
The error I'm getting back is: "A route named '' is already in the route collection."
Does anybody have any ideas why this might be happening?
In each class I have a [TestInitialize] block which is shown below:
[TestInitialize]
public void Setup()
{
    var builder = new TestControllerBuilder();
    controller = new MyController();
    builder.InitializeController(controller);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
}
I had a similar error in the classes when I hadn't included the TestControllerBuilder. Could it be that this code is not running correctly?
RouteTable.Routes is static and therefore will only be initialized once per AppDomain.
So every time you run a test, you are in effect trying to re-register the same routes over again.
You would probably be better off moving your route registration into a method marked with [AssemblyInitialize], so it only runs once at the start of the entire test run.
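For example (a minimal sketch for MSTest; the class and method names are just placeholders), the registration could live in an assembly-level setup method that is guaranteed to run exactly once:

[TestClass]
public class TestAssemblySetup
{
    // Runs exactly once per test run, before any test in the assembly.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}

Alternatively, if you prefer to keep the registration in [TestInitialize], calling RouteTable.Routes.Clear() before RegisterRoutes also avoids the "already in the route collection" error.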
I'm using xUnit and I have some load tests which I'd like to execute only under certain conditions:
Always execute them on our build server.
Do not execute them locally in the IDE when clicking Run All Tests in the test runner.
Execute all load tests locally in the IDE when explicitly triggered.
How can I achieve this? Is there any chance to accomplish this with xUnit, preferably without conditional compiles?
I think the easiest way to get around this, if not the prettiest, is to enclose the [Fact] attribute of the test you want to "skip" locally like this:
public class TestClass1
{
#if !DEBUG
    [Fact]
#endif
    public void Test1()
    {
        Assert.True(true);
    }
}
This removes the [Fact] attribute, which is what makes the method run as a test, as long as your IDE, editor, or whatever you use to run the code is set to build the "Debug" configuration, which is the default for most setups.
A better way to handle this is probably xUnit's built-in [Fact(Skip = "Reason")], but I have sometimes had trouble making that work with runners on GitLab.
And for your last case, where you do want the load tests to run locally, you can simply run the tests in the "Release" configuration.
EDIT: Regarding your answer and my own suggestion of using the built-in "skip" functionality in xUnit, perhaps this post could be of help.
You could then create your own attribute that checks the environment you run on:
https://josephwoodward.co.uk/2019/01/skipping-xunit-tests-based-on-runtime-conditions
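If I recall correctly, the approach in that post amounts to deriving from FactAttribute and setting Skip at runtime. A minimal sketch (the RUN_LOAD_TESTS environment variable name is just an assumption; use whatever flag your build server actually sets):

using System;
using Xunit;

public sealed class LoadTestFactAttribute : FactAttribute
{
    public LoadTestFactAttribute()
    {
        // Hypothetical variable name: substitute the flag your build server defines.
        if (Environment.GetEnvironmentVariable("RUN_LOAD_TESTS") != "true")
        {
            Skip = "Load tests only run when RUN_LOAD_TESTS is set to 'true'.";
        }
    }
}

Load tests are then marked [LoadTestFact] instead of [Fact], and locally they show up as skipped rather than silently disappearing from the results.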
Is it possible to determine how many (and which) tests have been selected to execute before the test runner actually runs them? This would be helpful for local testing: our logic currently generates test data configuration for every test suite configuration, and knowing which tests have been selected would let me create logic to generate configuration only for those tests.
This would be useful when writing tests and checking that they work, since we only want to generate test data for the selected tests.
Right now we have to comment out code to stop it from executing the test data configuration.
Thanks.
I think you are overthinking this a bit. You can split the setup work between an assembly-wide setup that runs only once, a namespace-wide setup that runs once before any test in that namespace, the constructor of a test fixture, the start of an individual test, and so on.
If you are reusing a docker instance and app pool for all the tests, initialize them in the assembly-wide setup so that it is only done once. Each test can then add whatever data it needs before it starts. If some of that data is shared between tests, set global flags to indicate what has already been done, and if something a test needs hasn't been set up yet, do that incremental setup before continuing with the test. Generally, though, this isn't required if you organize your tests into namespaces properly and use the namespace-wide setup for fixtures.
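As a rough NUnit-flavoured sketch of that split (the DockerEnvironment and TestData helpers here are hypothetical stand-ins for your own infrastructure code):

using NUnit.Framework;

// A [SetUpFixture] runs once before any test in its namespace (and once after all of them);
// placed outside of any namespace, it applies to the whole assembly.
[SetUpFixture]
public class EnvironmentSetup
{
    [OneTimeSetUp]
    public void StartSharedInfrastructure()
    {
        DockerEnvironment.StartOnce();   // hypothetical: container/app pool started a single time
    }

    [OneTimeTearDown]
    public void StopSharedInfrastructure()
    {
        DockerEnvironment.Stop();
    }
}

[TestFixture]
public class OrderTests
{
    [SetUp]
    public void AddTestData()
    {
        // Each test seeds only the data it needs, against the already-running environment.
        TestData.EnsureOrdersSeeded();   // hypothetical incremental seeding helper
    }

    [Test]
    public void CanPlaceOrder()
    {
        // test body...
    }
}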
I'm using NUnit3 in Visual Studio 2017 and doing TDD. Something really strange is happening since I updated my code to make my latest test pass.
Now, 3 of my other tests are failing when I click Run All Tests.
It is telling me that the actual and expected values in my Assert method are not equal.
However, when I put a breakpoint at the line with the Assert call and start debugging, the debugger shows that expected and actual have the same value, and the test then passes.
Am I doing something stupid or could there be a bug in VS2017 or NUnit or something?
This ever happen to anyone else?
[Edit: I should probably add that I have written each test as a separate class]
The failing tests share a resource that affects them all when tested together. Recheck the affected tests and their subjects.
You should also look into static fields or properties in the subjects. They tend to cause issues if not used properly when designing your classes.
Some subtle differences might occur. For instance, if a first test changes some state that affects the behavior of a second test, the outcome of that second test may not be the same as when it is run alone.
An idea to help understand a test failure when a breakpoint can't be used could be to add logging.
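For example, with NUnit you can write the compared values into the test output, which is captured in the results even when no debugger is attached (a small self-contained sketch):

using System.Linq;
using NUnit.Framework;

public class LoggingExampleTests
{
    [Test]
    public void Sum_ReturnsExpectedValue()
    {
        double expected = 10;
        double actual = new[] { 1.0, 2.0, 3.0, 4.0 }.Sum();

        // Logged output appears in the test result, so you can see
        // exactly what was compared when the run fails.
        TestContext.WriteLine($"expected: {expected}, actual: {actual}");

        Assert.AreEqual(expected, actual);
    }
}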
Anyway, to answer your questions:
This ever happen to anyone else?
Yes
Am I doing something stupid or could there be a bug in VS2017 or NUnit or something?
I bet it's neither: just a slightly more subtle case of test interaction.
I experienced a similar issue in Visual Studio 2017 using MSTest as the testing framework. Assertions in unit tests were failing when the tests were run but passing when they were debugged. This was occurring for a handful of unit tests but not all of them. In addition to the assertion failures, many of the unit tests were also failing with a System.TypeLoadException (a "Could not load type from assembly" error). I ultimately did the following, which solved the problem:
Open the Local.testsettings file in the solution
Go to the "Unit Test" settings
Uncheck the "Use the Load Context for assemblies in the test directory." checkbox
After taking these steps all unit tests started passing when run.
I encountered this phenomenon myself, but found the cause quite easily. More concretely, I tested some matrix calculations, and in my test class I defined data to calculate with as a class variable and performed my calculations with it. My matrix routines, however, modified the original data, so when I used "run tests" on the test class, the first test corrupted the data, and the next test could not succeed.
The sample code below is an attempt to show what I mean.
[TestFixture]
public class MyTestClass
{
    // the data to test with; note it is shared by every test in this fixture
    private double[] _data = new double[] { 1, 2, 3, 4 };

    [Test]
    public void TestMethod1()
    {
        MyMatrix m = new MyMatrix();
        // Method1() modifies _data...
        m.Method1(_data);
    }

    [Test]
    public void TestMethod2()
    {
        MyMatrix m = new MyMatrix();
        // here you test with the modified data and, in general, cannot expect success
        m.Method2(_data);
    }
}
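One way to avoid this kind of interaction is to give every test a fresh copy of the data, for example by creating it in a [SetUp] method instead of a field initializer (a small sketch):

[TestFixture]
public class MyTestClass
{
    private double[] _data;

    [SetUp]
    public void CreateFreshData()
    {
        // NUnit runs this before every test, so mutations made by one test
        // can no longer leak into the next.
        _data = new double[] { 1, 2, 3, 4 };
    }

    // ... tests using _data as before ...
}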
I am looking at setting up SpecFlow for various levels of tests, and as part of that I want to be able to filter which tests run.
For example, say I want to do a full GUI test run, where I build up the dependencies for GUI testing on a dev environment and run all the specs tagged #gui, with the steps executed through the GUI. From the same script I also want to run only the tests tagged #smoke, set up any dependencies needed for a deployed environment, and execute the steps through the API.
I'm aware that you can filter by tag when running through the SpecFlow runner, but I also need to change the way each test works in the context of the test run. And I want this change of behaviour to be switched with a single config setting or command line arg when run on a build server.
So my solution so far is to have a build configuration for each kind of test run, and config transforms so I can inject behaviour into SpecFlow when the test run starts up. But I am not sure of the right way to filter by tag as well.
I could do something like this:
[BeforeFeature]
public static void CheckCanRun()
{
    if (TestCannotBeRunInThisContext())
    {
        ScenarioContext.Current.Pending();
    }
}
I think this would work (it would not run the feature), but the test would still show up in my test results, which would be messy if I'm filtering out most of the tests with my tag. Is there a way I can do this that removes the feature from the run entirely?
In short, no, I don't think there is any way to do what you want other than what you have outlined above.
How would you exclude the tests from being run if they were just normal unit tests?
In ReSharper's runner you would probably create a test session with only the tests you wanted to run in. On the CI server you would only run tests in a specific dll or in particular categories.
SpecFlow is a unit test generation tool. It generates unit tests in the flavour specified in the config. The runner still has to decide which of those tests to run, so the same principles for choosing tests apply to SpecFlow tests.
Placing them into categories and running only those categories is the simplest way; more fine-grained programmatic control is not really applicable. What you are asking for is basically "run this test, but let me decide in the test whether I want it to run", which doesn't really make sense.
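For context, SpecFlow already turns the tags on a feature or scenario into categories on the generated unit tests, so the generated code looks roughly like this (a simplified sketch for the NUnit flavour; the feature and method names are made up):

[NUnit.Framework.TestFixture]
[NUnit.Framework.Category("gui")]
public partial class CheckoutFeature
{
    [NUnit.Framework.Test]
    [NUnit.Framework.Category("smoke")]
    public virtual void CustomerCanCheckOut()
    {
        // ... generated step calls ...
    }
}

Your runner's normal category filtering (a ReSharper test session, or the CI runner's include/exclude category options) then decides which of those generated tests actually execute.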
In my MSTest UnitTest project, before running any tests, I need to execute some commands. Is there a feature, kind of like Global.asax is for web based projects, that will let me kick off something before any tests run?
I should make it clear that when I say "execute some commands", I don't mean DOS commands, but execute some code.
If I understand correctly, you need to have some initialization code run before you start your tests. If that is indeed the case you should declare a method inside your unit-test class with the ClassInitializeAttribute like this:
[ClassInitialize]
public static void ClassSetUp(TestContext context)
{
    // initialization code goes here...
}
Edit: there is also the AssemblyInitializeAttribute, which runs before any other tests in the assembly.
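Both have required signatures: the methods must be public and static, live in a class marked [TestClass], and [AssemblyInitialize] (like [ClassInitialize]) takes a TestContext parameter. A minimal sketch:

[TestClass]
public class GlobalSetup
{
    // Runs once, before any test in the assembly.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        // one-time initialization code goes here...
    }

    // Runs once, after all tests in the assembly have finished.
    [AssemblyCleanup]
    public static void CleanUp()
    {
    }
}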
Unit test frameworks usually support set up and "tear down" methods for both the entire test fixture and individual tests. MSTest lets you specify which methods to run when with these attributes:
[ClassInitialize()]
public static void ClassInitialize(TestContext context) {
    // MSTest runs this code once before any of your tests
}

[ClassCleanup()]
public static void ClassCleanUp() {
    // Runs this code once after all your tests are finished
}

[TestInitialize()]
public void TestInitialize() {
    // Runs this code before every test
}

[TestCleanup()]
public void TestCleanUp() {
    // Runs this code after every test
}
Having said that, be careful with the class initialize and cleanup methods if you're running ASP.NET unit tests. As it says in the ClassInitializeAttribute documentation:
This attribute should not be used on ASP.NET unit tests, that is, any test with the [HostType("ASP.NET")] attribute. Because of the stateless nature of IIS and ASP.NET, a method decorated with this attribute may be called more than once per test run.
Go to the properties of your project and then the Debug tab; there you can specify arguments.
EDIT
In the Debug section of the project properties you can set an external program to do certain things for you when you start debugging. This is triggered when you launch an instance of your test project. You can also specify command line arguments in the command line arguments box.
For example, since I use NUnit, I specify the NUnit executable as the external program and give the location of the test .dll in the command line arguments.