I've got Jenkins downloaded and set up on my local machine. So far I've been able to have it sync with my GitHub repo, pull the code down, build successfully, and then run all the tests. I do all of this using the built-in Execute Windows batch command build option.
I'm trying to limit the test execution to just a specific class. I've tried several different --where variations and various combinations of things I've found online, but every time it just reports that 0 tests were executed.
Here's what my current command looks like (ignore the ugliness of the directory as I'm just testing this right now)
C:\Users\<username>\Downloads\NUnit.Console-3.16.1\bin\nunit3-console.exe "%WORKSPACE%\tree\main\DTAF\DTAF\DTAF\bin\debug\DTAF.dll" --where "class == BaseActionsTests"
I know this is the correct location for the test class because I can navigate to it directly to see it, and also because if I remove the --where then it runs every test...including the tests in BaseActionsTests.
So I was able to figure out what I needed to do. After looking at the documentation some more, I decided to try filtering on name just to see what would happen and, lo and behold, it ran every test in the specified test class.
C:\Users\<username>\Downloads\NUnit.Console-3.16.1\bin\nunit3-console.exe "%WORKSPACE%\tree\main\DTAF\DTAF\DTAF\bin\debug\DTAF.dll" --where "name == ElementActionsTests"
Am I wrong in thinking the documentation is misleading regarding the class and name keywords? It looks like class should run the whole test class and name should only run a single test.
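For anyone hitting the same wall: in NUnit's test selection language, class compares against the fully qualified type name (namespace included), while name compares against the simple name only, which is why the name filter matched and the bare class filter did not. A sketch of the fixed command, where the DTAF.Tests namespace is my assumption; substitute whatever namespace the fixture actually lives in:

```shell
REM The namespace "DTAF.Tests" is a guess -- use your fixture's real namespace.
REM "class" must match the FULL type name; "name" matches just the simple name.
nunit3-console.exe "%WORKSPACE%\tree\main\DTAF\DTAF\DTAF\bin\debug\DTAF.dll" --where "class == 'DTAF.Tests.BaseActionsTests'"
```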
Related
I have some tests that do write operations on a database. I know that's not really unit testing, but let's leave that aside.
To give every test a clean workspace to work on, I roll back all transactions done so far. However, I randomly get concurrency errors due to database locks that cannot be established.
This is my code:
Test1.dll
[TestFixture]
public class MyTest1
{
    [OneTimeSetUp]
    public void SetupFixture()
    {
        myWorkspace.StartEditing(); // this will establish a lock on the underlying database
    }

    [OneTimeTearDown]
    public void TearDownFixture()
    {
        myWorkspace.Rollback();
    }
}
The same code also exists in another test assembly; let's call it Test2.dll. Now when I use the NUnit console runner with nunit3-console Test1.dll Test2.dll, I get the following error:
System.Runtime.InteropServices.COMException : Table 'GDB_DatabaseLocks' cannot be locked; it is currently used by user 'ADMIN' (this is me) on host 'MyHost'
at ESRI.ArcGIS.Geodatabase.IWorkspaceEdit.StartEditing(Boolean withUndoRedo)
myWorkspace is a COM object (the ArcObjects interface IWorkspace) that refers to an MS Access database. I assume this happens because NUnit creates multiple threads that enter the above code at the same time. So I added the NonParallelizable attribute to both assemblies, without success. I also tried adding Apartment(ApartmentState.STA) to my assembly in order to execute everything in a single thread, which resulted in the console run never finishing.
What drives me nuts is that running my tests with ReSharper's test runner works perfectly. However, I have no clue how ReSharper starts NUnit. It seems ReSharper does not use nunit-console but the NUnit API instead.
Is there another way to force all my tests to run in a single thread? I use NUnit 3.10 and ArcGIS 10.8.
By default, the NUnit Console will run multiple test assemblies in parallel. Add --agents=1 to force the two assemblies to run sequentially, under a single agent.
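A minimal sketch of the command, using the assembly names from the question:

```shell
REM Force both assemblies to run one after the other under a single agent.
nunit3-console Test1.dll Test2.dll --agents=1
```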
Just to clarify some of the other things you tried as well...
[NonParallelizable] is used to prevent the parallelization of different tests within a single assembly. By default, tests within an assembly do not run in parallel, so adding this attribute when you haven't specifically added [Parallelizable] at a higher level will have no effect.
[Apartment(ApartmentState.STA)] can be added as an assembly-level attribute and does not have to be added per test, as mentioned in the comments. Check out the docs here: https://docs.nunit.org/articles/nunit/writing-tests/attributes/apartment.html
I have a situation where feature-toggling is implemented in an application and I want to have automated Selenium tests that are in place for both states of the application. (Both states are possible in a production state.) The tests that are executed would be based on the feature-toggling configuration that is present in the web application.
In a normal situation, using "Scenario Outline" will result in a generated test that can be executed via Test Explorer, and the CI process for an application can be configured to execute all of the tests that are part of the test assembly after it is built. In my situation, an application state could cause one test to pass and another test to fail (think lack of UI controls because of a feature state). I looked into hooks and I see that there is a BeforeScenario tag, but I don't think this gets me what I'm wanting to achieve so much as it allows me to do some setup before a scenario executes.
I usually include a step that sets the scenario to "pending".
Scenario: Some new feature
Given pending story 123
Given condition
When action that includes new feature
Then something
The step definition for Given pending story 123 would call:
Assert.Inconclusive($"Pending story {storyNumber} (https://dev.azure.com/company/project/workitems/edit/{storyNumber})");
This marks the test as inconclusive when using MSTest. We use Jenkins for continuous integration, and our builds continue to pass. It's also handy because I can provide a link to the story or feature in something like Azure DevOps, GitHub, Jira or their ilk.
This is not exactly a feature toggle, but it is easy to remove or comment out the Given pending line of a scenario in your topic branch to enable the test.
I am looking at setting up SpecFlow for various levels of tests, and as part of that I want to be able to filter which tests run.
For example, say I want to do a full GUI test run, where I build up the dependencies for GUI testing on a dev environment and run all the specs tagged @gui, with the steps executed through the GUI. From the same script I also want to be able to run only the tests tagged @smoke, setting up any dependencies needed for a deployed environment, with the steps executed through the API.
I'm aware that you can filter tags when running through the specflow runner, but I need to also change the way each test works in the context of the test run. Also I want this change of behaviour to be switched with a single config/command line arg when run on a build server.
So my solution so far is to have build configuration for each kind of test run, and config transforms so I can inject behaviour into specflow when the test run starts up. But I am not sure of the right way to filter by tag as well.
I could do something like this:
[Binding] // SpecFlow only discovers hooks in classes marked [Binding]
public class RunFilterHooks
{
    [BeforeFeature] // feature-level hooks must be static
    public static void CheckCanRun()
    {
        if (TestCannotBeRunInThisContext())
        {
            ScenarioContext.Current.Pending();
        }
    }
}
I think this would work (it would not run the feature), but the test would still come up in my test results, which would be messy if I'm filtering out most of the tests with my tag. Is there a way I can do this which removes the feature from running entirely?
In short, no. I don't think there is any way to do what you want other than what you have outlined above.
How would you exclude the tests from being run if they were just normal unit tests?
In ReSharper's runner you would probably create a test session containing only the tests you wanted to run. On the CI server you would only run tests in a specific DLL or in particular categories.
SpecFlow is a unit test generation tool. It generates unit tests in the flavour specified in the config. The runner still has to decide which of those tests to run, so the same principles for choosing which tests to run apply to SpecFlow tests.
Placing them into categories and running only those categories is the simplest way; more fine-grained programmatic control is not really applicable. What you are asking for is basically like saying 'run this test, but let me decide inside the test whether I want it to run', which doesn't really make sense.
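The category route is quite workable here, because SpecFlow turns scenario and feature tags into test categories on the generated tests, so a tag can be selected at the runner level. A sketch, assuming NUnit as the generated flavour and a placeholder assembly name:

```shell
REM Tags such as @smoke become the "smoke" category on the generated NUnit tests.
nunit3-console MyProject.Specs.dll --where "cat == smoke"
```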
I am trying to run my approval tests from NUnit under TeamCity.
[assembly: FrontLoadedReporter(typeof(TeamCityReporter))]
[Test]
[UseReporter(typeof(WinMergeReporter))]
public void Test()
{
}
Unfortunately, the test fails because ApprovalTests is trying to pick up the approved file from the C drive.
Test(s) failed. ApprovalTests.Core.Exceptions.ApprovalMissingException : Failed Approval: Approval File "C:\...approved.txt" Not Found.
Is there any way I can specify the right location for my approval files?
It turned out that TeamCityReporter was hiding the real reason for this issue.
Here is the result of a local run and the output of the approval test, with the solutions listed:
System.Exception : Could Not Detect Test Framework
Either: 1) Optimizer Inlined Test Methods
Solutions: a) Add [MethodImpl(MethodImplOptions.NoInlining)] b) Set
Build->Opitmize Code to False & Build->Advanced->DebugInfo to Full
or 2) Approvals is not set up to use your test framework. It currently
supports [NUnit, MsTest, MbUnit, xUnit.net, xUnit.extensions,
Machine.Specifications (MSpec)]
Solution: To add one use
ApprovalTests.Namers.StackTraceParsers.StackTraceParser.AddParser()
method to add implementation of
ApprovalTests.Namers.StackTraceParsers.IStackTraceParser with support
for your testing framework. To learn how to implement one see
http://blog.approvaltests.com/2012/01/creating-namers.html
It was tricky to catch because the local run is usually done under Debug, while deployment and tests run under Release. Nevertheless, I hope this question and answer will be helpful for somebody else.
I have three unit tests that cannot pass when run from the build server, because they rely on the login credentials of the user who is running the tests.
Is there any way (attribute???) I can hide these three tests from the build server, and run all the others?
Our build-server expert tells me that generating a vsmdi file that excludes those tests will do the trick, but I'm not sure how to do that.
I know I can just put those three tests into a new project, and have our build-server admin explicitly exclude it, but I'd really love to be able to just use a simple attribute on the offending tests.
You can tag the tests with a category, and then run tests based on category.
[TestCategory("RequiresLoginCredentials")]
public void TestMethod() { ... }
When you run mstest, you can specify /category:"!RequiresLoginCredentials"
There is an IgnoreAttribute. The post also lists the other approaches.
The other answers are old.
In modern Visual Studio (2012 and above), tests run with vstest rather than mstest.
The new command-line parameter is /TestCaseFilter:"TestCategory!=Nightly"
as explained in this article.
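A full invocation might look like this (the assembly name is a placeholder):

```shell
REM Skip everything in the Nightly category; run the rest.
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=Nightly"
```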
Open Test -> Windows -> Test List Editor.
There you can include or hide tests.
I figured out how to filter the tests by category in the build definition of VS 2012. I couldn't find this information anywhere else.
In the Process tab, under Build process parameters -> Automated Tests -> Test Source, in the Test Case Filter field, you need to write TestCategory=MyTestCategory (no quotes anywhere).
Then in the test source file you need to add the TestCategory attribute. I have seen a few ways to do this, but what works for me is adding it in the same attribute block as TestMethod, as follows.
[TestCategory("MyTestCategory"), TestMethod()]
Here you do need the quotes
When I run unit tests from a VS build definition (which is not exactly MSTest), I specify the following in the Criteria tab of the Automated Tests property:
TestCategory!=MyTestCategory
All tests with the category MyTestCategory then get skipped.
My preferred way to do this is to have two kinds of test projects in my solution: one for unit tests, which can be executed in any context and should always pass, and another for integration tests, which require a particular context to run properly (user credentials, database, web services, etc.). My test projects follow a naming convention (e.g. businessLogic.UnitTests vs businessLogic.IntegrationTests), and I configure my build server to only run the unit tests (*.UnitTests). This way, I don't have to comment out Ignore attributes when I want to run the integration tests, and I find it easier than editing a test list.
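On the build server, that naming convention reduces to a simple file filter. A sketch as a Windows batch step, assuming vstest.console and Release builds (paths are illustrative):

```shell
REM Run every *.UnitTests assembly found under the workspace;
REM *.IntegrationTests.dll files are simply never picked up.
for /r %%f in (*.UnitTests.dll) do vstest.console.exe "%%f"
```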