I have three unit tests that cannot pass when run from the build server—they rely on the login credentials of the user who is running the tests.
Is there any way (attribute???) I can hide these three tests from the build server, and run all the others?
Our build-server expert tells me that generating a vsmdi file that excludes those tests will do the trick, but I'm not sure how to do that.
I know I can just put those three tests into a new project, and have our build-server admin explicitly exclude it, but I'd really love to be able to just use a simple attribute on the offending tests.
You can tag the tests with a category, and then run tests based on category.
[TestMethod, TestCategory("RequiresLoginCredentials")]
public void TestMethod() { ... }
When you run mstest, you can specify /category:"!RequiresLoginCredentials"
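For example, assuming the tests are compiled into an assembly called MyTests.dll (a placeholder name), the command line would look something like this:

mstest.exe /testcontainer:MyTests.dll /category:"!RequiresLoginCredentials"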
There is an IgnoreAttribute. The post also lists the other approaches.
Other answers are old.
In modern Visual Studio (2012 and above), tests are run with vstest rather than mstest.
The new command-line parameter is /TestCaseFilter:"TestCategory!=Nightly"
as explained in this article.
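For example, assuming an assembly called MyTests.dll (a placeholder name), the vstest equivalent looks something like this:

vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=Nightly"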
Open Test->Windows->Test List Editor.
There you can include or hide tests.
I figured out how to filter the tests by category in the build definition of VS 2012. I couldn't find this information anywhere else.
In the Process tab, under Build process parameters > Automated Tests > Test Source, in the Test Case Filter field, you need to write TestCategory=MyTestCategory (no quotes anywhere).
Then in the test source file you need to add the TestCategory attribute. I have seen a few ways to do this, but what works for me is adding it in the same attribute block as TestMethod, as follows.
[TestCategory("MyTestCategory"), TestMethod()]
Here you do need the quotes
When I run unit tests from VS build definition (which is not exactly MSTest), in the Criteria tab of Automated Tests property I specify:
TestCategory!=MyTestCategory
All tests with the category MyTestCategory are then skipped.
My preferred way to do that is to have two kinds of test projects in my solution: one for unit tests, which can be executed from any context and should always pass, and another for integration tests that require a particular context to run properly (user credentials, a database, web services, etc.). My test projects use a naming convention (e.g. businessLogic.UnitTests vs businessLogic.IntegrationTests) and I configure my build server to only run the unit tests (*.UnitTests). This way, I don't have to comment out Ignore attributes when I want to run the integration tests, and I find it easier than editing a test list.
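As a rough sketch of the build-server side of this (the exact field name depends on your build system; the pattern below is the minimatch style used by the TFS/Azure DevOps test tasks), the test assemblies setting would only pick up the unit-test projects:

**\*.UnitTests.dll
!**\obj\**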
I am using C# and the NUnit testing framework.
I have to get the count and names of all ignored test cases.
I tried checking with TestContext, and I am able to get the name of a single test case.
But I want the count of all ignored test cases within my project, and the names of those test cases.
[Test]
[Ignore("Global not supported")]
public void AddUser()
{
}
I am ignoring tests as shown in the code above. I have many tests in my project which are ignored. Can you please help?
When you use the test adapter under Test Explorer, ignored tests appear in the display marked as warnings. NUnit considers ignored tests to be a Bad Thing, which is why you get a warning. Ideally, you should not ignore tests in NUnit unless you want them to be considered as Bad! (There are, of course, other ways to skip tests that are neutral as far as the outcome goes.)
So, if you are using Test Explorer to run the tests, you should be seeing those warnings. If you group tests by Outcome, you will see a total. Test Explorer also provides a general summary by Outcome. However, AFAIK, there is no report suitable for printing.
If you are running tests using some other runner (not Test Explorer), which makes use of the NUnit adapter, then the summary you see will depend on what that runner provides.
What NUnit itself does provide is a summary report in XML format (it defaults to TestResult.xml), which you can use to produce any desired report. All the info is there. There are a number of third-party products that can produce a report from this XML, or you can write a simple program yourself.
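If you want the count and names programmatically, one option is a small sketch like the one below that parses that XML yourself. It assumes the NUnit 3 result format, where ignored test cases are written as test-case elements with result="Skipped" and label="Ignored"; adjust the path and the filter if your runner writes something different.

using System;
using System.Linq;
using System.Xml.Linq;

class IgnoredTestReport
{
    static void Main()
    {
        // Load the result file written by the NUnit console runner (default name).
        var doc = XDocument.Load("TestResult.xml");

        // In the NUnit 3 result format, ignored test cases are reported as
        // skipped test-cases carrying the label "Ignored".
        var ignored = doc.Descendants("test-case")
            .Where(tc => (string)tc.Attribute("result") == "Skipped"
                      && (string)tc.Attribute("label") == "Ignored")
            .ToList();

        Console.WriteLine("Ignored tests: " + ignored.Count);
        foreach (var tc in ignored)
        {
            Console.WriteLine((string)tc.Attribute("fullname"));
        }
    }
}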
I'm using NUnit.ConsoleRunner.3.8.0 to run NUnit 3.10.1 tests.
The problem is: if specific tests are in the run filter, I have to configure my SUT accordingly. It is quite a painful process, so I would like to do it only if some specific test is actually going to be run.
Is there any way to get the list of tests that will be run by the console runner, ideally in a SetUpFixture?
If any tests in the same namespace (or a descendant namespace) as the SetUpFixture are selected, then the SetUpFixture will be run. If none are selected, it will not be run.
Since this is how SetUpFixtures work, you should organize your tests so that only those that need this configuration step are in the namespaces covered by the SetUpFixture.
In my experience working with teams, I have found that they are sometimes hampered by standards (imposed or self-chosen) that require the test namespaces to conform to a particular design. This is a bad idea when using a system like NUnit that depends on the namespace structure to control how tests are executed.
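As a minimal sketch of that layout (the namespace and type names below are just illustrative), the SetUpFixture sits in the namespace whose tests need the expensive configuration, so it only executes when the runner's filter selects at least one test under that namespace:

using NUnit.Framework;

namespace MyProduct.Tests.RequiresSut   // hypothetical namespace for tests that need the SUT configured
{
    [SetUpFixture]
    public class SutConfiguration
    {
        [OneTimeSetUp]
        public void ConfigureSut()
        {
            // The expensive SUT configuration goes here. NUnit only runs this
            // when the console runner's filter selects at least one test in
            // this namespace or a descendant namespace.
        }

        [OneTimeTearDown]
        public void CleanUpSut()
        {
            // Undo the configuration after the last selected test in this
            // namespace has finished.
        }
    }

    [TestFixture]
    public class OrderProcessingTests   // hypothetical fixture that depends on the configured SUT
    {
        [Test]
        public void ProcessesAnOrder()
        {
            Assert.Pass();
        }
    }
}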
I am looking at setting up SpecFlow for various levels of tests, and as part of that I want to be able to filter which tests run.
For example, say I want to do a full GUI test run, where I build up the dependencies for GUI testing on a dev environment and run all the specs tagged #gui, with the steps executed through the gui. Also from the same script I want to run only the tests tagged #smoke, and set up any dependencies needed for a deployed environment, with the steps executed through the api.
I'm aware that you can filter tags when running through the SpecFlow runner, but I also need to change the way each test works in the context of the test run. I also want this change of behaviour to be switched with a single config/command-line argument when run on a build server.
So my solution so far is to have a build configuration for each kind of test run, and config transforms so I can inject behaviour into SpecFlow when the test run starts up. But I am not sure of the right way to filter by tag as well.
I could do something like this:
[BeforeFeature]
public static void CheckCanRun()   // SpecFlow feature-level hooks must be static
{
    if (TestCannotBeRunInThisContext())
    {
        ScenarioContext.Current.Pending();
    }
}
I think this would work (it would not run the feature), but the test would still come up in my test results, which would be messy if I'm filtering out most of the tests with my tag. Is there a way I can do this which removes the feature from running entirely?
In short, no, I don't think there is any way to do what you want other than what you have outlined above.
How would you exclude the tests from being run if they were just normal unit tests?
In ReSharper's runner you would probably create a test session containing only the tests you want to run. On the CI server you would only run tests in a specific dll or in particular categories.
SpecFlow is a unit test generation tool. It generates unit tests in the flavour specified in the config. The runner still has to decide which of those tests to run, so the same principles of choosing the tests to run apply to SpecFlow tests.
Placing them into categories and running only those categories is the simplest way, but having more fine-grained programmatic control over that is not really applicable. What you are asking to do is basically like saying 'run this test, but let me decide in the test if I want it to run', which doesn't really make sense.
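For SpecFlow specifically, the tags on a scenario are typically emitted as categories on the generated tests (this is how the MSTest and NUnit generators behave), so filtering by category on the build server usually covers this. For example, assuming the specs are compiled into Specs.dll (a placeholder name), running only the scenarios tagged @smoke would look something like:

vstest.console.exe Specs.dll /TestCaseFilter:"TestCategory=smoke"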
I have a web test, let's call it WebTestParent that calls another web test, WebTestChild.
There is no problem when I run it from the IDE, but when I try running it from the command line using mstest, like this:
C:\MySolution> mstest.exe /testmetadata:"Tests.vsmdi" /test:"WebTestParent.webtest" /testsettings:"local.testsettings"
I get this error:
Cannot find the test 'WebTestChild' with storage 'C:\MySolution\somesubfolder\WebTestChild.webtest'.
The file local.testsettings has "Enable deployment" checked.
Did anyone experience this and maybe found a solution?
Thanks.
I am not familiar with web tests, but I have done this with unit tests. I believe that your problem is not the call from one test to the other. Maybe your 'WebTestChild' (or both tests) is not included in the 'TestList' in your 'Tests.vsmdi' file.
If you do not have a list for your tests, you should create one. Check here for more details.
I know Visual Studio offers some Unit Testing goodies. How do I use them, how do you use them? What should I know about unit testing (assume I know nothing).
This question is similar but it does not address what Visual Studio can do, please do not mark this as a duplicate because of that. Posted as Community Wiki because I'm not trying to be a rep whore.
Easily the most significant difference is that the MSTest support is built into Visual Studio and provides unit testing, code coverage and mocking support directly. Doing the same types of things with external (third-party) unit test frameworks generally requires multiple frameworks (a unit testing framework and a mocking framework) and other tools to do code coverage analysis.
The easiest way to use the MSTest unit testing tools is to open the file you want to create unit tests for, right-click in the editor window and choose the "Create Unit Tests..." item from the context menu. I prefer putting my unit tests in a separate project, but that's just personal preference. Doing this will create a sort of "template" test class, which will contain test methods to allow you to test each of the functions and properties of your class. At that point, you need to determine what it means for the test to pass or fail (in other words, determine what should happen given a certain set of inputs).
Generally, you end up writing tests that look similar to this:
string stringVal = "This";
Assert.IsTrue(stringVal.Length == 4);
This says that for the variable named stringVal, the Length property should be equal to 4 after the assignment.
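In a complete MSTest test class, that assertion would sit inside a test method, roughly like this (the class and method names are just illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StringTests
{
    [TestMethod]
    public void Length_OfThis_IsFour()
    {
        string stringVal = "This";

        // The test passes only if this assertion holds.
        Assert.IsTrue(stringVal.Length == 4);

        // Assert.AreEqual(4, stringVal.Length) would give a clearer failure message.
    }
}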
The resources listed in the other thread should provide a good starting point to understanding what unit testing is in general.
The unit testing structure in VS is similar to NUnit in its usage. One interesting (and useful) feature of it does differ from NUnit significantly: VS unit testing can be used with code that was not written with unit testing in mind.
You can build a unit testing framework after an application is written because the test structure allows you to externally reference method calls and use ramp-up and tear-down code to prep the test environment. For example: if you have a method within a class that uses resources that are external to the method, you can create them in the ramp-up method (which VS creates for you) and then test it in the unit test class (also created for you by VS). When the test finishes, the tear-down method (yet again... provided for you by VS) will release resources and clean up. This entire process exists outside of your application and thus does not interfere with the code base.
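In MSTest terms, those ramp-up and tear-down hooks are the ClassInitialize/ClassCleanup methods (run once per test class) and TestInitialize/TestCleanup methods (run per test). A minimal sketch, using a temporary folder as the stand-in external resource (the class and resource names are illustrative only):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ReportGeneratorTests
{
    private static string workingFolder;   // external resource shared by the tests

    // "Ramp-up": runs once before any test in this class, so external
    // resources can be created even if the production code never does so.
    [ClassInitialize]
    public static void RampUp(TestContext context)
    {
        workingFolder = Path.Combine(Path.GetTempPath(), "ReportGeneratorTests");
        Directory.CreateDirectory(workingFolder);
    }

    // "Tear-down": runs once after the last test in this class and cleans up.
    [ClassCleanup]
    public static void TearDown()
    {
        Directory.Delete(workingFolder, recursive: true);
    }

    [TestMethod]
    public void GeneratesReportIntoWorkingFolder()
    {
        // Test code that exercises the method under test using the resource
        // created above.
        Assert.IsTrue(Directory.Exists(workingFolder));
    }
}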
The VS unit testing framework is actually very well implemented and easy to use. Best of all, you can use it with an application that was not designed with unit testing in mind (something that is not easy with NUnit).
The first thing I would do is download a copy of TestDriven.Net to use as a test runner. This will add a right-click menu that will let you run individual tests by right-clicking in the test method and selecting Run Test(s). This also works for all tests in a class (right-click in the class, but outside a method), a namespace (right-click on the project or in the namespace outside a class), or an entire solution (right-click on the solution). It also adds the ability to run tests with coverage (built-in or NCover) or the debugger from the same right-click menu.
As far as setting up tests, generally I stick with one test project per project and one test class per class under test. Sometimes I will create test classes for aspects that run across a lot of classes, but not usually. The typical way I create them is to first create the skeleton of the class: no properties, no constructor, but with the first method that I want to test. This method simply throws a NotImplementedException.
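For example, the skeleton might start out as nothing more than this (the names are purely illustrative):

public class AccountService   // hypothetical class under test
{
    public decimal GetBalance(int accountId)
    {
        // Not implemented yet; the first test will drive the real behaviour.
        throw new System.NotImplementedException();
    }
}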
Once the class skeleton is created, I use the right-click Create Unit Tests command in the method under test. This brings up a dialog that lets you create a new test project or select an existing one. I create, and name appropriately, a new test project and have the wizard create the classes. Once this is done you may want to also create the private accessor functions for the class in the test project as well. Sometimes these need to be updated (recreated) if your class changes substantially.
Now you have a test project and your first test. Start by modifying the test to define a desired behavior of the method. Write enough code to (just barely) pass the test. Continue writing tests and code, specifying more behavior for the method, until all of the behavior for the method is defined. Then move on to the next method or class as appropriate until you have enough code to complete the feature that you are working on.
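Continuing the illustrative skeleton above, the first test would state the desired behaviour, fail while the method still throws NotImplementedException, and then drive just enough implementation to pass:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AccountServiceTests
{
    [TestMethod]
    public void GetBalance_ForNewAccount_IsZero()
    {
        var service = new AccountService();

        // Red while GetBalance still throws NotImplementedException;
        // green once the minimal implementation returns 0.
        Assert.AreEqual(0m, service.GetBalance(1));
    }
}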
You can add more and different types of tests as required. You can also set up your source code control to require that some or all tests pass before check in.