C# NUnit: get count and names of all ignored tests

I am using C# with the NUnit testing framework.
I need to get the count and the names of all ignored test cases.
Using TestContext I am able to get the name of a single test case, but I want the count of all ignored test cases within my project, along with their names.
[Test]
[Ignore("Global not supported")]
public void AddUser()
{
}
I am ignoring tests as shown in the code above. I have many tests in my project which are ignored. Can you please help?

When you use the test adapter under Test Explorer, ignored tests appear in the display marked as warnings. NUnit considers ignored tests to be a Bad Thing, which is why you get a warning. Ideally, you should not ignore tests in NUnit unless you want them to be considered as Bad! (There are, of course, other ways to skip tests that are neutral as far as the outcome goes.)
So, if you are using Test Explorer to run the tests, you should be seeing those warnings. If you group tests by Outcome, you will see a total. Test Explorer also provides a general summary by Outcome. However, AFAIK, there is no report suitable for printing.
If you are running tests using some other runner (not Test Explorer), which makes use of the NUnit adapter, then the summary you see will depend on what that runner provides.
What NUnit itself does provide is a summary report in XML format (by default TestResult.xml), which you can use to produce any desired report. All the info is there. There are a number of third-party products that can produce a report from this XML, or you can write a simple program yourself.
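As a minimal sketch of such a program, assuming the NUnit 3 result format where ignored cases appear as test-case elements with result="Skipped" and label="Ignored" (check your own TestResult.xml for the exact attributes):

using System;
using System.Linq;
using System.Xml.Linq;

class IgnoredTestReport
{
    static void Main()
    {
        // Load the result file produced by the NUnit runner
        var doc = XDocument.Load("TestResult.xml");

        // Ignored tests are reported as skipped with the "Ignored" label
        var ignored = doc.Descendants("test-case")
            .Where(tc => (string)tc.Attribute("result") == "Skipped"
                      && (string)tc.Attribute("label") == "Ignored")
            .Select(tc => (string)tc.Attribute("fullname"))
            .ToList();

        Console.WriteLine("Ignored test count: " + ignored.Count);
        ignored.ForEach(Console.WriteLine);
    }
}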

Related

Get tests to be executed in [SetUpFixture] while running via nunit3-console.exe

I'm using NUnit.ConsoleRunner.3.8.0 to run NUnit 3.10.1 tests.
The problem is: if there are specific tests in the run filter, I need to configure my SUT accordingly. That is quite a painful process, so I would like to do it only if some specific test is actually going to run.
Is there any way to receive the list of tests the console runner is going to run, ideally in a SetUpFixture?
If any tests in the same namespace (or a descendant namespace) as the SetUpFixture are selected, the SetUpFixture will be run. If none are selected, it will not be run.
Since this is how SetUpFixtures work, you should organize your tests so that only those that need this configuration step are in the namespaces covered by the SetUpFixture.
In my experience working with teams, I have found that they are sometimes hampered by standards (imposed or self-chosen) that require the test namespaces to conform to a particular design. This is a bad idea when using a system like NUnit that depends on the namespace structure to control how tests are executed.
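As a rough illustration (the namespace and class names below are made up), keeping the SetUpFixture and the tests that need the expensive configuration together in one namespace looks like this:

using NUnit.Framework;

namespace MyTests.RequiresSut
{
    // Runs once, but only if at least one test in MyTests.RequiresSut
    // (or a descendant namespace) is selected by the filter.
    [SetUpFixture]
    public class SutSetup
    {
        [OneTimeSetUp]
        public void ConfigureSut()
        {
            // expensive SUT configuration here
        }

        [OneTimeTearDown]
        public void ResetSut()
        {
            // undo the configuration here
        }
    }
}

namespace MyTests.RequiresSut.Users
{
    [TestFixture]
    public class UserTests
    {
        [Test]
        public void CanAddUser()
        {
            // a test that relies on the configured SUT
        }
    }
}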

Map Testcase ID with NUnit

I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my testcases.
My ideal situation is to be able to decorate each test case with the corresponding testcase ID in test rail and when it comes to report the test result in TestRail, I can just use the Case id. Currently I'm doing this via matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
LoginPage login = new LoginPage(Driver);
login.NavigateToLogInPage();
login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
var testcaseId = TestContext.CurrentContext.TestCaseid;
var result = TestContext.CurrentContext.Result.Outcome;
//Static method to report to Testrail using API
Report.ReportTestResult(testcaseId, result);
}
I've just made up the testcaseid attribute, but this is what I'm looking for.
[TestCaseId("001")]
I may have missed this if it already exists, or how do I go about possibly extending NUnit to do this?
You can use PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
...
}
[TearDown]
public void TearDown()
{
    // The Properties indexer returns the collection of values for the key;
    // Get("TestCaseId") returns the first value, which is what we want here.
    var testCaseId = TestContext.CurrentContext.Test.Properties.Get("TestCaseId");
}
In addition, you can create a custom property attribute - see the NUnit documentation.
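As a minimal sketch of such a custom attribute (TestCaseIdAttribute is the name you invented, not something NUnit ships): deriving from PropertyAttribute stores the value under a property key taken from the attribute's class name, so [TestCaseId("001")] becomes the property "TestCaseId" readable from TestContext.

using System;
using NUnit.Framework;

// Hypothetical attribute; NUnit derives the property name "TestCaseId"
// from the class name, so the value can be read via Test.Properties.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
public class TestCaseIdAttribute : PropertyAttribute
{
    public TestCaseIdAttribute(string id) : base(id) { }
}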
For many years I recommended that people not do this: mix test management code into the tests themselves. It's an obvious violation of the single responsibility principle and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was in part due to the fact that NUnit addins (the approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
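As a very rough sketch (not a complete implementation; the TestRail call itself is left to your own reporting code), an engine event listener receives each progress report as an XML string and can pick out the result and any custom properties:

using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension]
public class TestRailEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        var xml = new XmlDocument();
        xml.LoadXml(report);

        // Only completed test cases carry a final result
        if (xml.DocumentElement.Name != "test-case")
            return;

        var result = xml.DocumentElement.GetAttribute("result");
        var idNode = xml.SelectSingleNode("//properties/property[@name='TestCaseId']");
        var testCaseId = idNode?.Attributes["value"]?.Value;

        if (testCaseId != null)
        {
            // Your own TestRail API wrapper goes here, e.g.
            // Report.ReportTestResult(testCaseId, result);
        }
    }
}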
Note that there are some outstanding issues if you want to use extensions under the Visual Studio adapter, which we are working on. Right now, you'll get the best experience using the console runner.

A Faster Way to Generate A Series of Ordered Tests Without Renaming My Unit Tests

Question
For a large number of unit tests, is there an easy way to generate an "OrderedTest" file for each test class within my project, allowing me to run each test method in the order that it appears within its respective class?
Background
I have a large number (1000+) of Selenium functional tests contained within a unit test project. In my project each class represents a page and each "unit test" represents one of my functional tests. Typically the tests are run in the following manner:
Create - the complex object within the page (10ish tests)
Manipulate/Edit - the already-created complex object (100ish tests)
Tear-down/Delete - remove the complex object piece by piece until the test page is restored to its original state (10ish tests)
Due to the many complexities and load times of each page, each one of these tests (really just the groups) must be run in a specific order within its class. I understand that it is "not optimal" to structure my tests in this manner, but unfortunately I have not found an alternative design that lets my tests run in any reasonable amount of time.
I previously used ReSharper's test tool to run these tests; with it I'm able to run each test in the order that it appears in each class. Now I'm attempting (for various irrelevant reasons) to use MSTest to run my tests. MSTest runs tests by default in a "non-deterministic" order.
I would like to use "Ordered Tests" to enforce the order of each test. However, because of the convention I follow, my tests are not currently named in the order they are to be run; the order I need is defined by their position within their class.
So here's my problem: when I create a new "Ordered Test" file, the interface does not allow me to sort the "Available tests" by their "natural order" (the order in which they appear in their class), and it only lets me move each of the "Selected Tests" one position per click. In a small project this would just be annoying; with my 1000+ test project (and many more thousands on the way) it's very difficult to generate an ordered test for each of my classes because of the overhead of ordering every item by hand.
Follow Up
The simplest way I can think of to solve this is to write a script to generate "OrderedTest" files exactly as I've described in my question, but that strikes me as excessive; maybe I'm not following a standard (or recommended) path in structuring my Selenium tests. I would think that if many people had already gone down this path there would be more documentation on the subject, but the little I can find does not give me a clear path to follow.
I wonder if there is an alternate way that I can accomplish the same functionality with MSTest?

skip specflow specs under certain conditions

I am looking at setting up SpecFlow for various levels of tests, and as part of that I want to be able to filter which tests run.
For example, say I want to do a full GUI test run, where I build up the dependencies for GUI testing on a dev environment and run all the specs tagged #gui, with the steps executed through the gui. Also from the same script I want to run only the tests tagged #smoke, and set up any dependencies needed for a deployed environment, with the steps executed through the api.
I'm aware that you can filter tags when running through the specflow runner, but I need to also change the way each test works in the context of the test run. Also I want this change of behaviour to be switched with a single config/command line arg when run on a build server.
So my solution so far is to have build configuration for each kind of test run, and config transforms so I can inject behaviour into specflow when the test run starts up. But I am not sure of the right way to filter by tag as well.
I could do something like this:
[BeforeFeature]
public void CheckCanRun()
{
if(TestCannotBeRunInThisContext())
{
ScenarioContext.Current.Pending();
}
}
I think this would work (it would not run the feature), but the test would still show up in my test results, which would be messy if I'm filtering out most of the tests with my tag. Is there a way I can do this which removes the feature from the run entirely?
In short, no, I don't think there is any way to do what you want other than what you have outlined above.
How would you exclude the tests from being run if they were just normal unit tests?
In ReSharper's runner you would probably create a test session containing only the tests you wanted to run. On the CI server you would only run tests in a specific DLL or in particular categories.
Specflow is a unit test generation tool. It generates unit tests in the flavour specified in the config. The runner still has to decide which of those tests to run, so the same principles of choosing the tests to run above applies to specflow tests.
Placing them into categories and running only those categories is the simplest way, but having more fine-grained programmatic control of that is not really applicable. What you are asking to do is basically like saying 'run this test, but let me decide in the test whether I want it to run', which doesn't really make sense.
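To make that concrete (simplified, and the scenario name below is invented): with the NUnit flavour configured, a scenario tagged #smoke is generated roughly as a test method carrying a matching Category attribute, so the runner's ordinary category filtering applies to it.

// Simplified version of what the SpecFlow/NUnit generator emits
// for a scenario tagged @smoke (real generated code has more plumbing)
[NUnit.Framework.Test]
[NUnit.Framework.Category("smoke")]
public virtual void LoggingInWithValidCredentials()
{
    // generated step calls go here
}
// The runner then includes or excludes it like any other categorised test,
// e.g. nunit3-console --where "cat == smoke"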

MSTest - Hide some unit tests from build server

I have three unit tests that cannot pass when run from the build server—they rely on the login credentials of the user who is running the tests.
Is there any way (attribute???) I can hide these three tests from the build server, and run all the others?
Our build-server expert tells me that generating a vsmdi file that excludes those tests will do the trick, but I'm not sure how to do that.
I know I can just put those three tests into a new project, and have our build-server admin explicitly exclude it, but I'd really love to be able to just use a simple attribute on the offending tests.
You can tag the tests with a category, and then run tests based on category.
[TestCategory("RequiresLoginCredentials")]
public void TestMethod() { ... }
When you run mstest, you can specify /category:"!RequiresLoginCredentials"
There is also an IgnoreAttribute; the linked post lists the other approaches as well.
The other answers are old.
In modern visual studio (2012 and above), tests run with vstest and not mstest.
New command line parameter is /TestCaseFilter:"TestCategory!=Nightly"
as explained in this article.
Open Test->Windows->Test List Editor.
There you can include or hide tests.
I figured out how to filter the tests by category in the build definition of VS 2012. I couldn't find this information anywhere else.
In the Process tab, under Build process parameters > Automated Tests > Test Source, in the Test Case Filter field, you need to write TestCategory=MyTestCategory (no quotes anywhere).
Then in the test source file you need to add the TestCategory attribute. I have seen a few ways to do this, but what works for me is adding it in the same attribute block as TestMethod, as follows.
[TestCategory("MyTestCategory"), TestMethod()]
Here you do need the quotes
When I run unit tests from a VS build definition (which is not exactly MSTest), in the Criteria tab of the Automated Tests property I specify:
TestCategory!=MyTestCategory
All tests with category MyTestCategory then get skipped.
My preferred way to do that is to have two kinds of test projects in my solution: one for unit tests, which can be executed from any context and should always pass, and another one with integration tests that require a particular context to run properly (user credentials, database, web services, etc.). My test projects use a naming convention (e.g. businessLogic.UnitTests vs businessLogic.IntegrationTests) and I configure my build server to only run the unit tests (*.UnitTests). This way, I don't have to comment out IgnoreAttributes if I want to run the integration tests, and I find it easier than editing a test list.
