I am trying to run a SpecFlow scenario from code instead of through Test Explorer or the command line. Has anyone managed to do this?
From a scenario I can extract the method name and test method with recursion, but I cannot run this scenario method. It seems to need a proper initialize and teardown, but I couldn't manage to do this.
My first thought was to use the TechTalk.SpecFlow.TestRunner class, but it doesn't seem to have a scenario selection method.
EDIT on why I want to do this:
We want to run specific scenarios from TFS. It is very cumbersome to connect TestMethods to WorkItems in TFS, because:
You can only assign one testmethod to one workitem
For each workitem you have to search for the method name, which in itself is a hassle, because the list is very long with lots of SpecFlow scenarios.
When your specflow scenario gets a different name (which happens a lot), TFS cannot find the correct method anymore
SpecFlow Scenario Outlines become practically unusable, even though they are a very powerful feature.
I want to create a mechanism where each automated workitem gets the same method assigned. This method extracts the workitem id and then searches for and executes the scenario(s) tagged with this workitem.
I had a similar problem since my tests have some dependencies between Scenarios (shame on me, but it saves tons of copy-paste lines per Feature file). In most cases I would stick to isolated Scenarios of course.
I used reflection:
Find all Types with a DescriptionAttribute (aka Features)
Find their MethodInfos with a TestAttribute and DescriptionAttribute (aka Scenarios)
Store them in a Dictionary
Call them by "Title of the Feature/Title of the Scenario" with Activator.CreateInstance and Invoke
You have to set the (private) field "testRunner" according to your needs of course.
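A minimal sketch of that lookup, assuming the NUnit code-behind that SpecFlow generates ([TestFixture] plus [Description] on the feature class, [Test] plus [Description] on each scenario method); the class and method names below are mine, not part of SpecFlow:

using System;
using System.Collections.Generic;
using System.Reflection;
using NUnit.Framework;

// Rough sketch of the dictionary-based lookup described above.
public static class ScenarioInvoker
{
    private static readonly Dictionary<string, MethodInfo> Scenarios = Build();

    private static Dictionary<string, MethodInfo> Build()
    {
        var map = new Dictionary<string, MethodInfo>();
        foreach (var type in Assembly.GetExecutingAssembly().GetTypes())
        {
            var feature = type.GetCustomAttribute<DescriptionAttribute>();
            if (feature == null) continue;                              // not a generated feature class

            foreach (var method in type.GetMethods())
            {
                var scenario = method.GetCustomAttribute<DescriptionAttribute>();
                if (scenario == null || method.GetCustomAttribute<TestAttribute>() == null) continue;

                // NUnit 3 stores the description text in the attribute's property bag.
                var key = feature.Properties.Get("Description") + "/" + scenario.Properties.Get("Description");
                map[key] = method;
            }
        }
        return map;
    }

    public static void Run(string featureSlashScenarioTitle)
    {
        var method = Scenarios[featureSlashScenarioTitle];
        var fixture = Activator.CreateInstance(method.DeclaringType);

        // Before invoking you still have to initialize the generated plumbing,
        // e.g. run the generated setup methods or set the private "testRunner"
        // field via reflection, as mentioned above.
        method.Invoke(fixture, null);
    }
}

It can then be called like ScenarioInvoker.Run("My Feature/My Scenario"), provided the generated setup has been run first.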
Related
I'd like to have one file or class in which I can control which tests should be executed.
I know there is TestNG for Java, which can be used for that.
But I can't find anything for C# on Google or here on Stack Overflow related to this problem.
My current test framework has 17 automation tests (17 classes) and many more will be added this year.
Therefore I'd like to have one file/class/method in which I can set which tests should or should not be executed, as I don't want every test to be triggered when I'm actively working on 2-3 automation tests.
My first idea:
In NUnit we can set an [Ignore("reason")] attribute above the class or method, which skips that test.
Is it possible to control this attribute from outside the class?
I'd be happy and thankful for any other suggestions!
NUnit is the way to go for a single class with however many [Test] methods you create.
You can use the [Ignore] attribute you suggested to filter out tests that are not ready, or that you just don't want to execute.
You can also use event listeners, a very useful tool that helps you control the before, after, fail, pass etc. stages of all your tests. You can specify a condition on the unwanted tests and, in the "TestStarted" event listener, assert an ignore; this will affect all tests marked by that specific condition.
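This is not NUnit's listener extension API, but as a rough sketch of the Assert.Ignore idea, assuming a shared base class is acceptable (TestSelection and all names in it are made up): a [SetUp] can skip any test listed in one central place.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical central switchboard: list the tests you don't want to run.
public static class TestSelection
{
    public static readonly HashSet<string> Disabled = new HashSet<string>
    {
        "GoToGoogle",                // a single test method
        "My.Tests.CheckoutTests"     // a whole fixture (full class name)
    };
}

// Base class for the 17 test classes; this SetUp runs before every test.
public abstract class SelectableTestBase
{
    [SetUp]
    public void SkipIfDisabled()
    {
        var test = TestContext.CurrentContext.Test;
        if (TestSelection.Disabled.Contains(test.MethodName) ||
            TestSelection.Disabled.Contains(test.ClassName))
        {
            Assert.Ignore("Disabled centrally in TestSelection");
        }
    }
}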
I have a testing framework that has been converted to heavily utilize NUnit [Parallelizable]. I used to store contextual test data in the base class of the [TestFixture], where NUnit orchestrates hooks like [OneTimeSetUp], [TearDown], etc.
For example:
[Test]
public void GoToGoogle()
{
    var driver = new ChromeDriver();
    // do some stuff
    // Would like to pass data outside of test scope
    TestContext.CurrentContext.Test.Properties.Set("DriverUrl", driver.Url); // Obviously does not work
    Assert.Fail("This test should fail");
}
In the [TearDown] hook, I would like to get certain information about the test contextually, because not everything can be handled nicely in asserts.
[TearDown]
public void TearDown()
{
    var url = TestContext.CurrentContext.Test.Properties["DriverUrl"].ToString();
    var msg = $"Test encountered an error at URL: {url}";
    TestAPI.PushResult(Result.Fail, msg);
}
The code above involving the TestContext does not work for obvious reasons, but I am wondering if there is a best practice that allows me to pass data in this manner, keeping in mind [Parallelizable] and that I cannot scope test data or dependencies to the [TestFixture].
You say "for obvious reasons" but I'll first spell out the reasons why you cannot effectively set a property on the current test through TestContext. After all, other people just might be reading this. :-)
The Obvious Part
TestContext.CurrentContext.Test does not return the internal representation of a test from inside NUnit. Doing so would allow users to break NUnit in a variety of ways. In particular, TestContext.CurrentContext.Test.Properties returns a copy of the properties used within NUnit.
That copy of the properties is not read-only, so you are able to set properties on it. For that reason, one might expect to be able to set it in the [Test] method and access the value in the [TearDown].
Unfortunately, because of a minor implementation detail, that's not the case. In fact, each time you use TestContext.CurrentContext, an entirely new copy of the context is created. The only reason for this, I'm afraid, is that it was originally implemented that way and is a bit difficult to change in a non-breaking way.
As a result of this implementation detail, we lost an easy way for the three parts (SetUp, Test method, TearDown) of a test to communicate. Prior to the availability of parallel execution, it was possible to pass such information using members of the fixture class. That no longer works once tests are run in parallel.
Workarounds
Use Thread Local Storage to hold the retained information. SetUp, Test and TearDown all run on the same thread (a sketch of this appears after this list). Note that OneTimeSetUp and OneTimeTearDown will not generally use the same thread in a parallel execution environment.
If you are willing to run fixtures in parallel but not individual test cases, then you can still use class members to retain information. As a further step, apply the SingleThreadedAttribute to your fixture, forcing all the code associated with it (including one-time setup and teardown) to run on the same thread.
If you have many fixtures, which can run in parallel, the second approach may actually give you a better performance trade-off than other approaches. Unfortunately, not everyone can use it - at least not without a major reorganization of their tests. You have to look at what your own tests are doing.
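As a sketch of the first workaround (thread-local storage), with the driver replaced by a plain string so the example stays self-contained; your TestAPI.PushResult call from the question would go where the WriteLine is:

using System.Collections.Generic;
using System.Threading;
using NUnit.Framework;

[TestFixture]
[Parallelizable(ParallelScope.All)]
public class GoogleTests
{
    // SetUp, the test body and TearDown of a given test all run on the same
    // worker thread, so a ThreadLocal bag is visible to all three without
    // leaking into tests running in parallel on other threads.
    private static readonly ThreadLocal<Dictionary<string, object>> Bag =
        new ThreadLocal<Dictionary<string, object>>(() => new Dictionary<string, object>());

    [SetUp]
    public void SetUp() => Bag.Value.Clear();   // drop leftovers from the previous test on this thread

    [Test]
    public void GoToGoogle()
    {
        // "url" stands in for whatever the test produces (driver.Url in the question).
        var url = "https://www.google.com";
        Bag.Value["DriverUrl"] = url;
        Assert.Fail("This test should fail");
    }

    [TearDown]
    public void TearDown()
    {
        if (Bag.Value.TryGetValue("DriverUrl", out var url))
        {
            TestContext.Out.WriteLine($"Test ended at URL: {url}");
        }
    }
}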
Permanent Solution
That would be to modify NUnit so that properties are both writable and shareable, at least within a single fixture instance. There have already been a few feature requests out there to do that on the NUnit GitHub project. I'm no longer active on the framework project, so I don't know what the plans are. However, I think I can say that it's not likely to happen before a major version change, i.e. NUnit 4.0.
[AfterTestRun]
This hook is being called twice for me.
My C# code is correct, and at the end of each scenario I am saving my results to a ConcurrentBag.
Then I use the [AfterTestRun] hook to read the ConcurrentBag and save the data to a database. I see duplicated data, so I assume the hook is being called twice.
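For reference, a stripped-down version of that arrangement (the class, the stored values and the SaveToDatabase call are placeholders, not my real code):

using System.Collections.Concurrent;
using TechTalk.SpecFlow;

[Binding]
public class ResultCollector
{
    // Scenarios running in parallel all add to the same bag.
    private static readonly ConcurrentBag<string> Results = new ConcurrentBag<string>();

    private readonly ScenarioContext scenarioContext;

    public ResultCollector(ScenarioContext scenarioContext)
    {
        this.scenarioContext = scenarioContext;
    }

    [AfterScenario]
    public void CollectScenarioResult()
    {
        Results.Add(scenarioContext.ScenarioInfo.Title);
    }

    [AfterTestRun]
    public static void SaveResults()
    {
        // If this hook fires once per test thread instead of once per run,
        // the same bag gets written more than once, which matches the
        // duplication described above.
        SaveToDatabase(Results);
    }

    private static void SaveToDatabase(ConcurrentBag<string> results) { /* write rows */ }
}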
Additional Info:
I am using SpecRun to run my tests in parallel with the following profile
<Execution stopAfterFailures="1" retryCount="0" testThreadCount="3" testSchedulingMode="Sequential" />
Packages Installed
SpecFlow Version 2.0.0
SpecRun.SpecFlow 1.3.0
SpecRun.Runner 1.3.0
I am using SpecRun.SpecFlow to run my tests.
Also, how will this hook behave if one has multiple scenarios within each feature? Currently I have 1.
Thanks
Steps are global in SpecFlow, so inheritance to get step reuse is unnecessary. In fact, if you do inherit step classes, the steps they contain end up being duplicated, and you see the issue you have here. See this answer for additional details.
The simple solution is to place the [BeforeScenario] methods in their own class and not have your step classes inherit from it. If you need to share state between your steps and your before/after scenario hooks, use one of the state sharing techniques outlined here.
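A small sketch of that layout (ScenarioData and the step class are invented for the example): the hooks live in their own binding class, and state is shared with the steps through SpecFlow's built-in context injection rather than inheritance.

using System;
using TechTalk.SpecFlow;

// Simple POCO that SpecFlow's built-in DI container creates once per scenario
// and injects into every binding class that asks for it.
public class ScenarioData
{
    public DateTime StartedAt { get; set; }
}

// Hooks in their own binding class; no step class inherits from it,
// so they are registered exactly once.
[Binding]
public class ScenarioHooks
{
    private readonly ScenarioData data;

    public ScenarioHooks(ScenarioData data)
    {
        this.data = data;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        data.StartedAt = DateTime.UtcNow;
    }
}

[Binding]
public class SearchSteps
{
    private readonly ScenarioData data;

    public SearchSteps(ScenarioData data)
    {
        this.data = data;   // same instance the hook wrote to
    }

    [Then(@"the scenario has started")]
    public void ThenTheScenarioHasStarted()
    {
        // the hook and this step share the per-scenario ScenarioData instance
    }
}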
We have a large suite of tests with lots of test case functions. Due to running on many platforms/configurations, we now need to add logic to skip inapplicable tests. The "direct" way of doing so would involve adding this to the top of every single test method:
if (!CheckApplicable(/* Input some enum value of test type/platform, etc */))
{
    SkipTest();
    return;
}
I don't really want to paste that everywhere. I would prefer a data-driven approach such as attributes if possible. But can an attribute somehow tell it to run code at the start of a function? Any other tricks/techniques, maybe involving reflection, that are worth looking into and that wouldn't be more trouble than they're worth? If not, is there at least some clean way to collapse that type of logic into a single line, while still causing the early return?
Note that we need to exit with a return, not by throwing. Also, we cannot switch test frameworks (we are using TAEF).
Edit: Also note that the outcome of CheckApplicable currently is not known until it is checked at runtime at some point. It is not knowable statically and it cannot depend on the command line due to the peculiarities of a large infrastructure. We could run it once super early though, and somehow reconfigure the test pass then, or just run it before every test case.
I would encourage you to see if "/select" might work for you:
https://msdn.microsoft.com/en-us/library/windows/hardware/hh439686%28v=vs.85%29.aspx
Specifically:
Write a module that will return a true/false value, based on your criteria (a rough sketch of such a helper follows below)
Write your script with "/select:", using that dynamic result
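As a very rough illustration of the first step only (everything here, including the helper's name and its output convention, is an assumption and not part of TAEF): a tiny console program that evaluates the criteria once, so a wrapper script can build the /select query from its output.

using System;

// Hypothetical probe: prints the current configuration so a wrapper script can
// pick or build the appropriate te.exe /select query from it.
internal static class SelectionProbe
{
    private static int Main()
    {
        // Stand-in for whatever CheckApplicable() inspects at runtime.
        Console.WriteLine(Environment.Is64BitOperatingSystem ? "x64" : "x86");
        return 0;
    }
}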
There are some other variations of this question here at SO, but please read the entire question.
By just using fakes, we look at the constructor to see what kind of dependencies that a class have and then create fakes for them accordingly.
Then we write a test for a method by just looking at its contract (the method signature). If we can't figure out how to test the method by doing so, shouldn't we rather try to refactor the method (most likely breaking it up into smaller pieces) than look inside it to figure out how we should test it? In other words, it also gives us a quality check by doing so.
Aren't mocks a bad thing, since they require us to look inside the method that we are going to test, and therefore skip the whole "look at the signature as a critique" step?
Update to answer the comment
Say a stub then (just a dummy class providing the requested objects).
A framework like Moq makes sure that method A gets called with the arguments X and Y. And to be able to set up those checks, one needs to look inside the tested method.
Isn't the important thing (the method contract) forgotten when setting up all those checks, as the focus is shifted from the method signature/contract to looking inside the method and creating the checks?
Isn't it better to try to test the method by just looking at the contract? After all, when we use the method we'll just look at the contract. So it's quite important that its contract is easy to follow and understand.
This is a bit of a grey area and I think that there is some overlap. On the whole I would say using mock objects is preferred by me.
I guess some of it depends on how you go about testing code - test or code first?
If you follow a test driven design plan with objects implementing interfaces then you effectively produce a mock object as you go.
Each test treats the tested object / method as a black box.
It focuses you onto writing simpler method code in that you know what answer you want.
But above all else it allows you to have runtime code that uses mock objects for unwritten areas of the code.
On the macro level it also allows for major areas of the code to be switched at runtime to use mock objects e.g. a mock data access layer rather than one with actual database access.
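As a rough illustration of that last point (the interface and both implementations are invented for the example): the composition root decides whether the real or the in-memory data access layer is used, and nothing else in the code has to change.

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IDataAccess
{
    Customer GetCustomer(int id);
}

// Real implementation talks to the database (query elided).
public class SqlDataAccess : IDataAccess
{
    public Customer GetCustomer(int id)
    {
        // real database query elided
        throw new System.NotImplementedException();
    }
}

// Mock/in-memory implementation used while the real one is unwritten, or in tests.
public class InMemoryDataAccess : IDataAccess
{
    public Customer GetCustomer(int id) => new Customer { Id = id, Name = "Test customer" };
}

public static class CompositionRoot
{
    // Switch at runtime (here via a flag) between the real and the mock layer.
    public static IDataAccess CreateDataAccess(bool useMock) =>
        useMock ? (IDataAccess)new InMemoryDataAccess() : new SqlDataAccess();
}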
Fakes are just stupid dummy objects. Mocks enable you to verify that the control flow of the unit is correct (e.g. that it calls the correct functions with the expected arguments). Doing so is very often a good way to test things. An example is that a saveProject() function probably wants to call something like saveToProject() on the objects to be saved. I consider doing this a lot better than saving the project to a temporary buffer and then loading it to verify that everything was fine (this tests more than it should; it also verifies that the saveToProject() implementation(s) are correct).
As for mocks vs stubs, I usually (not always) find that mocks provide clearer tests and (optionally) more fine-grained control over the expectations. Mocks can be too powerful though, allowing you to tie a test to the implementation to the point where changing the implementation under test leaves the result unchanged, yet the test still fails.
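To make that concrete, a hedged Moq sketch of the saveProject() example (all the interfaces and names below are invented): the test verifies the call rather than round-tripping the saved data.

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

public interface IProject { }

public interface IProjectItem
{
    void SaveToProject(IProject project);
}

public class ProjectService
{
    public void SaveProject(IProject project, IEnumerable<IProjectItem> items)
    {
        foreach (var item in items)
        {
            item.SaveToProject(project);   // the control flow we care about
        }
    }
}

[TestFixture]
public class ProjectServiceTests
{
    [Test]
    public void SaveProject_calls_SaveToProject_on_each_item()
    {
        var project = Mock.Of<IProject>();
        var item = new Mock<IProjectItem>();

        new ProjectService().SaveProject(project, new[] { item.Object });

        // Verifies the interaction instead of saving to a buffer and reloading it.
        item.Verify(i => i.SaveToProject(project), Times.Once());
    }
}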
By just looking at the method/function signature you can test only the output, providing some input (stubs that are only able to feed you with the needed data). While this is OK in some cases, sometimes you do need to test what's happening inside that method; you need to test whether it behaves correctly.
string ReadDoc(string name, IFileManager fileManager) { return fileManager.Read(name).ToString(); }
You can directly test the returned value here, so a stub works just fine.
void SaveDoc(Document doc, IFileManager fileManager) { fileManager.Save(doc); }
Here you would much rather test whether the method Save got called with the proper argument (doc). The doc content is not changing and the fileManager does not output anything. This is because the method under test depends on some other functionality provided by the interface. And the interface is the contract, so you not only want to test whether your method gives correct results; you also test whether it uses the provided contract in the correct way.
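A short sketch of both cases with Moq (IFileManager and Document are invented for the example): the first test only needs a stubbed return value, the second verifies the interaction.

using Moq;
using NUnit.Framework;

public class Document
{
    public string Content { get; set; }
    public override string ToString() => Content;
}

public interface IFileManager
{
    Document Read(string name);
    void Save(Document doc);
}

[TestFixture]
public class DocTests
{
    static string ReadDoc(string name, IFileManager fileManager) { return fileManager.Read(name).ToString(); }
    static void SaveDoc(Document doc, IFileManager fileManager) { fileManager.Save(doc); }

    [Test]
    public void ReadDoc_returns_the_file_content()   // stub: assert on the output
    {
        var fileManager = new Mock<IFileManager>();
        fileManager.Setup(f => f.Read("a.txt")).Returns(new Document { Content = "hello" });

        Assert.That(ReadDoc("a.txt", fileManager.Object), Is.EqualTo("hello"));
    }

    [Test]
    public void SaveDoc_passes_the_document_to_the_file_manager()   // mock: assert on the interaction
    {
        var doc = new Document();
        var fileManager = new Mock<IFileManager>();

        SaveDoc(doc, fileManager.Object);

        fileManager.Verify(f => f.Save(doc), Times.Once());
    }
}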
I see it a little differently. Let me explain my view:
I use a mocking framework. When I try to test a class, to ensure it will work as intended, I have to test all the situations that may happen. When my class under test uses other classes, I have to ensure in certain test situations that a specific exception is raised by a used class, or that a certain value is returned, and so on... This is hard to simulate with the real implementations of those classes, so I would have to write fakes for them. But I think that when I use fakes, the tests are not as easy to understand. In my tests I use the Moq framework and have the setup for the mocks in my test method. When I have to analyse my test method, I can easily see how the mocks are configured and don't have to switch to the code of the fakes to understand the test.
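For illustration, a minimal Moq sketch of what that looks like (the interface, the service and all names are invented): simulating a fixed return value and a thrown exception from a dependency, with the setup visible right inside each test method.

using System.IO;
using Moq;
using NUnit.Framework;

public interface IDocumentStore
{
    string Load(string id);
}

public class DocumentService
{
    private readonly IDocumentStore store;

    public DocumentService(IDocumentStore store)
    {
        this.store = store;
    }

    // Returns null instead of propagating storage failures.
    public string TryLoad(string id)
    {
        try { return store.Load(id); }
        catch (IOException) { return null; }
    }
}

[TestFixture]
public class DocumentServiceTests
{
    [Test]
    public void TryLoad_returns_value_from_store()
    {
        var store = new Mock<IDocumentStore>();
        store.Setup(s => s.Load("42")).Returns("content");        // simulate a return value

        Assert.That(new DocumentService(store.Object).TryLoad("42"), Is.EqualTo("content"));
    }

    [Test]
    public void TryLoad_swallows_io_errors()
    {
        var store = new Mock<IDocumentStore>();
        store.Setup(s => s.Load("42")).Throws(new IOException()); // simulate the hard-to-reproduce failure

        Assert.That(new DocumentService(store.Object).TryLoad("42"), Is.Null);
    }
}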
Hope that helps you find your answer.