My requirement is to individually test a dynamic list of URLs with SpecFlow, checking each for a 200 success status code.
Current approach: the URL list is hard-coded in a scenario outline, as below, and I am able to test each URL successfully. The test runs once per URL in the scenario outline, so I can easily identify any failed tests.
Scenario Outline: URL test
Given list of URL's
When I launch each URL
Then I should expect 200 HTTP status code
Examples:
| URL |
| url 1 |
| url 2 |
| url..n |
New approach: instead of hard-coding the URLs in the scenario outline, I am thinking of fetching the list dynamically from a web service.
Scenario Outline: URL test
Given a service to get list of URL's
When I call the API get
Then I should expect 200 HTTP status code
With the above approach I am able to get the list of URLs and, by iterating over it, launch each one individually. The problem is that without a scenario outline there is only a single test, so the complete list of URLs is executed inside it.
I need a way to create a dynamic data set so that each URL is tested in its own test method, rather than executing the whole list in a single test method.
Tools Used:
C#, NUnit, SpecFlow, ReSharper, Visual Studio
I am using ReSharper to execute my tests, and I have looked into dynamic Table / Instance concepts.
Since I cannot (yet) add comments, I will supply my comment as an answer.
You probably shouldn't use a scenario in this situation. Scenarios are meant to describe the behavior of the system from a business perspective (in other words, business value).
Here it seems that you want to make sure a list of URIs is responding fine. This sounds more like an internal integration test than an actual business verification.
I suggest putting this in a single integration test. There you can simply build up a collection of failing URIs and, at the end of the test, assert that the collection is empty.
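A minimal sketch of that idea, assuming NUnit and HttpClient; the endpoint URL and the GetUrlsFromServiceAsync helper are placeholders for however you actually fetch the list:
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class UrlIntegrationTests
{
    private static readonly HttpClient Client = new HttpClient();

    [Test]
    public async Task AllUrlsRespondWith200()
    {
        var failing = new List<string>();

        foreach (var url in await GetUrlsFromServiceAsync())
        {
            var response = await Client.GetAsync(url);
            if (!response.IsSuccessStatusCode)
                failing.Add($"{url} -> {(int)response.StatusCode}");
        }

        // A single test, but the failure message lists every bad URI at once.
        Assert.That(failing, Is.Empty, string.Join(", ", failing));
    }

    private static async Task<string[]> GetUrlsFromServiceAsync()
    {
        // Placeholder: call your web service and deserialize the URL list
        // (System.Text.Json used here purely as an example).
        var json = await Client.GetStringAsync("https://example.test/api/urls");
        return System.Text.Json.JsonSerializer.Deserialize<string[]>(json);
    }
}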
You can't dynamically generate table data during test execution, because the .cs files are already compiled.
Write parameterized unit tests instead.
Or, of course, you could write a tool that parses your URLs, rewrites the .feature file, rebuilds the SpecFlow solution and runs the tests. JFF :)
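With NUnit, which the question already lists, the parameterized route can use a TestCaseSource evaluated at test-discovery time, so each URL becomes its own test case in the runner. A sketch, with a placeholder endpoint:
using System.Collections.Generic;
using System.Net.Http;
using NUnit.Framework;

[TestFixture]
public class UrlTests
{
    private static readonly HttpClient Client = new HttpClient();

    // Evaluated when tests are discovered, so the data can come from the service.
    public static IEnumerable<string> Urls()
    {
        var json = Client.GetStringAsync("https://example.test/api/urls").Result;
        return System.Text.Json.JsonSerializer.Deserialize<string[]>(json);
    }

    // One generated test per URL; failures are reported individually.
    [TestCaseSource(nameof(Urls))]
    public void Url_Returns_200(string url)
    {
        var response = Client.GetAsync(url).Result;
        Assert.That((int)response.StatusCode, Is.EqualTo(200), url);
    }
}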
Related
I have a situation where feature-toggling is implemented in an application and I want to have automated Selenium tests that are in place for both states of the application. (Both states are possible in a production state.) The tests that are executed would be based on the feature-toggling configuration that is present in the web application.
In a normal situation, using a Scenario Outline results in a generated test that can be executed via Test Explorer, and the CI process for an application can be configured to execute all of the tests in the test assembly after it is built. In my situation, an application state could cause one test to pass and another to fail (think of UI controls that are absent because of a feature's state). I looked into hooks and saw that there is a BeforeScenario tag, but I don't think it gets me what I want to achieve; it only allows me to do some setup before a scenario executes.
I usually include a step that sets the scenario to "pending".
Scenario: Some new feature
Given pending story 123
Given condition
When action that includes new feature
Then something
The step definition for Given pending story 123 would call:
Assert.Inconclusive($"Pending story {storyNumber} (https://dev.azure.com/company/project/workitems/edit/{storyNumber})");
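Wrapped in a full binding, that might look like this (a sketch assuming SpecFlow with MSTest; the regex and work-item URL format just mirror the step above):
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class PendingSteps
{
    [Given(@"pending story (\d+)")]
    public void GivenPendingStory(int storyNumber)
    {
        // Marks the scenario inconclusive instead of failing it.
        Assert.Inconclusive(
            $"Pending story {storyNumber} " +
            $"(https://dev.azure.com/company/project/workitems/edit/{storyNumber})");
    }
}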
This marks the test inconclusive when using MSTest. We use Jenkins for continuous integration, and our builds continue passing. It's also handy because I can provide a link to the story or feature in something like Azure DevOps, GitHub, Jira or their ilk.
This is not exactly a feature toggle, but it is easy to remove or comment out the Given pending line of a scenario in your topic branch to enable the test.
I am trying to replicate a SpecFlow feature called Step Argument Transformations, but using Xunit Gherkin Quick.
Imagine I have a scenario outline with the following Given step:
Scenario Outline: A customer might visit the home page
Given the customer <HasOrHasNot> visited the home page
.....
Examples:
| HasOrHasNot |
| HasNot |
I want to be able to transform that string into a bool so that we can set up the testing scenario's context in a clean way. Is this only a feature of SpecFlow, or can Gherkin Quick achieve it somehow?
Looking at the documentation, it doesn't appear possible.
I may need to write some custom code that is called before setting up the context as a workaround.
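One possible workaround, sketched under the assumption of Xunit Gherkin Quick's attribute-based bindings (the feature-file path and names are illustrative): capture the raw placeholder text and convert it yourself before arranging the context.
using Xunit.Gherkin.Quick;

[FeatureFile("./Features/HomePage.feature")]
public sealed class HomePageFeature : Feature
{
    private bool _hasVisited;

    // No step-argument transformation support here, so match the raw
    // placeholder text and convert it to a bool manually.
    [Given(@"the customer (Has|HasNot) visited the home page")]
    public void GivenTheCustomerVisitedTheHomePage(string hasOrHasNot)
    {
        _hasVisited = hasOrHasNot == "Has";
        // ...use _hasVisited to set up the scenario context
    }
}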
I have a REST API that creates things. Let's say the endpoint is /create. The request body determines what we want to create; an example request body would be {"id":"1"}. If I try to create something with an id that already exists I get an error, so I would like to use a different id every time. Given a list of these kinds of IDs stored in a file or something similar, is it possible to create a load test that uses a different id every time /create is called?
Use a context parameter (CP) with changing values and put it into the body, thus {"id":"{{theCp}}"}. This will insert the value of the CP theCp into the body.
The CP might come from data driving the test, or it might be created by some arithmetic (which would probably need a plugin) on values provided by Visual Studio. See the "Context" tab of the web test results for the range of available values; they include the agent number, iteration number, user id and more.
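If a plugin turns out to be needed, a minimal sketch (assuming the Microsoft.VisualStudio.TestTools.WebTesting API; "theCp" matches the context parameter referenced above) could generate a fresh id per iteration:
using System.Threading;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class UniqueIdPlugin : WebTestPlugin
{
    private static int _counter;

    // Runs before each web test iteration, so every call to /create
    // substitutes a different value into {"id":"{{theCp}}"}.
    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        e.WebTest.Context["theCp"] =
            Interlocked.Increment(ref _counter).ToString();
    }
}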
I'm a C# and UI developer and I'm interested in writing unit tests in VS CodedUI for ASP.NET web applications. Most of the usages I've seen are integration tests, in that you write a test that points at the actual web page, run through some steps and test the output. I want something smaller and more granular, that is super simple for developers (read: lazy :P) to write.
My current setup looks like:
Web application, containing pages, controls, javascript, etc.
The web app also contains test pages - pages that contain a single user control in the markup, and some hard-coded data in the code-behind.
A CodedUI project that launches the test page, runs the test and asserts the output.
This is a nice start, but I'm looking to improve it.
The (first...) problem is that the test data and the test steps are in different locations - on the web app and codedUI project respectively. In regular developer unit tests, you write code that sets the data, then you do everything else, and everything you need is in one location. With my setup, a person looking at a test failure has to know to look at both the test and the page being tested.
A few ideas I had, and why they suck:
Put the test page in the CodedUI project. This is a problem for a few reasons; the fact that user controls can't easily be tested was my stopping point, but I believe there are plenty of others.
Pass in a query-string to the test page that provides the data. This isn't a terrible idea, but it might get unwieldy quickly.
Pass in a public class to be loaded by the test page which contains the test data. This requires the test page to have a reference to the testing project, which is no good.
Dynamically write and compile a test page in the CodedUI test. I don't even want to start with this.
I suppose you are talking about unit testing your web pages with external data. With CUIT, you can do that with a data-driven CUIT, as sketched below.
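A minimal data-driven sketch, assuming a CodedUI test project and a CSV file testdata.csv with a PageUrl column (all names illustrative); each CSV row produces a separate run of the test, so the data lives next to the test instead of in the page's code-behind:
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class ControlTests
{
    public TestContext TestContext { get; set; }

    [DeploymentItem("testdata.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
        "|DataDirectory|\\testdata.csv", "testdata#csv",
        DataAccessMethod.Sequential)]
    [TestMethod]
    public void TestPageRendersControl()
    {
        // The current CSV row supplies the test data for this run.
        var url = TestContext.DataRow["PageUrl"].ToString();
        var browser = BrowserWindow.Launch(new Uri(url));
        // ...assert on the rendered control here...
        browser.Close();
    }
}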
I have three unit tests that cannot pass when run from the build server, because they rely on the login credentials of the user who is running the tests.
Is there any way (attribute???) I can hide these three tests from the build server, and run all the others?
Our build-server expert tells me that generating a vsmdi file that excludes those tests will do the trick, but I'm not sure how to do that.
I know I can just put those three tests into a new project, and have our build-server admin explicitly exclude it, but I'd really love to be able to just use a simple attribute on the offending tests.
You can tag the tests with a category, and then run tests based on category.
[TestCategory("RequiresLoginCredentials")]
public void TestMethod() { ... }
When you run mstest, you can specify /category:"!RequiresLoginCredentials"
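For example (the container name is illustrative):
mstest /testcontainer:MyTests.dll /category:"!RequiresLoginCredentials"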
There is an IgnoreAttribute. The post also lists the other approaches.
Other answers are old.
In modern Visual Studio (2012 and above), tests run with vstest rather than mstest.
The new command-line parameter is /TestCaseFilter:"TestCategory!=Nightly"
as explained in this article.
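A full invocation might look like this (assembly name illustrative):
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=Nightly"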
Open Test -> Windows -> Test List Editor.
There you can include or hide tests.
I figured out how to filter the tests by category in the build definition of VS 2012. I couldn't find this information anywhere else.
In the Process tab, under Build process parameters > Automated Tests > Test Source, set the Test Case Filter field to TestCategory=MyTestCategory (no quotes anywhere).
Then, in the test source file, you need to add the TestCategory attribute. I have seen a few ways to do this, but what works for me is adding it in the same attribute block as TestMethod, as follows.
[TestCategory("MyTestCategory"), TestMethod()]
Here you do need the quotes
When I run unit tests from a VS build definition (which is not exactly MSTest), I specify the following in the Criteria tab of the Automated Tests property:
TestCategory!=MyTestCategory
All tests with the category MyTestCategory get skipped.
My preferred way to do this is to have two kinds of test projects in my solution: one for unit tests, which can be executed in any context and should always pass, and another for integration tests that require a particular context to run properly (user credentials, a database, web services, etc.). My test projects use a naming convention (e.g. businessLogic.UnitTests vs businessLogic.IntegrationTests), and I configure my build server to run only the unit tests (*.UnitTests). This way I don't have to comment out Ignore attributes when I want to run the integration tests, and I find it easier than editing a test list.
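As a sketch, in a TFS-style build definition the test assembly file spec for that convention would then be something like **\*.UnitTests.dll, so the integration test assemblies are never even picked up by the runner.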