I have a web test, let's call it WebTestParent that calls another web test, WebTestChild.
There is no problem when I run it from the IDE, but when I try running it from the command line using mstest, like this:
C:\MySolution> mstest.exe /testmetadata:"Tests.vsmdi" /test:"WebTestParent.webtest" /testsettings:"local.testsettings"
I get this error:
Cannot find the test 'WebTestChild' with storage 'C:\MySolution\somesubfolder\WebTestChild.webtest'.
The file local.testsettings has "Enable deployment" checked.
Did anyone experience this and maybe found a solution?
Thanks.
I am not familiar with web tests, but I have done this with unit tests. I believe your problem is not the call from one test to the other. Maybe your 'WebTestChild' (or both tests) is not included in the 'TestList' in your 'Tests.vsmdi' file.
If you don't have a list for your tests, you should create one. Check here for more details.
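For example, after adding both web tests to a test list in Tests.vsmdi (say one named WebTests; the name here is just an illustration), you can run the whole list with the /testlist switch instead of naming individual tests:

mstest.exe /testmetadata:"Tests.vsmdi" /testlist:"WebTests" /testsettings:"local.testsettings"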
We're trying to run automated tests of our functions using a unit test project in Visual Studio, but we need to connect to the database in order to run them successfully. The program only connects to the database when the whole solution is run, so is there a way to run the unit tests when you run the solution?
The main point of unit testing is separating everything else from the class under test. For example, when you're testing your UserService to see whether the CreateNewUser() function succeeds or not, you need to remove all dependencies by using test doubles such as mocks, stubs, fakes, etc.
After creating a double for the dependency (the database connection), you can call your function and see that it works, given that every dependency behaves correctly. That way you can see whether your unit of code is doing its job.
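As a rough sketch (the IUserRepository, FakeUserRepository, and UserService shapes below are made up for illustration, not taken from your code), the test ends up exercising CreateNewUser() without ever opening a database connection:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical abstraction over the database access used by UserService.
public interface IUserRepository
{
    bool Save(string userName);
}

// A fake that records the call instead of touching a real database.
public class FakeUserRepository : IUserRepository
{
    public string SavedUser { get; private set; }

    public bool Save(string userName)
    {
        SavedUser = userName;
        return true;
    }
}

public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public bool CreateNewUser(string userName)
    {
        return _repository.Save(userName);
    }
}

[TestClass]
public class UserServiceTests
{
    [TestMethod]
    public void CreateNewUser_SavesUser_WithoutHittingTheDatabase()
    {
        var fakeRepository = new FakeUserRepository();
        var service = new UserService(fakeRepository);

        bool result = service.CreateNewUser("alice");

        Assert.IsTrue(result);
        Assert.AreEqual("alice", fakeRepository.SavedUser);
    }
}

Because the fake stands in for the real repository, the test runs anywhere the solution builds, with no database available.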
Is it possible to determine how many tests have been selected to execute before the test runner executes them? This would be helpful for local testing, since our logic generates test configuration for every test suite configuration at once; being able to figure out which tests have been selected to execute would allow me to create logic that only creates test data configuration for those tests.
This would be useful when writing tests and checking that they work, since we only want to generate test data for the selected tests.
Right now we have to comment out code to stop the test data configuration from executing.
Thanks.
I think you are overthinking this a bit. Your setup can be split across the levels that already exist: an assembly-wide setup that runs only once, a namespace-wide setup that runs once before any test in that namespace, the constructor of a test fixture, the start of the actual test, and so on.
If you are reusing a Docker instance and app pool for all the tests, initialize them in the assembly-wide setup so it is only done once. Each test can then add whatever data it needs before it starts. If some of that data is shared between tests, set global flags to record what has already been done, and if something a test needs hasn't been set up yet, do the incremental additional setup before continuing with that test. This generally isn't required, though, if you organize your tests into namespaces properly and use the namespace-wide setup for fixtures.
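A minimal MSTest-style sketch of those levels (the environment and seeding calls are placeholders, and MSTest's built-in hooks are at the assembly, class, and test level):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestEnvironment
{
    // Runs once before any test in the assembly: bring up the shared
    // Docker instance / app pool here.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        // StartSharedEnvironment();  // placeholder
    }

    // Runs once after the last test in the assembly has finished.
    [AssemblyCleanup]
    public static void AssemblyTearDown()
    {
        // StopSharedEnvironment();  // placeholder
    }
}

[TestClass]
public class OrderTests
{
    // Runs once before any test in this class: seed only the data these tests need.
    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // SeedOrderData();  // placeholder
    }

    [TestMethod]
    public void PlacesOrder()
    {
        // Per-test incremental setup, then the actual test.
    }
}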
I am looking at setting up SpecFlow for various levels of tests, and as part of that I want to be able to filter which tests run.
For example, say I want to do a full GUI test run, where I build up the dependencies for GUI testing on a dev environment and run all the specs tagged #gui, with the steps executed through the GUI. From the same script I also want to run only the tests tagged #smoke, set up any dependencies needed for a deployed environment, and have the steps executed through the API.
I'm aware that you can filter tags when running through the SpecFlow runner, but I also need to change the way each test works in the context of the test run. And I want this change of behaviour to be switched with a single config/command line arg when run on a build server.
So my solution so far is to have build configuration for each kind of test run, and config transforms so I can inject behaviour into specflow when the test run starts up. But I am not sure of the right way to filter by tag as well.
I could do something like this:
[Binding]
public class TagFiltering
{
    // BeforeFeature hooks have to be static and live in a [Binding] class.
    [BeforeFeature]
    public static void CheckCanRun()
    {
        if (TestCannotBeRunInThisContext())
        {
            ScenarioContext.Current.Pending();
        }
    }
}
I think this would work (it would not run the feature), but the test would still show up in my test results, which would be messy if I'm filtering out most of the tests with my tag. Is there a way I can do this that removes the feature from running entirely?
In short: no, I don't think there is any way to do what you want other than what you have outlined above.
How would you exclude the tests from being run if they were just normal unit tests?
In ReSharper's runner you would probably create a test session containing only the tests you want to run. On the CI server you would only run tests in a specific DLL or in particular categories.
SpecFlow is a unit test generation tool: it generates unit tests in the flavour specified in the config. The runner still has to decide which of those tests to run, so the same principles for choosing which tests to run apply to SpecFlow tests.
Placing them into categories and running only those categories is the simplest way; more fine-grained programmatic control of that is not really applicable. What you are asking for is basically 'run this test, but let me decide inside the test whether it should run', which doesn't really make sense.
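For what it's worth, the tags already map to categories in the generated code. Assuming the MSTest generator (other unit test providers emit their framework's equivalent category attribute), a scenario carrying the smoke tag comes out of code generation looking roughly like this (the method name is just an example):

// Generated code-behind (sketch) for a scenario tagged as smoke
[TestMethod]
[TestCategory("smoke")]
public void UserCanLogIn()
{
    // generated step calls...
}

On the build server you can then pick the set to run with the runner's category filter (for mstest that's the /category switch mentioned in other answers here), while the switch for how the steps execute stays in your config-driven setup.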
I am new to unit testing, so I am probably misunderstanding something big, but I have been asked to create some unit tests for my WCF service. It's a very simple service that executes a stored procedure and returns the result. The second line in my operation is this:
string conn = ConfigurationManager
.ConnectionStrings["AtlasMirrorConnectionString"].ConnectionString;
Everything works fine when deploying the service, but under unit testing, it seems that the config file becomes invisible. ConfigurationManager.ConnectionStrings["AtlasMirrorConnectionString"] becomes a null reference and throws accordingly.
How do I include my config file in the tests? Right now, the only behavior I'm able to test is the handling of missing config files, which is not terribly useful.
Asked again and again and again, and answered by me last week and this week as well :)
If you have your unit tests in another project (a VS-generated test project, a class library, etc.), just create an app.config for that unit test project and put in the same configuration keys as in the project that works.
Of course I am simplifying, because you may well want to customize those keys with specific test values, but as a start copy what works, then customize if you want to point to another database, machine, etc. :)
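For example, a minimal app.config for the test project only needs the key the service actually reads; the connection string value below is just a placeholder to adapt:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <!-- Same name the service looks up; point it at a test database if you like. -->
    <add name="AtlasMirrorConnectionString"
         connectionString="Data Source=(local);Initial Catalog=AtlasMirrorTest;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>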
If you want your unit tests to always have the same values as your project, you can use the following line as a post-build event in the test project:
copy /Y "$(SolutionDir)ProjectName\App.config" "$(TargetDir)TestProjectName.dll.config"
You'll have to decorate the test class or method with the DeploymentItemAttribute to deploy the configuration file into the test directory.
Use something like this on your TestClass (this assumes that you have a copy of the app.config local to your test classes):
[DeploymentItem("app.config")]
I have three unit tests that cannot pass when run from the build server, because they rely on the login credentials of the user who is running the tests.
Is there any way (attribute???) I can hide these three tests from the build server, and run all the others?
Our build-server expert tells me that generating a vsmdi file that excludes those tests will do the trick, but I'm not sure how to do that.
I know I can just put those three tests into a new project, and have our build-server admin explicitly exclude it, but I'd really love to be able to just use a simple attribute on the offending tests.
You can tag the tests with a category, and then run tests based on category.
[TestCategory("RequiresLoginCredentials")]
public void TestMethod() { ... }
When you run mstest, you can specify /category:"!RequiresLoginCredentials"
There is an IgnoreAttribute. The post also lists the other approaches.
The other answers are old.
In modern Visual Studio (2012 and above), tests run with vstest rather than mstest.
The new command-line parameter is /TestCaseFilter:"TestCategory!=Nightly",
as explained in this article.
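A full invocation looks something like this (the assembly name here is a placeholder):

vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=Nightly"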
Open Test -> Windows -> Test List Editor.
There you can include or exclude tests.
I figured out how to filter the tests by category in the build definition of VS 2012. I couldn't find this information anywhere else.
In the Process tab, under Build process parameters > Automated Tests > Test Source, set the Test Case Filter field to TestCategory=MyTestCategory (no quotes anywhere).
Then in the test source file you need to add the TestCategory attribute. I have seen a few ways to do this, but what works for me is adding it in the same attribute block as TestMethod, as follows.
[TestCategory("MyTestCategory"), TestMethod()]
Here you do need the quotes.
When I run unit tests from a VS build definition (which is not exactly MSTest), in the Criteria tab of the Automated Tests property I specify:
TestCategory!=MyTestCategory
All tests with category MyTestCategory get skipped.
My preferred way to do that is to have two kinds of test projects in my solution: one for unit tests, which can be executed in any context and should always pass, and another with integration tests that require a particular context to run properly (user credentials, database, web services, etc.). My test projects use a naming convention (e.g. businessLogic.UnitTests vs businessLogic.IntegrationTests) and I configure my build server to only run the unit tests (*.UnitTests). This way I don't have to comment out Ignore attributes when I want to run the integration tests, and I found it easier than editing test lists.
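For example (the exact field name depends on the build template), the test assembly file specification in the build definition can be narrowed from the default to a pattern such as:

**\*.UnitTests.dll

so only the unit test assemblies are picked up on the server.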