I have a Lab Management environment (2010), with deployed agents and controller, TFS, etc. I have Coded UI tests running successfully on my virtual environments, reporting results to TFS.
What I want to achieve is to be able to communicate with the test agents from my code, essentially acting on behalf of the controller, to run tests. Alternatively, it would be fine to instruct the controller itself to schedule a run, but the emphasis is that I want to do it from code.
Do you have any idea how to do that?
Thanks.
You can run your tests from the command line using three different executables, as described here:
http://msdn.microsoft.com/en-us/library/ms182486.aspx
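If you specifically want to drive the run from code rather than shelling out to those executables, the TFS 2010 test management API (Microsoft.TeamFoundation.TestManagement.Client) can create and queue an automated run against a test plan, which the controller then distributes to the agents. A rough sketch, with the collection URL, project name, and plan id as placeholders (the exact member names are worth verifying against the 2010 docs):

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class RunScheduler
{
    static void Main()
    {
        // Placeholder URL/names -- substitute your own environment.
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://yourtfs:8080/tfs/DefaultCollection"));

        var tms = tfs.GetService<ITestManagementService>();
        ITestManagementTeamProject project = tms.GetTeamProject("YourProject");

        // Assumes a test plan (id 1 here) whose suites contain the automated tests.
        ITestPlan plan = project.TestPlans.Find(1);

        ITestRun run = plan.CreateTestRun(true); // true = automated run
        foreach (ITestPoint point in plan.QueryTestPoints("SELECT * FROM TestPoint"))
            run.AddTestPoint(point, null);

        run.Save(); // saving queues the run; the controller hands it to the agents
    }
}
```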
I am working on an existing test automation framework that uses the following for UI tests on a Windows application:
C# for writing the UI test cases
WinAppDriver to interact with UI objects
NUnit for validation
SpecFlow for BDD
I don't have much experience with Microsoft projects. Here is the simple structure of the project:
Application
Source
Modules
features
Tests
Each test folder has an "app.config" file in it that supplies the configuration the tests need to work: DB username, password, service URLs, etc.
We execute our test cases from the "Test Explorer" pane in VS Enterprise, which is not the best way if the test cases need to be run remotely in Jenkins. As I said, I don't have much frame of reference when it comes to Microsoft apps, so here are some questions I have been looking for answers to; there seems to be no definite consensus online. I'm just curious how others are maintaining their projects. Here goes:
Is there a Jenkins-friendly way of running these test cases? Using a command line or a runner file, perhaps?
If I find a way to accomplish #1, how do I inject app.config properties at runtime?
How does one execute these cases on a remote machine? Mine is a desktop Windows app. What would a high-level strategy look like? I assume I will have to get a remote machine and install the app on it?
Any pointers or resources to read would be helpful. I'm just looking for a nudge in the right direction.
Since you are using NUnit, it ships with a console runner (nunit3-console.exe) that can run your tests from the command line, and that Jenkins can invoke as a build step; see here.
For injecting the app.config properties, you need to pass parameters from the test runner into the runtime. You can use this approach:
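As a rough sketch of that idea (the key names here are made up): have the tests read an environment variable that the Jenkins job injects, and fall back to the app.config value for local runs.

```csharp
using System;
using System.Configuration; // reference System.Configuration.dll

static class TestConfig
{
    // Prefer a value injected by the CI job (e.g. a Jenkins environment
    // variable); fall back to the app.config setting for local runs.
    public static string Get(string key)
    {
        return Environment.GetEnvironmentVariable(key)
               ?? ConfigurationManager.AppSettings[key];
    }
}

// Usage in a step definition or test:
//   string dbUser = TestConfig.Get("DB_USERNAME"); // hypothetical key
```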
The strategy for executing on a remote machine depends on your current infrastructure. E.g., if you were using GitLab, you would set up a GitLab CI runner on that machine and configure a GitLab pipeline; the Jenkins equivalent is to register the machine as a build agent and install the application under test on it.
As for "looking for a nudge in the right direction": see CI/CD test run best practices.
I'm currently working on a project that tests some functionality in a web application I developed. It works just fine, but I need to run these features in parallel to save time.
By changing the "testThreadCount" attribute I can run my tests in parallel. My problem is that my features are independent, but my scenarios are not. For example:
Feature: Test sql insert
Scenario: 1 - Insert client on the database
Given I insert my credentials
And I insert some data on my sql database
Then my client gets inserted successfully
Scenario: 2 - Check if client exists
Given The above scenario is successful
And I log in on my web application
When I'm on the dashboard
Then The client should be there
Is there a way to run only my different features in parallel? When I click on "Run selected tests", it prioritizes complete features over different ones, and I need to wait for the first scenario to complete before running the next one.
Building your scenarios up like that is really bad practice and should be avoided as much as possible. That being said, if it must be done, you can turn on parallelization at the feature level.
If you are using NUnit, you can add [assembly: Parallelizable(ParallelScope.Fixtures)] at the assembly level.
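For example, in AssemblyInfo.cs (or any compiled file in the test project); the thread count is illustrative:

```csharp
using NUnit.Framework;

// Parallelize at the fixture level only: feature classes run in parallel,
// while the scenarios inside each feature still run one after another.
[assembly: Parallelizable(ParallelScope.Fixtures)]

// Optional: cap the number of worker threads.
[assembly: LevelOfParallelism(4)]
```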
xUnit should, by default, run tests in parallel by class (by feature, in this case), meaning all tests within a feature will run serially.
You should still consider refactoring your scenarios so they are not dependent on each other. I've been down that road before, and it eventually becomes an unmanageable nightmare.
Well... I found a workaround that managed to make it work on SpecRun.
I just needed to run my features one by one and have another application manage the queue, so I developed a Windows Forms application that does just that.
It may not be the best way to approach this problem, but given my lack of time and the circumstances, it did a great job.
I am comfortable with recording Coded UI tests using VS2010 Ultimate.
The problem I am running into is that some of the UI controls being tested in the Silverlight app that we have built require data that is machine specific.
If I run the tests on my machine they run great. However, my teammates also have to run the same tests on their own machines. The problem is that the Coded UI tests are recorded with my machine name as an input to certain text boxes in the application under test. Unless my teammates re-record the same tests with their own machine names, those tests will fail.
After some digging I saw that you can associate a CSV, Excel, database, or XML file to drive your Coded UI tests. However, all the examples on MSDN and elsewhere only show pre-configured answer files, and most of them are in CSV format.
What goes in the answer file, and how can I create one of my own in XML format to drive the values entered into the text boxes when the Coded UI test replays?
Any links and guidance would be greatly appreciated!
Disclaimer: Not a fan of CodedUI.
Link1 - Creating a data-driven CodedUI test
It should be possible to use record-n-replay to generate the CodedUI test. Then make the modifications listed above to drive it with inputs from an input-set.
However, I'm not sure whether re-recording the test would obliterate your modifications; you'd have to try it and see. If you need that, I'd advise using CodedUI in scripting mode (instead of record-n-replay).
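For the XML case specifically, here is a minimal sketch; the file name, element names, and the MachineName column are all placeholders, and the UIMap calls stand in for whatever your recorded actions look like:

```csharp
// TestData.xml, deployed alongside the test binaries:
// <Rows>
//   <Row><MachineName>TESTBOX01</MachineName></Row>
//   <Row><MachineName>TESTBOX02</MachineName></Row>
// </Rows>

[DeploymentItem("TestData.xml")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
            "|DataDirectory|\\TestData.xml", "Row",
            DataAccessMethod.Sequential)]
[TestMethod]
public void EnterMachineSpecificData()
{
    // The test method runs once per <Row>; TestContext is the standard
    // public property on the test class.
    string machineName = TestContext.DataRow["MachineName"].ToString();

    // Hypothetical recorded action -- feed the data-driven value into the
    // text box instead of the hard-coded machine name.
    this.UIMap.EnterMachineNameParams.MachineNameText = machineName;
    this.UIMap.EnterMachineName();
}
```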
Separate the business logic from the UI and you won't have this problem with any functionality/behavior testing of the UI bits; the data issue is then solved. As for testing the UI bits, there are a couple of ways of handling this. One relatively simple method is to bootstrap an IoC container with mocks and set up the UI tests on top of the mocked data, as sketched below.
If you want to get into more automated UAT testing, there are tools for that. I'm not sure about Silverlight/WPF per se (as I don't spend much time in either, since I move all business logic out of the UI bits), but I would imagine there has to be one.
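A minimal sketch of that mocked-container idea, assuming a hypothetical IClientService behind the UI and using Moq with Unity (any mocking library and container would do):

```csharp
using Microsoft.Practices.Unity;
using Moq;

public interface IClientService          // hypothetical service the UI depends on
{
    string GetClientName(int id);
}

public static class TestBootstrapper
{
    public static IUnityContainer BuildContainer()
    {
        // Mock the data layer so the UI tests never touch
        // machine-specific or environment-specific state.
        var clientService = new Mock<IClientService>();
        clientService.Setup(s => s.GetClientName(It.IsAny<int>()))
                     .Returns("Test Client");

        var container = new UnityContainer();
        container.RegisterInstance<IClientService>(clientService.Object);
        return container;
    }
}
```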
The project I'm working on has a bunch of service-tier unit tests using Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute. I want to look into a tool that can automatically generate a web frontend for these tests. I don't care if I need to use some other framework like NUnit. I need a decent, easy web frontend for looking at test results that also allows adding new tests easily.
After a bit of investigation I realised that we already have TeamCity for the builds. Do I need anything else to set up test browsing from TeamCity?
We use CruiseControl, NAnt, Subversion, and NUnit together to provide continuous integration. Every commit triggers a build and runs all the unit tests. The CruiseControl dashboard shows build results, unit test results, and code coverage for each build. Is that the kind of thing you are looking to do, or do you want some kind of web-based ad hoc test runner?
Continuous Integration systems normally let you do this and usually have a web front end.
I know that you could set this up using CruiseControl.Net (which is free); the other system that has been recommended to me in the past is TeamCity, so I'm sure that could do this too (and it's also free as long as you don't configure too many projects).
I wrote unit tests using NUnit. Once all the tests have run, I want to email the test results to my whole team. Is there a way to do it?
Usually this is done using an automated build tool like CruiseControl. It checks your code out of version control, builds the app, runs all the tests, packages the app, and sends it to the first deployment server. Team members can view the complete results of the build-and-test cycle in a browser via the build server dashboard.
I'd prefer that to getting e-mailed test results. E-mail would soon become an annoyance.
Since you are using TFS, you can make use of NUnit for Team Build, which makes your NUnit test results visible in the build log and incorporates them into the data warehouse for reporting.
There is also NUnitForVS, which can publish results within TFS.
Going with one of the above routes will let you leverage the current CI environment and is a much better approach than emailing the results as they surface.