I have created a Coded UI project in VS2017, and I am able to execute the tests on my machine directly in VS2017.
I can easily bind test methods to a VSTS online test case. However, when I build and deploy the project and the assemblies on VSTS online and try to launch them from VSTS, I always get errors related to deploying a test agent on the target machine(s) to execute the tests.
Just for information, the web pages we want to test are on an on-premises website, so they are not accessible from the internet.
Does anyone know how to make this work, i.e. how to set up the test machine, how to set up the test agent on that machine (the Microsoft documentation is not really clear about that), and what we need so that VSTS can communicate with our on-premises test machine?
The easy way is to set up a private build agent that can access the test machine, then build the project through this private agent: Deploy an agent on Windows.
If you want to use the Hosted agent, your test machine needs to be accessible from the internet.
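For example, after downloading the agent zip from the Agent queues page in VSTS and extracting it on a machine that can reach your test environment, an unattended configuration might look roughly like the sketch below; the account name, pool, agent name and PAT are placeholders, and the exact switches should be checked against the agent's own --help output:

# Run from the folder where the agent was extracted (all values below are placeholders).
.\config.cmd --unattended `
    --url https://myaccount.visualstudio.com `
    --auth pat --token <personal-access-token> `
    --pool default --agent MyOnPremAgent `
    --runAsService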
Related
I have configured a load test using VS 2013 Ultimate to performance-test a REST API. We use TFS 2015 for source control and CI. The tests point to a local (within the company's intranet) REST service endpoint. I want these tests to run against every build, so I configured a build definition in TFS. TFS provides a build step called "Cloud-based Load Test", but that is not going to help me as I am not planning to run the tests in the cloud. What is the best approach to run *.loadtest
files? Has anyone done this? Is the command line my only option?
The command line is the only option. You need to install VS/MSTest on your build agent machine, then add a Command Line task to your build definition. In that task, specify the MSTest tool path and add the argument /TestContainer:LoadTest1.loadtest to run the load test.
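For example, something along these lines, where the MSTest.exe path assumes a default VS 2013 installation on the agent (adjust it to the version that is actually installed):

# Path below is an assumption for a default VS 2013 install; point it at your own MSTest.exe.
& "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe" /TestContainer:LoadTest1.loadtest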
I've written a lot of integration tests for an API I'm working on.
When I run the integration tests locally, I want to self-host the API and run the tests against it.
However, I have a build definition on VS Team Services that runs the integration tests after the API has been deployed. In that case I want to run the tests against the deployed API, but to do that I have to change my code so it no longer self-hosts the API and instead tests against the deployed one.
Is there a way to detect at runtime where my tests are being run?
Thanks
Just probe for one of the VSTS predefined build variables. For example:
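// VSTS build agents define SYSTEM_DEFINITIONID (the System.DefinitionId variable); it is not set when tests run locally.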
if (Environment.GetEnvironmentVariable("SYSTEM_DEFINITIONID") != null)
{
    Console.WriteLine("Running from VSTS...");
}
If you are using the VSTS hosted build agent to run the build and tests, the machine names of these agents follow a format like "TASKAGENTX-XXXX". You can use that (Environment.MachineName) to determine whether the test is running on VSTS or on your local machine.
By the way, what is the difference between the test run on your local machine and the one on VSTS? Is it just a different URL? If so, you can add a config file to your project and update your tests to read the URL from that config file, then use SlowCheetah to perform a transformation that switches the URL when building and testing from VSTS.
I am completely new to Azure and PowerShell but have been tasked with setting up a build and deploy solution for several app services.
We currently have a build server (Azure VM) that is running CruiseControl.NET to build and test some C# .NET solutions that should be deployed in Azure.
This build server currently handles the following tasks:
Pulling code from source control when commits happen
Building the projects
Running some unit test cases
Copying output/binaries to an output location
However, as it exists now, developers of each of our app services need to 'Publish' their services manually from their development machines by clicking the button in Visual Studio once they have verified that the build and test cases have passed in the test environment on the server.
As I am hoping for a completely automated solution, I expect I need to use something like PowerShell or the Azure Cross Platform CLI (npm) to do this?
I'm extremely confused by the Azure Service Management vs. Azure Resource Management versions in the new Azure PowerShell 1.0. All of our services appear to be the newer Resource Management type, not 'classic'.
The eventual goal is to have the build server do the following:
Pulling code from source control when commits happen
Building the projects
Running some unit test cases
Copying output/binaries to an output location
If the build and test cases are successful, update the service in Azure to the latest build
I am hoping there is a way to set up these projects, or to take the existing binaries that result from the builds, and have them deployed into Web Apps using the new Azure Resource Management PowerShell features.
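To make it concrete, the sketch below is roughly what I picture after piecing things together from the docs; I have not verified any of it, and the cmdlet usage, resource names and the Kudu zip endpoint are all assumptions on my part:

# Unverified sketch: resource group, app name and package path are placeholders.
Add-AzureRmAccount   # interactive sign-in; an automated build would use a service principal instead

# Pull down the publish profile to get the Kudu (scm) deployment credentials.
$profilePath = Join-Path $env:TEMP "MyWebApp.publishsettings"
Get-AzureRmWebAppPublishingProfile -ResourceGroupName "MyGroup" -Name "MyWebApp" -OutputFile $profilePath | Out-Null
[xml]$publishXml = Get-Content $profilePath
$msdeploy = $publishXml.publishData.publishProfile | Where-Object { $_.publishMethod -eq "MSDeploy" }

# Push the zipped build output to the site's wwwroot through Kudu's zip API.
$pair = "{0}:{1}" -f $msdeploy.userName, $msdeploy.userPWD
$auth = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
Invoke-RestMethod -Uri "https://MyWebApp.scm.azurewebsites.net/api/zip/site/wwwroot/" `
    -Method Put -InFile ".\package.zip" -ContentType "application/zip" `
    -Headers @{ Authorization = $auth }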
Any advice or resources for more information about how this can be done?
Hopefully this makes some sense. Please let me know if I am going about this completely the wrong way or direct me to a more correct forum.
Thanks!
Have you considered using Azure App Service, where you get that build infrastructure for free? E.g. https://azure.microsoft.com/en-us/documentation/articles/web-sites-publish-source-control/
Once you set up continuous deployment, you get the three steps below whenever there is a push event (if you are using Git):
Pulling code from source control when commits happen
Building the projects
Copying output/binaries to an output location
and to "Running some unit test cases", you can create your own batch or powershell script with post deployment hook https://github.com/projectkudu/kudu/wiki/Post-Deployment-Action-Hooks
I'm trying to make the Visual Studio Online build service run my nSpec tests. I've downloaded the nSpec test adapter (which works fine locally), unzipped the DLLs and uploaded those to a separate TFS repository. I've configured the hosted build controller and set the reference to this repo.
As far as I know this should be sufficient, but my build simply doesn't find any unit tests. If I edit the build definition and click Process, it looks like it downloads the custom assemblies, but it just writes this message to the console:
No assemblies were found in the custom assembly path. The assemblies may not exist or you may not have permissions to read them. Contact your Team Foundation Administrator for more information.
I've tried the same procedure with mSpec as well, but exactly the same thing happens. nUnit seems to work, though; if I'm not mistaken, that might already be installed on the hosted build server.
Unfortunately this requires the nSpec plugin to be installed on the build server, and on the TFS hosted build servers you can only use the plugins that are provided. I believe those servers currently support only MSTest, nUnit, and xUnit.
To get this working you will need to create your own custom build server. You can install a build server on Azure and install the tools you need. Then you can run your own build configuration there.
In my current project we are using a TFS Build server for continuous integration (build + run unit tests). We also have a set of automated acceptance tests written as SpecFlow features.
However, these are not integrated into the continuous integration workflow. Today, the application is deployed manually and the acceptance tests are invoked manually.
We would like to automate this in the form of a script/console application or some kind of existing CI tool.
This is what we would like to do periodically, e.g. once every hour:
Ask TFS if there are any new builds
If yes: get the latest successful build from TFS
Deploy the application to our test machine
Execute the SpecFlow tests against the deployed build
Collect the result and present it on some form of web page
Are there any existing tools or frameworks for this? I have read about existing CI servers, but they don't seem to fit my description. If not, any advice on how to achieve steps 1, 2 and 5 programmatically or with command line tools?
In my humble opinion, TFS is capable of doing everything that you listed without involving any additional tools. What you might need to do is set up a Lab Environment and use a specific workflow build definition to achieve it. You also need a test controller and test agents.
The easiest way might be to set up a standard lab environment, which works like this:
Build - Deploy - Test workflow
The build gets triggered, then the output is deployed into the lab environment (which might be a set of physical or virtual machines with test agents installed on them and connected to the test controller). After that, all tests are executed and the results are consolidated as part of the build results.
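That said, if you still prefer to script steps 1 and 2 from the question yourself, the TFS 2015 REST API can be queried for the latest successful build. A minimal sketch, with the collection URL, project name and definition id as placeholders (this assumes the new build system rather than XAML builds):

# Placeholders: adjust the collection URL, project name and build definition id.
$collection = "http://tfsserver:8080/tfs/DefaultCollection"
$project    = "MyProject"
$definition = 42

# Ask TFS for the most recent completed, successful build of that definition.
$url = "$collection/$project/_apis/build/builds?definitions=$definition" +
       "&statusFilter=completed&resultFilter=succeeded&`$top=1&api-version=2.0"
$latest = (Invoke-RestMethod -Uri $url -UseDefaultCredentials).value | Select-Object -First 1

if ($latest) {
    # The build id and number can then drive the deploy and SpecFlow test steps.
    Write-Output "Latest successful build: $($latest.buildNumber) (id $($latest.id))"
}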
Hope this helps a bit!
-Rado