I have created some Coded UI tests using the Page Object Model (no recordings), and I have installed a Test Agent as well as a Test Controller. The Test Agent seems to show only the status, agent name, controller name, and currently running test. I was told that the test agent can be used to run tests. Can anyone provide a link or some steps to run the tests?
Thanks in advance
Arjun Menon
The agent executes the tests as directed by the controller. The controller is sent tests to execute based on your settings in the .testsettings file. Good information here. In essence, though, you would select the settings file that specifies your desired test environment and then click "run". Note that you cannot debug in a remote environment.
If you simply want to execute the tests on the machine where the agent is installed, run them from the command line via mstest.exe. More info regarding this process here.
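For example, a minimal invocation might look like the following sketch, where the assembly and settings-file names are placeholders for your own; with a .testsettings file that names your controller, the same command pushes the run out to the agents:

```shell
# Sketch: placeholder names; run from a Visual Studio developer command
# prompt on a machine that has mstest.exe on the PATH.
TEST_DLL="CodedUITests.dll"
SETTINGS="Remote.testsettings"   # points the run at your controller
CMD="mstest.exe /testcontainer:$TEST_DLL /testsettings:$SETTINGS"
echo "$CMD"
```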
We are setting up our automation to run remotely so we can start incorporating it into the builds (you know, the whole CI/CD thing). These are a handful of important automated GUI tests that, for obvious reasons, need an active VM to run. These are not browser tests; they are automated tests for a Windows application, so anything Selenium brings to the table is off the table for us.
So now on to the challenge: how can I keep the VMs up and running, without having to log into them over Remote Desktop Connection, so they can run the tests properly? Currently, I have to connect to them from my local machine, minimize the session, and then kick off the builds. As soon as I disconnect, however, the virtual machine locks again.
I want the VMs to work completely independently of my machine, so I was skeptical about this approach because it seemed like it would still be tied to my machine only. Pretty much anyone in the company can log into the VMs from their own machine using their credentials. What I would like to do is programmatically connect to the VM during my global TestStartup and then disconnect at TearDown. Is this possible? Has anyone had success with, or run into, similar situations in their automation integration process? We use a tool called LeanFT and NUnit as our test runner.
Your idea to log in as part of the test is fragile and prone to instability.
Here is the setup that has worked with every UI automation tool I've used on Windows:
Set up your VMs so they never lock, sleep, or hibernate.
Avoid using RDC (turn that feature off, even for admins, if you can).
Only use the console viewer of your VM server.
Limit access to those systems using the permissions in the VM server, so that only you and your team can interact with them.
Here is why this works. You have already discovered that when you disconnect the RDP connection, the session locks and your automation fails. Using the VM console viewer is essentially like turning the monitor connected to the system on and off. Because the VMs stay on all the time and never sleep, they are always available for running tests.
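On the Windows side, the no-sleep item in the list above can be scripted; this is a sketch that writes a one-shot batch file to run, elevated, on each test VM (the powercfg timeouts are in minutes, with 0 meaning "never"):

```shell
# Generate a batch file for the Windows test VMs; run it once per VM
# from an elevated prompt.
cat > keep-awake.cmd <<'EOF'
powercfg /change standby-timeout-ac 0
powercfg /change monitor-timeout-ac 0
powercfg /change hibernate-timeout-ac 0
EOF
cat keep-awake.cmd
```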
We are using LeanFT, and to improve the stability of our tests we have set up scheduled tasks that check the running processes and kill any stray LeanFT runtimes that did not shut down cleanly after a prior run, as well as any stray applications that were not closed properly after a testing run.
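A cleanup step along those lines can be as simple as a batch file invoked by a scheduled task; the process names below are assumptions, so substitute whatever your LeanFT runtime and application under test actually appear as in Task Manager:

```shell
# Sketch: generate the cleanup script the scheduled task would run.
# Both process names are hypothetical placeholders.
cat > cleanup-stray-runs.cmd <<'EOF'
taskkill /F /IM "LFTRuntime.exe" 2>nul
taskkill /F /IM "MyAppUnderTest.exe" 2>nul
EOF
```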
These kinds of issues are really annoying for UI automation.
In the end I found a solution. It is not pretty, but it works: I created a Docker container and used it in the UI automation job.
The container runs SSHD, Xvfb, and xfreerdp, which lets you open RDP connections to many remote machines; and because it uses Xvfb, a virtual display server, it consumes very few resources.
Here's the image I created for your reference.
https://hub.docker.com/repository/docker/ariyuan/ubuntu1604_ssh_rdp
Before your UI automation starts, you just need to tell the container to open a remote RDP connection to the machine where your UI automation is hosted. That way, the display for the UI automation is kept alive for the entire execution. (You can do all of this from Jenkins, with parameters to connect to different remote machines.)
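As a sketch, a Jenkins build step could drive such a container like this; the container name, host, and user are placeholder parameters, and the password would normally be injected from Jenkins credentials rather than inlined:

```shell
TEST_HOST="10.0.0.15"   # parameterized per Jenkins job; placeholder value
RDP_USER="autouser"     # placeholder account
# $RDP_PASS is left unexpanded here for Jenkins to supply at runtime.
CMD="docker exec rdp-keeper xfreerdp /v:$TEST_HOST /u:$RDP_USER /p:\$RDP_PASS /cert-ignore"
echo "$CMD"
```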
I have a CI server running Jenkins, which builds the code, starts the server, and runs the application tests. I would like to monitor the logs during this process to check for errors (e.g., "ERROR"). I see a Log Parser Plugin for Jenkins, but that only seems to monitor the console logs.
I would like something that runs against my application log. In my particular case I'm using a .NET server (IIS/C#/ASP). Maybe I just need a utility specific to my architecture that I can run from the command line in Jenkins. Or should I be using a cloud service to monitor my application logs?
The Text-finder plugin can scan arbitrary files and change the build status when your search pattern matches.
I'm not sure it will solve your issue if you want to do live monitoring of running applications.
But if you run the job every minute, it will do the job :)
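The same idea also works from a plain shell build step if you'd rather not use a plugin: grep the application log for ERROR lines and fail the build when one is found. The log path is a placeholder, and the demo content is only there to make the sketch self-contained:

```shell
LOG="app.log"   # placeholder path to the application log
# Demo content only; in the real job the application writes this file.
printf 'INFO starting\nERROR connection refused\n' > "$LOG"

if grep -q "ERROR" "$LOG"; then
  STATUS=fail   # a real Jenkins step would 'exit 1' here to fail the build
else
  STATUS=pass
fi
echo "$STATUS"
```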
I have a WCF application in VS2012. We have also added a test project for unit tests.
This is all working well.
In addition to this, I need to add another test project for system tests. The idea is that the web service would start, and the test project would fire HTTP requests at it, as a kind of end-to-end test of the service in its environment.
I am not sure whether there is a way to do this using the built-in Visual Studio tests.
If I add a normal test class, the website that hosts the web service doesn't get started, and the test fails.
The code I am trying to run is like:
[TestMethod]
public void CreateLogin()
{
    // Dispose the client and assert on the response so a failed call
    // actually fails the test.
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri("http://localhost:61886/");
        HttpResponseMessage response =
            client.GetAsync("ssoauthenticationservice/createssologin").Result;
        Assert.IsTrue(response.IsSuccessStatusCode);
    }
}
I feel like I am missing something very obvious here. So my questions are:
1) Can I test my web service in this manner? Is there some way to set the project properties so that the web service starts first and then the test project runs?
or:
2) Is this approach completely wrong? Should I really be putting my tests into an executable that I can run in whatever environment we deploy to, as a quick post-deployment smoke test?
You can most definitely test your WCF service using this method. Whether or not it's a good idea depends on your point of view. It also depends on whether your business implementation is tightly coupled to the .svc code-behind.
To do this, you will most likely need to create a service host in a TestInitialize or ClassInitialize method and tear it down after your tests have run.
You can see more about self-hosting here: http://msdn.microsoft.com/en-us/library/ms731758.aspx
Note also that how well this method works depends on the environment in which the tests run. It works great on a local machine but requires admin permissions; because of that, this method tends to be a bad fit for a continuous integration environment.
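If the CI constraints push you toward the second option from the question, even a one-line post-deployment smoke test is worthwhile. This sketch uses the URL from the question and curl's --fail flag, which makes curl exit non-zero on HTTP 4xx/5xx so the deploy step fails:

```shell
URL="http://localhost:61886/ssoauthenticationservice/createssologin"
# --fail: non-zero exit on HTTP errors; --silent/--show-error: clean CI logs.
CMD="curl --fail --silent --show-error $URL"
echo "$CMD"   # run against each environment right after deployment
```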
My problem is that I have multiple test projects/suites, built to test different applications. I need to trigger the tests from a web-based application by choosing which suite to run, essentially one-click triggering. I need to know whether I can use a test controller, or any other method, to trigger the test suites/projects from a web application, execute the tests on a remote machine, and get the results back.
I also heard that test agents and controllers can only work with one project collection, and that the same physical machine cannot be shared with another test project collection. Is there any way to configure them for my problem statement?
OR
I have even explored triggering the test cases from the web page via MSTest.exe and VSTest.Console.exe. This works fine on the local machine, but when I publish and host the website on IIS, it says: "To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop""
I am stuck here and need some pointers on how to go about this. Any sort of help would be greatly appreciated.
I added some simple WatiN tests to our app today to check that a cookie value is stored correctly.
The tests pass locally on all machines in the team. However, when CruiseControl runs the tests on our Build server these new tests fail on the line containing
browser.GetCookie(url, cookieName)
The error given in the CruiseControl log is the old chestnut of:
Object reference not set to an instance of an object.
I have logged on to the Build server with Remote Desktop, using the same user account that CruiseControl runs under, and run MbUnit manually, and the tests pass. So it can't be a problem with the permissions on the Build server to access cookies.
I have looked through all the WatiN documentation for help but have come up empty. I've restarted the CruiseControl service. I've tried everything I can think of, and I'm now completely at a loss as to what could be different in the way CruiseControl runs these tests.
Does anybody know what could be causing this and/or how to resolve it?
Try running CruiseControl as an application instead of as a service. WatiN tends to hang when run from a service (since Windows services are not attached to a UI session, and WatiN requires one for handling dialog boxes in IE).
But besides that, recall that IE cookies are stored in your user profile. Profiles are not loaded when services run; services are daemon processes that run quietly in the background and do not run in the context of a logon session. I suspect that's the cause of your exception.
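A sketch of the run-as-application approach: stop the Windows service and start the CruiseControl.NET server console app from a logged-on desktop session instead. The service name and install path below are assumptions, so check yours before using this:

```shell
# Generate a launcher to run from a logged-on desktop session (Windows).
# "CCService" and the install path are placeholder assumptions.
cat > run-ccnet-console.cmd <<'EOF'
net stop CCService
"C:\Program Files (x86)\CruiseControl.NET\server\ccnet.exe"
EOF
```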