WatiN test using IE.GetCookie failing only from CruiseControl - c#

I added some simple WatiN tests to our app today to check that a cookie value is stored correctly.
The tests pass locally on all machines in the team. However, when CruiseControl runs the tests on our Build server these new tests fail on the line containing
browser.GetCookie(url, cookieName)
The error given in the CruiseControl log is the old chestnut of:
Object reference not set to an instance of an object.
I have logged on to the Build server with Remote Desktop, using the same user account that CruiseControl runs under, and run MbUnit manually, and the tests pass. So it can't be a permissions problem preventing access to cookies on the Build server.
I have looked through all the WatiN documentation for help, but came up empty. I've restarted the CruiseControl service. I've tried everything I can think of, and I'm now completely at a loss as to what could be different in the way CruiseControl runs these tests.
Does anybody know what could be causing this and/or how to resolve it?

Try running CruiseControl as an application instead of as a service. WatiN tends to hang when run from a service (since Windows services are not attached to a UI session, and WatiN requires one for handling dialog boxes in IE).
But besides that, recall that IE cookies are stored in your user profile. Profiles are not loaded when services run -- services are daemon processes that run quietly in the background and don't actually run in the context of a logon session. I suspect that's the cause of your exception.
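Here is a minimal sketch of the kind of guard that makes the failure obvious (url and cookieName are the names from the question's snippet; Assert is MbUnit's):
// GetCookie reads the IE cookie store, which lives in the user profile.
// When the test host runs inside a service, no profile is loaded, so the
// call can return null and anything dereferencing the result throws
// "Object reference not set to an instance of an object".
using (var browser = new IE(url))
{
    string cookie = browser.GetCookie(url, cookieName);
    Assert.IsNotNull(cookie, "Cookie not found -- is a user profile loaded?");
    // ...assert on the cookie's value here...
}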

Related

Is it possible to run selenium as a windows service using C#? [duplicate]

So that we may perform front-to-back web UI testing, we are using Selenium and ChromeDriver to automate page loads/interaction as part of our testing pack.
This is behaving as expected during developer testing (on a developer's local machine), but we are struggling to perform these checks as part of our continuous integration build.
Our server plant is *NIX based, and all of our CI infrastructure runs on those machines. So that we may test Chrome under Windows (our delivery mechanism), we have configured a Selenium Grid. When the CI tests run, they access the grid in order to locate a Windows node to run the tests on.
We have had a Windows desktop provisioned solely for the purpose of running these tests. It contains our standard enterprise build of Windows 7 and will be rebooted periodically, in line with the IT department's update policy.
In an effort to ensure the Selenium Server is always running, we have added the Selenium Server (running in "node" mode) as a Windows service. The Selenium Server is configured to start ChromeDriver to drive the simulated user interaction.
However, when running the tests from CI they fail due to timeouts. Our working theory is that the system user running the service cannot create interactive windows. A web search turned up references to the "Session 0" problem, but with little to no constructive advice on how to move forward.
Starting the Selenium Server process manually from an interactive session is not a viable solution, as it leads to brittle tests, which then fail due to an infrastructure problem rather than a genuine test regression.
How can we have an instance of Selenium Server started via a Windows Service whenever the system reboots, that is capable of launching Chrome instances?
This can be done easily with NSSM.
Installing the services looks like this:
nssm install seleniumhub java -jar C:\selenium\selenium-server-standalone-2.45.0.jar -role hub -hubConfig C:\selenium\hub.json
nssm install seleniumnode java -jar C:\selenium\selenium-server-standalone-2.45.0.jar -role node -nodeConfig C:\selenium\node.json
It also provides an easy way to remove a service if needed:
nssm remove seleniumnode confirm
Add the nssm location to your PATH variable and run it from a console as admin.
UPDATE April 2021
NSSM has not been maintained for more than 3 years, so please consider other options such as WinSW. WinSW does the same job as NSSM and lets you keep the run configuration in XML.
You cannot run Selenium Grid as a Windows service ever since Windows Vista. Microsoft calls it "Session 0 Isolation". You could do it in Windows 2000 or XP, but since Vista came out, Microsoft no longer lets Grid interact with the desktop (or any other UI program, for that matter). Regardless of the fact that you still see the "interact with desktop" checkbox, it is a red herring. So you MUST run Selenium Grid in the foreground on that server in order for it to get access to the session. If it is running Windows Server, you could in theory have multiple sessions and leave Grid running in the foreground on one of the non-zero user sessions.
Right now you can't help it: it used to work fine in session 0, but for the past few days, after a Chrome update, it only works in interactive sessions.
Related bugs:
https://code.google.com/p/selenium/issues/detail?id=8029
https://code.google.com/p/chromium/issues/detail?id=422218
My preferred solution to this problem (and my default choice for running Selenium Grid as a service) is to use a simple tool called AlwaysUp. It has a free 30 day trial to try it out.
What to do:
Download AlwaysUp
Configure AlwaysUp to start the Selenium Grid node on startup
Configure AlwaysUp to run the Selenium node as a specific user (not the default System user)
This way the node will run as a service, survive machine restarts, and work with the latest version of Chrome.
If the user account you use to log in to the machine is different from the user account you specify to run the node as a service, then you will not see the browsers pop up on the desktop, as they are running in a different user session. The end result is almost identical to running as a normal service, but it gets around the Session 0 issue.
Yes, you should use NSSM. The important thing is to add your Windows account (or any other valid account) on the "Log on" tab. If you run your node with the "Local System account" option, you will get the Session 0 problem. With a normal user session, the nodes run smoothly, invisible in the background :)
We don't use Selenium Grid; we were disappointed with its stability. We use a "Jenkins grid" instead, that is, Jenkins slave nodes on various servers.
The slaves run as services with the interact-with-desktop flag. They are installed as NSSM services with the SERVICE_INTERACTIVE_PROCESS flag, making sure that NoInteractiveServices is set to 0 so that interactive services are actually allowed (cf. https://learn.microsoft.com/en-us/windows/desktop/services/interactive-services).
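If you want to check that switch from code, here is a small sketch (the key path and value name are taken from the Microsoft article above):
using Microsoft.Win32;

// Reads the flag that controls whether SERVICE_INTERACTIVE_PROCESS
// services may start: 0 allows them, 1 blocks them.
using (var key = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet\Control\Windows"))
{
    object value = key?.GetValue("NoInteractiveServices");
    System.Console.WriteLine("NoInteractiveServices = " + (value ?? "not set"));
}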
We don't have the fancy features of the grid (that is, balancing according to browser-type slots). Instead, we have Jenkins balance the test jobs across the slave nodes.
Initially we did not use the interact-with-desktop flag, so the browsers ran without a "real" display, but the behavior was not very stable (especially with resize commands).
Hope this helps.
As I explained in this thread, I found that a small paid tool, FireDaemon Pro, saved me a lot of time compared with trying to configure NSSM and other free tools.
It works well in the background, and restarts Selenium along with the server, which was my main requirement for running Selenium Standalone Server as a Windows Service.
This free tool would probably do it:
http://yajsw.sourceforge.net/
For that to work, you need a wrapper.conf file and a script to run the YAJSW wrapper. It takes time to read the documentation, but it is a free solution.
I wrote an example shared here, that installs JBoss7 as a Windows service.
Of course, you can simplify my example by a lot.

Logging into a VM to run GUI tests at test startup

We are setting up our automation to run remotely so we can start incorporating it into the builds (you know, the whole CI/CD thing). These are a handful of important automated GUI tests that, for obvious reasons, need an active VM to run. These are not browser tests; they are automated tests for a Windows application, so any support that Selenium brings to the table is of no use to us.
So now on to the challenge: how can I keep the VMs up and running without having to log into them using Remote Desktop Connection to allow them to run the tests properly? Currently, I have to connect to them from my local machine, minimize the connection, and then I can kick off the builds. As soon as I exit, however, the virtual machine is locked again.
I want the VMs to work completely independently of my machine, so I was skeptical about this approach because it seemed like it would still be tied to my machine only. Pretty much anyone in the company can log into the VMs from their machine using their credentials. What I would like to do is to programmatically connect to the VM during my global TestStartup and then disconnect at TearDown. Is this possible to do? Has anyone had success or run into similar situations with their automation integration process? We use a tool called LeanFT and NUnit as our test runner.
Your idea of logging in as part of the test is a bit fragile and prone to instability.
Here is the setup that works for every UI automation tool I've used on Windows:
Set up your VMs to never lock, sleep, or hibernate.
Avoid using RDC (turn that feature off, even for admins, if you can).
Only use the console viewer of your VM server.
Limit access to those systems using the permissions in the VM server so that only you and your team can interact with them.
Here is why this works. You have already discovered that when you disconnect the RDP connection, the session locks and your automation fails. Using the VM console viewer is essentially like turning the monitor connected to the system on and off. By keeping the machines on all the time and never letting them sleep, they are always available for running tests.
We are using LeanFT, and to encourage the stability of our tests we have set up tasks that check the running processes and kill any stray LeanFT runtimes that didn't get closed cleanly by a prior run, as well as any stray applications that were not closed properly after a testing run.
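A sketch of that kind of cleanup task (the process names are placeholders -- substitute whatever your LeanFT runtime and applications under test appear as in Task Manager):
using System.Diagnostics;

static void KillStrayProcesses(params string[] processNames)
{
    foreach (string name in processNames)
    {
        // GetProcessesByName takes the image name without ".exe"
        foreach (Process stray in Process.GetProcessesByName(name))
        {
            stray.Kill();
            stray.WaitForExit(5000); // give it a few seconds to go away
        }
    }
}

// e.g. KillStrayProcesses("LFTRuntime", "MyAppUnderTest");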
These kinds of issues are really annoying for UI automation.
In the end I found a solution. It is not pretty, but it works. All I did was create a Docker container and use it in the UI automation job.
The container is composed of SSHD, Xvfb and xfreerdp, which let you open RDP connections to remote machines, and because it uses Xvfb, a virtual display tool, it consumes few resources.
Here's the image I created for your reference.
https://hub.docker.com/repository/docker/ariyuan/ubuntu1604_ssh_rdp
Before your UI automation starts, you just need to tell the container to open a remote RDP connection to the machine where your UI automation is hosted. That way the display for the UI automation is kept alive for the whole execution. (You can drive it all from Jenkins, with parameters to connect to different remote machines.)

Powershell Permissions not working remotely

I have the same issue as here:
Run PowerShell script from ASP.NET
I am trying to run PowerShell scripts on the server through an ASP.NET web page. It works on the local server but does not work remotely. Remotely it returns nothing, as if the script had worked.
I tried modifying the permissions with icacls.exe
icacls.exe c:\test.ps1 /grant "IIS AppPool\DefaultAppPool:(OI)(CI)F"
This had no effect. When I read back the permissions:
icacls c:\test.ps1
NT AUTHORITY\Authenticated Users:(I)(M)
NT AUTHORITY\SYSTEM:(I)(F)
BUILTIN\Administrators:(I)(F)
BUILTIN\Users:(I)(RX)
I always get the same output, even after I try to modify it. Where is IIS AppPool\DefaultAppPool?
Update
I have been using a script that just opens Notepad for testing. When I run it locally, Notepad pops up. Remotely nothing seems to happen, but then I noticed in Task Manager that there were tons of instances of notepad.exe running. So it would seem that it is working, just not how I expected. The end goal I am trying to accomplish is to add minimal remote-control capabilities. I also play movies off of my web server, and it would be nice to be able to trigger some functions through my existing web interface. So the scripts would have to run as the current user. I suppose this may be better suited to WCF or another type of architecture, but it would be nice to just use my web interface for everything.
Make sure the PowerShell execution policy on the remote server allows script execution. If the web server runs as a 32-bit process, fire up an x86 PowerShell console on the machine in elevated (admin) mode and execute:
Set-ExecutionPolicy RemoteSigned
If the web page runs as 64-bit then do the same using a 64-bit elevated PowerShell console.
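If the policy is fine and the script still silently "works", it also helps to surface PowerShell's error stream instead of discarding it. A minimal sketch, assuming a reference to System.Management.Automation (c:\test.ps1 is the script from the question):
using System;
using System.Management.Automation;

public static string RunScript(string scriptPath)
{
    using (PowerShell ps = PowerShell.Create())
    {
        ps.AddCommand(scriptPath); // e.g. @"c:\test.ps1"
        var results = ps.Invoke();
        // Throw instead of silently returning nothing, which is what
        // the question describes happening remotely.
        if (ps.HadErrors)
            throw new InvalidOperationException(ps.Streams.Error[0].ToString());
        return string.Join(Environment.NewLine, results);
    }
}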

Triggering test suites from web interface for multiples test projects

My problem is that I have multiple test projects/suites which are built to test different applications. I need to trigger the tests from a web-based application by choosing which suite to run, i.e. one-click triggering. I need to know if I can use a test controller or any other method to trigger the test suites/projects from a web application, execute the tests on a remote machine, and get the results back.
I have also heard that test agents and controllers can only work with one project collection and cannot be shared with another test project collection. Is there any way to configure them for my problem statement?
OR
I have also explored triggering the test cases from the web page via MSTest.exe and VSTest.Console.exe. It works fine on the local machine, but when I publish and host the website on IIS, it says "To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop"".
I am stuck here and need some pointers on how to go about this. Any sort of help would be really appreciated. For reference, the trigger boils down to something like the sketch below.
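(A hedged sketch; the vstest path and logger argument are illustrative, not the exact code:)
using System.Diagnostics;

// vstestPath and testDll are placeholders for the local vstest.console.exe
// installation and the suite chosen in the web UI.
static string RunSuite(string vstestPath, string testDll)
{
    var psi = new ProcessStartInfo
    {
        FileName = vstestPath,
        Arguments = "\"" + testDll + "\" /Logger:trx",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    };
    using (var runner = Process.Start(psi))
    {
        string output = runner.StandardOutput.ReadToEnd();
        runner.WaitForExit();
        return output; // parsed and displayed on the web page
    }
}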

My exe runs fine by itself, but does nothing when loaded by a service

A simple exe for a tray icon, which works fine independently.
I launch it from a Windows service, and it seems to run (it shows up in Task Manager), but it doesn't seem to execute any code, i.e. no tray icon appears.
On Vista and Windows 2008, services run in a different session than the user -- any EXE that a service runs will run in the same session as the service. Before Vista, you need to check the "Allow Service to interact with desktop" box, otherwise the same thing applies.
This means that your tray icon EXE isn't able to interact with the user's desktop. You need to look at using CreateProcessAsUser to run the EXE in the correct session.
This blog post is aimed at people using ConfigMgr OS Deployment, but it contains a good list of the steps needed to run a process in another session. There are some non-obvious steps that you need to take or things fail in weird ways.
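For completeness, here is a rough sketch of the CreateProcessAsUser route (assuming the service runs as LocalSystem, which WTSQueryUserToken requires, and that a user is logged on at the console):
using System;
using System.Runtime.InteropServices;

static class DesktopLauncher
{
    [DllImport("kernel32.dll")]
    static extern uint WTSGetActiveConsoleSessionId();

    [DllImport("wtsapi32.dll", SetLastError = true)]
    static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct STARTUPINFO
    {
        public int cb;
        public string lpReserved, lpDesktop, lpTitle;
        public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars;
        public int dwFillAttribute, dwFlags;
        public short wShowWindow, cbReserved2;
        public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct PROCESS_INFORMATION
    {
        public IntPtr hProcess, hThread;
        public int dwProcessId, dwThreadId;
    }

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool CreateProcessAsUser(
        IntPtr hToken, string lpApplicationName, string lpCommandLine,
        IntPtr lpProcessAttributes, IntPtr lpThreadAttributes, bool bInheritHandles,
        uint dwCreationFlags, IntPtr lpEnvironment, string lpCurrentDirectory,
        ref STARTUPINFO lpStartupInfo, out PROCESS_INFORMATION lpProcessInformation);

    // Starts an exe in the session of the logged-on console user, so its
    // tray icon lands on the user's desktop instead of in session 0.
    public static void LaunchInUserSession(string exePath)
    {
        uint sessionId = WTSGetActiveConsoleSessionId();
        if (!WTSQueryUserToken(sessionId, out IntPtr userToken))
            throw new InvalidOperationException("No interactive user logged on.");
        try
        {
            var si = new STARTUPINFO { cb = Marshal.SizeOf(typeof(STARTUPINFO)), lpDesktop = @"winsta0\default" };
            if (!CreateProcessAsUser(userToken, exePath, null, IntPtr.Zero, IntPtr.Zero,
                                     false, 0, IntPtr.Zero, null, ref si, out PROCESS_INFORMATION pi))
                throw new System.ComponentModel.Win32Exception();
            CloseHandle(pi.hProcess);
            CloseHandle(pi.hThread);
        }
        finally { CloseHandle(userToken); }
    }
}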
