Run Coded UI tests to automate actions - C#

Is there a way of running CodedUI steps outside a test project?
I want to use them to automate some actions in an application.

The program mstest.exe can be used to invoke Coded UI tests. Its /test:{test name} option allows a specific test (i.e., activity) to be executed, so several different activities (i.e., tests) can be combined into one source file while only the desired activity is executed. Calling mstest.exe from a batch or PowerShell script allows the activity to be executed without needing to type a long command each time.
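For example, if the compiled test assembly were named UIAutomation.dll and contained a test method ResetDatabaseFromBackup (both names here are illustrative, not from the question), the call would look like:
> mstest /testcontainer:UIAutomation.dll /test:ResetDatabaseFromBackup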
If you already use Coded UI then there is no reason why it cannot be used for automating a series of GUI actions.
An example: for one project we needed to restore a database from a backup before each series of tests. Manually that took 5 minutes, and sometimes we did it wrong and so wasted time. With Coded UI it always worked and it ran quickly.

There's a significant amount of overhead involved in Coded UI that you may not need in your automation task. To execute a Coded UI test (and therefore run your automation), you'll need a full Visual Studio Professional or a Test Controller/Test Agent installed on every machine that will be running the test/automation, and the machine will have to have a UI that is always available, i.e., a virtual machine configured so the desktop is always available and will not receive interactions from another user.
Since your question was rather vague about what you want to automate, I can't really suggest anything in place of Coded UI, but it should be enough to say that you should use the tool that's best suited for the job at hand. Sure, you could use it to run your automation, but why would you want to? (insert imagery of a Corvette pulling a camper here)

Related

Why would I bother properly terminating all threads in a multi-thread process?

The company I work for uses Visual Studio to develop its website and all of its features, and there is also a separate site that's been developed for testing the site. This 'testing' site can run individual test cases against the website, and must be run for each possible case.
Everything is written in VB.NET, and each time the program is run a single thread is created to run the test. However, at the 'end' of the test the thread seems to still linger. The stop button in Visual Studio must be clicked manually in order to terminate the application. Also, a process icon lingers in the task bar long after the application has closed.
It appears to me that the program is not correctly terminating all threads run during the tests, but I'm not sure if this is an issue worth bringing up in the office, so I ask the following question...
What is the purpose of properly closing an application and all threads running on it, and what are the consequences, if any, of not doing so?
Well, it's probably a small problem now, but it's not good practice, IMHO. Imagine what would happen if the same code were executed by a continuous integration server, for instance TeamCity (or Jenkins, or...), with the unit tests run continuously and automatically by said build server.
What would happen to the build status when those threads fail to close down cleanly? We often face this problem due to bad design decisions in threading, or due to simple (and possibly idiotic) mistakes in our unit testing code. The net effect, though, is a hung build process.
I've seen CI servers hang for almost half a day before someone (mercifully) killed the build process. Essentially, this indicates a problem in the code that may or may not become a huge issue. If this were server-side code, there is potential for it to lead to a pretty bad situation. My advice would be to dig out your introspection toolkits (memory profiler, performance profiler, etc.), see what exactly is going on, and resolve it.
We had a similar problem with an application that is called to index SPA pages on our application server. It was throwing an exception in some cases and threads were not closing. The biggest downside is that it consumes the server's memory, which is bad.
Another downside, since it runs as a web application, is that it consumes the available ports and stops running when it runs out of them.
The code should be modified to gracefully terminate the thread after finishing or on exceptions, and of course to report any exceptions.
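A minimal sketch of the idea in C# (the original code is VB.NET; the RunTests method and the timeout value are illustrative assumptions): marking the worker as a background thread means it cannot keep the process alive, and joining with a timeout surfaces a hung test instead of hiding it.
using System;
using System.Threading;

class TestRunner
{
    static void Main()
    {
        var worker = new Thread(RunTests)
        {
            IsBackground = true // a background thread cannot keep the process alive
        };
        worker.Start();

        // Wait for the tests, but give up after a timeout so a hung
        // test cannot block application shutdown indefinitely.
        if (!worker.Join(TimeSpan.FromMinutes(10)))
            Console.Error.WriteLine("Test thread did not finish in time.");
    }

    static void RunTests()
    {
        try
        {
            // ... run the test cases here ...
        }
        catch (Exception ex)
        {
            // Report the failure instead of letting the thread die silently.
            Console.Error.WriteLine("Test thread failed: " + ex);
        }
    }
}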

Queueing Load Tests in Visual Studio 2012

I'm currently finishing up a project for University that requires us to build an enterprise application using techniques described in Fowler's "Patterns of Enterprise Application Architecture".
It's your bog-standard ASP.NET MVC application which talks to a service layer, which talks to a data layer, etc.
We've also been asked to run several load test scenarios, ranging from 1-25 users. I've created a load test per scenario (1User.loadtest, 5User.loadtest, 10User.loadtest etc..) and I was wondering if there was any way to queue these up and leave them running, rather than starting one, coming back a few minutes later, starting another.. etc.
TL;DR - Anybody know a way to queue load tests?
Load tests are automatically queued. When running a load test, you can still add a new test. However, you still have to click N times...
Another solution is to use the command line tool.
> mstest /TestContainer:LoadTest1.loadtest
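For instance, a small batch file can queue all the scenarios from the question back to back, with no clicking between runs (file names taken from the question; the 25-user container assumes the same naming pattern):
> mstest /TestContainer:1User.loadtest
> mstest /TestContainer:5User.loadtest
> mstest /TestContainer:10User.loadtest
> mstest /TestContainer:25User.loadtest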

Setting up Build Server to run NUnit Selenium Automated tests

I've been assigned a task of setting up a build server (jenkins) and running automated tests after the build agent completes the build.
We are using NUnit and selenium to run automated tests.
The main concern is wait time. Suppose several users check in their sources, a build is run, and automated tests are run afterwards (there could be several hundred of these). What's the best way to set this up so that each user does NOT have to wait in a queue for test results? Also, I'm to consider things like test result reports, etc.
Where do I start? What do I even google?
I'm very new at this stuff, and any info on doing this would be greatly appreciated. Thanks!
The first thing you'll want to do is to separate your unit tests from your integration tests.
Unit tests should be fast. Integration tests will obviously be slower since you're interacting with external components.
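A minimal sketch of that separation using NUnit's [Category] attribute (class and test names are illustrative):
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    [Test]
    public void Total_IsSumOfLineItems()
    {
        // Fast unit test: pure in-memory logic, no browser involved.
    }

    [Test, Category("Integration")]
    public void Checkout_Succeeds_InBrowser()
    {
        // Slow integration test: drives the site through Selenium.
    }
}
Your Jenkins jobs can then pick which set to run with the NUnit 2.x console runner's include/exclude switches, e.g. nunit-console /exclude:Integration YourTests.dll for the fast job and /include:Integration for the slow one.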
As far as configuring your environment, to do what you're trying to do properly, you'll need to research using Jenkins in a Master/multiple-Slave configuration. This isn't terribly complex, but can take some time to set up.
What you'll likely end up doing is setting up a number of jobs within Jenkins to handle each part of your build process: i.e., one job to do the compilation, at least one job to run the unit tests, and at least one job to run the integration tests (and then maybe packaging or deployment jobs, depending on how far you want to take this...).
Depending on how slow your overall build process is, you could easily have one job for each component's integration tests and run these concurrently on different slave machines. A parent job would then aggregate the results and determine whether or not the check-in passed.
For reporting, you'll want to install the HTML Publisher Plugin, and the NUnit Plugin. These plugins will allow you to bundle the reports produced with the rest of the build artifacts.
In order to give feedback to your team, you'll also want to look at the Wall Display Plugin to display the status of the jobs.

Passing parameters to start as a console or GUI application?

I have a console application that will be kicked off with a scheduler. If, for some reason, part of that file cannot be built, I need a GUI front end so we can run it the next day with specific input.
Is there a way to pass parameters to the application entry point so that it starts either the console application or the GUI application based on the arguments passed?
It sounds like what you want is to either run as a console app or a Windows app based on a command-line switch.
If you look at the last message in this thread, Jeffrey Knight posted code to do what you are asking.
However, note that many "hybrid" apps actually ship two different executables (look at Visual Studio: devenv.exe is the GUI, devenv.com is the console). Using a "hybrid" approach can sometimes lead to hard-to-track-down issues.
Go to your main method (Program.cs). You'll put your logic there, determine what to do, and conditionally execute Application.Run().
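A minimal sketch of that Main method (the /gui switch name and the MainForm class are assumptions for illustration, not from the question):
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        if (Array.Exists(args, a => a.Equals("/gui", StringComparison.OrdinalIgnoreCase)))
        {
            // Manual re-run case: show the GUI front end.
            Application.EnableVisualStyles();
            Application.Run(new MainForm()); // MainForm is a hypothetical form
        }
        else
        {
            // Normal scheduled run: stay in console mode.
            RunConsoleJob(args);
        }
    }

    static void RunConsoleJob(string[] args)
    {
        Console.WriteLine("Running in console mode...");
        // ... the existing batch logic goes here ...
    }
}
One wrinkle, touched on above: compiled as a Windows application the console path has no console window to write to, while compiled as a console application the GUI path will flash a console window. That is exactly the sort of hybrid issue mentioned earlier.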
I think Philip is right. Although I've been using the "hybrid" approach in a widely deployed commercial application without problems. I did have some issues with the "hybrid" code I started out with, so I ended up fixing them and re-releasing the solution.
So feel free to take advantage of it. It's actually quite simple to use. The hybrid system is on Google Code; it updates an old CodeGuru solution of this technique and provides the source code and working example binaries.
Write the GUI output to a file that the console app checks when loading. This way your console app can do the repair operations and the normal operations in one scheduled operation.
One solution to this would be to have the console app write the config file for a GUI app (WinForms is simplest).
I like the hybrid approach; the command-line switch appears to be fluff.
It could be simpler to have two applications using the same engine for common functionality. The way to think of it is that the console app is for computers to use while the GUI app is for humans to use. Since the CLI app executes first, it can communicate its data to the GUI app through the config file.
One side benefit is that the interface to the processing engine would be more concise and thus easier to maintain in the future.
This would be the simplest approach, because the config-file mechanism is readily available and you do not have to write a bunch of formatting and parsing routines.
If you don't want to use the config mechanism, you could also serialize the data directly to a JSON or XML file to transfer it easily.
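A minimal sketch of that hand-off via XML serialization (the HandoffSettings class, its properties, and the method names are illustrative assumptions):
using System.IO;
using System.Xml.Serialization;

public class HandoffSettings
{
    public string FailedFile { get; set; }
    public string Reason { get; set; }
}

public static class Handoff
{
    static readonly XmlSerializer Serializer =
        new XmlSerializer(typeof(HandoffSettings));

    // The console app writes the state the GUI will need the next day.
    public static void Save(HandoffSettings settings, string path)
    {
        using (var writer = File.CreateText(path))
            Serializer.Serialize(writer, settings);
    }

    // The GUI app reads it back on startup.
    public static HandoffSettings Load(string path)
    {
        using (var reader = File.OpenText(path))
            return (HandoffSettings)Serializer.Deserialize(reader);
    }
}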

Has anyone found a way to run C# Selenium RC tests in parallel?

Has anyone found a way to run Selenium RC / Selenium Grid tests, written in C# in parallel?
I've currently got a sizable test suite written using Selenium RC's C# driver. Running the entire test suite takes a little over an hour to complete. I normally don't have to run the entire suite so it hasn't been a concern up to now, but it's something that I'd like to be able to do more regularly (i.e., as part of an automated build).
I've been spending some time recently poking around with the Selenium Grid project, whose purpose essentially is to allow those tests to run in parallel. Unfortunately, it seems that the TestDriven.net plugin that I'm using runs the tests serially (i.e., one after another). I'm assuming that NUnit would execute the tests in a similar fashion, although I haven't actually tested this out.
I've noticed that the NUnit 2.5 betas are starting to talk about running tests in parallel with pNUnit, but I haven't really familiarized myself enough with the project to know for sure whether this would work.
Another option I'm considering is separating my test suite into different libraries which would let me run a test from each library concurrently, but I'd like to avoid that if possible since I'm not convinced this is a valid reason for splitting up the test suite.
I am working on this very thing and have found that the latest Gallio can drive MbUnit tests in parallel. You can drive them against a single Selenium Grid hub, which can have several remote control servers listening.
I'm using the latest nightly build of Gallio to get the ParallelizableAttribute and DegreeOfParallelismAttribute.
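As a minimal sketch of how those two attributes fit together (my reading of the MbUnit v3 API; the parallelism value is an arbitrary example):
using MbUnit.Framework;

// Assembly-level cap on how many tests may run concurrently.
[assembly: DegreeOfParallelism(4)]

[TestFixture]
public class GridTests
{
    [Test, Parallelizable]
    public void SomeScenario()
    {
        // ... test body ...
    }
}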
One thing I've noticed is that I cannot rely on [SetUp] and [TearDown] being isolated between the parallel tests. You'll need the test to look something like this:
[Test]
public void Foo()
{
    // Create the Selenium session inside the test body rather than in
    // [SetUp], so each parallel test gets its own remote session.
    var s = new DefaultSelenium("http://grid", 4444, "*firefox",
                                "http://server-under-test");
    s.Start();
    s.Open("mypage.aspx");
    // ... test steps continue here ...
    s.Stop();
}
Using the [SetUp] attribute to start the Selenium session was causing the tests to not get the remote session from s.Start().
I wrote PNUnit as an extension for NUnit almost three years ago and I'm happy to see it was finally integrated into NUnit.
We use it on a daily basis to test our software under different distros and combinations. Just to give an example: we have a suite of heavy (long-running) tests with about 210 tests. Each of them sets up a server and runs a command-line client performing several operations (up to 210 scenarios).
Well, we use the same suite to run the tests on different Linux and Windows variations, and also combined ones: a Windows server with a Linux client, Windows XP, Vista, then domain controller, out of domain, and so on. We use the same binaries and just have "agents" launched on several boxes.
We use the same platform for balancing the test load (I mean, running it in chunks, faster), for running several combinations at the same time, and, what I think is more interesting, for defining multi-client scenarios: two clients wait for the server to start up, then launch operations, sync with each other, and so on. We also use PNUnit for load testing (hundreds of boxes against a single server).
So, if you have any questions about how to set it up (which is not simple yet, I'm afraid), don't hesitate to ask.
Also I wrote an article long ago about it at DDJ: http://www.ddj.com/architect/193104810
Hope it helps
I don't know if "no answer" counts as an answer, but I'd say you have researched everything and really came up with the two possible solutions:
Test Suite runs tests in parallel
Split the test suite up
I am at a loss for anything else.
