How to test WinForms applications - C#

Our .NET application is based on WinForms. The application controls different instruments, and most of the behavior is nicely stubbed to enable unit testing. One thing we run into many times is problems caused by (wrongly) using the GUI: wrongly handling a control's event, or accidentally selecting an item in a list box that is not there. This is the example I would like some help with:
We have a grid control that users use to select an item. When the user selects a certain item, this should update the 'activeItem' in our model. We found a bug here: when the user used a keyboard shortcut to select the last item in the grid control, the 'activeItem' was not updated. Of course this is caused by errors we made in the programming, but how can I test this GUI behavior, so that we can be sure it still works as expected if somebody changes the grid control? Could this be done with normal unit testing, or am I getting it completely wrong?
Thanks,
Erik

In our software we have some unit tests which instantiate the forms, perform actions on them, and then check whether the state of the model conforms to expectations. So in that regard - yes, you can do it with unit tests. However, we also found that those tests are easy to break by changing the UI around, and they can involve quite a bit of maintenance. There are automated test frameworks out there which can make your life easier. I have had a brief look at White and NUnitForms but haven't done much with them yet. Feel free to share your experience.
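As an illustration, here is a minimal sketch of such a test in NUnit, written against the grid scenario from the question (MainForm, ItemGrid, InstrumentModel, and ActiveItem are all invented names):

using NUnit.Framework;

[TestFixture]
public class MainFormTests
{
    [Test]
    public void SelectingLastGridRow_UpdatesActiveItemInModel()
    {
        // Hypothetical form and model; the instrument layer behind the
        // model is assumed to be stubbed, as described in the question.
        var model = new InstrumentModel();
        using (var form = new MainForm(model))
        {
            form.Show(); // many controls only behave fully once their handle exists

            // Drive the control the way the user's shortcut would,
            // rather than calling the event handler directly.
            var grid = form.ItemGrid; // hypothetical exposed DataGridView
            int last = grid.Rows.Count - 1;
            grid.CurrentCell = grid.Rows[last].Cells[0];

            Assert.AreSame(grid.Rows[last].DataBoundItem, model.ActiveItem);
        }
    }
}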

As far as I've seen, this part of testing is usually handled by automated testing software such as TestComplete or QTP. Most automation tools let you record a set of actions to perform, then specify the expectations to be checked when the actions are replayed.
But learning such software is usually a separate skill from either developing production code or manual software testing, so in our company, for example, we have separate positions for automation QA engineers.


WPF GUI testing without having to give a key/name to every component?

My management asked me if there were some GUI tests we could put in place to validate the end-user interface's reactions.
I'm personally not a fan of GUI testing and would prefer to invest more time in unit testing our services/ViewModels. The issue is that most of our old code is not very service-oriented, making it hard to unit test.
Also, our management argues that doing GUI testing would allow us to test the exact same binary we deliver to our customers.
I searched a bit, and so far every engine seems to require that we name every one of our components in order for the tool to identify which controls to interact with.
So my question: how would you start doing GUI testing with an existing set of WPF applications?
Thank you very much

WPF UI Automation with .NET 4.5, Prism and MVVM, with a ClickOnce app, for ease of use by a non-developer [closed]

I have searched and tried a few things already, namely this thread:
How to test a WPF user interface?
I have tried getting started with System.Windows.Automation and TestAutomationFX (a 3rd-party tool). My opinion is that while good for simple things, TestAutomationFX kind of bombs when it goes multiple UI levels down (a UserControl within a UserControl from a loaded assembly), and I may have to manually tweak its generated code to get what I want. System.Windows.Automation seems old, and I would have to do everything manually, which may take more time than I want to devote, as I am not full time on automation creation. I have also downloaded Inspect.exe from the Windows SDK for Windows 7, which works great for reflecting objects in my UI. Both testers run fine for simple code-behind, but a few layers down it gets a bit involved. I was also going to try 'TestStack/White' on GitHub, which took over from the original Project White.
I was curious whether anyone has recent experience with UI automation that a non-developer in a QA position could use. I was thinking of getting VS 2013 Test Pro, but that seems like potential overkill and is more expensive than VS 2013 Pro from what I could see. Basically this is not load testing or verifying complex, dynamically changing entity results; it is just functional testing, with hopefully ten or so runs with different variables. It is just a more confusing layout, as we are combining Prism (Microsoft.Practices.Prism) with MVVM as well.
I do not mind developing something in VS 2013 and .NET 4.5, but I was hoping for something that does not require developing a whole other set of projects, to save time. I am an extreme noob at unit testing projects, but the end goal is really:
Give a non-developer an exe or some environment to automatically run a ClickOnce UI written in WPF following Prism and MVVM architecture.
Hopefully have some type of CSV, config, or other method in which he can change variables to run certain tests.
Have it be able to take the exe of the ClickOnce app in a config or other changeable manner (ClickOnce apps are funny to locate; in my experience you open Task Manager and use 'Open file location', and the path differs from box to box).
This may be a lot to ask, or it may be simple for those who do unit testing every day, I dunno. I am up for trying 3rd-party products, non-.NET products that can drive .NET, and coding C# in VS to make a project (or projects) to run my UI (as long as it can be run on boxes that do not have VS).
Ideally, you wouldn't need to have many UI tests at all -- the bulk of your application's logic should be tested via unit tests. With MVVM, you can easily test your viewmodel to ensure that buttons are disabled/enabled when they're supposed to be, and so on.
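For instance, whether a button is enabled boils down to a command's CanExecute, which can be asserted without any UI at all. A minimal sketch, assuming Prism's DelegateCommand and an invented EditItemViewModel:

using System.Windows.Input;
using Microsoft.Practices.Prism.Commands;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical viewmodel: saving is only allowed once a name is entered.
public class EditItemViewModel
{
    public EditItemViewModel()
    {
        SaveCommand = new DelegateCommand(Save, () => !string.IsNullOrEmpty(Name));
    }

    public string Name { get; set; }
    public ICommand SaveCommand { get; private set; }

    private void Save() { /* persist changes */ }
}

[TestClass]
public class EditItemViewModelTests
{
    [TestMethod]
    public void SaveIsDisabledUntilANameIsEntered()
    {
        var vm = new EditItemViewModel();
        Assert.IsFalse(vm.SaveCommand.CanExecute(null)); // Save button would be greyed out

        vm.Name = "Widget";
        Assert.IsTrue(vm.SaveCommand.CanExecute(null));  // now it would be enabled
    }
}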
Testing your core business logic via your UI is a recipe for disaster. Just don't do it. UI tests are very brittle and tend to need to be re-recorded or updated whenever your UI changes to any significant degree. If your tests fail for reasons unrelated to changes in the core logic you're trying to test, you become less likely to trust that the tests validate what they're supposed to validate. If you don't trust the tests, you'll start to ignore failures. "Oh, that test fails sometimes, it's no big deal." If you can't trust your test to be accurate 100% of the time, why bother having the test?
So what you want to test via a UI testing tool is the very top-most level of the UI, just to make sure that your viewmodel is correctly bound to your view. This boils down to, really, just a handful of tests. For that, you can easily use Coded UI. The tricky part is making sure that all of your controls are automation-friendly, which does involve giving the controls proper names and making sure you're attaching the correct automation properties to your controls.
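For WPF that mostly means setting AutomationProperties on your controls, either in XAML or from code. A tiny sketch (the control and id names are invented):

using System.Windows.Automation; // WPF's AutomationProperties
using System.Windows.Controls;

public static class AutomationTagging
{
    // Equivalent to AutomationProperties.AutomationId="SaveButton" in XAML.
    // Gives the control a stable identity automation tools can query,
    // independent of its position or display text.
    public static void Tag(Button saveButton)
    {
        AutomationProperties.SetAutomationId(saveButton, "SaveButton");
        AutomationProperties.SetName(saveButton, "Save");
    }
}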
Coded UI is available in VS Premium and above, and you don't need to be using Microsoft Test Manager to manage and run the tests, although MTM certainly makes that easier.
It sounds like what you're really after is MTM, though. You want your manual testers to be able to record tests by interacting with your application, then play them back later. This is exactly what MTM was designed for, and what it excels at.
Sadly, this answer probably comes too late for the stage you are at, but I am happy with the basics of the MVVM Light Toolkit:
Basically you start by setting up an IoC container and dependency injection scheme, plus Prism. Your services can then provide design-time, run-time, and test-time implementations, with mocking etc. There are some videos and tutorials, but like most of WPF, it is hard as a newcomer to sort the ancient obsolete stuff from the relevant best practices.
MVVM Light at least has a focus on enabling Blend to work at design time and smells like some kind of best practice.
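For a flavor of that setup, here is a minimal sketch modeled on the ViewModelLocator that the MVVM Light project templates generate (the IDataService types are invented):

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Ioc;

public interface IDataService { string GetGreeting(); }

// Canned data so Blend has something to render at design time.
public class DesignDataService : IDataService
{
    public string GetGreeting() { return "Hello from design time"; }
}

// Talks to the real backend at run time.
public class DataService : IDataService
{
    public string GetGreeting() { return "Hello from run time"; }
}

public class ViewModelLocator
{
    static ViewModelLocator()
    {
        if (ViewModelBase.IsInDesignModeStatic)
            SimpleIoc.Default.Register<IDataService, DesignDataService>();
        else
            SimpleIoc.Default.Register<IDataService, DataService>();
        // A test assembly would register a mock implementation instead.
    }
}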
Now for the part where this sadly does not answer your question: the idea is to be able to do your layout in Blend so you can see what things will look like without endless tweak-compile-run cycles. Testing is purely on the underlying ViewModel and Model. You then hand-wave that the app works, because the UI bindings are unlikely to be wrong / are relatively simple to run through and verify manually. That last part works for my project but may be deeply unsatisfying for yours.

How to data-drive Coded UI tests?

I am comfortable with recording Coded UI tests using VS2010 Ultimate.
The problem I am running into is that some of the UI controls being tested in the Silverlight app we have built require data that is machine-specific.
If I run the tests on my machine, they run great. However, my teammates also have to run the same tests on their own machines. The problem is that the Coded UI tests were recorded with my machine name as an input to certain text boxes in the application under test. Unless my teammates re-record the same tests with their own machine names, those tests will fail.
After some digging I saw that you can associate a CSV file, an Excel spreadsheet, a database, or an XML file to drive your Coded UI tests. However, all the examples on MSDN and elsewhere only show pre-configured answer files, and most of them are in CSV format.
What goes in the answer file, and how can I create one of my own in XML format to drive the values entered into the text boxes when the Coded UI test replays?
Any links and guidance would be greatly appreciated!
Disclaimer: not a fan of Coded UI.
Link1 - Creating a data-driven CodedUI test
It should be possible to use record-and-replay to generate the Coded UI test, then make the modifications described in the link above to drive it with inputs from an input set.
However, I'm not sure whether re-recording the test would obliterate your modifications... You'd have to try it out and see. If you need to re-record, I'd advise using Coded UI in scripting mode (instead of record-and-replay).
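To give a rough idea of the XML route (a sketch only; the file, element, and column names are all invented): the answer file is just repeated rows of named values, and the DataSource attribute points the test at it.

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class DataDrivenUITest
{
    public TestContext TestContext { get; set; }

    // MachineData.xml (invented name), deployed next to the test:
    //   <Rows>
    //     <Row><MachineName>DEVBOX1</MachineName></Row>
    //     <Row><MachineName>DEVBOX2</MachineName></Row>
    //   </Rows>
    [TestMethod]
    [DeploymentItem("MachineData.xml")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
                "|DataDirectory|\\MachineData.xml", "Row",
                DataAccessMethod.Sequential)]
    public void EnterMachineSpecificValue()
    {
        // The test runs once per <Row>; read the current row's value
        // instead of the machine name that was hard-coded at record time.
        string machine = TestContext.DataRow["MachineName"].ToString();

        // ...then feed 'machine' into the recorded UIMap action, e.g.:
        // this.UIMap.EnterMachineNameParams.UIMachineTextBoxText = machine;
        // this.UIMap.EnterMachineName();
    }
}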
Separate the business logic from the UI and you won't have this problem when testing any functionality/behavior: the data issues are then solved. As for testing the UI bits themselves, there are a couple of ways of handling this. One relatively simple method is to bootstrap an IoC container with mocks and set up the UI tests on top of the mocked data.
If you want to get into more automated UAT testing, there are tools for that. I'm not sure about Silverlight/WPF per se (as I don't spend much time in either, having moved all business logic out of the UI bits), but I would imagine there has to be one.

Outside-in BDD (with SpecFlow)

I'm new to BDD, but I find it very interesting and want to develop my next project using it. After googling and watching screencasts, I still have lots of questions about BDD in real life.
1. Declarative or Imperative scenarios?
Most of the given-when-then scenarios I have seen were written in terms of the UI (imperative):
Scenario: Login
Given I am on the Login-page
When I enter 'AUser' in the textbox 'UserName'
And I enter 'APassword' in the textbox 'Password'
And I click the 'Login' button
Then I should see the following text 'You are logged in'
I find those tests extremely brittle, and they say nothing about the business value of clicking buttons. I think they are a nightmare to maintain. Why do most examples use imperative scenarios?
Scenario: Login (declarative)
Given I am not logged in
When I log in using valid credentials
Then I should be logged in
If you prefer the declarative style, how do you describe things like the 'Home page' or 'Products page'?
Tips for writing good specifications
2. Exercise UI or not?
Most of the step implementations I have seen used WatiN, White, or something similar to implement scenarios from the user's point of view: starting the browser, clicking buttons. I think it's extremely slow and brittle. Well, I could use something like a Page Object to make the tests less brittle, but that is yet more work, especially for desktop applications with a complex UI.
How do you implement scenarios in real-life projects - by exercising the UI, or by testing controllers/presenters?
Best way to apply BDD
3. Real database or not?
When the Given part of a scenario is implemented, it often needs some data to be present in the system (e.g. some products for a shop application). How do you implement that part - by adding data to a real database (full end-to-end testing), or by providing repository stubs to the controllers?
Waiting for experienced answers!
UPDATE: Added useful links to the questions.
Declarative is the proper way, IMO. If you're talking about .aspx page file names, you're doing it wrong. The purpose of the story is to facilitate communication between developers and non-developers. Non-developers don't care about products.aspx; they care about a product listing. Your system does something the non-developers find value in, and that is what you're trying to capture.
Well, the stories tell you the high-level features you need to implement: what your system must do. The only way to really tell whether you've done this is, in fact, to exercise the UI. BDD/SpecFlow stories to me don't replace unit tests; rather, they're your integration tests. If you break one, you've broken the value the business gets from your software. Unit tests are implementation details your users don't care about, and they only test each piece in isolation. That can't tell you whether A and B actually work together all the time (in theory it should; in practice interesting [read: unexpected] things happen when you actually have the two parts playing with each other). Automated end-to-end tests can help with your QA as well. If a functional area breaks, you know about it, and QA can spend their time in other areas of the application while you determine what broke the integration tests.
This is a tricky one. We've taken a hybrid approach. We do use the database (integration tests, after all, test the system functioning as one thing, rather than as individual components), but rather than resetting configurations all the time we use Deleporter to replace our repositories with Moq mocks when we need to. It seems to work OK, but there are certainly pros and cons either way. I think we're still largely figuring this out ourselves.
Edit: I just found this article, which describes the concept of limiting yourself to talking only about specific domains in order to avoid brittle scenarios.
His main point is that the minimum set of domains you can talk about is the problem domain and the solution domain. If you talk about anything outside those two domains, you involve too many stakeholders, introduce too much noise, and make your scenarios brittle.
He also argues that an absolutely "declarative" or "imperative" model is a myth. He talks about a cognitive model called "chunking", saying that at any level of abstraction you can "chunk up" or "chunk down". This means you can get more explicit (how?) or more meta (what or why?). You chunk up from an imperative model by asking "what are we doing?" You chunk down by asking "how will we do this?" So I guess I wouldn't get too hung up on declarative vs. imperative - it won't get you anywhere as far as this problem goes.
What will get you somewhere is figuring out which domain each term belongs in, possibly by identifying which stakeholder is the expert for that domain. Once you've identified all the domains, you can either pick related terms that are in one of the scenario's most prominent domains, or remove non-fitting statements entirely. If that isn't sufficient, you can split up, further specify, or move the scenario so it can satisfy these requirements.
BTW, he also uses the scenario of logging in on a UI, so you've got concrete guidance :)
Before the edit: (some of this still applies; the "DB or no DB" and "UI or no UI" questions are unrelated)
1 - Declarative or Imperative scenarios?
Declarative when you can, though imperative has some value (at some points in a project lifecycle).
Imperative is an easier way of thinking for testers and business analysts who aren't as familiar with information theory and design. It is also easier to think about early in a project, before you've nailed down your problem domain and workflows. It can be useful for exploratory thinking.
Declarative is less subject to change over time. Since a GUI is the part of an application most subject to churn at a whim, this is extremely valuable. It is easier to think about once you've nailed down your problem domain and workflows, and are more focused on relational concepts. It is a more stable and more generally applicable model.
If you write test cases with a generic and declarative model, you could implement them using any combination of full app GUI automation, integration tests, or unit tests.
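For example, the declarative login scenario above could be bound beneath the UI entirely (a sketch; AuthenticationService is an invented stand-in for your application's service layer):

using NUnit.Framework;
using TechTalk.SpecFlow;

// Invented stand-in for the application's real service layer.
public class AuthenticationService
{
    private bool _isLoggedIn;
    public void LogOut() { _isLoggedIn = false; }
    public bool LogIn(string user, string password)
    {
        _isLoggedIn = (user == "AUser" && password == "APassword");
        return _isLoggedIn;
    }
}

[Binding]
public class LoginSteps
{
    private readonly AuthenticationService _auth = new AuthenticationService();
    private bool _loggedIn;

    [Given(@"I am not logged in")]
    public void GivenIAmNotLoggedIn()
    {
        _auth.LogOut();
    }

    [When(@"I log in using valid credentials")]
    public void WhenILogInUsingValidCredentials()
    {
        // Exercises the service directly instead of driving a browser or window.
        _loggedIn = _auth.LogIn("AUser", "APassword");
    }

    [Then(@"I should be logged in")]
    public void ThenIShouldBeLoggedIn()
    {
        Assert.IsTrue(_loggedIn);
    }
}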
how do you describe such stuff like 'Home page' or 'Products page'?
I'm not sure I would at the base level of features and requirements. You might make sub-features and sub-requirements that describe implementation details, like specific UI workflows. If you're describing a piece of a UI, then you should be defining a UI feature/requirement.
2 - Exercise UI or not?
Both.
I think it's extremely slow and brittle
Yes, it is. Exercise every high-level scenario/requirement through the UI with full DB integration, but don't exercise every single code path with end-to-end UI automation, and certainly not the edge cases. If you do, you'll spend more time getting the tests working and a lot less time actually testing your application.
You can architect your application so that you can do lower-cost integration tests, including single-piece UI-based tests. Unit tests are also valuable.
But the fewer integration tests you do, the more forehead-slapping bugs you're going to miss. It may be easier to write unit tests, and they will certainly be less brittle, but you'll be testing less of your application, by definition.
3 - Real database or not?
Both.
High level end-to-end integration tests must be done with the full system in place. This includes a real DB, running your tests with each system on a different server, etc.
The lower level you get, the more I advocate mock objects.
Unit tests should only test individual classes.
Mid-level integration tests should avoid expensive, brittle, and impactful dependencies such as the file system, databases, the network, etc. Try to test the implementation of those brittle and impactful dependencies with unit tests and end-to-end tests only.
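A minimal sketch of such a mid-level test using Moq, with invented types:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

public class Product
{
    public Product(string name) { Name = name; }
    public string Name { get; private set; }
}

public interface IProductRepository
{
    IList<Product> GetTopSelling(int count);
}

// The class under test depends only on the interface, so no database
// is needed to exercise it.
public class ProductListing
{
    private readonly IProductRepository _repository;
    public ProductListing(IProductRepository repository) { _repository = repository; }
    public IList<Product> Load(int count) { return _repository.GetTopSelling(count); }
}

[TestFixture]
public class ProductListingTests
{
    [Test]
    public void LoadReturnsTopSellersWithoutTouchingTheDatabase()
    {
        var repo = new Mock<IProductRepository>();
        repo.Setup(r => r.GetTopSelling(5))
            .Returns(new List<Product> { new Product("Anvil") });

        var listing = new ProductListing(repo.Object);

        Assert.AreEqual(1, listing.Load(5).Count);
        repo.Verify(r => r.GetTopSelling(5), Times.Once());
    }
}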
Instead of mentioning a page by name, describe what it represents, e.g.
Scenario: Customer logs in successfully
When I log in
Then I should see an overview of ACME's top selling products
You can test directly against underlying APIs or models, but the more you do this, the more you risk not catching an integration issue. One approach is to balance things with a small number of full-stack tests, and a larger number which test between two layers only.

System Testing a desktop application

I've got a desktop application written in C#, created using VS2008 Pro, and unit tested with the NUnit framework and the TestDriven.Net plugin for VS2008. I need to conduct system testing on the application.
I've previously done web-based system tests using Badboy and the Selenium plugin for Firefox, but I'm new to Visual Studio and C#.
I would appreciate it if someone could share their advice regarding this.
System testing will likely need to be done via the UI. This gives you two options:
1) You can conduct the test cases manually, clicking through the elements.
2) You can automate the test cases by programming against the UI. There are plenty of commercial tools to do this, or you can use a programming framework like the Microsoft UI Automation framework (sketched below); these tend to use the accessibility APIs built into Windows to access your UI.
Whether you go the manual or automated route depends on how many times you will be running the tests. If you are just going to run them once or twice, don't spend the time automating. You will never earn it back. If you are going to run them often, automating can be very handy.
A word of caution: Automating the UI isn't hard, but it is very brittle. If the application is changing a lot, the tests will require a lot of maintenance.
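To make the automated option concrete, here is a minimal sketch against the Microsoft UI Automation framework mentioned above (the exe name, window title, and AutomationId are invented):

using System.Diagnostics;
using System.Threading;
using System.Windows.Automation; // reference UIAutomationClient and UIAutomationTypes

class UiSmokeTest
{
    static void Main()
    {
        Process.Start("MyApp.exe"); // hypothetical application under test
        Thread.Sleep(2000);         // crude; poll for the window in real code

        // Find the main window by title, then a button by its AutomationId.
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "My App"));

        AutomationElement button = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, "SaveButton"));

        // Click it via the Invoke pattern exposed through the accessibility APIs.
        var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}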
As Thomas Owens commented on your question, first you must decide what kind of system testing you want to do. But assuming you want to start with functional system tests: prepare the use cases you want to automate, then find a proper tool.
Just for a start:
AutoIt – not a test automation tool as such, but it lets you automate tasks, so you could record/script use cases. Not really recommended, but it can be done.
HP QuickTest Pro – use cases can easily be automated with this tool via recording/scripting, but it is expensive, so maybe not worth it for personal use.
IBM Rational Robot – same story as HP QTP.
PowerShell – you could write scripts in PowerShell and execute them. If you use a dedicated IDE-like tool for PowerShell, you can record tests as well. I did some web automation via PowerShell and it worked; with a bit of work you could probably script around your desktop app too.
The best thing would be to try different tools and use the one that suits you best. Try this link and this link.
System tests usually comprise use cases, end-to-end scenarios, and other scripted functions that real people execute. These are the tests that don't lend themselves well to automation, as they ask your unit-tested cogs to work with each other. You might have great unit tests for your "nuts" and your "wrenches", but only a comprehensive system test will tell you whether you have the right-sized wrench for the nut at hand, how to select/return it from/to the drawer, etc.
In short - manual tests.
If you're willing to put money down, you could look at something like TestComplete.
Although I haven't really used it yet (our company just bought it), it seems quite nice. You can record clicks, keypresses, and so on, define success criteria, and it will replay the test for you later. It appears to be quite smart about UI changes - it remembers which button you clicked, not just the (x,y) coordinates of each click.
It's scriptable, or drag-and-drop programmable.
I'm not affiliated in any way, and this is not an endorsement, because I haven't really formed an opinion of it yet.
Perhaps NUnitForms could be useful for you?
