I am comfortable with recording Coded UI tests using VS2010 Ultimate.
The problem I am running into is that some of the UI controls being tested in the Silverlight app that we have built require data that is machine specific.
If I run the tests on my machine they run great. However, my teammates also have to run the same tests on their own machines. The problem is that the Coded UI tests are recorded with my machine name as an input setting for certain text boxes in the application under test. Unless my teammates re-record the same test with their own machine names, those tests will fail.
After some digging I saw that you can associate a CSV, Excel, database, or XML file to drive your Coded UI tests. However, all the examples on MSDN and elsewhere only show pre-configured answer files, and most of them are in CSV format.
What goes in the answer file, and how can I create one of my own in XML format to drive the values entered into the text boxes when the Coded UI test replays?
Any links and guidance would be greatly appreciated!
Disclaimer: Not a fan of CodedUI.
Link1 - Creating a data-driven CodedUI test
It should be possible to use record-n-replay to generate the CodedUI test. Then make the modifications listed above to drive it with inputs from an input-set.
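To make the idea concrete, here is a hedged sketch of what the XML-driven version might look like. The `DataSource` attribute string and the `TestContext.DataRow` access are the standard MSTest mechanism; the file name, element names, and the `UIMap` action/parameter names (`EnterMachineNameParams`, `MachineNameTextBoxText`) are assumptions standing in for whatever your recorder generated.

```csharp
// Hypothetical XML answer file, deployed alongside the test as MachineData.xml.
// Each teammate edits the value (or you generate the file per machine):
//
// <DataContext>
//   <Row>
//     <MachineName>DEVBOX01</MachineName>
//   </Row>
// </DataContext>

[DeploymentItem("MachineData.xml")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
            "|DataDirectory|\\MachineData.xml",
            "Row",                      // element name of each data row
            DataAccessMethod.Sequential)]
[TestMethod]
public void EnterMachineSpecificData()
{
    // Pull the current row's value instead of the hard-coded recorded text.
    string machineName = TestContext.DataRow["MachineName"].ToString();

    // Override the recorded input before replaying the recorded action.
    this.UIMap.EnterMachineNameParams.MachineNameTextBoxText = machineName;
    this.UIMap.EnterMachineName();
}
```

For the specific case of machine names you could even skip the data file and assign `Environment.MachineName` directly, though the XML route generalizes to any machine-specific value.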
However, I'm not sure whether re-recording the test would obliterate your modifications... you'd have to try it out and see. If it does, I'd advise using CodedUI in scripting mode (instead of record-n-replay).
Separate the business logic from the UI and you avoid this problem for any functionality/behavior testing of the UI bits; that also solves your data issues. As for testing the UI bits themselves, there are a couple of ways of handling it. One relatively simple method is to bootstrap an IoC container with mocks and set up UI tests on top of the mocked data.
If you want to get into more automated UAT testing, there are tools for that. I'm not sure about Silverlight/WPF per se (as I don't spend much time in either, having moved all business logic out of the UI bits), but I would imagine there has to be one.
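The "bootstrap with mocks" idea can be sketched without committing to any particular IoC library; all type names below are placeholders, and the composition is done by hand where a container registration would normally go.

```csharp
// The UI layer depends on an interface, so a test bootstrap can swap
// in a fake with canned data, independent of any machine or backend.
public interface IPricingService
{
    decimal GetPrice(string sku);
}

public class FakePricingService : IPricingService
{
    public decimal GetPrice(string sku) { return 9.99m; }
}

public class MainViewModel
{
    private readonly IPricingService pricing;
    public MainViewModel(IPricingService pricing) { this.pricing = pricing; }

    public decimal PriceFor(string sku) { return pricing.GetPrice(sku); }
}

public static class TestBootstrapper
{
    // In a real setup this registration would live in the IoC container's
    // configuration; it is composed by hand here for clarity.
    public static MainViewModel CreateForUiTests()
    {
        return new MainViewModel(new FakePricingService());
    }
}
```

UI tests built on top of `CreateForUiTests()` then exercise the real views against predictable mocked data.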
Related
My management asked me if there were GUI tests we could put in place to validate how the end-user interface reacts.
I'm personally not a fan of GUI testing and would prefer to invest more time in unit testing our services/ViewModels. The issue is that most of our old code is not very service-oriented, which makes it hard to unit test.
Our management also argues that GUI testing would let us test the exact same binary we deliver to our customers.
I searched a bit, and so far every engine seems to require that we name each of our components so that the tool can identify which controls to interact with.
So my question, how would you start to make GUI testing with an existing set of applications in WPF?
Thank you very much
I am currently doing the write-up for my third-year degree project. I created a system using C# which uses Microsoft Access as the back-end database. The system does not connect to the internet, nor does it use the local network for any connectivity.
I am asking for the best method to test an application such as this, so that it is sufficiently tested.
You should implement the Repository pattern, which will abstract the database code so that you can test the business logic while faking out the database calls.
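A minimal sketch of what that looks like in practice — the domain types and service here are hypothetical, but the shape is the point: business logic depends only on the interface, never on Access directly.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical domain type for illustration.
public class Student
{
    public int Id { get; set; }
    public bool IsEnrolled { get; set; }
}

// The repository interface abstracts all database access.
public interface IStudentRepository
{
    Student GetById(int id);
    void Save(Student student);
}

// The production implementation would talk to Access via OleDb;
// this in-memory fake stands in for it during unit tests.
public class FakeStudentRepository : IStudentRepository
{
    private readonly Dictionary<int, Student> store = new Dictionary<int, Student>();
    public Student GetById(int id) { return store[id]; }
    public void Save(Student student) { store[student.Id] = student; }
}

// Business logic only sees the interface, so it is testable without a database.
public class EnrollmentService
{
    private readonly IStudentRepository repository;
    public EnrollmentService(IStudentRepository repository) { this.repository = repository; }

    public void Enroll(int studentId)
    {
        var student = repository.GetById(studentId);
        student.IsEnrolled = true;
        repository.Save(student);
    }
}

[TestFixture]
public class EnrollmentServiceTests
{
    [Test]
    public void Enrolling_updates_the_student_record()
    {
        var repository = new FakeStudentRepository();
        repository.Save(new Student { Id = 1 });

        new EnrollmentService(repository).Enroll(1);

        Assert.IsTrue(repository.GetById(1).IsEnrolled);
    }
}
```

No file I/O or database is touched, so the test is fast and deterministic.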
I don't know exactly what you're looking for or how loosely coupled your application is, but in my case, most of the code (roughly 90%) is written so that it can be tested in unit tests, without any need to run the UI. The MVVM pattern is a good starting point for that, since it forces you to move code out of your UI into separate classes like ViewModels and Commands, which can be unit tested.
That covers a lot already, and if you need to do automated UI testing, take a look at the Coded UI Tests available in Visual Studio 2010 (Premium and Ultimate only). They allow you to fully automate / simulate user interaction. For the simulation, you can do what Justin proposed: detach your application from the database and work with a repository.
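To illustrate the MVVM point above: once the logic lives in a ViewModel, a test can exercise it without creating a single control. The ViewModel below is a hypothetical minimal example, not code from any real project.

```csharp
using NUnit.Framework;

// Hypothetical ViewModel: the validation logic lives here, not in the
// code-behind, so it can be exercised without instantiating any UI.
public class CustomerViewModel
{
    public string Name { get; set; }

    // In a real WPF/Silverlight app this would typically back an
    // ICommand's CanExecute.
    public bool CanSave
    {
        get { return !string.IsNullOrEmpty(Name); }
    }
}

[TestFixture]
public class CustomerViewModelTests
{
    [Test]
    public void Save_is_disabled_until_a_name_is_entered()
    {
        var vm = new CustomerViewModel();

        Assert.IsFalse(vm.CanSave);
        vm.Name = "Alice";
        Assert.IsTrue(vm.CanSave);
    }
}
```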
You have to keep in mind that in order to write really testable code, you have to design your code to be testable. In my experience it is next to impossible to write unit tests for code that wasn't written with testing intent right from the start. Probably the best thing you can do in that case is write integration tests.
But in order to give more clear advice, we need more input.
Cheers
Our .NET application is based on WinForms. The application controls different instruments, and most of the behavior is nicely stubbed to enable unit testing. One thing we come across many times is problems caused by (wrongly using) the GUI: wrongly handling a control's event, or accidentally selecting an item in a listbox that is not there. This is the example I would like some help with:
We have a grid control that users use to select an item. When the user selects a certain item, this should update the 'activeItem' in our model. We found out that there was a bug here: when the user used a shortcut to select the last item in the grid control, it did not update. Of course this is caused by errors we made in the programming, but how could I test this GUI behavior, so that we can be sure it still works as expected if somebody changes the grid control? Could this be done with normal unit testing, or am I getting it completely wrong?
Thanks,
Erik
In our software we have some unit tests which instantiate the forms, perform actions on them, and then check whether the state of the model conforms to expectations. So in that regard, yes, you can do it with unit tests. However, we also found that it's easy to break those tests by changing the UI around, which can involve quite a bit of maintenance. There are automated test frameworks out there which can make your life easier. I have had a brief look at White and NUnitForms but haven't done much with them yet. Feel free to share your experience.
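For the grid-selection bug described above, such a test might look roughly like this. `MainForm`, `InstrumentModel`, `ItemGrid`, and `SelectRow` are placeholder names for whatever your form, model, and grid control actually expose; the pattern is what matters: drive the UI in code, assert against the model.

```csharp
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class MainFormTests
{
    [Test]
    public void Selecting_the_last_row_updates_the_active_item()
    {
        var model = new InstrumentModel();          // placeholder model type
        using (var form = new MainForm(model))      // placeholder form type
        {
            form.Show();  // many controls only behave fully once created/shown

            // Simulate the user jumping to the last row (the shortcut
            // case that was broken).
            form.ItemGrid.SelectRow(form.ItemGrid.RowCount - 1);

            // Assert against the model, not the UI, so the test survives
            // cosmetic changes to the grid.
            Assert.AreEqual(model.Items.Last(), model.ActiveItem);
        }
    }
}
```

If the third-party grid raises its selection event only on real user input, you may need to call the event handler directly or use a framework like White to generate genuine input.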
As far as I've seen, this part of testing is usually handled by automated testing software such as TestComplete or QTP. Most automation tools let you record a set of actions to be performed and then specify the expectations that will be checked when the actions are replayed.
But learning such software is usually a separate skill from either developing production code or manual software testing, so in our company, for example, we have separate positions for automation QA engineers.
I have an application that I'm building that has had concurrency problems in the past.
This was before implementing any LINQ error handling.
Now, I have some LINQ error handling added to my code, and I was wondering if you could give me tips about how to stress test the hell out of my application. It is super important that everything works when I deploy this thing, so your input would help.
I have two boxes set up at my desk right now to simulate two users doing whatever.
Edit:
Environment: Windows XP
App Type: WinForm
User Count: 15
I strongly suggest using CHESS, a free tool from Microsoft for finding and reproducing concurrency bugs. I have found it invaluable, indispensable, and generally life-saving.
A lot of this depends on the way your application is set up. Do you have unit tests defined? Is it a multi-layer (data/UI/business/whatever) app where you don't need direct human interaction to test the functions that you care about?
There are third-party testing applications that let you set up and run scripts for testing your applications (often by recording steps that you perform and then replaying them). WinRunner is but one example. These test systems are usually configurable to some degree.
You can also create a test project (this will be a lot easier if you have a multi-tiered application), and effectively "roll your own" test application. That may give you more control over your tests - you can either choose to do tests with predictable output in a predictable order, or generate random data and test things in random orders at random intervals, or some combination thereof.
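A rough sketch of such a "roll your own" stress harness: several threads hammer the same operation with random data and random pacing, then the test checks that nothing was lost or thrown. `OrderService` below is a self-contained placeholder for the code you would actually put under stress; the user count matches the 15 mentioned above.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using NUnit.Framework;

// Placeholder for the code under stress: a counter guarded by a lock.
public class OrderService
{
    private readonly object sync = new object();
    public int OrderCount { get; private set; }

    public void SaveOrder(int userId, int amount)
    {
        lock (sync) { OrderCount++; }
    }
}

[TestFixture]
public class StressTests
{
    [Test]
    public void Concurrent_saves_do_not_lose_updates()
    {
        const int userCount = 15;          // simulated concurrent users
        const int operationsPerUser = 200;
        var service = new OrderService();
        var errors = new ConcurrentQueue<Exception>();

        var threads = Enumerable.Range(0, userCount).Select(user => new Thread(() =>
        {
            var random = new Random(user); // seeded per thread for reproducibility
            for (int i = 0; i < operationsPerUser; i++)
            {
                try
                {
                    service.SaveOrder(user, random.Next(1000));
                    if (i % 50 == 0) Thread.Sleep(random.Next(5)); // random pacing
                }
                catch (Exception ex) { errors.Enqueue(ex); }
            }
        })).ToList();

        threads.ForEach(t => t.Start());
        threads.ForEach(t => t.Join());

        Assert.IsTrue(errors.IsEmpty, "Unexpected concurrency errors");
        Assert.AreEqual(userCount * operationsPerUser, service.OrderCount);
    }
}
```

Swapping the fake `OrderService` for your real data-access layer (pointed at a test database) turns this into a genuine two-user-or-more concurrency test.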
I've got a desktop application written in C# created using VS2008 Pro and unit tested with Nunit framework and Testdriven.net plugin for VS2008. I need to conduct system testing on the application.
I've previously done web based system tests using Bad Boy and Selenium plugin for Firefox, but I'm new to Visual Studio and C#.
I would appreciate if someone could share their advice regarding this.
System testing will likely need to be done via the UI. This gives you two options:
1) You can manually conduct the test cases by clicking on elements.
2) You can automate the test cases by programming against the UI. There are plenty of commercial tools to do this or you can use a programming framework like the Microsoft UI Automation Framework. These tend to use the accessibility APIs built into Windows to access your UI.
Whether you go the manual or automated route depends on how many times you will be running the tests. If you are just going to run them once or twice, don't spend the time automating. You will never earn it back. If you are going to run them often, automating can be very handy.
A word of caution: Automating the UI isn't hard, but it is very brittle. If the application is changing a lot, the tests will require a lot of maintenance.
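For option 2, a minimal sketch using the Microsoft UI Automation Framework (`System.Windows.Automation`, referenced via UIAutomationClient and UIAutomationTypes) might look like this. The window title and button name are assumptions for illustration; the app must already be running.

```csharp
using System.Windows.Automation;

class SmokeTest
{
    static void Main()
    {
        // Find the application's main window by its title (assumed name).
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "My Desktop App"));

        // Find the Save button inside it (assumed name) and click it
        // via the Invoke control pattern.
        AutomationElement saveButton = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "Save"));

        var invoke = (InvokePattern)saveButton.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}
```

After invoking, you would typically read back some other element's state (again via the accessibility tree) and assert on it; controls without accessible names are exactly where this approach gets brittle.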
As Thomas Owens commented on your question, first you must decide what kind of system testing you want to do. But assuming you want to start with functional system tests: prepare the use cases you want to automate, then find a proper tool.
Just for start:
AutoIt – not a test automation tool as such, but it lets you automate some tasks, so you could record/script your use cases. Not really recommended, but it can be done.
HP QuickTest Pro – use cases can easily be automated via recording/scripting, but it is expensive, so maybe not worth it for personal use.
IBM Rational Robot – same story as HP QTP.
PowerShell – you could write scripts in PowerShell and execute them. If you use dedicated IDE-like tools for PowerShell, you can record tests as well. I did some web automation via PowerShell and it worked. With a bit of work you could probably script around your desktop app.
And the best would be to try different tools, and use one that suits you best. Try this link and this link.
System tests usually have use cases, end to end scenarios and other scripted functions that real people execute. These are the tests that don't lend themselves well to automation as they are asking your unit-tested cogs to work with each other. You might have great unit tests for your "nuts" and your "wrenches" but only a comprehensive system test will let you know if you have the right sized wrench for the nut at hand, how to select/return it from/to the drawer, etc.
In short - manual tests.
If you're willing to put money down, you could look at something like TestComplete.
Although I haven't really used it yet (our company just bought it), it seems quite nice. You can record clicks and keypresses and stuff, define success criteria, and it will replay the test for you later. It appears to be quite smart about UI changes - it remembers which button you clicked, not just the (x,y) of each click.
It's scriptable, or drag-and-drop programmable.
I'm not affiliated in any way, and this is not an endorsement, because I haven't really formed an opinion of it yet.
Perhaps NUnitForms could be useful for you?