Help understanding what different kinds of tests are called - C#

I'm making an application similar to AutoHotkey, which reads some pixels on the screen, performs some computations and then performs some mouse clicks.
I have 3 different kinds of tests which I intend to do:
I know I'll make use of Unit Tests, for testing individual classes to make sure each one of them does what it's supposed to do.
Now, I'll also want to test that all the logic my program performs is correct. In this kind of test, I'll want to fake the mouse input and also fake the class that performs the mouse click on the system. What would this kind of test be called?
I'll also want a third kind of test, where I check that the application actually does everything it should on a real system (i.e., it actually reads the real system's screen pixels, it actually performs mouse clicks on my computer, etc.). What is this kind of test called? Full Integration Test? System Test?
Thanks

Yes, UI (or GUI) testing falls under system testing. See http://en.wikipedia.org/wiki/System_testing for more.
I can provide some help on how to do those tests if they are web-based. You could use Selenium, the YUI unit test framework, etc.
Best of luck...

The second and third are both System Tests. There is a step in between Unit Testing and System Testing called Integration Testing where you combine multiple different parts of the system to test how they communicate, but you don't actually put the whole system together yet.
If you want to do the second step on the GUI, what you call it depends on what it is. If you're only testing the GUI alone with a screen reader and/or a mouse clicker, that's a Unit Test. If you have the whole system put together, it's a System Test. I don't see where you would use these tools for Integration Testing.
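To make the second kind of test from the question concrete, here is a minimal sketch (NUnit, with all class names invented for illustration): the real mouse is hidden behind an interface, and the test drives the logic with a fake that merely records clicks.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical abstraction over the real mouse, so the logic can run against a fake.
public interface IMouseClicker
{
    void Click(int x, int y);
}

// Test double: records clicks instead of clicking the real screen.
public class FakeMouseClicker : IMouseClicker
{
    public List<string> Clicks = new List<string>();
    public void Click(int x, int y) { Clicks.Add(x + "," + y); }
}

// Stand-in for the class holding the program's decision logic.
public class ClickBot
{
    private readonly IMouseClicker mouse;
    public ClickBot(IMouseClicker mouse) { this.mouse = mouse; }

    public void HandlePixel(int x, int y, bool isRed)
    {
        if (isRed) mouse.Click(x, y); // click only when the watched pixel is red
    }
}

[TestFixture]
public class ClickBotTests
{
    [Test]
    public void ClicksWhereThePixelIsRed()
    {
        var mouse = new FakeMouseClicker();
        var bot = new ClickBot(mouse);

        bot.HandlePixel(10, 20, isRed: true);

        Assert.AreEqual("10,20", mouse.Clicks[0]);
    }
}

Swap FakeMouseClicker for a class that calls the real mouse APIs, and the same ClickBot becomes part of the third, whole-system kind of test.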
I think the important thing isn't what it's called, though. The important thing is that you are thorough and make sure that your application works as intended. To me, this involves taking personal responsibility for putting together a test suite that doesn't overlook any aspect of the system.

Related

How do I unit test a feature that prints to a physical printer?

How can I automate the testing of the printing functionality of my WPF application, including changing common printing properties like paper size, tray, landscape/portrait etc?
I'm thinking about using some sort of virtual printer driver like EmfPrinter, but I still have no idea how to verify, for example, that a paper tray has been set correctly.
Your testing problem may contain unit-testing aspects, but from the description it appears to (at least also) involve integration testing aspects. For example, you ask
how to verify, for example, that a paper tray has been set correctly
I read this in the following way: you want to be sure that the printing functionality of your software ensures that, on the real physical printer, the paper tray is set correctly. If this understanding is correct, then this goes beyond what you can achieve with unit-testing:
With unit-testing you try to find the bugs in small, isolated software pieces. Isolation is achieved by replacing the components that the software-under-test (SUT) depends on. These replacement components are often called doubles, mocks or stubs. With such doubles, however, you cannot find the bugs that stem from misconceptions about how the interaction of the SUT with the depended-on components should work: since you implement the doubles yourself, you implement them based on your potential misconceptions.
For the printer functionality this means, if you double the paper tray selection function, then you can test certain aspects of the unit, but you will still not know if it interacts correctly with the real paper tray selection function. This is not a flaw of unit-testing: It is just the reason why in addition to unit-testing you also need integration and system testing.
To address all aspects of your testing problem, you can make your life easier: design your code to separate computations from interactions. You would then test the units (functions, methods, ...) that contain computations with unit-testing, and test the parts that contain the interactions with integration testing. There will likely remain code parts where such a separation is difficult and you are left with computations mixed with interactions. Here you may have to do the unit-testing with doubles (ensuring by design that you can inject them - see dependency injection and inversion of control). In general, though, this strategy can save you a lot of effort in creating doubles.
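A hedged sketch of that computation/interaction split for the printing example (all names and the tray rule are invented): the tray decision is a pure function you can unit-test exhaustively, while the driver call sits behind an interface that only integration tests exercise against the real printer.

// Pure computation: unit-test this exhaustively, no printer needed.
public static class TrayLogic
{
    public static string ChooseTray(string paperSize)
    {
        return paperSize == "A4" ? "Tray1" : "Tray2"; // assumed business rule
    }
}

// Interaction: doubled in unit tests, exercised for real in integration tests.
public interface IPrinter
{
    void SetTray(string tray);
    void Print(string document);
}

public class PrintService
{
    private readonly IPrinter printer;
    public PrintService(IPrinter printer) { this.printer = printer; }

    public void Print(string document, string paperSize)
    {
        // The computation feeds the interaction; each is tested at its own level.
        printer.SetTray(TrayLogic.ChooseTray(paperSize));
        printer.Print(document);
    }
}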

Testing WPF applications

I am currently doing my write-up for my third year degree project. I created a system using C# which uses Microsoft Access as the back-end database. The system does not connect to the internet, nor does it use the local network for any connectivity.
I am asking for the best method to test an application such as this, so that it is sufficiently tested.
You should implement the Repository Pattern, which will abstract the database code so that you can test the business logic while faking out the database calls.
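A minimal sketch of what that could look like (the Student type and member names are invented; the real repository would wrap the Access/OleDb calls):

using System.Collections.Generic;

public class Student { public int Id; public string Name; }

// The business logic depends only on this interface, never on OleDb directly.
public interface IStudentRepository
{
    Student GetById(int id);
    void Save(Student student);
}

// Fake used by tests: keeps everything in memory, no database required.
public class InMemoryStudentRepository : IStudentRepository
{
    private readonly Dictionary<int, Student> store = new Dictionary<int, Student>();
    public Student GetById(int id) { return store[id]; }
    public void Save(Student student) { store[student.Id] = student; }
}

A production implementation of the same interface would contain the actual OleDb code, and only that class needs integration tests against the real database file.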
I don't know what exactly you're looking for and how loosely coupled your application is, but in my case, most of the code (roughly 90%) is written so that it can be tested in unit tests, without any need to run the UI. The MVVM pattern is a good starter for that, since it forces you to move code out of your UI into separate classes such as ViewModels and Commands, which can be unit tested.
That assures a lot already, and if you need to do automated UI testing, take a look at the Coded UI Tests available in Visual Studio 2010 (Premium and Ultimate only). They allow you to fully automate/simulate user interaction. In simulation, you can do what Justin proposed: detach your application from the database and work with a repository.
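To illustrate the MVVM point above, a ViewModel's logic can be unit-tested without creating a single window (the ViewModel below is a made-up example):

using NUnit.Framework;

// Made-up ViewModel: the behavior lives here, not in the window's code-behind.
public class CounterViewModel
{
    public int Count { get; private set; }
    public void Increment() { Count++; }
}

[TestFixture]
public class CounterViewModelTests
{
    [Test]
    public void Increment_RaisesCountByOne()
    {
        var vm = new CounterViewModel();
        vm.Increment();
        Assert.AreEqual(1, vm.Count); // no WPF types involved at all
    }
}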
You have to keep in mind that in order to write really testable code, you have to design your code to be testable. In my experience it is next to impossible to write unit tests for code written without testing intent right from the start. Probably the best thing you can do in that case is to write integration tests.
But in order to give clearer advice, we need more input.
Cheers

Which testing method to use?

First of all, I'm new to testing software; I suppose I'm rather green. Still, for this project I need to make sure all the calculations are done right. For the moment I'm using unit tests to test each module in C# to make sure it does what it should do.
I know about some different testing methods (integration testing, unit testing), but I can't seem to find any good book or reference material on testing in general. Most of them focus on one specific area (like unit testing). This makes it really hard (for me) to get a good grasp of how best to test your software, and which method to use.
For example, I'm using GUI elements as well; should I test those? Should I just test them by using them and visually confirming all is in order? I do know it depends on whether it's critical or not, but at which point do you say "Let's not test that, because it's not really relevant/important"?
So a summary: How to choose which testing method is best, and where do you draw the line when you use the method to test your software?
There are several kinds of tests, but IMHO the most important are unit tests, component tests, function tests and end-to-end tests.
Unit tests check whether your classes work as intended (JUnit). These tests are the base of your testing environment, as they tell you whether your methods work. Your goal is 100% coverage here, as these are the most important part of your tests.
Component tests check how several classes work together in your code. A component can be a lot of things, but it's basically more than unit testing and less than function testing. The coverage goal is around 75%.
Function tests test the actual functionality you implement. For example, if you want a button that saves some input data to a database, that is a functionality of the program, and that is what you test. The coverage goal here is around 50%.
End-to-end tests test your whole application. These can be quite complex, and you likely can't and don't want to test everything; this kind of test is there to check whether the whole system works. The coverage goal is around 25%.
This is the order of importance too.
There is no such thing as a "better" kind of test among these, though. Any test you can run to check whether your code works as intended is equally good.
You probably want most of your tests automated, so you can test while you're having a coffee break, or your servers can test everything while you are away from work and you check the results in the morning.
GUI testing is considered the hardest part of testing, and there are several tools that help you with it, for example Selenium for browser GUI testing.
Several architectural patterns, like Model-View-Presenter, try to separate the GUI part of the application for this reason, and work with as dumb a GUI as possible to avoid errors there. If you can successfully separate your graphics, you will be able to mock out the GUI part of the application and simply leave it out of most of the testing process.
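As a rough illustration of that separation with Model-View-Presenter (names invented), the presenter only ever talks to an interface, so a test can hand it a trivial fake instead of a real window:

// The presenter never sees a concrete Form/Window, only this interface.
public interface IResultView
{
    void ShowResult(string text);
}

public class CalculatorPresenter
{
    private readonly IResultView view;
    public CalculatorPresenter(IResultView view) { this.view = view; }

    public void Add(int a, int b)
    {
        view.ShowResult((a + b).ToString()); // the "dumb" view just displays this
    }
}

// In tests, a fake view records what the presenter asked it to show.
public class FakeResultView : IResultView
{
    public string LastResult;
    public void ShowResult(string text) { LastResult = text; }
}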
For reference I suggest "Effective Software Testing" by Elfriede Dustin, but I'm not that familiar with the books on the subject; there could be better ones.
It really depends on the software you need to test. If you are mostly interested in the logic behind it (the calculations), then you should write some integration tests that test not just separate methods, but the workflow from start to finish; try to emulate what a typical user would do. I can hardly imagine a user calling one particular method - most likely it's some specific sequence of methods. If the calculations are OK, then the chance is low that the GUI will display them wrong. Besides, automating the GUI is a time-consuming process, and it will require a lot of skill and energy to build and maintain, as every simple change might break everything. My advice is to start by writing integration tests with different values that cover the most common scenarios.
This answer is specific to the type of application you are involved in - WinForms
For the GUI, use the MVC/MVP pattern. Avoid writing any code in the code-behind file. This will allow you to unit test your UI code. When I say UI code, I mean you will be able to test the code that is invoked when a button is clicked, or whenever any action needs to be taken on a UI event.
You should be able to unit test each of your class files (except the UI code itself). This will mainly focus on state-based testing.
Also, you will be able to write interaction test cases for tests involving multiple class files. This should cover most of your flows.
So, there are two things to focus on: state-based testing and interaction testing.
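A small hedged sketch of the two styles (NUnit, a hand-rolled fake, and invented names): the first test asserts on the resulting state, the second on how a collaborator was called.

using NUnit.Framework;

public interface IAuditLog { void Record(string entry); }

public class FakeAuditLog : IAuditLog
{
    public string LastEntry;
    public void Record(string entry) { LastEntry = entry; }
}

public class Account
{
    private readonly IAuditLog log;
    public Account(IAuditLog log) { this.log = log; }
    public int Balance { get; private set; }

    public void Deposit(int amount)
    {
        Balance += amount;
        log.Record("Deposit:" + amount);
    }
}

[TestFixture]
public class AccountTests
{
    [Test] // State-based: act, then assert on observable state.
    public void Deposit_IncreasesBalance()
    {
        var account = new Account(new FakeAuditLog());
        account.Deposit(100);
        Assert.AreEqual(100, account.Balance);
    }

    [Test] // Interaction-based: assert on how the collaborator was used.
    public void Deposit_WritesAuditEntry()
    {
        var log = new FakeAuditLog();
        var account = new Account(log);
        account.Deposit(100);
        Assert.AreEqual("Deposit:100", log.LastEntry);
    }
}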

Automated testing for an "untestable" application in C#. Unit/Integration/Functional/System Testing - How To?

I am having a hard time introducing automated testing to a particular application. Most of the material I've found focuses on unit testing, which doesn't help with this application for several reasons:
The application is multi-tiered
Not written with testing in mind (lots of singletons holding application-level settings, objects that aren't mockable, "state" required in many places)
Application is very data-centric
Most "functional processes" are end to end (client -> server -> database)
Consider this scenario:
var item1 = server.ReadItem(100);
var item2 = server.ReadItem(200);
client.SomethingInteresting(item1, item2);
server.SomethingInteresting(item1, item2);
Definition of server call:
public void SomethingInteresting(Item item1, Item item2) // method on the Server class
{
    // Semi-interesting things before going to the server.
    // Possibly a stored procedure call doing some calculation.
    database.SomethingInteresting(item1, item2);
}
How would one set up automated tests for that, where there is client interaction, then server interaction, then database interaction?
I understand that this doesn't constitute a "unit". This is possibly an example of functional testing.
Here are my questions:
Is testing an application like this a lost cause?
Can this be accomplished using something like nUnit or MS Test?
Would the "setup" of this test simply force a full application start up and login?
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Do you guys have any tips on how to start this daunting process?
Any assistance would be greatly appreciated.
If this is a very important application in your company that is expected to last several years, then I would suggest starting small. Start looking for little pieces that you can test as-is, or can test after small modifications that make the code testable.
It's not ideal, but in a year or so you'll probably be in a much better position than you are today.
You should look into integration test runners like Cucumber or FitNesse. Depending on your scenario, an integration test runner will either act as a client to your server and make the appropriate calls into the server, or it will share some domain code with your existing client and run that code. You express your scenarios as a sequence of calls that should be made, and verify that the correct results are output.
I think you'll be able to make at least some parts of your application testable with a little work. Good luck!
Look at Pex and Moles. Pex can analyse your code and generate unit tests that test all the borderline conditions, etc. You can then introduce a mocking framework to start stubbing out some of the complex bits.
To take it further, you have to start hitting the database with integration tests. To do this you have to control the data in the database and clean up after each test is done. There are a few ways of doing this. You can run SQL scripts as part of the set-up/tear-down for each test. You can use a compile-time AOP framework such as PostSharp to run the SQL scripts by just specifying them in attributes. Or, if your project doesn't want to use PostSharp, you can use a built-in .NET feature called "Context Bound Objects" to inject code that runs the SQL scripts in an AOP fashion. More details here - http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
Basically, the steps I would suggest:
1) Install Pex and generate some unit tests. This will be the quickest, least-effort way to generate a good set of regression tests.
2) Analyse the unit tests and see if there are other tests, or permutations of the tests, that you could use. If so, write them by hand. Introduce a mocking framework if you require one.
3) Start doing end-to-end integration testing. Write SQL scripts to insert and delete test data in the database. Write a unit test that runs the insert SQL scripts, then call some very high-level method in your application front end, such as "AddNewUser", and make sure the call makes it all the way to the back end. The front-end, high-level method may call any number of intermediate methods in various layers.
4) Once you find yourself writing a lot of integration tests in this pattern ("run SQL script to insert test data" -> call high-level functional method -> "run SQL script to clean up test data"), you can clean up the code and encapsulate the pattern using AOP techniques such as PostSharp or Context Bound Objects. A minimal sketch of the basic pattern follows below.
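Here is that sketch of the insert/act/clean-up pattern (NUnit; the connection string, script file names and the FrontEnd.AddNewUser call are all placeholders for whatever the real application exposes):

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class AddNewUserIntegrationTests
{
    // Assumed test-database connection string; adjust to your environment.
    private const string ConnStr = "Server=.;Database=AppTest;Integrated Security=true";

    [SetUp]
    public void InsertTestData() { RunSqlScript("insert_test_users.sql"); }

    [TearDown]
    public void CleanUpTestData() { RunSqlScript("delete_test_users.sql"); }

    [Test]
    public void AddNewUser_WritesRowToDatabase()
    {
        // Placeholder for the application's real high-level entry point;
        // the call should travel client -> server -> database.
        FrontEnd.AddNewUser("jdoe");

        Assert.IsTrue(UserExists("jdoe"));
    }

    private static void RunSqlScript(string path)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(File.ReadAllText(path), conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    private static bool UserExists(string name)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users WHERE Name = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}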
Ideally you would refactor your code over time to make it more testable. Any incremental improvement in this area will allow you to write more unit tests, which should drastically increase your confidence in your existing code, and in your ability to write new code without breaking existing, tested code.
I find such refactoring often gives other benefits too, such as a more flexible and maintainable design. These benefits are good to keep in mind when trying to justify the work.
Is testing an application like this a lost cause?
I've been doing automated integration testing for years and have found it to be very rewarding, both for myself and for the companies I've worked for :) I only started to understand how to make an application fully unit-testable within the last 3 years or so, and before that I was doing full integration tests (with custom-implemented/hacked-in test hooks).
So, no, it is not a lost cause, even with the application architected the way it is. But if you know how to do unit testing, you can get a lot of benefit from refactoring the application: stability and maintainability of the tests, ease of writing them, and isolation of failures to the specific area of code that caused them.
Can this be accomplished using something like nUnit or MS Test?
Yes. There are web page/UI testing frameworks that you can use from .NET unit testing libraries, including one built into Visual Studio.
You might also get a lot of benefit by calling the server directly rather than through the UI (similar benefits to those you get if you refactor the application). You can also try mocking the server, and testing the GUI and business logic of the client application in isolation.
Would the "setup" of this test simply force a full application start up and login?
On the client side, yes. Unless you want to test login, of course :)
On the server side, you might be able to leave the server running, or do some sort of DB restore between tests. A DB restore is painful (and if you write it wrong, it will be flaky) and will slow down the tests, but it helps immensely with test isolation.
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Typical integration tests are based on the concept of a finite state machine. Such tests treat the code under test as a set of state transitions, and make assertions about the final state of the system.
So you'd set the DB to a known state beforehand, call a method on the server, then check the DB afterwards.
You should always make state-based assertions on a level below the level of the code you are exercising. So if you exercise web service code, validate the DB. If you exercise the client and are running against a real version of the service, validate at the service level or at the DB level. You should never exercise a web service and perform assertions through that same web service (for example). Doing so masks bugs, and gives you no real confidence that you're actually testing the full integration of all the components.
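In code, the difference might look roughly like this (service, the Items table and connStr are placeholders for whatever the real system exposes):

// Masks bugs: exercising and asserting through the same service.
service.UpdateItem(42, "expected");
Assert.AreEqual("expected", service.ReadItem(42).Name);

// Better: exercise the service, then validate one level below, at the DB.
service.UpdateItem(42, "expected");
using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand("SELECT Name FROM Items WHERE Id = 42", conn))
{
    conn.Open();
    Assert.AreEqual("expected", (string)cmd.ExecuteScalar());
}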
Do you guys have any tips on how to start this daunting process?
Break up your work. Identify all the components of the system (every assembly, each running service, etc.). Try to sort those into natural categories. Spend some time designing test cases for each category.
Design your test cases with priority in mind. Examine each test case, think of a hypothetical scenario that could cause it to fail, and try to anticipate whether that failure would be a show-stopping bug, a bug that could sit around for a while, or one that could be punted to another release. Assign priorities to your test cases based on these guesses.
Identify a piece of your application (possibly a specific feature) that has as few dependencies as possible, and write only the highest priority test cases for it (ideally, they should be the simplest/most basic tests). Once you are done, move on to the next piece of the application.
Once you have all the logical pieces of your application (all assemblies, all features) covered by your highest priority test cases, do what you can to get those test cases run every day. Fix test failures in those cases before working on additional test implementation.
Then get code coverage running on your application under test. See what parts of the application you have missed.
Repeat with each successively lower-priority set of test cases, until the whole application is tested at some level, and you ship :)

Unit testing database application with business logic performed in the UI

I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3,000 tenants, and all the finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the stuff I changed at the functional level (i.e., I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.
However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done in event handlers. To complicate things, everything is database-driven, and I've not seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing for this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time-consuming to create complete coverage once the application is developed, but if you do it incrementally then you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static and controllable dataset, so that requests and responses can be guaranteed. We then create C# code to simulate various scenarios. We use NUnit for this.
I'd highly recommend reading the article Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test that helps you find the bug and solve the problem. Make it such that the test passes once the bug is fixed. Then, once the bug is resolved, you have a tool that will help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
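For example, a bug report like "the late fee is wrong in February" could be pinned down with a small test like this (the calculator, its parameters and the fee rule are all invented for illustration):

[Test]
public void LateFee_IsCalculatedCorrectly_ForFebruary()
{
    // This test failed before the fix and passes after it; if a future
    // change reintroduces the bug, it fails again immediately.
    decimal fee = LateFeeCalculator.Calculate(dueDay: 1, paidDay: 28, month: 2);
    Assert.AreEqual(40.50m, fee); // 27 late days * 1.50 per day (invented rule)
}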
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
Isolate the code you want to test into code libraries (if they're not already in libraries).
Write out the most common Use Case scenarios and translate them to an application that uses your code libraries.
Make sure your test program works as you expect it to.
Convert your test program into unit tests using a testing framework.
Get the green light. If the tests fail, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
Increase the code and scenario coverage of your unit tests: what happens if you pass in unexpected inputs?
Get the green light again. If a unit test fails, it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting, you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your business logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code you'll notice that testing your DB interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which involves creating a mock tenant object and saving "him" to the DB. For your next test, you would write "GetTenant" and try to get the tenant you just created back from the DB into your in-memory representation, then compare the first and second tenant to make sure all fields match. Etc., etc.
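A hedged sketch of that round trip (the Tenant class, the repository and its methods are stand-ins for whatever the refactored code actually exposes):

[Test]
public void SavedTenant_CanBeReadBack()
{
    var repository = new TenantRepository(testConnectionString); // hypothetical type
    var tenant = new Tenant { Name = "John Doe", Unit = "12B" };

    repository.Add(tenant);                          // "add new tenant"
    Tenant loaded = repository.GetTenant(tenant.Id); // "GetTenant"

    // Compare field by field to make sure all values survived the round trip.
    Assert.AreEqual(tenant.Name, loaded.Name);
    Assert.AreEqual(tenant.Unit, loaded.Unit);
}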
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to look into pattern-based programming. Here is a link that will help you understand design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science)
One thing you could do for now is to isolate all the business logic and the various business-related functions within your UI classes, and then, within each UI constructor or Page_Load, have unit-test calls that exercise each of the business functions. For improved readability you could put a #region tag around the business functions.
For your long-term benefit, you should study design patterns. Pick a pattern that suits your project's needs and redo your project using that design pattern.
It really depends on the language you are using, but in general, start with a simple testing class that uses some made-up (but still realistic) data to test your code with. Make it simulate what would happen in the app. If you are making a change in a particular part of the app, write something that works before you change the code. Since you have already written the code, getting tests in place for the entire app is going to be quite a challenge, so start small. From now on, though, as you write code, write the unit test first, then the code. You might also consider refactoring, but I would weigh the costs of refactoring versus rewriting as you add unit tests along the way.
I haven't tried adding tests to a legacy application, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI and into a separate layer, you may add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is really a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest you use some mocking tools and DbUnit-style tools when creating your tests, to simulate database-related issues.
There's no better way to get started with unit testing than to try it - it doesn't take long, it's fun, and it's addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or xUnit.net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage on that class, then compile and run the tests.
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start by writing the tests first.
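As a concrete first target, a class like the one described above - small, behavioral, no dependencies - might get tests like these (NUnit, with an invented class in the spirit of the rental domain):

using NUnit.Framework;

// Small, self-contained and behavioral: a good first class to test.
public class RentCalculator
{
    public decimal MonthlyTotal(decimal baseRent, decimal utilities)
    {
        if (baseRent < 0)
            throw new System.ArgumentOutOfRangeException("baseRent");
        return baseRent + utilities;
    }
}

[TestFixture]
public class RentCalculatorTests
{
    [Test]
    public void MonthlyTotal_AddsRentAndUtilities()
    {
        var calc = new RentCalculator();
        Assert.AreEqual(950m, calc.MonthlyTotal(800m, 150m));
    }

    [Test]
    public void MonthlyTotal_RejectsNegativeRent()
    {
        var calc = new RentCalculator();
        Assert.Throws<System.ArgumentOutOfRangeException>(() => calc.MonthlyTotal(-1m, 0m));
    }
}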
