Testing WPF applications - C#

I am currently doing the write-up for my third-year degree project. I created a system in C# which uses Microsoft Access as the back-end database. The system does not connect to the internet, nor does it use the local network for any connectivity.
What is the best method to test an application such as this, so that it is sufficiently tested?

You should implement the Repository pattern, which will abstract the database code so that you can test the business logic while faking out the database calls.
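A minimal sketch of the idea (the type names here are illustrative, not taken from your project): the business logic depends only on an interface, the production implementation talks to Access, and unit tests substitute an in-memory fake.

using System.Collections.Generic;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The business logic depends on this interface instead of on OleDb/Access directly.
public interface IItemRepository
{
    Item GetById(int id);
    void Save(Item item);
}

// In-memory fake that unit tests pass to the business logic in place of
// the real Access-backed implementation.
public class FakeItemRepository : IItemRepository
{
    private readonly Dictionary<int, Item> store = new Dictionary<int, Item>();
    public Item GetById(int id) { return store[id]; }
    public void Save(Item item) { store[item.Id] = item; }
}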

I don't know exactly what you're looking for or how loosely coupled your application is, but in my case most of the code (roughly 90%) is written so that it can be covered by unit tests without any need to run the UI. The MVVM pattern is a good starting point for that, since it forces you to move code out of your UI into separate classes such as ViewModels and Commands, which can be unit tested.
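For instance (a sketch with invented names, not code from the question), a ViewModel and its command can be exercised by an ordinary NUnit test without any WPF window being created:

using System;
using System.Windows.Input;
using NUnit.Framework;

// Minimal ICommand implementation so the ViewModel has no UI dependency.
public class DelegateCommand : ICommand
{
    private readonly Action execute;
    public DelegateCommand(Action execute) { this.execute = execute; }
    public bool CanExecute(object parameter) { return true; }
    public void Execute(object parameter) { execute(); }
    public event EventHandler CanExecuteChanged;
}

// Hypothetical ViewModel: the logic lives here, not in the code-behind.
public class CounterViewModel
{
    public int Count { get; private set; }
    public ICommand IncrementCommand { get; private set; }
    public CounterViewModel()
    {
        IncrementCommand = new DelegateCommand(() => Count++);
    }
}

[TestFixture]
public class CounterViewModelTests
{
    [Test]
    public void Executing_the_command_increments_the_count()
    {
        var vm = new CounterViewModel();
        vm.IncrementCommand.Execute(null);
        Assert.AreEqual(1, vm.Count);
    }
}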
That covers a lot already, and if you need automated UI testing as well, take a look at the Coded UI Tests available in Visual Studio 2010 (Premium and Ultimate only). They allow you to fully automate and simulate user interaction. For simulation, you can do what Justin proposed: detach your application from the database and work against a repository.
Keep in mind that in order to write really testable code, you have to design the code to be testable. In my experience it is next to impossible to write unit tests for code that was written without testing in mind from the start. Probably the best thing you can do in that case is write integration tests.
But in order to give clearer advice, we need more input.
Cheers


How do I UnitTest my Website?

Sorry to ask such a simple question. I am a newbie.
I am setting up my website. I am trying to create a unit test project for it, but I simply cannot create a reference to it. Even though I have created a namespace for my class and method, Visual Studio doesn't let me use the method or see the namespace at all.
Is unit testing a website completely different from unit testing normal programs?
Are there any references available online?
Thank you
You might want to consider using the new Fakes support in Visual Studio 2012. In short, you can detour the call to the web site API and have the call routed to a delegate you supply. This allows you to isolate and test your code in a modularized fashion.
You can find more detail at http://msdn.microsoft.com/en-us/library/hh549175.aspx
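For a taste of what that looks like, here is the standard shim example from the MSDN documentation (detouring DateTime.Now rather than your specific web site API, which you would detour the same way after generating a Fakes assembly for it):

using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShimExampleTests
{
    [TestMethod]
    public void CallsToDateTimeNow_AreRoutedToOurDelegate()
    {
        // Requires adding a Fakes assembly for System, which generates System.Fakes.
        using (ShimsContext.Create())
        {
            // Every call to DateTime.Now inside this block is routed to our delegate.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);
            Assert.AreEqual(2000, DateTime.Now.Year);
        }
    }
}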
Regards,
Patrick Tseng
Visual Studio ALM team.
Use this article. If you are using MVC, use this question, and this site.
You've probably heard the old adage, "Better, faster, cheaper, pick any two." If you want something good and fast, it isn't going to be cheap, and if you want something fast and cheap, it isn't going to be very good. Cheaper, faster, and better means that we need to write more code more quickly, right? If only it were that simple. Learning to type faster might satisfy two of the requirements, but it isn't going to make the software you develop any better. So how do we make software better? What does "better" mean?
"Better" means producing flexible, maintainable software with a low number of defects; better software is all about long-term maintainability. To achieve this, a key design decision is to ensure that components are loosely coupled. Loosely coupled software has many benefits. One that stands out is that it improves our ability to test solutions. If we write software that can be broken down easily into small parts, it becomes easier to test. It sounds simple when you word it that way, but the amount of software in use today that is difficult to test or maintain shows that it is not as straightforward as we might like to think. Software needs to be coupled to do anything useful, but developers need tools and techniques to decrease coupling so that solutions are easier to test.
I assume that your website code is in one project, and your unit test code is in another project in the same solution? In order for code in one project to access the code of another project, you have to add a reference to the other project:
Right-click on the unit test project and click 'Add Reference...'
Open the 'Projects' tab, select the website project and click 'OK'.
You should not unit test your website; you should unit test your business-layer code. Testing the website itself would be UI testing, and you can use tools like Selenium for that. I would recommend reading up on unit testing and on unit testing frameworks (MSTest, NUnit, etc.). You should layer your application so you can test your classes, objects, data access and so on, all in complete isolation.
Example:
A Manage Users page may show an administrator a list of all the users, filtered by some filter criteria.
The page should only handle the UI side of the problem; that is, it should only be concerned with displaying the data. It should send off a request to another object to get the list of users - maybe something like a UserRepository with a method called GetUsers which takes in the filter criteria.
This is where unit testing comes in. You would test (by mocking, perhaps) that, given a very specific list of users in the database and a certain filter criteria, the GetUsers method returns the list of users you expect.
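A sketch of such a test using Moq (all the types below are invented for illustration, and the repository is given an injected data source to stand in for the database):

using System.Collections.Generic;
using System.Linq;
using Moq;
using NUnit.Framework;

public class User
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// Hypothetical seam: the repository filters users it pulls from an
// underlying data source, which the test replaces with a mock.
public interface IUserDataSource
{
    IEnumerable<User> GetAllUsers();
}

public class UserRepository
{
    private readonly IUserDataSource source;
    public UserRepository(IUserDataSource source) { this.source = source; }

    public IList<User> GetUsers(bool activeOnly)
    {
        var users = source.GetAllUsers();
        return (activeOnly ? users.Where(u => u.IsActive) : users).ToList();
    }
}

[TestFixture]
public class UserRepositoryTests
{
    [Test]
    public void GetUsers_with_active_filter_returns_only_active_users()
    {
        var mock = new Mock<IUserDataSource>();
        mock.Setup(s => s.GetAllUsers()).Returns(new[]
        {
            new User { Name = "Alice", IsActive = true },
            new User { Name = "Bob",   IsActive = false }
        });

        var repo = new UserRepository(mock.Object);
        var result = repo.GetUsers(activeOnly: true);

        Assert.AreEqual(1, result.Count);
        Assert.AreEqual("Alice", result[0].Name);
    }
}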

Unit testing WCF calls, is it possible and how?

I'm working on a tool that checks some functionality of a large application. All of my calls to the large application are made using WCF, and the application is too big to create a mock of it. I want to create unit tests for my tool so I won't break functionality while refactoring or extending it.
Is it possible, and how?
You can, but they won't be unit tests in the normal sense of the word; they will be more like automated regression tests. A typical test method might contain the following:
Write expected test data into the database
Make the WCF call
Read data out of the database and assert that it's what you expect
Reset the database if necessary
It takes a lot of care to get this right, but it does work.
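Sketched as an NUnit test (OrderServiceClient stands in for whatever proxy your service reference generates, and TestDatabase is a helper you would have to write yourself):

using NUnit.Framework;

[TestFixture]
public class OrderServiceRegressionTests
{
    [Test]
    public void PlaceOrder_writes_the_expected_row()
    {
        // 1. Write expected test data into the database.
        TestDatabase.RunScript("insert-known-customer.sql");
        try
        {
            // 2. Make the WCF call through the generated proxy.
            using (var client = new OrderServiceClient())
            {
                client.PlaceOrder(customerId: 42, productId: 7);
            }

            // 3. Read data back out and assert it is what we expect.
            int productId = TestDatabase.QueryInt(
                "SELECT ProductId FROM Orders WHERE CustomerId = 42");
            Assert.AreEqual(7, productId);
        }
        finally
        {
            // 4. Reset the database.
            TestDatabase.RunScript("delete-test-orders.sql");
        }
    }
}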
The phrase "application is too big to create a mock out of it" raises a warning flag. I do not mean to sound harsh, but if your application logic cannot be broken down into smaller unit-testable parts, then there's something wrong with the architecture of the application.
I've been unit testing WCF-powered Silverlight ViewModels for years using Moq, with great results - a concept which can be applied to ASP.NET/MVC as well.
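The idea, roughly (IPriceService and PriceViewModel are invented for the example): a WCF service contract is an interface, so Moq can stand a fake service up in front of the ViewModel.

using Moq;
using NUnit.Framework;

// Stand-in for a WCF service contract (normally marked [ServiceContract]).
public interface IPriceService
{
    decimal GetPrice(int productId);
}

public class PriceViewModel
{
    private readonly IPriceService service;
    public PriceViewModel(IPriceService service) { this.service = service; }
    public decimal Price { get; private set; }
    public void Load(int productId) { Price = service.GetPrice(productId); }
}

[TestFixture]
public class PriceViewModelTests
{
    [Test]
    public void Load_exposes_the_price_returned_by_the_service()
    {
        var service = new Mock<IPriceService>();
        service.Setup(s => s.GetPrice(7)).Returns(9.99m);

        var vm = new PriceViewModel(service.Object);
        vm.Load(7);

        Assert.AreEqual(9.99m, vm.Price);
    }
}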
I guess it depends on whether you're looking at unit testing the classes behind the service or functionally testing the service itself.
For functional testing of WCF services I use WCF Storm which could do with a little polish, but works great. You can build and run functional tests as well as performance tests. Very useful little tool.
http://www.wcfstorm.com/

Automated testing for an "untestable" application in C#. Unit/Integration/Functional/System Testing - How To?

I am having a hard time introducing automated testing to a particular application. Most of the material I've found focuses on unit testing, which doesn't help with this application for several reasons:
The application is multi-tiered
Not written with testing in mind (lots of singletons hold application level settings, objects not mockable, "state" is required many places)
Application is very data-centric
Most "functional processes" are end to end (client -> server -> database)
Consider this scenario:
var item1 = server.ReadItem(100);
var item2 = server.ReadItem(200);
client.SomethingInteresting(item1, item2);
server.SomethingInteresting(item1, item2);
Definition of the server call (a method on the Server class):
public void SomethingInteresting(Item item1, Item item2)
{
    // Semi-interesting things happen before going to the database,
    // possibly a stored procedure call doing some calculation.
    database.SomethingInteresting(item1, item2);
}
How would one set up some automated tests for that, where there is client interaction, server interaction then database interaction?
I understand that this doesn't constitute a "unit". This is possibly an example of functional testing.
Here are my questions:
Is testing an application like this a lost cause?
Can this be accomplished using something like NUnit or MSTest?
Would the "setup" of this test simply force a full application start up and login?
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Do you guys have any tips on how to start this daunting process?
Any assistance would be greatly appreciated.
If this is a very important application in your company that is expected to last several years, then I would suggest starting small. Start looking for little pieces that you can test as-is, or that you can test after small modifications to the code that make them testable.
It's not ideal, but in a year or so you'll probably be much better off than you are today.
You should look into integration test runners like Cucumber or FitNesse. Depending on your scenario, an integration test runner will either act as a client to your server and make the appropriate calls into the server, or it will share some domain code with your existing client and run that code. You give your scenarios a sequence of calls that should be made and verify that the correct results are output.
I think you'll be able to make at least some parts of your application testable with a little work. Good luck!
Look at Pex and Moles. Pex can analyse your code and generate unit tests that test all the borderline conditions and so on. You can then introduce a mocking framework to start stubbing out some of the complex bits.
To take it further, you have to start hitting the database with integration tests. To do this you have to control the data in the database and clean up after each test is done. There are a few ways of doing this. You can run SQL scripts as part of the set-up/tear-down for each test. You can use a compile-time AOP framework such as PostSharp to run the SQL scripts by just specifying them in attributes. Or, if your project doesn't want to use PostSharp, you can use a built-in .NET feature called "Context Bound Objects" to inject code that runs the SQL scripts in an AOP fashion. More details here - http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
Basically, the steps I would suggest:
1) Install Pex and generate some unit tests. This will be the quickest, least-effort way to get a good set of regression tests.
2) Analyse the unit tests and see if there are other tests, or permutations of existing tests, that you could add. If so, write them by hand. Introduce a mocking framework if you need one.
3) Start doing end-to-end integration testing. Write SQL scripts to insert and delete test data in the database. Write a unit test that runs the insert SQL scripts, then calls some very high-level method in your application front end, such as "AddNewUser", and makes sure the data makes it all the way to the back end. The front-end, high-level method may call any number of intermediate methods in various layers.
4) Once you find yourself writing a lot of integration tests in the pattern "run SQL script to insert test data" -> call high-level functional method -> "run SQL script to clean up test data", you can clean up the code and encapsulate this pattern using AOP techniques such as PostSharp or Context Bound Objects; a plain-NUnit sketch of the pattern follows.
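Before reaching for AOP, the same pattern can be expressed with ordinary NUnit set-up/tear-down (TestDatabase and UserFrontEnd are hypothetical helpers, not part of any framework):

using NUnit.Framework;

[TestFixture]
public class AddNewUserIntegrationTests
{
    [SetUp]
    public void InsertTestData()
    {
        TestDatabase.RunScript("insert-test-users.sql");
    }

    [TearDown]
    public void CleanUpTestData()
    {
        TestDatabase.RunScript("delete-test-users.sql");
    }

    [Test]
    public void AddNewUser_makes_it_all_the_way_to_the_backend()
    {
        var frontEnd = new UserFrontEnd();   // high-level entry point into the application
        frontEnd.AddNewUser("jsmith");
        Assert.IsTrue(TestDatabase.UserExists("jsmith"));
    }
}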
Ideally you would refactor your code over time so you can make it more testable. Ideally any incremental improvement in this area will allow you to write more unit tests, which should drastically increase your confidence in your existing code, and your ability to write new code without breaking existing tested code.
I find such refactoring often gives other benefits too, such as a more flexible and maintainable design. These benefits are good to keep in mind when trying to justify the work.
Is testing an application like this a lost cause?
I've been doing automated integration testing for years and have found it very rewarding, both for myself and for the companies I've worked for :) I only started to understand how to make an application fully unit testable in the last three years or so; before that, I was doing full integration tests (with custom-implemented/hacked-in test hooks).
So, no, it is not a lost cause, even with the application architected the way it is. But if you know how to do unit testing, you can get a lot of benefits from refactoring the application: stability and maintainability of tests, ease of writing them, and isolation of failures to the specific area of code that caused the failure.
Can this be accomplished using something like nUnit or MS Test?
Yes. There are web page/UI testing frameworks that you can use from .Net unit testing libraries, including one built into Visual Studio.
You might also get a lot of benefit by calling the server directly rather than through the UI (similar benefits to those you get if you refactor the application). You can also try mocking the server, and testing the GUI and business logic of the client application in isolation.
Would the "setup" of this test simply force a full application start up and login?
On the client side, yes. Unless you want to test login, of course :)
On the server side, you might be able to leave the server running, or do some sort of DB restore between tests. A DB restore is painful (and if you write it wrong, it will be flaky) and will slow down the tests, but it helps immensely with test isolation.
Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
Typical integration tests are based around the concept of a finite state machine. Such tests treat the code under test as if it were a set of state transitions, and make assertions on the final state of the system.
So you'd set the DB to a known state before hand, call a method on the server, then check the DB afterwards.
You should always do state-based assertions on a level below the level of code you are exercising. So if you exercise web service code, validate the DB. If you exercise the client and are running against a real version of the service, validate at the service level or at the DB level. You should never exercise a web service and perform assertions on that same web service (for example). Doing so masks bugs, and can give you no real confidence that you're actually testing the full integration of all the components.
Do you guys have any tips on how to start this daunting process?
Break up your work. Identify all the components of the system (every assembly, each running service, etc.). Try to group them into natural categories, and spend some time designing test cases for each category.
Design your test cases with priority in mind. Examine each test case, think of a hypothetical scenario that could cause it to fail, and try to anticipate whether that would be a show-stopping bug, a bug that could sit around a while, or one that could be punted to another release. Assign priorities to your test cases based on these guesses.
Identify a piece of your application (possibly a specific feature) that has as few dependencies as possible, and write only the highest priority test cases for it (ideally, they should be the simplest/most basic tests). Once you are done, move on to the next piece of the application.
Once you have all the logical pieces of your application (all assemblies, all features) covered by your highest priority test cases, do what you can to get those test cases run every day. Fix test failures in those cases before working on additional test implementation.
Then get code coverage running on your application under test. See what parts of the application you have missed.
Repeat with each successively lower-priority set of test cases, until the whole application is tested at some level, and you ship :)

Unit testing database application with business logic performed in the UI

I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply: it is a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3,000 tenants, and all the finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the things I changed at the functional level (i.e. I run the program and work through the UI), but I can't test every situation. That is why I want to get started with unit testing.
However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done in event handlers. To complicate things, everything is database-driven, and I've not seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time-consuming to create complete coverage once the application is developed, but if you do it incrementally then you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static, controllable dataset, so requests and responses can be guaranteed. We then create C# code to simulate various scenarios. We use NUnit for this.
I'd highly recommend reading the book Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test to help you find the bug and solve the problem. Make it such that the test will pass when the bug is fixed. Then, once the bug is resolved you have a tool that'll help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
Isolate the code you want to test into code libraries (if they're not already in libraries).
Write out the most common Use Case scenarios and translate them to an application that uses your code libraries.
Make sure your test program works as you expect it to.
Convert your test program into unit tests using a testing framework.
Get the green light. If not, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
Increase the code and scenario coverage of your unit tests: what happens if you enter unexpected input?
Get the green light again. If the unit test fails, then it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting, you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your business logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code, you'll notice that testing your DB interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which would involve creating a mock tenant object and saving "him" to the DB. For your next test you would write "GetTenant" and try to get the tenant you just created back from the DB into your in-memory representation, then compare the first and second tenants to make sure all field values match. And so on.
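Roughly what such a test could look like once the separation exists (every name below is illustrative):

using NUnit.Framework;

[TestFixture]
public class TenantRepositoryTests
{
    [Test]
    public void Added_tenant_can_be_read_back_with_matching_fields()
    {
        var repo = new TenantRepository(TestDatabase.ConnectionString);
        var tenant = new Tenant { Name = "J. Smith", Unit = "12B" };

        repo.AddTenant(tenant);
        var loaded = repo.GetTenant(tenant.Id);

        Assert.AreEqual(tenant.Name, loaded.Name);
        Assert.AreEqual(tenant.Unit, loaded.Unit);
    }
}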
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to read up on pattern-based programming; here is a link that will help you understand design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science)
One thing you could do for now is to isolate all the business logic and the various business-based functions within your UI classes, and then, within each UI constructor or Page_Load, add unit-test calls that exercise each of the business functions. For improved readability you could put a #region block around the business functions.
For your long term benefit, you should study design patterns. Pick a pattern that suits your project needs and redo your project using the design pattern.
It depends on the language you are using, but in general, start with a simple testing class that uses some made-up (but still realistic) data to exercise your code; make it simulate what would happen in the app. If you are making a change in a particular part of the app, write a test that passes against the current behaviour before you change the code. Since the code is already written, getting tests in place for the entire app is going to be quite a challenge, so start small. From now on, as you write code, write the unit tests first, then the code. You might also consider refactoring, but weigh the costs of refactoring against rewriting a little as you add unit tests along the way.
I haven't tried adding tests to legacy applications, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI and into a separate layer, you may add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest you use mocking tools and DbUnit-style tools when creating your tests to simulate the database-related scenarios.
There's no better way to get started unit testing than to try it - it doesn't take long, it's fun and addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or xUnit.net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage on that class, then compile and run the tests.
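A first test can be this small (PriceCalculator stands in for whichever simple, behaviour-bearing class you pick):

using NUnit.Framework;

public class PriceCalculator
{
    public decimal Total(decimal amount, int discountPercent)
    {
        return amount - amount * discountPercent / 100m;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Discount_is_subtracted_from_the_total()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.Total(100m, discountPercent: 10));
    }
}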
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start by writing the tests first.

Applying Test Driven Development to a tightly coupled architecture

I've recently been studying TDD, attended a conference and dabbled in a few tests, and already I'm 100% sold - I absolutely love TDD.
As a result I've raised this with my seniors and they are prepared to give it a chance, so they have tasked me with coming up with a way to implement TDD in the development of our enterprise product.
The problem is that our system has evolved from the VB6 days to .NET, and it carries a lot of legacy technology and some far-from-best-practice development techniques, i.e. a lot of business logic in the ASP.NET code-behind and client script. The largest problem, however, is how tightly our classes are coupled to database access; properties, methods, constructors - they usually involve some database access in some form or another.
We use an in-house data access code generator tool that creates SqlDataAdapters, which gives us all the database access we could ever want and helps us develop extremely quickly. However, classes in our business layer are very tightly coupled to this data layer - we aren't even close to implementing any form of repository design. This, along with the issues above, has created all sorts of problems for me.
I have tried to develop some unit tests for some existing classes I've already written, but the tests take A LOT longer to run since DB access is required. Not to mention that, since we use the MS Enterprise Caching framework, I am forced to fake an HttpContext for my tests to run successfully, which isn't practical. Also, I can't see how to use TDD to drive the design of any new classes I write, since they have to be so tightly coupled to the database ... help!
Because of the architecture of the system, it appears I can't implement TDD without some real hacks, which in my eyes just defeats the aim of TDD and the huge benefits that come with it.
Does anyone have any suggestions for how I could implement TDD within the constraints I'm bound by? Or do I need to push the repository design pattern down my seniors' throats and tell them we either change our architecture/development methodology or forget about TDD altogether? :)
Thanks
Just to clarify, Test driven development and unit testing are not the same thing.
TDD = Write your tests before your code.
Unit Testing = Writing tests that confirm a small unit of code works.
Integration Testing = Writing tests that confirm blocks of code work together.
TDD, by definition, can't be done on existing code. You've already developed the code, so you aren't going to develop it again. TDD means you first write a failing test, then you write just enough code to pass the test. You've already written the code, so you can't do TDD.
You can write unit tests for existing code but this isn't the same as doing TDD.
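To make the distinction concrete, here is a TDD cycle in miniature (Basket is an invented class that does not exist until the test demands it):

using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    [Test]
    public void A_new_basket_is_empty()
    {
        // Written first: this fails (it doesn't even compile) until Basket exists.
        var basket = new Basket();
        Assert.AreEqual(0, basket.ItemCount);
    }
}

// Written second: just enough code to make the test pass, nothing more.
public class Basket
{
    public int ItemCount { get { return 0; } }
}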
The tests you have described (accessing the database etc) are technically integration tests. Integration tests do usually take ages to run. A true unit test would purely test your DA layer code without actually accessing the database. True unit testing requires interfaces to test against so you can isolate units from the surrounding units.
It's very hard to properly unit test existing code, unless it's been well designed with interfaces and abstraction in mind.
I know I'm being slightly picky with terminology here, but it's important when it comes to what approach you can take. My advice would be that with existing code that isn't well abstracted, you should gradually build up a suite of automated integration tests. When you come to write something new (which may not be a whole project; it may just be a new part of the existing app), consider approaching it in a TDD style. To do this you will find that you need to write some interfaces and abstractions that allow you to do TDD on your new code without triggering too much of the existing code. You will then be able to write some more integration tests that test your new code working with the old code.
To cut it short - don't try and change methodology for existing code. Just do new code better.
Nitpick: you can't do Test-Driven Design on existing code, but I do realize that you wish to use TDD against new functionality you implement on the existing code base.
The best thing you can do is to gradually introduce seams into the code base, shielding you from the tightly coupled code already in existence.
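A typical seam, sketched with invented names: instead of letting a class construct the generated data adapter inline, hand it an interface through the constructor. Production code passes the real adapter; tests pass a stub.

public class Invoice
{
    public decimal Total { get; set; }
}

// The seam: business code sees only this interface.
public interface IInvoiceData
{
    Invoice LoadInvoice(int invoiceId);
}

public class InvoiceService
{
    private readonly IInvoiceData data;
    public InvoiceService(IInvoiceData data) { this.data = data; }

    public decimal GetTotal(int invoiceId)
    {
        return data.LoadInvoice(invoiceId).Total;
    }
}

// A test stub; the real implementation would wrap the generated SqlDataAdapter code.
public class StubInvoiceData : IInvoiceData
{
    public Invoice LoadInvoice(int invoiceId)
    {
        return new Invoice { Total = 100m };
    }
}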
The book Working Effectively with Legacy Code contains much good advice on how to turn a legacy application into a testable application.
I think the mere fact that you are using ASP.NET Web Forms means your road to TDD will be challenging. It's just a nightmare to mock the Session/ViewState/HttpContext in Web Forms, so test-driven development is almost hindered there. There is of course the ASP.NET MVC alternative, but it seems you're way down the road already. Another promising option for the Web Forms paradigm is ASP.NET Web Forms MVP, but that project is really not fully mature yet. I'm moving everything I can to MVC; the other option for testing Web Forms, Selenium, isn't really TDD as it should be.
For adding tests to your existing code, you might check out Working Effectively With Legacy Code. "Legacy code" is defined as code that does not have tests in it.
