I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply: it's a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3,000 tenants, and all the finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.
However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done in event handlers. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing for this application? Keep in mind, I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time consuming to create complete coverage once the application is developed, but if you do it incrementally then you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static and controllable dataset, so requests and responses can be guaranteed. We then create C# code to simulate various scenarios. We use NUnit for this.
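For instance, a test against that kind of dedicated test database might look roughly like the sketch below. TestDatabase and TenantRepository are hypothetical names standing in for your own helpers; the key idea is the SetUp step that restores the known dataset before every test.

```csharp
using NUnit.Framework;

[TestFixture]
public class TenantQueryTests
{
    [SetUp]
    public void ResetTestDatabase() =>
        // Hypothetical helper: restores the dedicated test database to its
        // known, static dataset (e.g. by restoring a backup or re-running a
        // seed script) so every test starts from a guaranteed state.
        TestDatabase.Reset();

    [Test]
    public void GetActiveTenants_ReturnsOnlyTenantsWithCurrentLeases()
    {
        // Hypothetical data access class under test.
        var repository = new TenantRepository(TestDatabase.ConnectionString);

        var tenants = repository.GetActiveTenants();

        // The seeded dataset contains a known number of active tenants,
        // so the expected count can be asserted exactly.
        Assert.AreEqual(42, tenants.Count);
    }
}
```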
I'd highly recommend reading the article Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test to help you find the bug and solve the problem. Make it such that the test will pass when the bug is fixed. Then, once the bug is resolved you have a tool that'll help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
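As a hedged illustration of that bug-first workflow (all names hypothetical): suppose a bug report says tenants are charged a late fee even when rent is paid on the due date. The regression test is written while the bug is open, fails, and then passes once the fix is in.

```csharp
using System;
using NUnit.Framework;

// Hypothetical class containing the fixed logic; before the fix,
// CalculateFee wrongly charged a fee when paidDate == dueDate.
public class LateFeeCalculator
{
    public decimal CalculateFee(DateTime dueDate, DateTime paidDate) =>
        paidDate > dueDate ? 50m : 0m; // charge only when actually late
}

[TestFixture]
public class LateFeeRegressionTests
{
    // This test failed while the bug was open, passes now, and will catch
    // any future change that reintroduces the problem.
    [Test]
    public void NoLateFee_WhenRentIsPaidOnTheDueDate()
    {
        var calculator = new LateFeeCalculator();

        decimal fee = calculator.CalculateFee(
            dueDate: new DateTime(2024, 1, 1),
            paidDate: new DateTime(2024, 1, 1));

        Assert.AreEqual(0m, fee);
    }
}
```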
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
Isolate the code you want to test into code libraries (if they're not already in libraries).
Write out the most common use case scenarios and translate them into a small application that exercises your code libraries.
Make sure your test program works as you expect it to.
Convert your test program into unit tests using a testing framework (see the sketch after this list).
Get the green light. If not, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
Increase the code and scenario coverage of your unit tests: what happens with unexpected inputs?
Get the green light again. If the unit test fails, then it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
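To illustrate the conversion step above: a check your throwaway test program made by printing to the console becomes a repeatable, self-verifying NUnit test. RentLedger is a hypothetical stand-in for one of your own library classes.

```csharp
using NUnit.Framework;

// Hypothetical library class standing in for your own code.
public class RentLedger
{
    public decimal OutstandingBalance { get; private set; }
    public RentLedger(decimal openingBalance) => OutstandingBalance = openingBalance;
    public void RecordPayment(decimal amount) => OutstandingBalance -= amount;
}

[TestFixture]
public class RentLedgerScenarioTests
{
    // Common use case scenario: tenant owes 1000 and pays 400.
    [Test]
    public void RecordingAPayment_ReducesTheOutstandingBalance()
    {
        var ledger = new RentLedger(openingBalance: 1000m);

        ledger.RecordPayment(400m);

        Assert.AreEqual(600m, ledger.OutstandingBalance);
    }
}
```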
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your biz logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code you'll notice that testing your DB interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which involves creating a mock tenant object and saving "him" to the DB. For your next test you would write "GetTenant" and try to get the tenant you just created from the DB into your in-memory representation, then compare your first and second tenant to make sure all fields match. Etc., etc.
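The "add new tenant" / "GetTenant" pair described above might look something like this minimal NUnit sketch; Tenant, TenantStore, and TestDatabase are hypothetical names for your own types.

```csharp
using NUnit.Framework;

public class Tenant
{
    public string Name;
    public string Unit;
}

[TestFixture]
public class TenantPersistenceTests
{
    [Test]
    public void SavedTenant_CanBeReadBackWithAllFieldsIntact()
    {
        // Hypothetical store wrapping the real database access.
        var store = new TenantStore(TestDatabase.ConnectionString);
        var original = new Tenant { Name = "Jane Doe", Unit = "12B" };

        int id = store.AddNewTenant(original); // "add new tenant"
        Tenant fetched = store.GetTenant(id);  // "GetTenant"

        // Compare the first and second tenant to make sure all fields match.
        Assert.AreEqual(original.Name, fetched.Name);
        Assert.AreEqual(original.Unit, fetched.Unit);
    }
}
```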
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to look into pattern-based programming; here is a link that will help you understand design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science)
One thing you could do for now is to isolate, within your UI classes, all the business logic and the different business functions, and then within each UI constructor or Page_Load have unit test calls that test each of the business functions. For improved readability you could wrap a #region tag around the business functions.
For your long term benefit, you should study design patterns. Pick a pattern that suits your project needs and redo your project using the design pattern.
It depends on the language you are using, but in general start with a simple testing class that uses some made-up (yet still realistic) data to test your code, and make it simulate what would happen in the app. If you are making a change to a particular part of the app, write something that verifies the current behaviour before you change the code. Since you have already written the code, getting tests around the entire app is going to be quite a challenge, so start small. From here on, though, write the unit test first, then write your code. You might also consider refactoring, but weigh the costs of refactoring versus rewriting a little as you add unit tests along the way.
I haven't tried adding tests to legacy applications, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI and into a separate layer, you may add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest you use mocking tools and DbUnit-style tools when creating your tests, to simulate the database-related issues.
There's no better way to get started unit testing than to try it - it doesn't take long, it's fun and addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or xUnit.net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage of that class, then compile and run the tests.
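For instance, here is what a first pair of NUnit tests against a small, dependency-free class with real behavior might look like (both types are made up for illustration):

```csharp
using NUnit.Framework;

// A small, self-contained class with behavior but no dependencies.
public class DiscountCalculator
{
    // 10% discount on orders of 100 or more, otherwise no discount.
    public decimal Apply(decimal amount) =>
        amount >= 100m ? amount * 0.9m : amount;
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void OrdersOfOneHundredOrMore_GetTenPercentOff() =>
        Assert.AreEqual(90m, new DiscountCalculator().Apply(100m));

    [Test]
    public void SmallOrders_AreNotDiscounted() =>
        Assert.AreEqual(50m, new DiscountCalculator().Apply(50m));
}
```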
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start writing the tests first.
I'm working on an ASP.NET Web Forms project.
The project isn't covered by any tests at all, apart from the few unit tests I wrote myself to cover a small static method of mine. I would like to write unit tests for the entire project so that I can refactor and add code to the project more efficiently (I have the support of the CTO on this), but I've run into a problem.
The web application mostly consists of LINQ queries to the database, with no abstraction between the database and the code; in other words, we don't use something like a repository, we just type the LINQ queries wherever we need them.
From my understanding, unit tests should not call a database directly, as that would be too slow, so I would like to decouple them from the database. I first tried that with a mocking framework called Moq. It seemed to work, but then I read that LINQ queries to a database behave differently from LINQ queries to objects, so the tests passing doesn't mean that my methods work.
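To make that behavioural gap concrete, here is a small, hypothetical illustration. LINQ to Objects simply invokes your C# methods, while a database LINQ provider has to translate the expression tree into SQL and will typically throw NotSupportedException on anything it can't translate — so a test that mocked the context with in-memory data can pass while the real query fails.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class NameHelper
{
    public static string Normalize(string name) => name.Trim().ToUpperInvariant();
}

public class Demo
{
    public static void Main()
    {
        // LINQ to Objects: the lambda is compiled and Normalize is simply
        // called, so this works fine.
        var inMemory = new List<string> { " jane " };
        var hits = inMemory.Where(n => NameHelper.Normalize(n) == "JANE").ToList();

        // LINQ to a database (hypothetical EF / LINQ to SQL context):
        // var tenants = db.Tenants
        //     .Where(t => NameHelper.Normalize(t.Name) == "JANE")
        //     .ToList();
        // -> typically throws NotSupportedException at runtime, because the
        //    provider cannot translate NameHelper.Normalize into SQL.
    }
}
```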
This made me consider the repository pattern, hiding the linq queries in repository classes and then mock these classes in my tests. There are (as far as I can tell) two problems with this.
Changing the project to use the repository pattern is a massive job, and I'm not sure that the repository pattern is the right tool for the job, so I would like to know beforehand whether it is suitable or not.
The database schema isn't very good. It was made for us by another company, who simply took a generic schema and added a few rows in a few tables to "customize" it for us. Many of the tables and columns are poorly named, the schema can't hold all the data we need it to, and the logical structure doesn't suit us very well, which forces complicated queries. I'm pretty sure the correct solution is to rewrite the schema to suit us, but the CTO wants it to remain the same so the project can be more easily modified by the company who made the database and project if we want them to, so that's not an option.
Assuming that a repository pattern is what I should use, my question is: how do I deal with the poor database schema? I assume I need to hide it behind an abstraction that handles the complicated querying for me, and maybe draw an ER diagram to figure out what the schema should look like, but I could be wrong, and I'm not sure about the details.
I would love it if you could give me links to examples and tutorials, and tell me if my assumptions are wrong.
Thank you in advance for your patience and help.
If this were me, I would not focus on unit tests for this. I would first try to get a suite of end-to-end tests which characterize the behaviour of the system as it stands. Then, as you refactor parts of the system, you have some confidence that things are no more broken than they were before.
As you point out, different LINQ providers have different behaviour, so having the end-to-end tests will ensure that you are actually testing that the system works.
I can recommend SpecFlow as a great tool for building your behaviour-based tests, and I can recommend watching the SpecFlow video on Pluralsight for a great overview and a good explanation of why you might be better off with end-to-end tests than with unit tests.
You'll also get a lot out of reading 'Working Effectively with Legacy Code', and reading some of the related links and comments might be useful as well.
You'll notice that some of those comments point out that you need to write unit tests, but that often you need to refactor before you can write the tests (as the code isn't currently testable), and that this refactoring isn't safe without unit tests. Catch-22. Writing end-to-end tests can often get you out of this catch-22, at the expense of a slow-running test suite.
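For a flavour of what a SpecFlow-driven end-to-end test looks like, here is a minimal, hypothetical binding class. The Gherkin scenario would live in a .feature file; SystemDriver is a placeholder for whatever automates your real, running application.

```csharp
// The scenario in the .feature file might read:
//   Given a tenant with an outstanding balance of 1000
//   When they pay 400
//   Then their outstanding balance is 600
using NUnit.Framework;
using TechTalk.SpecFlow;

// Placeholder for a driver of the real system (UI or service endpoints);
// wiring this up is the hard part of end-to-end testing.
public class SystemDriver
{
    private decimal _balance;
    public void CreateTenantOwing(int balance) => _balance = balance;
    public void RecordPayment(int amount) => _balance -= amount;
    public decimal ReadBalance() => _balance;
}

[Binding]
public class TenantBalanceSteps
{
    private readonly SystemDriver _app = new SystemDriver();

    [Given(@"a tenant with an outstanding balance of (\d+)")]
    public void GivenATenantWithBalance(int balance) => _app.CreateTenantOwing(balance);

    [When(@"they pay (\d+)")]
    public void WhenTheyPay(int amount) => _app.RecordPayment(amount);

    [Then(@"their outstanding balance is (\d+)")]
    public void ThenTheBalanceIs(int expected) => Assert.AreEqual(expected, _app.ReadBalance());
}
```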
The database schema should not matter when we are writing unit tests, because we are going to block the calls to the database by mocking it. There are a number of things you can do:
If the database calls are already in their own project, like a DAL, the best thing to do is create an interface and have the DAL class implement it, e.g. CustomerClass : ICustomerClass. This will help you a lot with the mocking (see the sketch after these points).
If time and budget allow, the repository pattern is the way to go, but write integration tests first which cover the whole system. Once the integration tests are written you can refactor the code, and you can always verify the refactored code by running the integration tests. Once a class has been refactored, write unit tests for it. You can (and should) also write scenarios, for example with SpecFlow, for your integration/unit tests; these will help you understand the business as well.
Do not dive into the code and refactor it straight away; do some analysis work first and make a document. Believe me, this doc will come in handy.
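As a rough sketch of the first point, the interface-plus-mock approach with Moq and NUnit could look like this; Customer, BillingService, and the member names are hypothetical.

```csharp
using Moq;
using NUnit.Framework;

public class Customer
{
    public int Id;
    public int DaysOverdue;
}

// The interface extracted from the DAL class (CustomerClass : ICustomerClass).
public interface ICustomerClass
{
    Customer GetCustomer(int id);
}

// Hypothetical business class that depends on the DAL only via the interface.
public class BillingService
{
    private readonly ICustomerClass _dal;
    public BillingService(ICustomerClass dal) => _dal = dal;
    public bool NeedsFollowUp(int id) => _dal.GetCustomer(id).DaysOverdue > 14;
}

[TestFixture]
public class BillingServiceTests
{
    [Test]
    public void OverdueCustomer_IsFlaggedForFollowUp()
    {
        // Mock the DAL instead of calling the real database.
        var dal = new Mock<ICustomerClass>();
        dal.Setup(d => d.GetCustomer(7))
           .Returns(new Customer { Id = 7, DaysOverdue = 30 });

        var service = new BillingService(dal.Object);

        Assert.IsTrue(service.NeedsFollowUp(7));
    }
}
```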
I am currently doing the write-up for my third-year degree project. I created a system in C# which uses Microsoft Access as the back-end database. The system does not connect to the internet, nor does it use the local network for any connectivity.
I am asking for the best method of testing an application such as this, so that it is sufficiently tested.
You should implement the Repository pattern, which will abstract the database code so that you can test the business logic while faking out the database calls.
I don't know exactly what you're looking for or how loosely coupled your application is, but in my case most of the code (roughly 90%) is written so that it can be covered by unit tests without any need to run the UI. The MVVM pattern is a good starting point for that, since it forces you to move code out of your UI into separate classes such as ViewModels and Commands, which can be unit tested.
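As a small, hypothetical illustration of why that separation pays off: the click logic lives in a ViewModel command, so a plain unit test can exercise it with no UI running at all.

```csharp
using System;
using System.Windows.Input;
using NUnit.Framework;

// Hypothetical ViewModel: the "save" logic lives here, not in the view.
public class SaveTenantViewModel
{
    public string Name { get; set; } = "";
    public bool Saved { get; private set; }
    public ICommand SaveCommand { get; }

    public SaveTenantViewModel() =>
        SaveCommand = new RelayCommand(
            execute: _ => Saved = true,
            canExecute: _ => !string.IsNullOrWhiteSpace(Name));
}

// Minimal hand-rolled ICommand implementation.
public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Func<object, bool> _canExecute;

    public RelayCommand(Action<object> execute, Func<object, bool> canExecute)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter) => _canExecute(parameter);
    public void Execute(object parameter) => _execute(parameter);
    public event EventHandler CanExecuteChanged;
}

[TestFixture]
public class SaveTenantViewModelTests
{
    [Test]
    public void SaveIsDisabled_UntilANameIsEntered()
    {
        var vm = new SaveTenantViewModel();
        Assert.IsFalse(vm.SaveCommand.CanExecute(null));

        vm.Name = "Jane Doe";
        Assert.IsTrue(vm.SaveCommand.CanExecute(null));
    }
}
```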
That assures a lot already, and if you need to do automated UI testing, take a look at the Coded UI Tests available in Visual Studio 2010 (Premium and Ultimate only). They allow you to fully automate and simulate user interaction. In simulation, you can do what Justin proposed: detach your application from the database and work with a repository.
You have to keep in mind that in order to write really testable code, you have to design your code to be testable. In my experience it is next to impossible to write unit tests for code written without testing in mind right from the start. Probably the best thing you can do in that case is write integration tests.
But in order to give more clear advice, we need more input.
Cheers
First of all, I'm new to testing software; I suppose I'm rather green. Still, for this project I need to make sure all the calculations are done right. For the moment I'm using unit tests to test each module in C# to make sure it does what it should do.
I know about some different testing methods (integration testing, unit testing), but I can't seem to find any good book or reference material on testing in general. Most of them focus on one specific area (like unit testing). This makes it really hard (for me) to get a good grasp on how best to test software, and which method to use.
For example, I'm using GUI elements as well; should I test those? Should I just test them by using them and visually confirming all is in order? I do know it depends on whether it's critical or not, but at which point do you say "Let's not test that, because it's not really relevant/important"?
So a summary: How to choose which testing method is best, and where do you draw the line when you use the method to test your software?
There are several kinds of tests, but IMHO the most important are unit tests, component tests, function tests, and end-to-end tests.
Unit tests check whether your classes work as intended (e.g. with JUnit). These tests are the base of your testing environment, as they tell you whether your methods work. Your goal is 100% coverage here, as these are the most important part of your tests.
Component tests check how several classes in your code work together. A component can be a lot of things, but it's basically more than unit testing and less than function testing. The goal coverage is around 75%.
Function tests test the actual functionality you implement. For example, if you want a button that saves some input data to a database, that is a functionality of the program, and that is what you test. The goal coverage here is around 50%.
End-to-end tests test your whole application. These can be quite heavyweight, and you likely can't and don't want to test everything; this level exists to check that the whole system works. The goal coverage is around 25%.
This is the order of importance too.
There is no such thing as "better" among these, though. Any test you can run to check whether your code works as intended is equally good.
You probably want most of your tests automated, so you can test while you're having a coffee break, or your servers can test everything while you are away from work and you can check the results in the morning.
GUI testing is considered the hardest part of testing, and there are several tools that help with it, for example Selenium for browser GUI testing.
Several architectural patterns, like Model-View-Presenter, try to separate out the GUI part of the application for this reason, working with as dumb a GUI as possible to avoid errors there. If you can successfully separate your graphics layer, you will be able to mock out the GUI part of the application and simply leave it out of most of the testing process.
For reference I suggest "Effective Software Testing" by Elfriede Dustin, but I'm not familiar with all the books on the subject; there could be better ones.
It really depends on the software you need to test. If you are mostly interested in the logic behind it (calculations), then you should write some integration tests that test not just separate methods but the workflow from start to finish, so try to emulate what a typical user would do. I can hardly imagine a user calling one particular method; most likely they trigger some specific sequence of methods. If the calculations are OK, then the chance is low that the GUI will display them wrong. Besides, automating the GUI is a time-consuming process, and it requires a lot of skill and energy to build and maintain, as every simple change might break everything. My advice is to start by writing integration tests with different values that cover the most common scenarios.
This answer is specific to the type of application you are involved in - WinForms
For the GUI, use the MVC/MVP pattern and avoid writing any code in the code-behind file. This will allow you to unit test your UI code; by UI code I mean the code that is invoked when a button is clicked, or whenever any action needs to be taken on a UI event.
You should be able to unit test each of your class files (except UI code). This will mainly be state-based testing.
Also, you will be able to write interaction test cases for tests involving multiple class files. This should cover most of your flows.
So, two things to focus on: state-based testing and interaction testing.
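A minimal MVP sketch of that idea for WinForms, with all names hypothetical: the form implements a thin view interface, the presenter holds the logic, and the unit test drives the presenter through a fake view (state-based testing, as above).

```csharp
using NUnit.Framework;

// Thin interface the form implements; the code-behind stays trivial.
public interface ITenantView
{
    string NameInput { get; }
    void ShowError(string message);
}

public class TenantPresenter
{
    private readonly ITenantView _view;
    public TenantPresenter(ITenantView view) => _view = view;

    // Called from the form's button-click handler; the handler itself
    // contains no logic, so everything testable lives here.
    public void OnSaveClicked()
    {
        if (string.IsNullOrWhiteSpace(_view.NameInput))
            _view.ShowError("Name is required.");
    }
}

[TestFixture]
public class TenantPresenterTests
{
    private class FakeView : ITenantView
    {
        public string NameInput { get; set; }
        public string LastError { get; private set; }
        public void ShowError(string message) => LastError = message;
    }

    [Test]
    public void SaveWithEmptyName_ShowsAnError()
    {
        var view = new FakeView { NameInput = "" };

        new TenantPresenter(view).OnSaveClicked();

        Assert.AreEqual("Name is required.", view.LastError);
    }
}
```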
We have a project that is starting to get large, and we need to start applying unit tests as we refactor. What is the best way to apply unit tests to a project that already exists? I'm (somewhat) used to doing it from the ground up, where I write the tests in conjunction with the first lines of code onward. I'm not sure how to start when the functionality is already in place. Should I just start writing tests for each method, from the repository up? Or should I start from the controllers down?
update:
To clarify the size of the project: I'm not really sure how to describe it other than to say there are 8 controllers and about 167 files with a .cs extension, all done over about 7 developer-months.
As you seem to be aware, retrofitting testing into an existing project is not easy. Your method of writing tests as you go is the better way. Your problem is one of both process and technology: tests must be required by everyone, or no one will use them.
The recommendation I've heard and agree with is that you should not attempt to wrap tests around an existing codebase all at once. You'll never finish. Start by working testing into your bugfix process- every fixed bug gets a test. This will start to work testing into your existing code over time. New code must always have tests, of course. Eventually, you'll get the coverage up to a reasonable percentage, but it will take time.
One good book I've had recommended to me is Working Effectively With Legacy Code by Michael C. Feathers. The title doesn't really demonstrate it, but working testing into an existing codebase is a major subject of the book.
There are lots of approaches to fitting tests around an existing codebase. Unit tests are not necessarily the most productive way to start. If you have a large amount of code written then you might want to think about functional and integration tests before you work down to the level of unit tests. Those higher level tests will help give you broad assurance that your product continues to work while you make changes to improve the structure and retrofit unit tests.
One of the practices that non-test-first organizations use that I recommend highly in your situation is this: Have someone other than the author of the original code section write the unit tests for that section. This gets you some level of cross-training and sanity checking, and it also helps ensure that you don't preserve assumptions which will do damage to your code overall.
Other than that, I'll second the recommendation for Michael Feathers' book.
For a legacy project with a decently sized code base, unit testing everything may not be a justifiable effort, due to budgetary constraints etc. Based on my reading on this subject, I would suggest:
Every bug which leaks into the QA, release, or production environment is a candidate for unit test case(s) written along with the bug fix.
Use source control to find out which sections/files of your code base are changing more frequently than others. Bring those sections/files under unit test coverage.
New story development should have meaningful unit test cases written against it.
Keep monitoring the unit test coverage to observe any downward trend in a particular area of the code base. Such an area needs you to zoom in and review whether its unit test coverage is losing effectiveness.
P.S.: I have added Michael Feathers' book to my reading list; thanks for suggesting it.
I've recently been studying TDD, attended a conference, and dabbled in a few tests, and already I'm 100% sold; I absolutely love TDD.
As a result I've raised this with my seniors and they are prepared to give it a chance, so they have tasked me with coming up with a way to implement TDD in the development of our enterprise product.
The problem is that our system has evolved from the days of VB6 to .NET and implements a lot of legacy technology and some far-from-best-practice development techniques, e.g. a lot of business logic in the ASP.NET code-behind and client script. The largest problem, however, is how tightly our classes are coupled to database access; properties, methods, and constructors usually involve some database access in one form or another.
We use an in-house data access code generator tool that creates SqlDataAdapters, which gives us all the database access we could ever want and helps us develop extremely quickly. However, classes in our business layer are very tightly coupled to this data layer; we aren't even close to implementing some form of repository design. This and the issues above have caused me all sorts of problems.
I have tried to develop some unit tests for some existing classes I've already written, but the tests take a lot longer to run since DB access is required. Not to mention that, since we use the MS Enterprise Caching framework, I am forced to fake an HttpContext for my tests to run successfully, which isn't practical. Also, I can't see how to use TDD to drive the design of any new classes I write, since they have to be so tightly coupled to the database ... help!
Because of the architecture of the system it appears I can't implement TDD without some real hack which in my eyes just defeats the aim of TDD and the huge benefits that come with.
Does anyone have any suggestions for how I could implement TDD within the constraints I'm bound by? Or do I need to push the repository design pattern down my seniors' throats and tell them we either change our architecture/development methodology or forget about TDD altogether? :)
Thanks
Just to clarify, Test driven development and unit testing are not the same thing.
TDD = Write your tests before your code.
Unit Testing = Writing tests that confirm a small unit of code works.
Integration Testing = Writing tests that confirm blocks of code work together.
TDD, by definition, can't be done on existing code. You've already developed the code, so you aren't going to develop it again. TDD means you first write a failing test, then you write just enough code to pass the test. You've already written the code, so you can't do TDD.
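To show that rhythm in miniature (hypothetical example): the test below is written first, against a class that does not yet exist, so it starts out "red"; the class underneath is then the simplest code that turns it "green".

```csharp
using NUnit.Framework;

[TestFixture]
public class ProRataRentTests
{
    // Step 1: written before ProRataRent existed - at that point this
    // did not even compile, which is the failing ("red") state.
    [Test]
    public void MidMonthMoveIn_IsChargedHalfTheMonthlyRent() =>
        Assert.AreEqual(500m,
            ProRataRent.For(monthlyRent: 1000m, daysOccupied: 15, daysInMonth: 30));
}

public static class ProRataRent
{
    // Step 2: the simplest code that makes the test pass ("green").
    public static decimal For(decimal monthlyRent, int daysOccupied, int daysInMonth) =>
        monthlyRent * daysOccupied / daysInMonth;
}
```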
You can write unit tests for existing code but this isn't the same as doing TDD.
The tests you have described (accessing the database, etc.) are technically integration tests. Integration tests usually take ages to run. A true unit test would purely test your DA layer code without actually accessing the database. True unit testing requires interfaces to test against, so you can isolate units from the surrounding units.
It's very hard to properly unit test existing code, unless it's been well designed with interfaces and abstraction in mind.
I know I'm being slightly picky with terminology here, but it's important when it comes to what approach you can take. My advice would be that with existing code that isn't well abstracted, you should gradually build up a suite of automated integration tests. When you come to write something new (which may not be a whole project; it may just be a new part of the existing app), consider approaching it in a TDD style. To do this you will find that you need to write some interfaces and abstractions that allow you to do TDD on your new code without triggering too much of the existing code. You will then be able to write some more integration tests that test your new code working with the old code.
To cut it short - don't try and change methodology for existing code. Just do new code better.
Nitpick: you can't do Test-Driven Design on existing code, but I do realize that you wish to use TDD for the new functionality you implement on the existing code base.
The best thing you can do is to gradually introduce seams into the code base, shielding you from the tightly coupled code already in existence.
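One classic seam from that book is "subclass and override": move the tightly coupled database call into a virtual method, and a test can override it without changing production behaviour. The names below are hypothetical.

```csharp
using NUnit.Framework;

// Stand-in for the tightly coupled data access code already in existence.
public static class Database
{
    public static decimal QueryBalance(int tenantId) =>
        throw new System.NotImplementedException("hits the real database");
}

public class InvoiceService
{
    // Existing logic, now testable.
    public decimal TotalFor(int tenantId) => LoadBalance(tenantId) * 1.2m;

    // The seam: production code still goes to the database here.
    protected virtual decimal LoadBalance(int tenantId) =>
        Database.QueryBalance(tenantId);
}

[TestFixture]
public class InvoiceServiceTests
{
    // Override the seam to cut the database dependency.
    private class TestableInvoiceService : InvoiceService
    {
        protected override decimal LoadBalance(int tenantId) => 100m;
    }

    [Test]
    public void Total_IncludesTwentyPercentUplift() =>
        Assert.AreEqual(120m, new TestableInvoiceService().TotalFor(1));
}
```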
The book Working Effectively with Legacy Code contains much good advice on how to turn a legacy application into a testable application.
I think the mere fact that you are using ASP.NET Web Forms means your road to TDD will be challenging. It's just a nightmare to mock the Session/ViewState/HttpContext in Web Forms, so going test-driven is almost hindered from the start. There is of course the ASP.NET MVC alternative, but you're way down the road already, it seems. Another promising option for the Web Forms paradigm is ASP.NET Web Forms MVP, but that project is really not fully mature yet. I'm moving everything I can to MVC; the other option for TDD with Web Forms, Selenium, isn't really TDD as it should be.
For adding tests to your existing code, you might check out Working Effectively With Legacy Code. "Legacy code" is defined as code that does not have tests in it.