TDD, What are your techniques for finding good tests? - c#

I am writing a simple web app using Linq to Sql as my data layer, as I like Linq2Sql very much. I have been reading a bit about DDD and TDD lately and wanted to give them a shot.
First and foremost, it strikes me that Linq2Sql and DDD don't go together very well. My other problem is finding tests; I actually find it very hard to define good tests, so I wanted to ask: what are your best techniques for discovering good test cases?

Test case discovery is more of an art than a science. However, some simple guidelines include:
Code that you know to be frail / weak / likely to break
Follow the user scenario (what your user will be doing) and see how it touches your code (often this means debugging it, other times profiling, and other times simply thinking the scenario through). Whatever points in your code get touched by the user are the highest priority to write tests against.
During your own development, whenever a run resulted in a bug, write a test for it so the code doesn't regress to the same behavior again.
There are several books on how to write test cases out there, but unless you are working in a large organization that requires documented test cases, your best bet is to think of all the parts in your code that you don't like (that aren't "pure") and make sure you can test those modules thoroughly.

Well, by the standard interpretation of TDD, the tests drive your development. So, in essence, you start with the test. It will fail, and you write code until that test passes. It's driven by your requirements, however you go about gathering those: you decide what your app/feature needs to do, write the test, then code until it passes. Of course there are many other techniques, but this is a brief statement of what is typically done in the TDD world.
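As a rough illustration of that red-green loop (the class and the discount rule here are made up for the example, and the assertions assume NUnit):

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AppliesTenPercentDiscount_ForOrdersOverOneHundred()
    {
        // Written first: this fails (it doesn't even compile) until
        // PriceCalculator exists and implements the rule.
        var calculator = new PriceCalculator();
        Assert.AreEqual(108.0m, calculator.Total(120.0m));
    }
}

// The simplest implementation that makes the test pass.
public class PriceCalculator
{
    public decimal Total(decimal subtotal)
    {
        return subtotal > 100m ? subtotal * 0.9m : subtotal;
    }
}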

Think. Read the code. Question yourself: e.g. could this pointer be NULL here? What happens if this method is called before initialization?
Don't make assumptions such as "this file will always be there". Test.
Think about edge cases, boundaries, negative values, overflows ...
Bugs often come in clusters. When you find one, look around it, and also look for the same kind of bug in other locations.
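To illustrate the boundary and edge-case advice above, here is a minimal NUnit sketch; the AgeValidator class and its limits are invented for the example:

using NUnit.Framework;

public class AgeValidator
{
    public bool IsValid(int age)
    {
        return age >= 0 && age <= 150;
    }
}

[TestFixture]
public class AgeValidatorTests
{
    // One case on each side of both boundaries, plus an extreme value.
    [TestCase(0, true)]
    [TestCase(-1, false)]
    [TestCase(150, true)]
    [TestCase(151, false)]
    [TestCase(int.MaxValue, false)]
    public void IsValid_HandlesBoundaryValues(int age, bool expected)
    {
        Assert.AreEqual(expected, new AgeValidator().IsValid(age));
    }
}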

Set your mind to the actual goal of testing: Finding bugs.
Be creative at imagining what could make your program fail.
Your tests must find bugs, not confirm that your program is OK.

I regularly write tests for third-party APIs. That way, when the API is updated, I know whether my code is going to break or not.
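For example, a test can pin the specific third-party behavior you depend on. Here is a sketch using Json.NET purely as an illustration (the answer above doesn't name a library); if a package update changes the output format, the test fails:

using Newtonsoft.Json;
using NUnit.Framework;

[TestFixture]
public class JsonNetBehaviorTests
{
    [Test]
    public void SerializeObject_UsesPropertyNamesAsWritten()
    {
        // Pin the serialization format we rely on elsewhere in the app.
        var json = JsonConvert.SerializeObject(new { Name = "Widget", Price = 10 });
        Assert.AreEqual("{\"Name\":\"Widget\",\"Price\":10}", json);
    }
}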

I think this is a useful technique:
Using contracts and boolean queries to improve quality of automatic test generation
Reference: Lisa (Ling) Liu, Bertrand Meyer and Bernd Schoeller, "Using contracts and boolean queries to improve the quality of automatic test generation", in Proceedings of TAP: Tests And Proofs, ETH Zurich, 5-6 February 2007, eds. Yuri Gurevich and Bertrand Meyer, Lecture Notes in Computer Science, Springer-Verlag, 2007.

Related

TDD and Unit Testing for Short Projects with Small Teams?

Is TDD a good approach for small and short projects done by small teams of up to 4 people? Is it really a profitable effort?
What about unit testing? TDD implies unit testing, but not conversely. Wouldn't unit testing be sufficient, done from time to time during the project's development life cycle, until reasonable code coverage is reached?
For me, it doesn't boil down to whether the project is small or short. TDD, done correctly, is about being able to quickly run a set of tests that provide full confidence in the code. There also has been a lot written about TDD helping to drive out the appropriate design for projects as well.
So, you could argue that TDD is best on small and short projects because you end up writing only the code you need to make the tests pass and nothing else, therefore keeping the cost down. Then there is the added benefit of having confidence in the tests and the code when you make changes later.
The next small point I would make is that a lot of projects start small and short. These interim solutions have a way of becoming strategic platforms for development (if successful).
Recently I read a nice blog post by Szymon Pobiega, titled If you are not doing TDD…. I really encourage you to take a look at it. I'm a bit skeptical about TDD being placed among the great habits of the development life cycle as an ultimate solution for project safety, regardless of the nature of the project, and I like this fragment:
TDD is most useful when you have absolutely no idea about the shape of the code. In such a case you'd better not hack the code out as fast as you can. Rather, make one small step at a time, each time validating the outcome. Ideally, do it in a pair. Each step you make allows you to discover more tests that need to be written. I love the analogy Dan used in his presentation: it's just like swimming versus walking in a 1.50-meter-deep pool. You can walk all the way through, ensuring every time that one of your feet is on the ground, or you can just swim, which is more risky (no contact with the ground) but makes you move a lot faster. TDD is like walking.
He is referring to Dan North's session from NDC 2012.
The answer is YES! And for the following reason.
Picture the following scenario. You and your project team have created a great application and decide to put it into production. After a while a user comes to you saying: "Hey guys, I found a bug in your application, can you fix it please?" Of course you say yes, you fix the bug, and the user is happy. Only after a while he comes back saying: "Great that you fixed the problem, but now something that used to work doesn't anymore. Can you repair it?"
If you had used TDD and had written unit tests for the application, the scenario wouldn't play out like this, because you would have tests for your entire application. After fixing a bug, you simply run the unit tests again to see whether your "fix" broke anything else in the application.
This answer is mostly aimed at the use of unit tests; the following is about TDD itself.
What TDD makes you, as an individual, do is think about what you are going to code next (object, class, function and so on), what the edges of your code are, what if/else statements you need, and what you need to check for. Whether you are by yourself or in a project team of 20 people doesn't really matter: the point of TDD is your own thinking process, not that of the other people on your project.
If you are already doing unit testing, it's only a small step to adopt TDD, but you will benefit much more from it.
Just one example: TDD requires that you write your tests first. This implies that you think about the goal you want to achieve before you start to implement, which leads to better code.
TDD and unit testing are not alternatives to each other. In every project you should test your code; that is where unit testing, integration testing, and system testing come in.
TDD, however, is a development method, like Waterfall and others. In TDD you start from some basic requirements and write unit tests to ensure those requirements are implemented and working. While writing the unit tests, you realize that in order to satisfy the major requirements you need to implement more functions and classes, so the unit tests in this context also make the remaining requirements clearer.
Suppose that you need to write an application that prints the name of the computer the application is running on. First you write a unit test:
[Test]
public void ProducedMessage_IsCorrect()
{
    Assert.AreEqual(System.Environment.MachineName, BusinessLibrary.ProduceMessage());
}
and then you realize you need to implement the ProduceMessage function. You implement it as simply as possible so that the unit test passes, writing the method like this:
public static string ProduceMessage()
{
    return "MyComputer";
}
Then you run the test and it passes. After that you submit your code and the other members of the team get it. When they run the test, it fails, because you hardcoded the name of your own computer in the code.
So some wise member of your team changes the code to the correct form and you keep going.
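That corrected form would presumably look something like this (a sketch; the real fix depends on what the requirement actually asks for):

public static string ProduceMessage()
{
    // Resolve the machine name at runtime instead of hardcoding it,
    // so the test passes on every team member's machine.
    return System.Environment.MachineName;
}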
It is all about choosing developers who have TDD experience. At least some of them should be experienced TDD developers, I think.

Outside-in BDD (with Specflow)

I'm new to BDD, but I found it very interesting and want to develop my next project using BDD. After googling and watching screencasts I still have lots of questions about BDD in real life.
1. Declarative or Imperative scenarios?
Most of the given-when-then scenarios I've seen were written in terms of the UI (imperative).
Scenario: Login
Given I am on the Login-page
When I enter 'AUser' in the textbox 'UserName'
And I enter 'APassword' in the textbox 'Password'
And I click the 'Login' button
Then I should see the following text 'You are logged in'
I find those tests extremely brittle, and they say nothing about the business value of clicking buttons. I think they're a nightmare to maintain. Why do most examples use imperative scenarios?
Scenario: Login (declarative)
Given I am not logged in
When I log in using valid credentials
Then I should be logged in
If you prefer the declarative style, how do you describe things like the 'Home page' or the 'Products page'?
Tips for writing good specifications
2. Exercise UI or not?
Most of the step implementations I've seen used WatiN, White, or something similar to implement scenarios from the user's point of view: starting a browser, clicking buttons. I think it's extremely slow and brittle. Well, I could use something like a Page Object to make the tests less brittle, but that's yet more work, especially for desktop applications with a complex UI.
How do you implement scenarios in real-life projects - exercising UI, or via testing controllers/presenters?
Best way to apply BDD
3. Real database or not?
When the Given part of a scenario is implemented, it often needs some data to be in the system (e.g. some products for a shop application). How do you implement that part - by adding data to a real database (full end-to-end testing), or by providing repository stubs to the controllers?
Waiting for experienced answers!
UPDATE: Added useful links on questions.
Declarative is the proper way, IMO. If you're talking about .aspx page file names, you're doing it wrong. The purpose of the story is to facilitate communication between developers and non-developers. Non-developers don't care about products.aspx; they care about a product listing. Your system does something the non-developers find value in, and that is what you're trying to capture.
Well, the stories tell you the high-level features you need to implement - what your system must do. The only way to really tell whether you've done that is, in fact, to exercise the UI. To me, BDD SpecFlow stories don't replace unit tests; rather, they're your integration tests. If you break one, you've broken the value the business gets from your software. Unit tests are implementation details your users don't care about, and they only test each piece in isolation. That can't tell you whether A and B actually work together all the time (in theory it should; in practice, interesting [read: unexpected] things happen when the two parts actually play with each other). Automated end-to-end tests can help your QA as well: if a functional area breaks, you know about it, and QA can spend their time in other areas of the application while you determine what broke the integration tests.
This is a tricky one. We've taken a hybrid approach. We do use the database (integration tests, after all, exercise the system functioning as one thing rather than as individual components), but rather than resetting configurations all the time, we use Deleporter to replace our repositories with Moqs when we need to. It seems to work OK, but there are certainly pros and cons either way. I think we're still largely figuring this out ourselves.
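For the stubbing side of that hybrid, swapping a repository for a Moq mock might look roughly like this; the IProductRepository and ProductsController types are invented for the sketch, and how the stub gets injected (Deleporter, a DI container, plain constructor injection) is a separate concern:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

// Hypothetical abstractions, shown only to illustrate stubbing a repository.
public interface IProductRepository
{
    IEnumerable<string> GetTopSellingProducts();
}

public class ProductsController
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<string> TopSelling()
    {
        return _repository.GetTopSellingProducts();
    }
}

[TestFixture]
public class ProductsControllerTests
{
    [Test]
    public void TopSelling_ReturnsWhateverTheRepositoryProvides()
    {
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetTopSellingProducts())
                  .Returns(new[] { "Anvil", "Rocket Skates" });

        var controller = new ProductsController(repository.Object);

        CollectionAssert.AreEqual(new[] { "Anvil", "Rocket Skates" }, controller.TopSelling());
    }
}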
Edit: I found this article just now describing the concept of limiting yourself to talking only about specific domains to avoid brittle scenarios.
His main point is that the minimum number of domains you can talk about are the problem domain, and the solution domain. If you're talking about anything outside those two domains then you involve too many stakeholders, you introduce too much noise, and you make your scenarios brittle.
He also mentions that an absolutely "declarative" or "imperative" model is a myth. He talks about a cognitive model called "chunking", saying that at any level of abstraction you can "chunk up" or "chunk down": you can get more explicit (how?) or more meta (what or why?). You chunk up from an imperative model by asking "what are we doing?"; you chunk down by asking "how will we do this?" So I wouldn't get too hung up on declarative vs imperative - it won't get you anywhere as far as this problem goes.
What will get you somewhere is figuring out which domains each term belongs in, possibly by identifying which stakeholder is the expert for the domain that term belongs in. Once you've identified all the domains, you can either pick related terms that are in one of the scenario's most prominent domains, or remove non-fitting statements entirely. If that isn't sufficient, you can split up, further specify, or move the scenario so it can satisfy these requirements.
BTW, he also uses the scenario of logging in on a UI, so you've got concrete guidance :)
Before Edit: (some of this still applies. The "DB or no DB" and "UI or no UI" questions are unrelated)
1 - Declarative or Imperative scenarios?
Declarative when you can, though imperative has some value (at some points in a project lifecycle).
Imperative is an easier way to think for testers and business analysts who aren't as familiar with information theory and design. It is also easier to think about earlier on in a project, before you've nailed down your problem domain and workflows. It can be useful for exploratory thinking.
Declarative is less subject to change over time. Since a GUI is the part of an application most subject to churn at a whim, this is extremely valuable. It is easier to think about once you've nailed down your problem domain and workflows, and are more focused on relational concepts. It is a more stable and more generally applicable model.
If you write test cases with a generic and declarative model, you could implement them using any combination of full app GUI automation, integration tests, or unit tests.
how do you describe things like the 'Home page' or the 'Products page'?
I'm not sure I would at the base level of features and requirements. You might make sub-features and sub-requirements that describe implementation details, like specific UI workflows. If you're describing a piece of a UI, then you should be defining a UI feature/requirement.
2 - Exercise UI or not?
Both.
I think it's extremely slow and brittle
Yes, it is. Perform every high level scenario/requirement with the UI and full DB integration, but don't exercise every single code path with end to end UI automation, and certainly not edge cases. If you do, you'll spend more time getting them working, and a lot less time actually testing your application.
You can architect your application so that you can do lower-cost integration tests, including single-piece UI-based tests. Unit tests are also valuable.
But the fewer integration tests you do, the more forehead-slapping bugs you're going to miss. It may be easier to write unit tests, and they will certainly be less brittle, but you'll be testing less of your application, by definition.
3 - Real database or not?
Both.
High level end-to-end integration tests must be done with the full system in place. This includes a real DB, running your tests with each system on a different server, etc.
The lower level you get, the more I advocate mock objects.
Unit tests should only test individual classes
Mid-level integration tests should avoid expensive, brittle, and impactful dependencies such as the file system, databases, the network, etc. Try to test the implementation of those brittle and impactful dependencies with unit tests and end-to-end tests only.
Instead of mentioning a page by name, describe what it represents, e.g.
Scenario: Customer logs in successfully
When I log in
Then I should see an overview of ACME's top selling products
You can test directly against underlying APIs or models, but the more you do this, the more you risk not catching an integration issue. One approach is to balance things with a small number of full-stack tests, and a larger number which test between two layers only.
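For a declarative step like the scenario above, a SpecFlow binding might look roughly like this; the AppDriver class is an assumption of mine, and it is exactly the place where you choose whether the step drives the full UI or a lower layer:

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    // Hypothetical application driver; it could wrap the browser, the
    // controllers/presenters, or an underlying API, which is what keeps
    // the scenario text itself declarative.
    private readonly AppDriver _app = new AppDriver();

    [When(@"I log in")]
    public void WhenILogIn()
    {
        _app.LogIn("AUser", "APassword");
    }

    [Then(@"I should see an overview of ACME's top selling products")]
    public void ThenIShouldSeeAnOverviewOfTopSellingProducts()
    {
        Assert.IsTrue(_app.TopSellingProductsVisible());
    }
}

// Minimal placeholder so the sketch is complete; a real driver would talk
// to the UI or to whichever layer you decided to test against.
public class AppDriver
{
    public void LogIn(string user, string password) { /* drive the app here */ }
    public bool TopSellingProductsVisible() { return true; }
}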

How to approach unit testing in a large project

We have a project that is starting to get large, and we need to start applying Unit Tests as we start to refactor. What is the best way to apply unit tests to a project that already exists? I'm (somewhat) used to doing it from the ground up, where I write the tests in conjunction with the first lines of code onward. I'm not sure how to start when the functionality is already in place. Should I just start writing test for each method from the repository up? Or should I start from the Controllers down?
update:
To clarify the size of the project: I'm not really sure how to describe it other than to say there are 8 controllers and about 167 files with a .cs extension, all done over about 7 developer-months.
As you seem to be aware, retrofitting testing into an existing project is not easy. Your method of writing tests as you go is the better way. Your problem is one of both process and technology - tests must be required by everyone or no one will use them.
The recommendation I've heard and agree with is that you should not attempt to wrap tests around an existing codebase all at once. You'll never finish. Start by working testing into your bugfix process- every fixed bug gets a test. This will start to work testing into your existing code over time. New code must always have tests, of course. Eventually, you'll get the coverage up to a reasonable percentage, but it will take time.
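As an illustration of "every fixed bug gets a test", the regression test written alongside a fix might look like this; the invoice class and the bug itself are made up for the example:

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorRegressionTests
{
    // Written while fixing a (hypothetical) bug report: a late fee was
    // charged even when the balance was zero. The test fails against the
    // old code, passes after the fix, and guards against regression.
    [Test]
    public void LateFee_IsZero_WhenBalanceIsZero()
    {
        var calculator = new InvoiceCalculator();
        Assert.AreEqual(0m, calculator.LateFee(balance: 0m, daysOverdue: 30));
    }
}

public class InvoiceCalculator
{
    public decimal LateFee(decimal balance, int daysOverdue)
    {
        // The fix: no fee when nothing is owed.
        if (balance == 0m || daysOverdue <= 0)
        {
            return 0m;
        }
        return balance * 0.05m;
    }
}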
One good book I've had recommended to me is Working Effectively With Legacy Code by Michael C. Feathers. The title doesn't really demonstrate it, but working testing into an existing codebase is a major subject of the book.
There are lots of approaches to fitting tests around an existing codebase. Unit tests are not necessarily the most productive way to start. If you have a large amount of code written then you might want to think about functional and integration tests before you work down to the level of unit tests. Those higher level tests will help give you broad assurance that your product continues to work while you make changes to improve the structure and retrofit unit tests.
One of the practices that non-test-first organizations use that I recommend highly in your situation is this: Have someone other than the author of the original code section write the unit tests for that section. This gets you some level of cross-training and sanity checking, and it also helps ensure that you don't preserve assumptions which will do damage to your code overall.
Other than that, I'll second the recommendation for Michael Feathers' book.
For a legacy project with a decently sized code base, unit testing everything may not be a justifiable spend of effort, due to budgetary constraints and so on. Based on my reading on this subject, I would suggest:
Every bug which has been leaked to QA, Release or Production environment is a candidate for writing unit test case(s) along with fixing the bug.
Use source control to find out which sections/files of your code base are changing more frequently than others. Bring those sections/files under unit test coverage.
New story development should have meaningful unit test cases written against it.
Keep monitoring unit test coverage to observe any downward trend in a particular area of the code base. That area needs you to zoom in and review whether the unit test coverage is losing its effectiveness.
P.S.: I have added Michael Feathers book to my reading list, thanks for suggesting it.

Unit testing database application with business logic performed in the UI

I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply, it's a fancy UI with the ability to display and change data from the database; it manages around 1,000 rental units, about 3k tenants, and all the finances.
When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the things I changed at the functional level (i.e. I run the program and work through the UI), but I can't test for every situation. That is why I want to get started with unit testing.
However, this isn't a true, three tier program with a database tier, a business tier, and a UI tier. A lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions.
What would be a good way to get started with unit testing for this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work), or is there a better way?
I would start by using some tool that would test the application through the UI. There are a number of tools that can be used to create test scripts that simulate the user clicking through the application.
I would also suggest that you start adding unit tests as you add pieces of new functionality. It is time-consuming to create complete coverage once the application is developed, but if you do it incrementally you distribute the effort.
We test database interactions by having a separate database that is used just for unit tests. That way we have a static and controllable dataset, so that requests and responses can be guaranteed. We then create C# code to simulate various scenarios. We use NUnit for this.
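The mechanics of such a dedicated test database might look roughly like this; the connection string, table, and seed data are all assumptions for the sketch:

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class TenantQueryDatabaseTests
{
    // A dedicated test database; the name and schema are made up here.
    private const string TestDb =
        "Server=(local);Database=RentalAppTests;Integrated Security=true";

    [SetUp]
    public void SeedKnownData()
    {
        // Reset the table to a known state so every test sees the same data.
        Execute("DELETE FROM Tenants;" +
                "INSERT INTO Tenants (Id, Name) VALUES (1, 'Alice'), (2, 'Bob');");
    }

    [Test]
    public void TenantCount_MatchesTheSeededData()
    {
        using (var connection = new SqlConnection(TestDb))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT COUNT(*) FROM Tenants", connection))
            {
                Assert.AreEqual(2, (int)command.ExecuteScalar());
            }
        }
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(TestDb))
        {
            connection.Open();
            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}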
I'd highly recommend reading the article Working Effectively With Legacy Code. It describes a workable strategy for what you're trying to accomplish.
One option is this -- every time a bug comes up, write a test to help you find the bug and solve the problem. Make it such that the test will pass when the bug is fixed. Then, once the bug is resolved you have a tool that'll help you detect future changes that might impact the chunk of code you just fixed. Over time your test coverage will improve, and you can run your ever-growing test suite any time you make a potentially far-reaching change.
TDD implies that you build (and run) unit tests as you go along. For what you are trying to do - add unit tests after the fact - you may consider using something like Typemock (a commercial product).
Also, you may have built a system that does not lend itself to be unit tested, and in this case some (or a lot) of refactoring may be in order.
First, I would recommend reading a good book about unit testing, like The Art Of Unit Testing. In your case, it's a little late to perform Test Driven Development on your existing code, but if you want to write your unit tests around it, then here's what I would recommend:
Isolate the code you want to test into code libraries (if they're not already in libraries).
Write out the most common Use Case scenarios and translate them to an application that uses your code libraries.
Make sure your test program works as you expect it to.
Convert your test program into unit tests using a testing framework.
Get the green light. If not, then your unit tests are faulty (assuming your code libraries work) and you should do some debugging.
Increase the code and scenario coverage of your unit tests: what happens if you pass in unexpected inputs?
Get the green light again. If the unit test fails, then it's likely that your code library does not support the extended scenario coverage, so it's refactoring time!
And for new code, I would suggest you try it using Test Driven Development.
Good luck (you'll need it!)
I'd recommend picking up the book Working Effectively with Legacy Code by Michael Feathers. This will show you many techniques for gradually increasing the test coverage in your codebase (and improving the design along the way).
Refactoring IS the better way. Even though the process is daunting you should definitely separate the presentation and business logic. You will not be able to write good unit tests against your biz logic until you make that separation. It's as simple as that.
In the refactoring process you will likely find bugs that you didn't even know existed and, by the end, be a much better programmer!
Also, once you refactor your code you'll notice that testing your DB interactions becomes much easier. You will be able to write tests that perform actions like "add new tenant", which involves creating a mock tenant object and saving "him" to the db. For your next test you would write "GetTenant" and try to get the tenant you just created back from the db into your in-memory representation, then compare the first and second tenant to make sure all fields match. Etc. etc.
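That save-then-load round trip might look something like the fragment below; the Tenant class, the TenantRepository, and the test-database connection string are all assumed names for the sake of the sketch:

using NUnit.Framework;

public class Tenant
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal MonthlyRent { get; set; }
}

[TestFixture]
public class TenantPersistenceTests
{
    // Assumed: a TenantRepository (not shown) that wraps the data access,
    // pointed at a dedicated test database rather than the production one.
    private const string TestDb =
        "Server=(local);Database=RentalAppTests;Integrated Security=true";

    [Test]
    public void AddThenGet_RoundTripsAllFields()
    {
        var repository = new TenantRepository(TestDb);
        var original = new Tenant { Id = 42, Name = "Bob", MonthlyRent = 950m };

        repository.Add(original);
        var loaded = repository.GetTenant(42);

        Assert.AreEqual(original.Name, loaded.Name);
        Assert.AreEqual(original.MonthlyRent, loaded.MonthlyRent);
    }
}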
I think it is always a good idea to separate your business logic from the UI. There are several benefits to this, including easier unit testing and expandability. You might also want to look into pattern-based programming; here is a link, http://en.wikipedia.org/wiki/Design_pattern_(computer_science), that will help you understand design patterns.
One thing you could do for now is, within your UI classes, isolate all the business logic and the different business-based functions, and then within each UI constructor or Page_Load have unit-test calls that exercise each of the business functions. For improved readability you could wrap a #region tag around the business functions.
For your long term benefit, you should study design patterns. Pick a pattern that suits your project needs and redo your project using the design pattern.
It depends on the language you are using, but in general start with a simple testing class that uses some made-up (but still realistic) data to exercise your code, and make it simulate what would happen in the app. If you are making a change in a particular part of the app, write something that verifies the current behavior before you change the code. Since you have already written the code, getting tests in place for the entire app is going to be quite a challenge, so start small. From now on, though, write the unit test first and then the code. You might also consider refactoring, but I would weigh the cost of refactoring against rewriting a little as you go, adding unit tests along the way.
I haven't tried adding tests to legacy applications, since it is a really difficult chore. If you are planning to move some of the business logic out of the UI and into a separate layer, you could add your initial unit tests there (refactoring and TDD). Doing so will give you an introduction to creating unit tests for your system. It is a lot of work, but I guess it is the best place to start. Since it is a database-driven application, I suggest you use mocking tools and DbUnit-style tools when creating your tests, to simulate the database-related issues.
There's no better way to get started unit testing than to try it - it doesn't take long, it's fun and addictive. But only if you're working on testable code.
However, if you try to learn unit testing by fixing an application like the one you've described all at once, you'll probably get frustrated and discouraged - and there's a good chance you'll just think unit testing is a waste of time.
I recommend downloading a unit testing framework, such as NUnit or XUnit.Net.
Most of these frameworks have online documentation that provides a brief introduction, like the NUnit Quick Start. Read that, then choose a simple, self-contained class that:
Has few or no dependencies on other classes - at least not on complex classes.
Has some behavior: a simple container with a bunch of properties won't really show you much about unit testing.
Try writing some tests to get good coverage on that class, then compile and run the tests.
Once you get the hang of that, start looking for opportunities to refactor your existing code, especially when adding new features or fixing bugs. When those refactorings lead to classes that meet the criteria above, write some tests for them. Once you get used to it, you can start writing the tests first.
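A first test against that kind of small, dependency-free class might look like this; the proration class and its rule are invented for the example:

using NUnit.Framework;

// A small, self-contained class with real behavior but no dependencies --
// the kind of class the advice above suggests starting with.
public class ProratedRentCalculator
{
    public decimal Prorate(decimal monthlyRent, int daysInMonth, int daysOccupied)
    {
        if (daysOccupied >= daysInMonth)
        {
            return monthlyRent;
        }
        return System.Math.Round(monthlyRent / daysInMonth * daysOccupied, 2);
    }
}

[TestFixture]
public class ProratedRentCalculatorTests
{
    [Test]
    public void FullMonth_ChargesFullRent()
    {
        Assert.AreEqual(900m, new ProratedRentCalculator().Prorate(900m, 30, 30));
    }

    [Test]
    public void PartialMonth_ChargesProportionally()
    {
        Assert.AreEqual(300m, new ProratedRentCalculator().Prorate(900m, 30, 10));
    }

    [Test]
    public void ZeroDays_ChargesNothing()
    {
        Assert.AreEqual(0m, new ProratedRentCalculator().Prorate(900m, 30, 0));
    }
}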

We have migrated VB6 code to C# in .net

The code was migrated using a third-party tool. Whatever the tool couldn't do was done by the .NET developers, so that all compile issues were fixed. My question is: for such migration activities, do we not bother running unit tests for the functions?
Secondly, could anyone suggest whether we should use some tool in VSTS 10 to create a UML model of this code to minimize the risk of issues that the client might find? How cumbersome is that?
Are there any other suggestions for how quality migrated code can be delivered, in light of the fact that the functionality of the original VB6 application is unknown to us?
for such migration activities, do we not bother running unit tests for the functions.
I wouldn't trust freshly translated code (mechanical or otherwise) at all. Absolutely it needs testing.
the functionality of the original VB6 application is unknown to us.
That will make regression testing quite... challenging. If you don't know how it is meant to behave, how do you know when you've finished it?
Of course, you could decide not to unit test the translated code, then you won't know how the new code works either - not sure that "unknown = unknown" counts as a "pass", though.
In my experience, the vast majority of applications provide a great deal of "unknown" functionality. After all, the reason we write software is to help us manage information in ways that immeasurably exceed our abilities as mere mortals. Over time, the size and complexity of our software grows, and grows, and grows until it contains a vast amount of "unknown" functionality. The unknown functionality was probably known and verified as "correct" at one time, and it was captured in detail by the source code. However, as time passes no one fully remembers/knows what all the functionality is or even why it is "correct". The full functionality is only "remembered/known" by the source code; teams "test what they change" and the rest is assumed correct unless a problem shows up. This is particularly true of systems that have been extended and changed by many people over many years. Of course this creates risk, and we can do better - processes like TDD and tools to automate unit testing are helping - but for many older systems a lack of system understanding and incomplete testing are facts of life. The technical idealist in me does not like this, but the business realist in me accepts it.
All that said, this presents a major problem for migration teams. In theory these teams are "changing everything". In a VB6-to-.NET migration, "test what we changed" means test it all. Ouch. Also, the functional requirement for a migration is often "just make it do what it does now, but on the new platform" - not very useful when people do not know/remember everything the system does, let alone how to verify that it does it correctly. I am working with several customers that have huge VB6 apps containing hundreds of thousands of LOC organized into hundreds of forms and classes and several thousand methods, properties, and event handlers. I am sure these apps contain tens of thousands of function points. I like to ask migration teams how long it would take them to find the error if I went into the VB6 and "broke" one little thing somewhere. I rarely get an answer...
This is why I advocate using a tool-assisted rewrite methodology. One of the most critical inputs to this process is the production-tested source code. We assume this code is "correct" since you or your customers are running their business on it. The source code is an extremely detailed, formal, and complete answer to the question: what does the system do? In our approach, the migration team iteratively customizes, calibrates, and verifies the automatic, systematic translation and re-engineering of the VB6 source to a complete .NET source. We translate, test, tune, and repeat; each time improving the quality of the translation in terms of functional correctness and conformance to .NET coding standards. Verifying and refining what the tool does is central to the methodology.
In order to verify code quality, we use code reviews and "side-by-side" testing. Code reviews are done by inspecting the .NET code by eye and with other tools such as the .NET compiler, FxCop, NDepend, etc. We also do a lot of comparing of successive generations of the translated code using a product like BeyondCompare, to verify that each translation tuning change has the desired effect and no undesired side effects. Side-by-side testing is just what it sounds like: the general idea is to run the legacy and .NET apps in side-by-side test environments and make sure their results and behaviors match. There are at least a couple of challenges here:
what do you do when you "run the app"; and
how do you make sure the results and behaviors match?
The first question is typically answered in terms of test data, use cases and automated unit tests; the second question is answered in terms of looking at the application UI, and the results (data, web pages, reports) from both systems and comparing (aka approval-based testing). Of course testing tools can go a long way to increase the efficiency. A large-scale migration is a very good time to have a discussion about starting to use testing tools.
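At its simplest, the comparison step of side-by-side testing can be automated as an approval-style assertion over the two systems' outputs. A rough sketch, with the report paths invented for the example:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class SideBySideReportTests
{
    // Placeholder paths: each system writes its monthly report here as
    // part of the side-by-side test run.
    private const string LegacyOutput = @"C:\migration\legacy\monthly-report.csv";
    private const string DotNetOutput = @"C:\migration\dotnet\monthly-report.csv";

    [Test]
    public void MonthlyReport_MatchesLegacyOutput()
    {
        var expected = File.ReadAllLines(LegacyOutput);
        var actual = File.ReadAllLines(DotNetOutput);

        // A line-by-line comparison makes the first divergence easy to spot.
        CollectionAssert.AreEqual(expected, actual);
    }
}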
If you are planning to migrate a large complex codebase, you need to plan to be very smart about testing. If done properly, the tool-assisted approach delivers production ready code very efficiently, and this will free up resources to produce QC artifacts and improve QC processes that will endure long after the migration.
Disclaimer: I work for Great Migrations.
From the tone of your question it sounds like you know the answer! I would say anything other than a complete set of regression tests would be a recipe for disaster! Ideally, you would want to run the same set of tests against both the old and new versions, although it sounds like you might not be able to do that...
My honest answer - make sure you've got plenty of support/maintenance developers ready to work round the clock fixing support issues!
