I am in the process of retrofitting unit tests for an ASP.NET solution written in VB.NET and C#.
The unit tests need to verify the current functionality and act as a check for future breaking changes.
The solution comprises:
1 MVC web project
written in VB.NET (don't ask, it's a legacy thing)
10 other supporting projects, each containing logically grouped functionality
written in C#; each project contains repositories and a DAL
All the classes are tightly coupled, as there is no inversion of control (IoC) implemented anywhere, yet.
Currently, to test a controller there is the following stack:
controller
repository
dal
logging
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project per project under test, to cover the functionality of that DLL only?
Second question: do I need to implement IoC to be able to use Moq?
Third question: is it even possible to refactor IoC into a huge solution like this?
Fourth question: what other options are available to get this done ASAP?
I am in the process of retrofitting unit tests for an ASP.NET solution written in VB.NET and C#. The unit tests need to verify the current functionality and act as a check for future breaking changes.
When working with a large code base that doesn't have unit tests and hasn't been written with testing in mind, there is a good chance that in order to write a useful set of unit tests you will have to modify the code. In other words, you will be triggering exactly the kind of change that you're planning on writing the unit tests to guard against. This is obviously risky, but it may not be any riskier than what you're already doing on a day-to-day basis.
There are a number of approaches that you could take (and there's a good chance that this question will be closed as too broad). One approach is to create a good set of integration tests to ensure that the core functionality is working. These tests won't be as fast to run as unit tests, but they are further decoupled from the legacy code base. This will give you a good safety net for any changes that you need to make as part of introducing unit testing.
If you have an appropriate version of Visual Studio, then you may also be able to use shims (or, if you have the funds, Typemock may be an option) to isolate elements of your application when writing your initial tests. So you could, for example, create shims of your DAL to isolate the rest of your code from the database.
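To illustrate, here is a minimal sketch of what a Microsoft Fakes shim could look like, assuming a hypothetical DAL class CustomerDal with a GetById(int) method and a Fakes assembly generated for that project (the Shim* names follow the convention Fakes generates, but every type and member below is illustrative):

using (ShimsContext.Create()) {
    // Detour every call to CustomerDal.GetById(int) so the test never touches the database.
    ShimCustomerDal.AllInstances.GetByIdInt32 =
        (dal, id) => new Customer { Id = id, Name = "Stubbed customer" };

    var controller = new CustomerController();   // internally news up a CustomerDal
    var result = controller.Details(42);

    Assert.IsNotNull(result);
}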
First question: to unit test this correctly, would I set up one test project and run all tests from it, or should I set up one test project per project under test, to cover the functionality of that DLL only?
Personally, I prefer to think of each assembly as a testable unit, so I tend to create at least one test project for each assembly containing production code. Whether or not that makes sense, though, depends a bit on what's contained in each of the assemblies... I'd also tend to have at least one test project for integration tests of the top-level project.
Second question: do I need to implement IoC to be able to use Moq?
The short answer is no, but it depends on what your classes do. If you want to test using Moq, then it's certainly easier to do so if your classes support dependency injection, although you don't need to use an IoC container to achieve this. Hand-rolled injection, either through constructors (as below) or through properties, can form a bridge that allows test stubs to be injected.
public SomeConstructor(ISomeDependency someDependency = null) {
    // Production callers pass nothing and get the real implementation...
    if (null == someDependency) {
        someDependency = new SomeDependency();
    }
    // ...while tests can supply a stub or a mock instead.
    _someDependency = someDependency;
}
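As a rough illustration of how that bridge is used from a test (the class SomeClass owning the constructor above, and the DoWork() member on ISomeDependency, are made-up names), Moq can supply the stub directly:

// Arrange: create a Moq stub and inject it through the optional constructor parameter.
var someDependency = new Mock<ISomeDependency>();
someDependency.Setup(d => d.DoWork()).Returns(42);

var sut = new SomeClass(someDependency.Object);

// Act + Assert: the class under test uses the stub instead of the real SomeDependency.
Assert.AreEqual(42, sut.MethodThatUsesTheDependency());
someDependency.Verify(d => d.DoWork(), Times.Once());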
Third question: is it even possible to refactor IoC into a huge solution like this?
Yes, it's possible. The bigger question is whether it's worth it. You appear to be suggesting a big-bang approach to the migration. If you have a team of developers that don't have much experience in this area, that seems awfully risky. A safer approach might be to target a specific area of the application and migrate that section. If your assemblies are discrete, then they should form fairly natural split points in your application. Learn what works and what doesn't, along with what benefits and unexpected pain you're feeling. Use that to inform your decision about how and when to migrate the rest of the code.
Fourth question: what other options are available to get this done ASAP?
As I've said above, I'm not sure that ASAP is really the right approach to take. Working towards unit testing can be done as a slow migration, adding tests as you actually change the code due to business requirements. This also helps to ensure that testers are allocated to catch any errors you introduce as part of the refactoring needed to support the testing.
Related
Every time I deal with writing tests in .NET (C#), I read that you can test DLLs only. But my app doesn't have any DLLs, and I want to test some controllers and behaviours too. How is this possible without creating a DLL project? Is it possible at all?
Hints are welcome.
Your unit test project(s) can still reference application projects like any other class library. It may be considered less ideal, but there's nothing preventing it from working. As the logic of your system grows, even a little bit, you'll certainly want to consider moving the business logic into Class Library projects and keeping the application layer as thin as possible.
I read that you can test DLLs only.
This was an oversimplification. What they probably meant was that you can test discrete functionality only. Regardless of what kind of project is hosting that functionality, the functionality itself has to be well defined, with separated concerns, and individually testable. If that's causing a problem in your design, then you have more work to do in order to unit test your code.
But to get started, simply create your unit test project and reference your other project. Then start writing tests for your individual units of functionality. Each test should be testing only one discrete thing, and should consist of the simple steps of:
Arrange
Act
Assert
Unit tests shouldn't require any additional setup, nor should they produce any side effects. This is where the "discrete" part comes in: each test should stand on its own and not depend on others.
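A minimal sketch of what such a test might look like with MSTest, following the Arrange/Act/Assert steps (the PriceCalculator class and its behaviour are made up for illustration):

[TestClass]
public class PriceCalculatorTests {
    [TestMethod]
    public void Total_AppliesTenPercentDiscount() {
        // Arrange: build the single unit under test and its inputs
        var calculator = new PriceCalculator(discountRate: 0.10m);

        // Act: exercise the one behaviour being verified
        var total = calculator.Total(100m);

        // Assert: check the expected outcome, nothing more
        Assert.AreEqual(90m, total);
    }
}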
I am developing a REST API with ServiceStack. I'm doing a TDD approach, and write tests with each new service I implement.
My DAL is pretty thin, with my repositories consisting of only CRUD operations. Moreover, the repos inherit from a generic C# repository, with 12 out of my 14 not needing any customization.
For each service I build a test bed and go through all the possible error/success scenarios that can occur.
Is it correct in this scenario to only produce tests for repositories? In what situations should I consider testing other system components?
Thanks
TDD involves using many tests to keep your code clean and functional. It sounds to me like so far you have implemented unit tests. Unit tests are tests that only look at a single class and attempt to determine whether the class is working correctly. Nothing more, nothing less.
If you want to test a little more broadly, you can implement integration tests. Integration tests tend to test a full system to see if everything together is capable of doing what it is supposed to.
Unit tests are all supposed to be very fast, so that you can run them all every few minutes without much slowdown.
Integration tests are allowed to take longer, because you might only run them every few hours to see if everything is still integrating well.
A combination of both types of tests helps drive a TDD approach.
I have run into an issue regarding unit testing of data providers. What is the best way to implement that?
One solution would be to insert something into the database and read it back to make sure it's as expected, and then remove it again. But this requires more coding.
The other solution is to have an extra database which I could test against. This also requires a lot of work to implement.
What is the correct way to implement it?
As others have pointed out, what you are describing is called integration testing. Integration testing is something you should definitely do but it's good to understand the differences.
A unit test tests an individual piece of code without any dependencies. Dependencies are things like a database, file system or a web service but also other internal classes that are complex and require their own unit tests. Unit tests are made to run very fast. Especially when performing test driven development (TDD) you want your unit tests to execute in the order of milliseconds.
Integration tests are used to test how different components work together. If you have made sure through unit tests that your business logic is correct, your integration tests only have to make sure that all connections between different elements are in place. Integration tests can take a long time but you have fewer of them than unit tests.
I wrote a blog post on this some time ago that explains the differences and shows you ways to remove external dependencies while unit testing: Unit Testing, hell or heaven?.
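As a small illustration of removing such a dependency (all type names here are hypothetical), a class that normally talks to the database can be handed a mocked repository instead, so the unit test stays fast and isolated:

// The repository interface is mocked, so no database is touched during the test.
var repository = new Mock<IPersonRepository>();
repository.Setup(r => r.GetAll()).Returns(new List<Person> { new Person(), new Person() });

var service = new PersonService(repository.Object);

Assert.AreEqual(2, service.CountPeople());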
Now, regarding your question: when running integration tests against a database you have a couple of options:
Use delta testing. This means that at the beginning of your test you record the current state of your database. For example, you record that there are currently 3 people in the people table. Then in your test you add one person and verify that there are now 4 people in the database. This can be used quite effectively in simple scenarios. However, when your project grows more complex this is probably not the way to go.
Use a transaction around your tests. This is an easy way to make sure that your tests don't leave any data behind. Just start a new transaction (using the TransactionScope class in the .NET Framework) at the beginning of the test. As long as you don't complete the transaction, all changes will be rolled back automatically (see the sketch after this list).
Use a new database for each test. Using LocalDB support in Visual Studio 2012 and higher, this can be done relatively quickly.
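Here is a minimal sketch of the transaction option, assuming a hypothetical PeopleRepository and an MSTest-style test; because scope.Complete() is never called, everything the test inserts is rolled back when the scope is disposed:

[TestMethod]
public void AddPerson_InsertsARow() {
    // TransactionScope lives in System.Transactions; connectionString is illustrative test configuration.
    using (var scope = new TransactionScope()) {
        var repository = new PeopleRepository(connectionString);

        repository.Add(new Person { Name = "Test Person" });

        Assert.IsNotNull(repository.FindByName("Test Person"));
        // No scope.Complete() here, so the insert is rolled back automatically.
    }
}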
I've used the transaction scope approach a couple of times before and it worked quite well. One thing that's very important when writing integration tests like this is to make sure that your tests don't depend upon each other. They need to be able to run in whatever order the test runner decides on.
You should also make sure to avoid any 'magic numbers'. For example, maybe you know that your database contains 3 people, so in your test you add one person and then assert that there are 4 in the database. For readers of your tests (which will be you in a couple of days, weeks or months) this is very hard to understand. Make sure that your tests are self-explanatory and that you don't depend on external state that isn't obvious from the test.
You cannot unit test external dependencies like database connections. There is a good post here about why this is the case. In short: external dependencies should be tested, but that's integration tests, not unit tests.
Normally you would write integration tests when you call your database from code. If you want to write unit tests, you should have a look at mocking frameworks.
I am looking for ideas to lead me to implement component testing for my application. Sure, I use unit testing to test my single methods by utilizing TestMethods in a separate project, but at this point I am more interested in testing at a higher level. Say that I have a class for caching and I wrote unit tests for each and every method. They all contain their own instance of the class, and it works fine when I run the tests: each one instantiates an object of that class and does one thing on it. But this doesn't cover the real-life scenario in which the method is called by other methods and so on. I want to be able to test the entire caching component. How should I do it?
It sounds like you are talking about integration testing. Unit testing, as you say, does a great job of testing classes and methods in isolation, but integration testing checks that several components work together as expected.
One way to do this is to pick a top (or high) level object, create it with all of its dependencies as "real" objects as well and test that the public methods all produce the expected result.
In most cases you'll probably have to substitute stubs of the lowest level classes, like DB or file access classes and instrument them for the tests, but most objects would be the real thing.
Of course, like most testing efforts, all of this is achieved much more easily if your classes have been designed with some sort of dependency injection and with attention to good design patterns like separation of concerns.
All of this can be done using the same unit testing tools you've been using.
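A rough sketch of that idea, with made-up names: the component is wired with its real collaborators, except the lowest-level data access, which is replaced by an instrumented in-memory stub:

public class InMemoryOrderStore : IOrderStore {
    public List<Order> Saved = new List<Order>();
    public void Save(Order order) { Saved.Add(order); }
}

[TestMethod]
public void PlacingAnOrder_RunsThroughTheWholeComponent() {
    // Real validator and pricing engine, stubbed storage.
    var store = new InMemoryOrderStore();
    var component = new OrderComponent(new OrderValidator(), new PricingEngine(), store);

    component.PlaceOrder(new Order { Quantity = 2 });

    Assert.AreEqual(1, store.Saved.Count);
}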
I would download the NUnit framework
http://www.nunit.org/
It's free and simple
I'm currently investigating how we should perform our testing in an upcoming project. In order to find bugs early in the development process, the developers will write unit tests before the actual code (TDDish). The unit tests will focus, as they should, on the unit (a method in this case) in isolation so dependencies will be mocked etc etc.
Now, I also would like to test these units when they interact with other units, and I was thinking that there should be an effective best practice to do this since the unit tests have already been written. My thought is that the unit tests will be reused but the mocked objects will be removed and replaced with real ones. The different ideas I have right now are:
Use a global flag in each test class that decides if mock objects should be used or not. This approach will require several if statements
Use a factory class that either creates a "instanceWithMocks" or "instanceWithoutMocks". This approach might be troublesome for the new developers to use and requires some extra classes
Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work
The way I see it all of these approaches have pros and cons. Which of these would be preferred and why? And is there a better way to effective transition from unit testing to integration testing? Or is this usually done in some other way?
I would go for the third option:
Separate the integration tests from the unit tests in different
classes. This will however require a lot of redundant code and
maintaining the test cases will be twice the work
This is because unit tests and integration tests have different purposes. A unit test shows that an individual piece of functionality works in isolation. An integration test shows that different pieces of functionality still work when they interact with each other.
So for a unit test you want to mock things so that you are only testing the one piece of functionality.
For an integration test mock as little as possible.
I would have them in separate projects. What works well at my place is to have a unit test project using NUnit and Moq, written TDD-style as the code is written. The integration tests use SpecFlow/Selenium, and the feature files are written with the help of the product owner in the planning session, so we can verify that we are delivering what the owner wants.
This does create extra work in the short term but leads to fewer bugs, easier maintenance, and delivery matching requirements.
I agree with most of the other answers that unit testing should be separate from integration testing (option 3).
But I do not agree with your counterarguments:
[...] This (separating unit from integration testing) will however
require a lot of redundant code and maintaining the test cases will be twice the work.
Generating objects with test data can be a lot of work, but this can be refactored into test-helper classes (aka ObjectMother) that can be used from
both unit and integration tests, so there is no need for redundancy there (a small sketch follows the example below).
In unit tests you check different conditions of the class under test.
For integration testing it is not necessary to re-check every one of these special cases.
Instead, you check that the components work together.
Example
You may have unit tests for 4 different situations where an exception is thrown.
For the integration it is not necessary to re-test all 4 conditions.
One exception-related integration test is enough to verify that the integrated system can handle exceptions.
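For reference, a small sketch of the ObjectMother idea mentioned above (all names are illustrative); both the unit test project and the integration test project can reference this helper, so the test data setup is written only once:

public static class CustomerMother {
    // A "typical" customer that most tests can start from.
    public static Customer Typical() {
        return new Customer { Name = "Jane Doe", Country = "SE", IsActive = true };
    }

    // A named variation for tests that need a specific edge case.
    public static Customer WithoutOrders() {
        var customer = Typical();
        customer.Orders.Clear();
        return customer;
    }
}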
An IoC container like Ninject/Autofac/StructureMap may be of use to you here. The unit tests can resolve the system under test through the container, and it is simply a matter of registration whether you get mocks or real objects. This is similar to your factory approach, but the IoC container is the factory. New developers would need a little training to understand it, but that's the case with any complex system. The disadvantage is that the registration scenarios can become fairly complicated, but it's hard to say for any given system whether they'd be too complicated without trying it out. I suspect this is the reason you haven't found any answers that seem definitive.
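A hedged sketch of that registration idea using Autofac (Ninject or StructureMap would look similar); all the application types and the useMocks flag are illustrative:

var builder = new ContainerBuilder();
builder.RegisterType<OrderController>().AsSelf();

if (useMocks) {
    // Test configuration: the container hands out a Moq stub.
    var repository = new Mock<IOrderRepository>();
    builder.RegisterInstance(repository.Object).As<IOrderRepository>();
} else {
    // Integration configuration: the real repository is used.
    builder.RegisterType<OrderRepository>().As<IOrderRepository>();
}

var container = builder.Build();
var controller = container.Resolve<OrderController>();   // system under test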
The integration tests should be different classes than your unit tests since you are testing a different behavior. The way that I think of integration tests is that they are the ones that you execute when trying to make sure that everything works together. They would be using inputs to portions of the application and making sure that the expected output is returned.
I think you are mixing up the purposes of unit testing and integration testing.
Unit testing is for testing a single class - this is the low-level view.
Integration testing is testing how classes cooperate - this is a higher-level view.
Normally, you cannot reuse unit tests in integration testing because they represent different levels of the system.
Using a Spring context may help with setting up the environment for integration testing.
I'm not sure reusing your unit tests with real objects instead of mocks is the right approach to implementing integration tests.
The purpose of a unit test is to verify the basic correctness of an object in isolation from the outside world. Mocks are there to ensure that isolation. If you substitute them with real implementations, you'll actually end up testing something completely different: the correctness of large portions of the same chain of objects, tested redundantly many times over.
By making integration tests distinct from unit tests, you'll be able to choose the portions of your system you want to verify. Generally, it's a good idea to test parts that involve configuration, I/O, interaction with third-party systems, the UI, or anything else that unit tests have a hard time covering.