Is it possible to use the [TestMethod] attribute outside of the test project? - C#

I feel that test methods should be placed right under the methods they are supposed to test. But in the tutorials I have found so far, they are only placed in [TestClass]es inside Unit Test Projects. Why is that necessary?

Why would you want to use [TestMethod] outside the Unit Test project? The idea of [TestMethod] is to mark a method as a unit test to be run by the test framework.
Normal best practice is to have your Unit Tests in a separate project. I believe it was Roy Osherove who recommended that you set up your unit tests like this:
Each project has a unit test project called YourProjectName.Tests (can further be broken into YourProjectName.UnitTests and YourProjectName.IntegrationTests if desired)
Each class or unit of work to be tested should have its own file in the unit test project named something like YourClassNameUnitTests
Each method or unit of work to be tested needs to be labelled with [TestMethod] or similar and you should use descriptive names like public void MethodName_ScenarioUnderTest_ExpectedBehaviour()
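As a minimal illustration of the last point, here is a hedged MSTest sketch (CustomerService and its Add method are hypothetical stand-ins for your own code):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, included only so the sketch compiles.
public class CustomerService
{
    public void Add(object customer)
    {
        if (customer == null) throw new ArgumentNullException(nameof(customer));
    }
}

[TestClass]
public class CustomerServiceUnitTests
{
    [TestMethod]
    public void Add_CustomerIsNull_ThrowsArgumentNullException()
    {
        var service = new CustomerService();

        // Assert.ThrowsException is available in MSTest v2; older MSTest
        // versions would use [ExpectedException] instead.
        Assert.ThrowsException<ArgumentNullException>(() => service.Add(null));
    }
}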
To specifically answer your question, if you have [TestMethod] under the method itself you will make things very difficult to manage because:
When you have hundreds of tests, you will have to look all over the place to find them
Your tests will get mixed up in your production code instead of being kept separate (when they're in a separate project, you can release your production code without a ton of unit tests in it)
Someone who comes along after you to maintain the tests will much appreciate being able to look at one file for a class and see all the tests, instead of having to scroll through a production class full of methods > test methods > more methods > more test methods.
This also makes the unit tests very hard to maintain. If you ever need to move unit tests for any reason, imagine how difficult it will be if they aren't in one file. If you do it the way you describe, you will have to go through the tests one by one, cutting and pasting, because you can't just select a bunch at once.
Hope that helps.

Related

How do I distinguish between Unit Tests and Integration Tests inside a test class?

My question is similar to this one: Junit: splitting integration test and Unit tests. However, my question regards NUnit instead of JUnit. What is the best way to distinguish between Unit Tests and Integration Tests inside a test class? I was hoping to be able to do something like this:
[TestFixture]
public class MyFixture
{
    [IntegrationTest]
    [Test]
    public void MyTest1()
    {
    }

    [UnitTest]
    [Test]
    public void MyTest2()
    {
    }
}
Is there a way to do this with NUnit? Is there a better way to do this?
Personally I've found it better to keep them in separate assemblies. You can use a convention, such as name.Integration.Tests and name.Tests (or whatever your team prefers).
Either assemblies or attributes work fine for CI servers like TeamCity. The pain with the attribute approach tends to show up in IDE test runners. I want to be able to quickly run only my unit tests. With separate assemblies, it's easy - select the appropriate test project and run tests.
The Category Attribute might help you do this.
https://github.com/nunit/docs/wiki/Category-Attribute
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SuccessTests
    {
        [Test]
        [Category("Unit")]
        public void VeryLongTest()
        { /* ... */ }
    }
}
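If you categorize tests this way, most runners can then filter on the category. As a hedged example, the NUnit 3 console runner accepts a --where filter along these lines (check your runner's documentation for the exact syntax):

nunit3-console MyProject.Tests.dll --where "cat == Unit"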
This answer shares some details with a few other answers, but I'd like to put the question in a slightly different perspective.
The design of TestFixtures is such that every test gets the same setup. To use TestFixtures correctly, you should divide your tests in such a way that all the tests with the same setup end up in the same test class. This is how almost every xunit framework is designed to be used and you always get better results when you use software as it is designed to be used.
Since integration and unit tests are not likely to share the same setup, this would naturally lead to putting them in separate classes. By doing that, you can group all integration tests under a namespace that makes them easy to run independently.
Even better, as another answer suggests, put them in a separate assembly. This works much better with most CI builds, since the failure of an integration test can be more easily distinguished from the failure of a unit test. Also, use of a separate assembly eliminates all the complication of using categories or special attributes.
Do not have them in the same class; either split them into folders within your test assembly or split them into two separate test assemblies.
In the long run this will be far easier to manage especially if you use tools like NCrunch.

Effective transition from unit testing to integration testing

I'm currently investigating how we should perform our testing in an upcoming project. In order to find bugs early in the development process, the developers will write unit tests before the actual code (TDDish). The unit tests will focus, as they should, on the unit (a method in this case) in isolation so dependencies will be mocked etc etc.
Now, I also would like to test these units when they interact with other units, and I was thinking that there should be an effective best practice to do this since the unit tests have already been written. My thought is that the unit tests will be reused but the mocked objects will be removed and replaced with real ones. The different ideas I have right now are:
Use a global flag in each test class that decides if mock objects should be used or not. This approach will require several if statements
Use a factory class that either creates an "instanceWithMocks" or an "instanceWithoutMocks". This approach might be troublesome for new developers to use and requires some extra classes
Separate the integration tests from the unit tests in different classes. This will however require a lot of redundant code and maintaining the test cases will be twice the work
The way I see it, all of these approaches have pros and cons. Which of these would be preferred and why? And is there a better way to transition effectively from unit testing to integration testing? Or is this usually done in some other way?
I would go for the third option
Separate the integration tests from the unit tests in different
classes. This will however require a lot of redundant code and
maintaining the test cases will be twice the work
This is because unit tests and integration tests have different purposes. A unit test shows that an individual piece of functionality works in isolation. An integration test shows that different pieces of functionality still work when they interact with each other.
So for a unit test you want to mock things so that you are only testing the one piece of functionality.
For an integration test mock as little as possible.
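As a hedged sketch of that difference, using NUnit and Moq (IOrderRepository, OrderService and SqlOrderRepository are hypothetical names invented for the example):

using Moq;
using NUnit.Framework;

// Hypothetical types, just to make the sketch concrete.
public interface IOrderRepository
{
    decimal GetTotal(int orderId);
}

public class OrderService
{
    private readonly IOrderRepository _repo;
    public OrderService(IOrderRepository repo) { _repo = repo; }

    // Adds 20% tax on top of the repository's total.
    public decimal GetTotalWithTax(int orderId) => _repo.GetTotal(orderId) * 1.2m;
}

// Hypothetical real implementation that would hit a database in practice.
public class SqlOrderRepository : IOrderRepository
{
    public decimal GetTotal(int orderId) => 100m; // imagine a real DB call here
}

[TestFixture]
public class OrderServiceTests
{
    [Test] // Unit test: the repository is mocked, only the tax logic is exercised.
    public void GetTotalWithTax_AddsTwentyPercent_UsingMockedRepository()
    {
        var repo = new Mock<IOrderRepository>();
        repo.Setup(r => r.GetTotal(1)).Returns(100m);

        var service = new OrderService(repo.Object);

        Assert.AreEqual(120m, service.GetTotalWithTax(1));
    }

    [Test, Category("Integration")] // Integration test: the real repository is used.
    public void GetTotalWithTax_AddsTwentyPercent_UsingRealRepository()
    {
        var service = new OrderService(new SqlOrderRepository());

        Assert.AreEqual(120m, service.GetTotalWithTax(1));
    }
}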
I would have them in separate projects. What works well at my place is to have a unit test project using NUnit and Moq. This is written TDD as the code is written. The integration tests are Specflow/Selenium and the feature files are written with the help of the product owner in the planning session so we can verify that we are delivering what the owner wants.
This does create extra work in the short term but leads to fewer bugs, easier maintenance, and delivery matching requirements.
I agree with most other answers that unit testing should be separate from integration testing (option 3).
But I do not agree with your counterarguments:
[...] This (separating unit from integration testing) will however
require a lot of redundant code and maintaining the test cases will be twice the work.
Generating objects with test data can be a lot of work, but this can be refactored into test-helper classes (aka ObjectMother) that can be used from both
unit and integration testing, so there is no need for redundancy there.
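As a hedged sketch of the ObjectMother idea (Customer and the chosen defaults are hypothetical):

// Hypothetical domain class.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// ObjectMother: one place that knows how to build well-known test objects,
// shared by both unit and integration tests.
public static class CustomerMother
{
    public static Customer Typical() =>
        new Customer { Id = 1, Name = "Jane Doe", IsActive = true };

    public static Customer Inactive()
    {
        var customer = Typical();
        customer.IsActive = false;
        return customer;
    }
}

Both test projects can then call CustomerMother.Typical() instead of rebuilding the same test data twice.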
In unit tests you check the different conditions of the class under test.
For integration testing it is not necessary to re-check each of these special cases.
Instead you check that the components work together.
Example
You may have unit tests for 4 different situations where an exception is thrown.
For the integration test it is not necessary to re-test all 4 conditions;
one exception-related integration test is enough to verify that the integrated system can handle exceptions.
An IoC container like Ninject/Autofac/StructureMap may be of use to you here. The unit tests can resolve the system-under-test through the container, and it is simply a matter of registration whether you have mocks or real objects registered. Similar to your factory approach, but the IoC container is the factory. New developers would need a little training to understand, but that's the case with any complex system. The disadvantage to this is that the registration scenarios can become fairly complicated, but it's hard to say for any given system whether they'd be too complicated without trying it out. I suspect this is the reason you haven't found any answers that seem definitive.
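As a hedged sketch of that registration idea, using Microsoft.Extensions.DependencyInjection and Moq in place of the containers named above (IPaymentGateway, CheckoutService and RealPaymentGateway are hypothetical):

using Microsoft.Extensions.DependencyInjection;
using Moq;

// Hypothetical dependency and system under test.
public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

public class RealPaymentGateway : IPaymentGateway
{
    public bool Charge(decimal amount) => true; // imagine a real gateway call here
}

public class CheckoutService
{
    private readonly IPaymentGateway _gateway;
    public CheckoutService(IPaymentGateway gateway) { _gateway = gateway; }
    public bool Checkout(decimal total) => _gateway.Charge(total);
}

public static class TestContainer
{
    // Unit-test registration: the dependency is a mock.
    public static ServiceProvider WithMocks()
    {
        var gateway = new Mock<IPaymentGateway>();
        gateway.Setup(g => g.Charge(It.IsAny<decimal>())).Returns(true);

        var services = new ServiceCollection();
        services.AddSingleton(gateway.Object);
        services.AddTransient<CheckoutService>();
        return services.BuildServiceProvider();
    }

    // Integration-test registration: the real implementation is swapped in.
    public static ServiceProvider WithRealObjects()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IPaymentGateway, RealPaymentGateway>();
        services.AddTransient<CheckoutService>();
        return services.BuildServiceProvider();
    }
}

A test would then resolve the system under test with something like TestContainer.WithMocks().GetRequiredService<CheckoutService>(), and only the registration decides whether mocks or real objects are used.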
The integration tests should be different classes than your unit tests since you are testing a different behavior. The way that I think of integration tests is that they are the ones that you execute when trying to make sure that everything works together. They would be using inputs to portions of the application and making sure that the expected output is returned.
I think you are mixing up the purpose of unit testing and integration testing.
Unit testing is for testing a single class - this is the low-level API.
Integration testing is testing how classes cooperate. This is another, higher-level API.
Normally, you cannot reuse unit tests in integration testing because they represent different levels of the system view.
Using a Spring context may help with setting up the environment for integration testing.
I'm not sure reusing your unit tests with real objects instead of mocks is the right approach to implementing integration tests.
The purpose of a unit test is to verify the basic correctness of an object in isolation from the outside world. Mocks are there to ensure that isolation. If you substitute them for real implementations, you'll actually end up testing something completely different - the correctness of large portions of the same chain of objects, and you're redundantly testing it many times.
By making integration tests distinct from unit tests, you'll be able to choose the portions of your system you want to verify - generally, it's a good idea to test parts that involve configuration, I/O, interaction with third-party systems, the UI, or anything else that unit tests have a hard time covering.

Do I need duplicate test methods; 1 for unit and 1 for integration tests?

I'm new to unit testing, and it seems like most of the information I find is on the unit testing side of things. I'm getting a good grasp of this and am planning on using the MS Test Framework with Moq so I don't have to hand-roll any mocks for my unit test dependencies.
Let's say I have the following unit test method:
[TestMethod]
public void GetCustomerByIDUnitTest()
{
    // Uses Moq for the customer-fetching dependency to make sure
    // the ID I set up is the same one returned to the test in the assertion
}
Do I have to create another identical test that instead uses the actual Entity Framework and Database call to make an integration test?
[TestMethod]
public void GetCustomerByIDIntegrationTest()
{
    // Uses the actual repository interface for EF and the DB to do integration testing
}
For the purpose of this question please leave topics about TDD or BDD out; I'm simply trying to determine whether I physically need two separate tests and how to organize them. Is this a requirement when doing both unit and integration testing?
Thanks!
In my opinion, it is somewhat situational. If I am working on a small personal project, then no, I just do the unit tests.
If it is a corporate / enterprise project then I do tend to do both unit and integration tests. However, keep unit and integration tests separated. Developers should be able to run unit tests frequently and quickly. Integration tests can be run less frequently because they usually take a long time to run. Usually I just run integration tests once before a commit, whereas I run unit tests much more frequently.
As an additional note, make your test names explain what should be happening. The test name GetCustomerByIDUnitTest really doesn't tell me much. Better would be something like GetCustomerByID_ReturnsTheCorrectUser_WhenAValidIdIsPassed and, conversely, GetCustomerByID_ReturnsNull_WhenNonExistentIdIsPassed.
I tend to favor a What_Does_When naming convention, but that too is a personal preference. In general, the more explanatory the better though.
Hmm, I hope I do not fail you by mentioning things you would rather have unmentioned. But here are my 2 cents on it. One disclaimer up front: I use NUnit with RhinoMocks, so the syntax could be different; the concepts are the same though.
Yes, you need separate tests. You can debate whether you want to store the tests in the same test class and tag them with [Category("integrationtest")], so that you can easily run your unit tests without running integration tests, and the other way around. With your TDD practices (oops, I know you don't want me talking about that :)) you need your unit tests to run as fast as possible.
To look at this from a slightly different angle; you are not really duplicating your tests. Your integration tests validate the functionality, while your unit tests validate a method in isolation. So they can very well have completely different names. As long as they make sense to you (or if you develop something with a team: as long as it makes sense for your team).
I think the most important thing is that you find a way that works for you. There isn't really a right or wrong. I think it's a big plus that you are writing both unit tests and integration tests. How you organize them is kinda up to you. I had different approaches in different projects I participated in:
Project A:
1 test class for integration tests
1 test class for unit tests
That helped to create meaningful names for the test classes; they could capture the actual feature we are testing. As for the unit tests, the test class had the same name as the class we are testing.
Project B:
Mixed up integration tests with unit tests in one test class.
This worked fine as well, although we sometimes did have trouble finding an integration test. But to be honest, with ReSharper at your side, how hard can it be :).
As far as I know, you should have separate projects for unit testing and integration testing.
The suggestion of this book is to create two projects and name them like ProjectName.UnitTests and ProjectName.IntegrationTests.
Developers must be able to run each of them separately and easily.
You can find many interesting topics and videos about testing here

How to organize unit tests and do not make refactoring a nightmare?

My current way of organizing unit tests boils down to the following:
Each project has its own dedicated project with unit tests. For a project BusinessLayer, there is a BusinessLayer.UnitTests test project.
For each class I want to test, there is a separate test class in the test project placed within exactly the same folder structure and in exactly the same namespace as the class under test. For a class CustomerRepository from a namespace BusinessLayer.Repositories, there is a test class CustomerRepositoryTests in a namespace BusinessLayerUnitTests.Repositories.
Methods within each test class follow simple naming convention MethodName_Condition_ExpectedOutcome. So the class CustomerRepositoryTests that contains tests for a class CustomerRepository with a Get method defined looks like the following:
[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    public void Get_WhenX_ThenRecordIsReturned()
    {
        // ...
    }

    [Test]
    public void Get_WhenY_ThenExceptionIsThrown()
    {
        // ...
    }
}
This approach has served me quite well, because it makes locating the tests for a piece of code really simple. On the other hand, it makes refactoring the code more difficult than it should be:
When I decide to split one project into multiple smaller ones, I also need to split my test project.
When I want to change namespace of a class, I have to remember to change a namespace (and folder structure) of a test class as well.
When I change the name of a method, I have to go through all the tests and change the name there as well. Sure, I can use Search & Replace, but that is not very reliable. In the end, I still need to check the changes manually.
Is there some clever way of organizing unit tests that would still allow me to locate the tests for a specific piece of code quickly and at the same time lend itself more towards refactoring?
Alternatively, is there some, uh, perhaps Visual Studio extension, that would allow me to somehow say "hey, these tests are for that method, so when the name of the method changes, please be so kind and change the tests as well"? To be honest, I am seriously considering writing something like that myself :)
After working a lot with tests, I've come to realize that (at least for me) having all those restrictions brings a lot of problems in the long run, rather than good things. So instead of using names and conventions to determine that, we've started using code. Each project and each class can have any number of test projects and test classes. All the test code is organized based on what is being tested from a functionality perspective (or which requirement it implements, or which bug it reproduced, etc...). Then for finding the tests for a piece of code we do this:
[TestFixture]
public class MyFunctionalityTests
{
    public IEnumerable<Type> TestedClasses()
    {
        // We can find the tests for a class because the tested types
        // are referenced in this special method.
        return new[] { typeof(SomeTestedType), typeof(OtherTestedType) };
    }

    [Test]
    public void TestRequirement23423432()
    {
        // ... test code.
        // We do something similar for methods if we want to track which
        // methods are being tested (we usually don't).
        this.TestingMethod(someObject.methodBeingTested);
        // ...
    }
}
We can use tools like ReSharper's "find usages" to locate the test cases, etc... And when that's not enough, we do some magic with reflection and LINQ by loading all the test classes and running something like allTestClasses.Where(testClass => testClass.TestedClasses().FindSomeTestClasses());
You can also use the TearDown method to gather information about which methods are tested by each test method/class, and do the same.
One way to keep class and test locations in sync when moving the code:
Move the code to a uniquely named temporary namespace
Search for references to that namespace in your tests to identify the tests that need to be moved
Move the tests to the proper new location
Once all references to the temporary namespace from tests are in the right place, then move the original code to its intended target
One strength of end-to-end or behavioral tests is the tests are grouped by requirement and not code, so you avoid the problem of keeping test locations in sync with the corresponding code.
Regarding VS extensions that associate code to tests, take a look at Visual Studio's Test Impact. It runs the tests under a profiler and creates a compact database that maps IL sequence points to unit tests. So in other words, when you change the code Visual Studio knows which tests need to be run.
One unit test project per project is the way to go. We tried a single mega unit test project, but this increased the compile time.
To help you refactor, use a product like ReSharper or CodeRush.
Is there some clever way of organizing unit tests that would still
allow me to locate the tests for a specific piece of code quickly
ReSharper has some good shortcuts that allow you to search for files or code.
As you said, for the class CustomerRepository there is a test class CustomerRepositoryTests.
The R# shortcut shows an input box for what you want to find; in your case you can just input CRT and it will show you all the files whose names contain a capital C, then R, then T.
It also allows you to search by wildcards, such as CR*, which will show you the list of files CustomerRepository and CustomerRepositoryTests.

Writing standards for unit testing

I plan to introduce a set of standards for writing unit tests into my team. But what to include?
These two posts (Unit test naming best practices and Best practices for file system dependencies in unit/integration tests) have given me some food for thought already.
Other domains that should be covered in my standards are how test classes are set up and how to organize them. For example, if you have a class called OrderLineProcessor there should be a test class called OrderLineProcessorTest. If there's a method called Process() on that class then there should be a test called ProcessTest (maybe more to test different states).
Any other things to include?
Does your company have standards for unit testing?
EDIT: I'm using Visual Studio Team System 2008 and I develop in C#.Net
Have a look at Michael Feathers on what a unit test is (or what makes unit tests bad unit tests)
Have a look at the idea of "Arrange, Act, Assert", i.e. the idea that a test does only three things, in a fixed order:
Arrange any input data and processing classes needed for the test
Perform the action under test
Test the results with one or more asserts. Yes, it can be more than one assert, so long as they all work to test the action that was performed.
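A hedged MSTest sketch of that three-part structure (Calculator is a hypothetical class under test):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, included only so the sketch compiles.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsTheirSum()
    {
        // Arrange: set up the input data and the object under test.
        var calculator = new Calculator();

        // Act: perform the single action being tested.
        int result = calculator.Add(2, 3);

        // Assert: verify the outcome.
        Assert.AreEqual(5, result);
    }
}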
Have a look at Behaviour Driven Development for a way to align test cases with requirements.
Also, my opinion of standard documents today is that you shouldn't write them unless you have to - there are lots of resources available already written. Link to them rather than rehashing their content. Provide a reading list for developers who want to know more.
You should probably take a look at the "Pragmatic Unit Testing" series. This is the C# version but there is another for Java.
With respect to your spec, I would not go overboard. You have a very good start there - the naming conventions are very important. We also require that the directory structure match the original project. Coverage also needs to extend to boundary cases and illegal values (checking for exceptions). This is obvious but your spec is the place to write it down for that argument that you'll inevitably have in the future with the guy who doesn't want to test for someone passing an illegal value. But don't make the spec more than a few pages or no one will use it for a task that is so context-dependent.
Update: I disagree with Mr. Potato Head about only one assert per Unit Test. It sounds quite fine in theory but, in practice, it leads to either loads of mostly redundant tests or people doing tons of work in setup and tear-down that itself should be tested.
I follow the BDD style of TDD. See:
http://blog.daveastels.com/files/BDD_Intro.pdf
http://dannorth.net/introducing-bdd
http://behaviour-driven.org/Introduction
In short this means that
The tests are not thought of as "tests", but as specifications of the system's behaviour (hereafter called "specs"). The intention of the specs is not to verify that the system works under every circumstance. Their intention is to specify the behaviour and to drive the design of the system.
The spec method names are written as full English sentences. For example the specs for a ball could include "the ball is round" and "when the ball hits a floor then it bounces".
There is no forced 1:1 relation between the production classes and the spec classes (and generating a test method for every production method would be insane). Instead there is a 1:1 relation between the behaviour of the system and the specs.
Some time ago I wrote a TDD tutorial (where you begin writing a Tetris game using the provided tests) which shows this style of writing tests as specs. You can download it from http://www.orfjackal.net/tdd-tutorial/tdd-tutorial_2008-09-04.zip The instructions on how to do TDD/BDD are still missing from that tutorial, but the example code is ready, so you can see how the tests are organized and write code that passes them.
You will notice that in this tutorial the production classes are named such as Board, Block, Piece and Tetrominoe which are centered around the concepts of a Tetris game. But the test classes are centered around the behaviour of the Tetris game: FallingBlocksTest, RotatingPiecesOfBlocksTest, RotatingTetrominoesTest, FallingPiecesTest, MovingAFallingPieceTest, RotatingAFallingPieceTest etc.
Try to use as few assert statements per test method as possible. This makes sure that the purpose of the test is well-defined.
I know this will be controversial, but don't test the compiler - time spent testing Java Bean accessors and mutators is better spent writing other tests.
Try, where possible, to use TDD instead of writing your tests after your code.
I've found that most testing conventions can be enforced through the use of a standard base class for all your tests, forcing the tester to override methods so that they all have the same name.
I also advocate the Arrange-Act-Assert (AAA) style of testing as you can then generate fairly useful documentation from your tests. It also forces you to consider what behaviour you are expecting due to the naming style.
Another item you can put in your standards is to keep your unit tests small - that is, the actual test methods themselves. Unless you are doing a full integration test, there usually is no need for large unit tests, say more than 100 lines. I'll give you that much in case you have a lot of setup to get to your one test. However, if you do, you should maybe refactor it.
People also talk about refactoring their code; make sure they realize that unit tests are code too. So refactor, refactor, refactor.
I find the biggest problem in the uses I have seen is that people do not tend to recognize that you want to keep your unit tests light and agile. You don't want a monolithic beast for your tests, after all. With that in mind, if you have a method you are trying to test, you should not test every possible path in one unit test. You should have multiple unit tests to account for every possible path through the method.
Yes, if you are doing your unit tests correctly you should on average have more lines of unit test code than application code. While this sounds like a lot of work, it will save you a lot of time in the end when the inevitable business requirement change comes along.
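As a hedged illustration of one-path-per-test (PriceCalculator and its discount rule are hypothetical):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class with one branch, i.e. two paths to cover.
public class PriceCalculator
{
    public decimal Discount(decimal price)
    {
        return price > 100m ? price * 0.9m : price;
    }
}

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod] // Path 1: the discount branch.
    public void Discount_PriceAbove100_Takes10PercentOff()
    {
        Assert.AreEqual(180m, new PriceCalculator().Discount(200m));
    }

    [TestMethod] // Path 2: the no-discount branch.
    public void Discount_PriceAtOrBelow100_IsUnchanged()
    {
        Assert.AreEqual(50m, new PriceCalculator().Discount(50m));
    }
}

Each test stays a few lines long and pins down exactly one path, so a failure points straight at the broken branch.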
Users of full-featured IDE's will find that "some of them" have quite detailed support for creating tests in a specific pattern. Given this class:
public class MyService {
    public String method1() {
        return "";
    }

    public void method2() {
    }

    public void method3HasAlongName() {
    }
}
When I press Ctrl+Shift+T in IntelliJ IDEA, I get this test class after answering one dialog box:
public class MyServiceTest {
    @Test
    public void testMethod1() {
        // Add your code here
    }

    @Test
    public void testMethod2() {
        // Add your code here
    }

    @Test
    public void testMethod3HasAlongName() {
        // Add your code here
    }
}
So you may want to take a close look at tool support before writing your standards.
I use nearly plain English for my unit test function names. It helps to define exactly what they do:
TEST( TestThatVariableFooDoesNotOverflowWhenCalledRecursively )
{
    /* do test */
}
I use C++ but the naming convention can be used anywhere.
Make sure to include what is not a unit test. See: What not to test when it comes to Unit Testing?
Include a guideline so integration tests are clearly identified and can be run separately from unit tests. This is important because you can end up with a set of "unit" tests that are really slow if unit tests are mixed with other types of tests.
Check this for more info on it: How can I improve my JUnit tests ... especially the second update.
If you are using tools from the JUnit family (OCUnit, shUnit, ...), the names of tests already follow some rules.
For my tests, I use custom doxygen tags in order to gather their documentation in a specific page.
