C#: Using the Conditional attribute for tests

Is there any good way to use the Conditional attribute in the context of testing?
My thought was that if you can do this:
[Conditional("Debug")]
public void DebugMethod()
{
//...
}
then maybe you could also have some use for this:
[Conditional("Test")]
public void TestableMethod()
{
//...
}

I don't see a use when there's a better alternative: Test Projects.
Use NUnit or MSTest to achieve this functionality in a more graceful way.

I'd accept Mehrdad's answer - I just wanted to give more context on when you might use these attributes:
Things like [Conditional] are more generally used to control things like logging/tracing, or interactions with an executing debugger; where it makes sense for the calls to be in the middle of your regular code, but which you might not want in certain builds (and #if... etc is just so ugly and easy to forget).
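For illustration, here is a minimal sketch of that logging/tracing use; the TRACE_VERBOSE symbol and the Log class are invented for the example:
using System;
using System.Diagnostics;
public static class Log
{
    // If TRACE_VERBOSE is not defined for the build, the compiler removes
    // every call to this method, so the call sites cost nothing in release builds.
    [Conditional("TRACE_VERBOSE")]
    public static void Verbose(string message)
    {
        Console.WriteLine("[verbose] " + message);
    }
}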

If the test code is not part of the product it should not be in the product code base. I have seen projects trying to include unit tests in the same project as the tested objects, using #if statements to include them only in debug builds, only to regret it later.
One obvious problem would be that the application project gets a reference to the unit testing framework. Even though you may not need to ship that framework as part of your product (if you can guarantee that no code in the release build will reference it), it still smells a bit funny to me.
Let test code be test code and production code be production code, and let the production code have no clue about it.

The other problem with this is that you may want to run your unit tests on your release builds. (We certainly do).

Related

How can I sort tests to my liking?

For example, I have 3 tests and their names are:
[Test]
public void ATest()
{}
[Test]
public void BTest()
{}
[Test]
public void DTest()
{}
How can I specify that I want to run them in this order: DTest 1st, ATest 2nd and BTest 3rd?
The only solution I found is to rename them to Test1_DTest, Test2_ATest and Test3_BTest.
Any better idea?
The idea with Unit Testing is that it's irrelevant which order they are run in because tests are supposed to run independently of each other. If all the tests you have run have passed then you shouldn't care about the order. If a test fails your focus will be on fixing that test.
If the order of the tests really is important to you then, like Alex said, the only way to do it is by alphabetizing your tests. I've seen cases where people have put A, B, C etc at the start of similarly named tests, but this is a bad idea.
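For what it's worth, a minimal sketch of that naming workaround looks like this; it relies on NUnit's (undocumented) alphabetical run order within a fixture:
[TestFixture]
public class OrderedByNameTests
{
    // Alphabetical method names force the order: DTest first, then ATest, then BTest.
    [Test]
    public void Test1_DTest() { }
    [Test]
    public void Test2_ATest() { }
    [Test]
    public void Test3_BTest() { }
}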
Unfortunately that's the way to do it, by alphabetizing your test cases. Take a look at this
https://bugs.launchpad.net/nunit-3.0/+bug/740539
Quote from Charlie Poole, one of the main devs for NUnit:
"Relying on alphabetical order is a workaround that you can use but it
is not documented and supported beyond the visual order of the
display. In theory it could change at any time. In practice it won't
change until NUnit 3.0, so you're pretty safe using it as a
workaround."
Alternatively you could try TestNG, which has a nice "preserve-order" flag which you could use to ensure test case execution order.
If you use JUnit 4.11 (the latest version), I do believe there is an "alphabetical order" scheme that you can use to execute tests in alphabetical order. I'm not sure if NUnit has the same capability.
But personally, I will usually configure my build system, either Gradle or Maven, to execute certain test classes as atomic operations. That way I can combine them in the order I need.

How to organize unit tests and do not make refactoring a nightmare?

My current way of organizing unit tests boils down to the following:
Each project has its own dedicated project with unit tests. For a project BusinessLayer, there is a BusinessLayer.UnitTests test project.
For each class I want to test, there is a separate test class in the test project placed within exactly the same folder structure and in exactly the same namespace as the class under test. For a class CustomerRepository from a namespace BusinessLayer.Repositories, there is a test class CustomerRepositoryTests in a namespace BusinessLayerUnitTests.Repositories.
Methods within each test class follow simple naming convention MethodName_Condition_ExpectedOutcome. So the class CustomerRepositoryTests that contains tests for a class CustomerRepository with a Get method defined looks like the following:
[TestFixture]
public class CustomerRepositoryTests
{
[Test]
public void Get_WhenX_ThenRecordIsReturned()
{
// ...
}
[Test]
public void Get_WhenY_ThenExceptionIsThrown()
{
// ...
}
}
This approach has served me quite well, because it makes locating tests for some piece of code really simple. On the other hand, it makes refactoring more difficult than it should be:
When I decide to split one project into multiple smaller ones, I also need to split my test project.
When I want to change namespace of a class, I have to remember to change a namespace (and folder structure) of a test class as well.
When I change name of a method, I have to go through all tests and change the name there, as well. Sure, I can use Search & Replace, but that is not very reliable. In the end, I still need to check the changes manually.
Is there some clever way of organizing unit tests that would still allow me to locate tests for a specific code quickly and at the same time lend itself more towards refactoring?
Alternatively, is there some, uh, perhaps Visual Studio extension, that would allow me to somehow say "hey, these tests are for that method, so when the name of the method changes, please be so kind and change the tests as well"? To be honest, I am seriously considering writing something like that myself :)
After working a lot with tests, I've come to realize that (at least for me) having all those restrictions brings a lot of problems in the long run, rather than benefits. So instead of using names and conventions to determine that, we've started using code. Each project and each class can have any number of test projects and test classes. All the test code is organized based on what is being tested from a functionality perspective (or which requirement it implements, or which bug it reproduced, etc.). Then, to find the tests for a piece of code, we do this:
[TestFixture]
public class MyFunctionalityTests
{
public IEnumerable<Type> TestedClasses()
{
// We can find the tests for a class because the tested types are referenced in this method.
return new []{typeof(SomeTestedType), typeof(OtherTestedType)};
}
[Test]
public void TestRequirement23423432()
{
// ... test code.
this.TestingMethod(someObject.methodBeingTested); // We do something similar for methods if we want to track which methods are being tested (we usually don't)
// ...
}
}
We can use tools like ReSharper's "Find Usages" to find the test cases, etc. And when that's not enough, we do some magic with reflection and LINQ by loading all the test classes and running something like allTestClasses.Where(testClass => testClass.TestedClasses().FindSomeTestClasses());
You can also use the TearDown to gather information about which methods are tested by each method/class and do the same.
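To make that lookup concrete, here is a rough sketch of the reflection + LINQ approach; the TestLookup helper and the TestedClasses() convention are this team's own idea, not a standard API:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
public static class TestLookup
{
    // Finds all fixtures in the test assembly whose TestedClasses() method
    // declares that they cover the given production type.
    public static IEnumerable<Type> FixturesCovering(Assembly testAssembly, Type testedType)
    {
        return testAssembly.GetTypes()
            .Where(t => t.GetMethod("TestedClasses") != null)
            .Where(t =>
            {
                var fixture = Activator.CreateInstance(t);
                var covered = (IEnumerable<Type>)t.GetMethod("TestedClasses").Invoke(fixture, null);
                return covered.Contains(testedType);
            });
    }
}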
One way to keep class and test locations in sync when moving the code:
Move the code to a uniquely named temporary namespace
Search for references to that namespace in your tests to identify the tests that need to be moved
Move the tests to the proper new location
Once all references to the temporary namespace from tests are in the right place, then move the original code to its intended target
One strength of end-to-end or behavioral tests is the tests are grouped by requirement and not code, so you avoid the problem of keeping test locations in sync with the corresponding code.
Regarding VS extensions that associate code to tests, take a look at Visual Studio's Test Impact. It runs the tests under a profiler and creates a compact database that maps IL sequence points to unit tests. So in other words, when you change the code Visual Studio knows which tests need to be run.
One unit test project per project is the way to go. We tried a single mega unit test project, but it increased the compile time.
To help you refactor, use a product like ReSharper or CodeRush.
Is there some clever way of organizing unit tests that would still
allow me to locate tests for a specific code quickly
ReSharper has some good shortcuts that allow you to search for files or code.
As you said, for the class CustomerRepository there is a test class CustomerRepositoryTests.
The R# shortcut shows an input box for what you want to find; in your case you can just type CRT and it will show you all the files whose names contain a capital C, then R, then T.
It also allows you to search by wildcards, so CR* will show you the list of files CustomerRepository and CustomerRepositoryTests.

Is there a common way to test complex functions with NUnit?

Is there a common way to test complex functions with several parameters with NUnit? I think it is very hard or impossible to test every condition.
I'm afraid that a combination of parameters that isn't expected in the function is also not expected in the test.
So the expected conditions will not fail, but the unexpected ones will.
Thanks
This shouldn't be hard to test at all. If it is, the method isn't designed for testability, and that is a code smell that tells you that you need to refactor it.
I tend to write tests in these cases as follows (others may have better suggestions):
Does it work as intended when all appropriate parameters are passed?
Does it throw expected exceptions when I think it should? (ArgumentNullException, etc.)
For each parameter, what happens when I pass null, the minimum and the maximum. (This can be very extensive, depending on the number of arguments.)
If your method takes a lot of parameters, consider refactoring it to take an object with the information on it, so that you can encapsulate the rules for it in the object, and pass the object to the method.
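As a hedged sketch of that parameter-object refactoring (TransferRequest and PaymentService are invented names, used only to illustrate the shape):
using System;
public class TransferRequest
{
    public decimal Amount { get; set; }
    public string FromAccount { get; set; }
    public string ToAccount { get; set; }
    // The parameter rules live in one place and can be unit tested directly.
    public void Validate()
    {
        if (Amount <= 0) throw new ArgumentOutOfRangeException(nameof(Amount));
        if (string.IsNullOrEmpty(FromAccount)) throw new ArgumentException("FromAccount is required");
        if (string.IsNullOrEmpty(ToAccount)) throw new ArgumentException("ToAccount is required");
    }
}
public class PaymentService
{
    public void Transfer(TransferRequest request)
    {
        request.Validate();
        // ... actual transfer logic ...
    }
}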
For data-driven tests in NUnit, there is the [TestCase] attribute. Unit tests usually don't test every possible scenario. They just test a representative set of inputs, which gives good coverage of what the SUT does on various inputs. Just pick some characteristic inputs, and you'll be fine.
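For example, a minimal sketch (the parsing scenario is made up):
using NUnit.Framework;
[TestFixture]
public class ParsingTests
{
    // NUnit runs this method once per [TestCase] row.
    [TestCase("42", 42)]
    [TestCase("-1", -1)]
    [TestCase("0", 0)]
    public void Parse_ReturnsExpectedValue(string input, int expected)
    {
        Assert.AreEqual(expected, int.Parse(input));
    }
}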
Don't know if this is the kind of thing you are looking for, but there is an automated unit test generator created by Microsoft Research called Pex.
Pex automatically generates test suites with high code coverage. Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Microsoft Pex is a Visual Studio add-in for testing .NET Framework applications.
Use RowTest. A similar question can be found at:
C#, NUnit Assert in a Loop
Have a look at Sam Holder's reply there; I copied the code from it, with a few tweaks.
[TestFixture]
public class TestExample
{
    [RowTest]
    [Row(1)]
    [Row(2)]
    [Row(3)]
    [Row(4)]
    public void TestMethodExample(int value)
    {
        // ... arrange and act using the supplied value ...
        Assert.IsTrue(value > 0, "some condition");
    }
}
I agree with Mike Hofer, that the question indicates a code smell.
Nevertheless, NUnit has a Combinatorial attribute that might help you, if you're not refactoring/redesigning.
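A small sketch of what that looks like; the clamping scenario is made up, and [Values] supplies the candidate arguments from which NUnit generates every combination:
using System;
using NUnit.Framework;
[TestFixture]
public class CombinatorialExampleTests
{
    // 3 values x 2 bounds = 6 generated test cases.
    [Test, Combinatorial]
    public void Clamp_StaysWithinBounds(
        [Values(-5, 0, 5)] int value,
        [Values(1, 10)] int upperBound)
    {
        var clamped = Math.Min(Math.Max(value, 0), upperBound);
        Assert.That(clamped, Is.InRange(0, upperBound));
    }
}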

Writing standards for unit testing

I plan to introduce a set of standards for writing unit tests into my team. But what to include?
These two posts (Unit test naming best practices and Best practices for file system dependencies in unit/integration tests) have given me some food for thought already.
Other domains that my standards should cover are how test classes are set up and how to organize them. For example, if you have a class called OrderLineProcessor there should be a test class called OrderLineProcessorTest. If there's a method called Process() on that class then there should be a test called ProcessTest (maybe more to test different states).
Any other things to include?
Does your company have standards for unit testing?
EDIT: I'm using Visual Studio Team System 2008 and I develop in C#.Net
Have a look at Michael Feathers on what a unit test is (or what makes unit tests bad unit tests).
Have a look at the idea of "Arrange, Act, Assert", i.e. the idea that a test does only three things, in a fixed order:
Arrange any input data and processing classes needed for the test
Perform the action under test
Test the results with one or more asserts. Yes, it can be more than one assert, so long as they all work to test the action that was performed.
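To make the three steps concrete, here is a minimal sketch; the Account class is invented for the example:
using NUnit.Framework;
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal initialBalance) { Balance = initialBalance; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}
[TestFixture]
public class AccountTests
{
    [Test]
    public void Withdraw_ReducesBalance()
    {
        // Arrange
        var account = new Account(100m);
        // Act
        account.Withdraw(30m);
        // Assert
        Assert.AreEqual(70m, account.Balance);
    }
}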
Have a look at Behaviour Driven Development for a way to align test cases with requirements.
Also, my opinion of standard documents today is that you shouldn't write them unless you have to - there are lots of resources available already written. Link to them rather than rehashing their content. Provide a reading list for developers who want to know more.
You should probably take a look at the "Pragmatic Unit Testing" series. This is the C# version but there is another for Java.
With respect to your spec, I would not go overboard. You have a very good start there - the naming conventions are very important. We also require that the directory structure match the original project. Coverage also needs to extend to boundary cases and illegal values (checking for exceptions). This is obvious but your spec is the place to write it down for that argument that you'll inevitably have in the future with the guy who doesn't want to test for someone passing an illegal value. But don't make the spec more than a few pages or no one will use it for a task that is so context-dependent.
Update: I disagree with Mr. Potato Head about only one assert per Unit Test. It sounds quite fine in theory but, in practice, it leads to either loads of mostly redundant tests or people doing tons of work in setup and tear-down that itself should be tested.
I follow the BDD style of TDD. See:
http://blog.daveastels.com/files/BDD_Intro.pdf
http://dannorth.net/introducing-bdd
http://behaviour-driven.org/Introduction
In short this means that
The tests are not thought as "tests", but as specifications of the system's behaviour (hereafter called "specs"). The intention of the specs is not to verify that the system works under every circumstance. Their intention is to specify the behaviour and to drive the design of the system.
The spec method names are written as full English sentences. For example the specs for a ball could include "the ball is round" and "when the ball hits a floor then it bounces".
There is no forced 1:1 relation between the production classes and the spec classes (and generating a test method for every production method would be insane). Instead there is a 1:1 relation between the behaviour of the system and the specs.
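In C#, the same style might look like this (a sketch only; the Ball class and its behaviour are invented):
using NUnit.Framework;
public class Ball
{
    public bool IsMovingUp { get; private set; }
    public void HitFloor() { IsMovingUp = true; }
}
[TestFixture]
public class WhenTheBallHitsTheFloor
{
    [Test]
    public void It_bounces_back_up()
    {
        var ball = new Ball();
        ball.HitFloor();
        Assert.IsTrue(ball.IsMovingUp);
    }
}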
Some time ago I wrote a TDD tutorial (where you begin writing a Tetris game using the provided tests) which shows this style of writing tests as specs. You can download it from http://www.orfjackal.net/tdd-tutorial/tdd-tutorial_2008-09-04.zip The instructions about how to do TDD/BDD are still missing from that tutorial, but the example code is ready, so you can see how the tests are organized and write code that passes them.
You will notice that in this tutorial the production classes are named such as Board, Block, Piece and Tetrominoe which are centered around the concepts of a Tetris game. But the test classes are centered around the behaviour of the Tetris game: FallingBlocksTest, RotatingPiecesOfBlocksTest, RotatingTetrominoesTest, FallingPiecesTest, MovingAFallingPieceTest, RotatingAFallingPieceTest etc.
Try to use as few assert statements per test method as possible. This makes sure that the purpose of the test is well-defined.
I know this will be controversial, but don't test the compiler - time spent testing Java Bean accessors and mutators is better spent writing other tests.
Try, where possible, to use TDD instead of writing your tests after your code.
I've found that most testing conventions can be enforced through the use of a standard base class for all your tests, forcing the tester to override methods so that they all have the same name.
I also advocate the Arrange-Act-Assert (AAA) style of testing as you can then generate fairly useful documentation from your tests. It also forces you to consider what behaviour you are expecting due to the naming style.
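One hedged way to express that base-class idea (the names are illustrative, not from any particular framework):
using NUnit.Framework;
[TestFixture]
public abstract class SpecificationBase
{
    [SetUp]
    public void Run()
    {
        Arrange();
        Act();
    }
    // Every derived fixture fills in the same two steps;
    // its [Test] methods then act as the Assert step.
    protected abstract void Arrange();
    protected abstract void Act();
}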
Another item you can put in your standards is to keep your unit tests small, that is, the actual test methods themselves. Unless you are doing a full integration test there usually is no need for large unit tests, say more than 100 lines. I'll allow that much in case you have a lot of setup to get to your one test; however, if you do, you should probably refactor it.
People also talk about refactoring their code; make sure people realize that unit tests are code too. So refactor, refactor, refactor.
I find the biggest problem in the uses I have seen is that people do not tend to recognize that you want to keep your unit tests light and agile. You don't want a monolithic beast for your tests after all. With that in mind if you have a method you are trying to test you should not test every possible path in one unit test. You should have multiple unit tests to account for every possible path through the method.
Yes, if you are doing your unit tests correctly you should, on average, have more lines of unit test code than application code. While this sounds like a lot of work, it will save you a lot of time in the end when the inevitable business requirement change comes.
Users of full-featured IDE's will find that "some of them" have quite detailed support for creating tests in a specific pattern. Given this class:
public class MyService {
public String method1(){
return "";
}
public void method2(){
}
public void method3HasAlongName(){
}
}
When I press Ctrl+Shift+T in IntelliJ IDEA I get this test class after answering one dialog box:
public class MyServiceTest {
    @Test
    public void testMethod1() {
        // Add your code here
    }
    @Test
    public void testMethod2() {
        // Add your code here
    }
    @Test
    public void testMethod3HasAlongName() {
        // Add your code here
    }
}
So you may want to take a close look at tool support before writing your standards.
I use nearly plain English for my unit test function names. It helps to define exactly what they do:
TEST( TestThatVariableFooDoesNotOverflowWhenCalledRecursively )
{
/* do test */
}
I use C++ but the naming convention can be used anywhere.
Make sure to include what is not a unit test. See: What not to test when it comes to Unit Testing?
Include a guideline so integration tests are clearly identified and can be run separately from unit tests. This is important, because you can end up with a set of "unit" tests that are really slow if the unit tests are mixed with other types of tests.
Check this for more info on it: How can I improve my junit tests ... especially the second update.
If you are using tools from the JUnit family (OCUnit, shUnit, ...), the names of tests already follow some rules.
For my tests, I use custom doxygen tags in order to gather their documentation in a specific page.

How do I test database-related code with NUnit?

I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic:
http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx
http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx
http://haacked.com/archive/2005/12/28/11377.aspx
These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes.
That's great but...
Does this functionality exists somewhere in NUnit natively?
Has this technique been improved upon in the last 4 years?
Is this still the best way to test database-related code?
Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one.
Further, I want to ease this into an existing project that has no testing place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
NUnit now has a [Rollback] attribute, but I prefer to do it a different way. I use the TransactionScope class. There are a couple of ways to use it.
[Test]
public void YourTest()
{
using (TransactionScope scope = new TransactionScope())
{
// your test code here
}
}
Since you didn't tell the TransactionScope to commit, it will roll back automatically. It works even if an assertion fails or some other exception is thrown.
The other way is to use the [SetUp] to create the TransactionScope and [TearDown] to call Dispose on it. It cuts out some code duplication, but accomplishes the same thing.
[TestFixture]
public class YourFixture
{
private TransactionScope scope;
[SetUp]
public void SetUp()
{
scope = new TransactionScope();
}
[TearDown]
public void TearDown()
{
scope.Dispose();
}
[Test]
public void YourTest()
{
// your test code here
}
}
This is as safe as the using statement in an individual test because NUnit will guarantee that TearDown is called.
Having said all that I do think that tests that hit the database are not really unit tests. I still write them, but I think of them as integration tests. I still see them as providing value. One place I use them often is in testing LINQ to SQL code. I don't use the designer. I hand write the DTO's and attributes. I've been known to get it wrong. The integration tests help catch my mistake.
I just went to a .NET user group and the presenter said he used SQLite in test setup and teardown, using the in-memory option. He had to fudge the connection a little and explicitly destroy the connection, but it would give a clean DB every time.
http://houseofbilz.com/archive/2008/11/14/update-for-the-activerecord-quotmockquot-framework.aspx
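A rough sketch of that idea, here using the Microsoft.Data.Sqlite provider (any SQLite ADO.NET provider works similarly); schema creation is left as a placeholder:
using Microsoft.Data.Sqlite;
using NUnit.Framework;
[TestFixture]
public class InMemoryDatabaseTests
{
    private SqliteConnection _connection;
    [SetUp]
    public void SetUp()
    {
        // The in-memory database only exists while this connection stays open.
        _connection = new SqliteConnection("Data Source=:memory:");
        _connection.Open();
        // ... create schema and seed test data here ...
    }
    [TearDown]
    public void TearDown()
    {
        // Explicitly disposing the connection discards the database,
        // so every test starts from a clean slate.
        _connection.Dispose();
    }
}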
I would call these integration tests, but no matter. What I have done for such tests is have my setup methods in the test class clear all the tables of interest before each test. I generally hand write the SQL to do this so that I'm not using the classes under test.
Generally, I rely on an ORM for my datalayer and thus I don't write unit tests for much there. I don't feel a need to unit test code that I don't write. For code that I add in the layer, I generally use dependency injection to abstract out the actual connection to the database so that when I test my code, it doesn't touch the actual database. Do this in conjunction with a mocking framework for best results.
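A sketch of that abstraction (the interface and classes are invented; in a test you would supply a fake or a mock instead of the real database-backed implementation):
public class Customer
{
    public string Name { get; set; }
}
// The production implementation talks to the database; tests substitute a fake.
public interface ICustomerStore
{
    Customer FindById(int id);
}
public class CustomerService
{
    private readonly ICustomerStore _store;
    public CustomerService(ICustomerStore store)
    {
        _store = store;
    }
    public string GetDisplayName(int id)
    {
        var customer = _store.FindById(id);
        return customer == null ? "(unknown)" : customer.Name;
    }
}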
For this sort of testing, I experimented with NDbUnit (working in concert with NUnit). If memory serves, it was a port of DbUnit from the Java platform. It had a lot of slick commands for just the sort of thing you're trying to do. The project appears to have moved here:
http://code.google.com/p/ndbunit/
(it used to be at http://ndbunit.org).
The source appears to be available via this link:
http://ndbunit.googlecode.com/svn/trunk/
Consider creating a database script so that you can run it automatically from NUnit as well as manually for other types of testing. For example, if using Oracle then kick off SqlPlus from within NUnit and run the scripts. These scripts are usually faster to write and easier to read. Also, very importantly, running SQL from Toad or equivalent is more illuminating than running SQL from code or going through an ORM from code. Generally I'll create both a setup and teardown script and put them in setup and teardown methods.
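A loose sketch of kicking those scripts off from NUnit; the script names, connect string and SqlPlus invocation are assumptions for illustration only:
using System.Diagnostics;
using NUnit.Framework;
[TestFixture]
public class RepositoryDatabaseTests
{
    [SetUp]
    public void SetUp()
    {
        RunScript("setup_test_data.sql");
    }
    [TearDown]
    public void TearDown()
    {
        RunScript("teardown_test_data.sql");
    }
    private static void RunScript(string scriptPath)
    {
        // The same scripts can also be run by hand from Toad or SqlPlus.
        var startInfo = new ProcessStartInfo("sqlplus", "user/password@testdb @" + scriptPath)
        {
            UseShellExecute = false
        };
        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}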
Whether you should be going through the DB at all from unit tests is another discussion. I believe it often does make sense to do so. For many apps the database is the absolute center of action, the logic is highly set based, and all the other technologies and languages and techniques are passing ghosts. And with the rise of functional languages we are starting to realize that SQL, like JavaScript, is actually a great language that was right there under our noses all these years.
Just as an aside, Linq to SQL (which I like in concept though have never used) almost seems to me like a way to do raw SQL from within code without admitting what we are doing. Some people like SQL and know they like it, others like it and don't know they like it. :)
For anyone coming to this thread these days like me, I'd like to recommend trying the Reseed library I'm currently developing for this specific case.
Neither an in-memory db replacement (lack of features) nor transaction rollback (transactions can't be nested) was a suitable option for me, so I ended up with a simple delete/insert cycle to restore the data. I ended up writing a library to generate those, while trying to optimize test speed and simplicity of setup. I'd be happy if it helps anyone else.
Another alternative I'd recommend is using database snapshots to restore data, which is of comparable performance and usability.
Workflow is as follows:
delete existing snapshots;
create db;
insert data;
create snapshot;
execute test;
restore from snapshot;
go to "execute test" until none left;
drop snapshot.
It's suitable if you can have a single data script for all the tests, and it lets you execute the insertion (which is supposed to be the slowest part) only once; moreover, you don't need a data cleanup script at all.
For further performance improvement, as such tests could take a lot of time, consider using a pool of databases and parallelizing the tests.
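As a rough sketch of the snapshot steps on SQL Server (database, logical file and path names are placeholders; the T-SQL is the standard CREATE ... AS SNAPSHOT and RESTORE ... FROM DATABASE_SNAPSHOT syntax):
using System.Data.SqlClient;
public static class SnapshotHelper
{
    public static void CreateSnapshot(SqlConnection master)
    {
        // NAME must match the logical data file name of the source database.
        Execute(master,
            "CREATE DATABASE TestDb_Snapshot " +
            "ON (NAME = TestDb, FILENAME = 'C:\\Snapshots\\TestDb.ss') " +
            "AS SNAPSHOT OF TestDb;");
    }
    public static void RestoreFromSnapshot(SqlConnection master)
    {
        Execute(master,
            "RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_Snapshot';");
    }
    private static void Execute(SqlConnection connection, string sql)
    {
        using (var command = new SqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}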
