Basic Unit Test vs. Unit Test - C#

I am working on an MVC project and was wondering whether to use Basic Unit Test or Unit Test. I have read articles and explanations about both but can't see much difference between the two. What are the main differences, and which one is preferable for a large-scale app with a DB backend?

The difference between Visual Studio's Basic Unit Test item template and its Unit Test item template is that the latter includes support for ClassInitialize, ClassCleanup, TestInitialize and TestCleanup routines, allowing you to execute code before/after the test fixture and before/after each unit test. If you don't need such functionality in your unit test, you can go with the basic template, which generates the following file:
[TestClass]
public class UnitTest2
{
    [TestMethod]
    public void TestMethod1()
    {
    }
}
Of course, you can always add the corresponding routines to a basic unit test later if you decide you need this functionality.
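For comparison, the Unit Test template generates a class along these lines (the method names here are illustrative; MSTest only requires the attributes and, for the class-level routines, static signatures):

```csharp
[TestClass]
public class UnitTest1
{
    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // Runs once, before any test in this class
    }

    [TestInitialize]
    public void TestInit()
    {
        // Runs before each test
    }

    [TestMethod]
    public void TestMethod1()
    {
    }

    [TestCleanup]
    public void TestCleanupMethod()
    {
        // Runs after each test
    }

    [ClassCleanup]
    public static void ClassCleanupMethod()
    {
        // Runs once, after all tests in this class
    }
}
```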

Related

How do I order execution of NUnit test fixtures?

I have an NUnit test project which has two [TestFixture]s.
I want one of them to run before the other as it deals with file creation. They're currently running in the wrong order.
Given that I can't change the [Test]s or group them into a single unit test, is there a way in which I can have control over test fixture running order?
I have tried the [Order(int num)] attribute and have also tried creating a new playlist, but neither works.
C#, .NET Framework, NUnit Testing Framework, Windows.
The documentation for [OrderAttribute] states that ordering for fixtures applies within the containing namespace.
Make sure that your fixtures are within the same namespace and that you've applied [OrderAttribute] at the test fixture level:
namespace SameNamespace
{
    [TestFixture, Order(1)]
    public class MyFirstFixture
    {
        /* ... */
    }

    [TestFixture, Order(2)]
    public class MySecondFixture
    {
        /* ... */
    }
}
Also, it's important to remember that while MyFirstFixture will run before MySecondFixture, the ordering of the tests inside is local to the test fixture.
A test with [Order(1)] in MySecondFixture will run after all the tests in MyFirstFixture have completed.
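As a sketch of that per-fixture ordering, [Order] can also be applied to individual tests inside a fixture (the method names are illustrative):

```csharp
[TestFixture, Order(2)]
public class MySecondFixture
{
    [Test, Order(1)]
    public void RunsFirstInThisFixture()
    {
        // Still runs only after every test in MyFirstFixture has completed
    }

    [Test, Order(2)]
    public void RunsSecondInThisFixture()
    {
    }
}
```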
Important note: the documentation also does not guarantee that ordering implies waiting. Tests do not wait for prior tests to finish; if multiple threads are in use, a test may be started while some earlier tests are still being run.
Regardless, tests should follow the F.I.R.S.T. principles of testing, introduced by Robert C. Martin in his book "Clean Code".
The I in F.I.R.S.T. stands for isolated, meaning that tests should not depend on one another and that each test should be responsible for the setup it requires to execute correctly.
Try your best to eventually combine the tests into one if they are testing one thing, or restructure your logic so that the piece of code covered by test 1 can be tested in isolation from the piece of code covered by test 2.
This will also have the side effect of cleaner code that adheres to the single responsibility principle (SRP).
A win-win situation.
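As a sketch of that isolation principle applied to the file-creation scenario, each fixture can provision and clean up its own files so that no fixture depends on another having run first (the class and file contents here are illustrative):

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class FileProcessingTests
{
    private string _tempFile;

    [SetUp]
    public void CreateTestFile()
    {
        // Each test creates its own file, so test order no longer matters
        _tempFile = Path.GetTempFileName();
        File.WriteAllText(_tempFile, "test data");
    }

    [TearDown]
    public void DeleteTestFile()
    {
        if (File.Exists(_tempFile))
            File.Delete(_tempFile);
    }

    [Test]
    public void ProcessesFileContents()
    {
        Assert.That(File.ReadAllText(_tempFile), Is.EqualTo("test data"));
    }
}
```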

How do I distinguish between Unit Tests and Integration Tests inside a test class?

My question is similar to this one: Junit: splitting integration test and Unit tests. However, my question concerns NUnit instead of JUnit. What is the best way to distinguish between unit tests and integration tests inside a test class? I was hoping to be able to do something like this:
[TestFixture]
public class MyFixture
{
    [IntegrationTest]
    [Test]
    public void MyTest1()
    {
    }

    [UnitTest]
    [Test]
    public void MyTest2()
    {
    }
}
Is there a way to do this with NUnit? Is there a better way to do this?
Personally I've found it better to keep them in separate assemblies. You can use a convention, such as name.Integration.Tests and name.Tests (or whatever your team prefers).
Either assemblies or attributes work fine for CI servers like TeamCity. The pain with the attribute approach tends to show up in IDE test runners. I want to be able to quickly run only my unit tests. With separate assemblies, it's easy - select the appropriate test project and run tests.
The Category Attribute might help you do this.
https://github.com/nunit/docs/wiki/Category-Attribute
namespace NUnit.Tests
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SuccessTests
    {
        [Test]
        [Category("Unit")]
        public void VeryLongTest()
        { /* ... */ }
    }
}
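Once tests are tagged with categories, most runners can filter on them; for example, with the NUnit console runner or with dotnet test and the NUnit3 test adapter:

```shell
# Run only the tests in the "Unit" category via the NUnit console runner
nunit3-console MyTests.dll --where "cat == Unit"

# Or with dotnet test (the adapter maps TestCategory to NUnit categories)
dotnet test --filter TestCategory=Unit
```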
This answer shares some details with a few other answers, but I'd like to put the question in a slightly different perspective.
The design of TestFixtures is such that every test gets the same setup. To use TestFixtures correctly, you should divide your tests so that all the tests with the same setup end up in the same test class. This is how almost every xUnit-style framework is designed to be used, and you always get better results when you use software as it is designed to be used.
Since Integration and Unit tests are not likely to share the same setup, this would naturally lead to putting them in a separate class. By doing that, you can group all integration tests under a namespace that makes them easy to run independently.
Even better, as another answer suggests, put them in a separate assembly. This works much better with most CI builds, since the failure of an integration test can be more easily distinguished from the failure of a unit test. Using a separate assembly also eliminates all the complication of using categories or special attributes.
Do not have them in the same class, either split them down into folders within your test assembly or split them into two separate test assemblies.
In the long run this will be far easier to manage especially if you use tools like NCrunch.

Map Testcase ID with NUnit

I'm currently building out my automation framework using NUnit. I've got everything working just fine, and the last enhancement I'd like to make is to be able to map my automated test scripts to test cases in my testing software.
I'm using TestRail for all my testcases.
My ideal situation is to be able to decorate each test with the corresponding test case ID from TestRail, so that when it comes time to report the test result to TestRail, I can just use the case ID. Currently I'm doing this by matching test name/script name.
Example -
[Test]
[TestCaseId("001")]
public void NavigateToSite()
{
    LoginPage login = new LoginPage(Driver);
    login.NavigateToLogInPage();
    login.AssertLoginPageLoaded();
}
And then in my teardown method, it would be something like -
[TearDown]
public static void TestTearDown(IWebDriver Driver)
{
    var testcaseId = TestContext.CurrentContext.TestCaseid;
    var result = TestContext.CurrentContext.Result.Outcome;
    // Static method to report to TestRail using the API
    Report.ReportTestResult(testcaseId, result);
}
I've just made up the testcaseid attribute, but this is what I'm looking for.
[TestCaseId("001")]
If this already exists, I may have missed it; otherwise, how do I go about extending NUnit to do this?
You can use PropertyAttribute supplied by NUnit.
Example:
[Property("TestCaseId", "001")]
[Test]
public void NavigateToSite()
{
    ...
}

[TearDown]
public void TearDown()
{
    // Get returns the first value stored under the key
    var testCaseId = TestContext.CurrentContext.Test.Properties.Get("TestCaseId");
}
In addition, you can create a custom property attribute - see the NUnit link.
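A minimal custom attribute might look like this (the attribute name is the asker's hypothetical one; NUnit's PropertyAttribute uses the derived class name, minus the "Attribute" suffix, as the property key):

```csharp
using System;
using NUnit.Framework;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class TestCaseIdAttribute : PropertyAttribute
{
    // The value is stored under the key "TestCaseId", so it can be read
    // back with TestContext.CurrentContext.Test.Properties.Get("TestCaseId")
    public TestCaseIdAttribute(string id) : base(id) { }
}
```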
For many years I have recommended that people not do this: mixing test management code into the tests themselves is an obvious violation of the single responsibility principle, and it creates difficulties in maintaining the tests.
In addition, there's the problem that the result presented in TearDown may not be final. For example, if you have used [MaxTime] on the test and it exceeds the time specified, your successful test will change to a failure. Several other built-in attributes work this way and of course there is always the possibility of a user-created attribute. The purpose of TearDown is to clean up after your code, not as a springboard for creating a reporting or test management system.
That said, with older versions of NUnit, folks got into the habit of doing this. This was in part because NUnit addins (the extensibility approach we designed) were fairly complicated to write. There were also fewer problems because NUnit V2 was significantly less extensible on the test side of things. With 3.0, we provided a means for creating test management functions such as this as extensions to the NUnit engine, and I suggest you consider using that facility instead of mixing them in with the test code.
The general approach would be to create a property, as suggested in Sulo's answer, but to replace your TearDown code with an EventListener extension that reports the result to TestRail. The EventListener has access to all the result information - not just the limited fields available in TestContext - in XML format. You can readily extract whatever needs to go to TestRail.
Details of writing TestEngine extensions are found here: https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
Note that there are some outstanding issues, which we are working on, if you want to use extensions under the Visual Studio adapter. Right now, you'll get the best experience using the console runner.
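In outline, such an engine extension is an ITestEventListener (this is a hypothetical sketch; the actual TestRail reporting is left as a comment):

```csharp
using NUnit.Engine;
using NUnit.Engine.Extensibility;

// Receives every test event from the engine as an XML fragment
[Extension(Description = "Reports test results to TestRail")]
public class TestRailEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // report is an XML string such as <test-case .../>.
        // Parse it, pull out the TestCaseId property and the result,
        // and push them to TestRail via its HTTP API.
    }
}
```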

Is it possible to use [TestMethod] attribute outside of the test project?

I feel that test methods should be placed right under the methods they are supposed to test. But in the tutorials I have found so far, they are only placed in [TestClass]es inside Unit Test Projects. Why is that necessary?
Why would you want to use [TestMethod] outside the unit test project? The idea of [TestMethod] is to mark a method as a unit test to be run.
Normal best practice is to have your unit tests in a separate project. I believe it was Roy Osherove who recommended that you set up your unit tests like this:
Each project has a unit test project called YourProjectName.Tests (can further be broken into YourProjectName.UnitTests and YourProjectName.IntegrationTests if desired)
Each class or unit of work to be tested should have its own file in the unit test project named something like YourClassNameUnitTests
Each method or unit of work to be tested needs to be labelled with [TestMethod] or similar and you should use descriptive names like public void MethodName_ScenarioUnderTest_ExpectedBehaviour()
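Following that naming convention, a test might look like this (the names are illustrative):

```csharp
[TestMethod]
public void GetCustomerById_ReturnsNull_WhenIdDoesNotExist()
{
    // Arrange, Act, Assert
}
```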
To specifically answer your question, if you have [TestMethod] under the method itself you will make things very difficult to manage because:
When you have 100's of tests you will have to look all over the place to find them
Your tests will get mixed up in your production code instead of being separate (when they're a separate project you can release your production code without a ton of unit tests in them)
Someone who comes along after you to maintain the tests will much appreciate being able to look at one file for a class and see all the tests instead of having to scroll a production class full of methods > test methods > more methods > more test methods.
This also makes the unit tests very hard to maintain. If you ever need to move unit tests for any reason, imagine how difficult that will be if they aren't all in one file. If you do it the way you describe, you will have to go through the tests one by one, cutting and pasting, because you can't just select a bunch at once.
Hope that helps.

Do I need duplicate test methods; 1 for unit and 1 for integration tests?

I'm new to unit testing, and it seems like most of the information I find is on the unit testing side of things. I'm getting a good grasp of it and am planning on using the MS Test Framework with Moq so I don't have to hand-roll any mocks for my unit test dependencies.
Let's say I have the following unit test method:
[TestMethod]
public void GetCustomerByIDUnitTest()
{
//Uses Moq for dependency for getting customer to make sure
//ID I set up is same one returned to test in Assertion
}
Do I have to create another, otherwise identical test that instead uses the actual Entity Framework and database calls to serve as an integration test?
[TestMethod]
public void GetCustomerByIDIntegrationTest()
{
//Uses actual repository interface for EF and DB to do integration testing
}
For the purpose of this question, please leave topics about TDD or BDD out; I'm simply trying to determine whether I physically need two separate tests and how to organize them. Is this a requirement when doing both unit and integration testing?
Thanks!
In my opinion, it is somewhat situational. If I am working on a small personal project, then no, I just do the unit tests.
If it is a corporate / enterprise project then I do tend to do both unit and integration tests. However, keep unit and integration tests separated. Developers should be able to run unit tests frequently and quickly. Integration tests can be run less frequently because they usually take a long time to run. Usually I just run integration tests once before a commit, whereas I run unit tests much more frequently.
As an additional note, make your test names explain what should be happening. The test name GetCustomerByIDUnitTest really doesn't tell me much. Better would be something like GetCustomerByID_ReturnsTheCorrectUser_WhenAValidIdIsPassed and, conversely, GetCustomerByID_ReturnsNull_WhenNonExistentIdIsPassed.
I tend to favor a What_Does_When naming convention, but that too is a personal preference. In general, the more explanatory the better though.
Hmm, I hope I do not fail you by mentioning things you would rather leave unmentioned, but here are my 2 cents on it. One disclaimer up front: I use NUnit with RhinoMocks, so the syntax could be different; the concepts are the same though.
Yes, you need separate tests. You can debate whether you want to store the tests in the same test class and tag them with [Category("IntegrationTest")] so that you can easily run your unit tests without running integration tests, and the other way around. With your TDD practices (oops, I know you don't want me talking about that :)) you need your unit tests to complete as fast as possible.
To look at this from a slightly different angle; you are not really duplicating your tests. Your integration tests validate the functionality, while your unit tests validate a method in isolation. So they can very well have completely different names. As long as they make sense to you (or if you develop something with a team: as long as it makes sense for your team).
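A sketch of that pairing, assuming a hypothetical CustomerService with an injected ICustomerRepository (Moq backs the unit test; a category tag marks the integration test, whose repository and connection details are also hypothetical):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class CustomerServiceTests
{
    [Test] // fast: the repository is mocked, no database involved
    public void GetCustomerById_ReturnsCustomer_WhenIdExists()
    {
        var repo = new Mock<ICustomerRepository>();
        repo.Setup(r => r.GetById(42)).Returns(new Customer { Id = 42 });

        var service = new CustomerService(repo.Object);

        Assert.That(service.GetCustomerById(42).Id, Is.EqualTo(42));
    }

    [Test, Category("IntegrationTest")] // slow: talks to the real database
    public void GetCustomerById_ReturnsCustomer_FromRealDatabase()
    {
        var service = new CustomerService(new EfCustomerRepository(/* real DbContext */));

        Assert.That(service.GetCustomerById(42), Is.Not.Null);
    }
}
```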
I think the most important thing is that you find a way that works for you. There isn't really a right or wrong. I think it's a big plus that you are writing both unit tests and integration tests. How you organize them is kinda up to you. I had different approaches in different projects I participated in:
Project A:
1 test class for integration tests
1 test class for unit tests
That helped to create meaningful names for the test classes, they could capture the actual feature we are testing. As for the unit tests, the test class had the same name as the class that we are testing.
Project B:
Mixed up integration tests with unit tests in one test class.
This worked fine as well, although we sometimes had trouble finding an integration test. But to be honest, with ReSharper at your side, how hard can it be :).
As far as I know, you should have separate projects for unit testing and integration testing.
The suggestion of This book is to create two projects and name them like ProjectName.UnitTests and ProjectName.IntegrationTests,
so that developers can run each of them separately and easily.
You can find many interesting topics and videos about testing here
