I'm trying to become better at unit testing and one of my biggest uncertainties is writing unit tests for methods that require quite a bit of setup code, and I haven't found a good answer. The answers that I find are generally along the lines of "break your tests down into smaller units of work" or "use mocks". I'm trying to follow all of those best practices. However, even with mocking (I'm using Moq) and trying to break down everything into the smallest unit of work, I eventually run into a method that has several inputs, makes calls to several mock services, and requires me to specify return values for those mock method calls.
Here's an example of the code under test:
public class Order
{
public string CustomerId { get; set; }
public string OrderNumber { get; set; }
public List<OrderLine> Lines { get; set; }
public decimal Value { get { /* return the order's calculated value */ } }
public Order()
{
this.Lines = new List<OrderLine>();
}
}
public class OrderLine
{
public string ItemId { get; set; }
public int QuantityOrdered { get; set; }
public decimal UnitPrice { get; set; }
}
public class OrderManager
{
private ICustomerService customerService;
private IInventoryService inventoryService;
public OrderManager(ICustomerService customerService, IInventoryService inventoryService)
{
// Guard clauses omitted to make example smaller
this.customerService = customerService;
this.inventoryService = inventoryService;
}
// This is the method being tested.
// Return false if this order's value is greater than the customer's credit limit.
// Return false if there is insufficient inventory for any of the items on the order.
// Return false if any of the items on the order are on hold.
public bool IsOrderShippable(Order order)
{
// Return false if the order's value is greater than the customer's credit limit
decimal creditLimit = this.customerService.GetCreditLimit(order.CustomerId);
if (creditLimit < order.Value)
{
return false;
}
// Return false if there is insufficient inventory for any of this order's items
foreach (OrderLine orderLine in order.Lines)
{
if (orderLine.QuantityOrdered > this.inventoryService.GetInventoryQuantity(orderLine.ItemId))
{
return false;
}
}
// Return false if any of the items on this order are on hold
foreach (OrderLine orderLine in order.Lines)
{
if (this.inventoryService.IsItemOnHold(orderLine.ItemId))
{
return false;
}
}
// If we are here, then the order is shippable
return true;
}
}
Here's a test:
[TestClass]
public class OrderManagerTests
{
[TestMethod]
public void IsOrderShippable_OrderIsShippable_ShouldReturnTrue()
{
// Setup inventory on-hand quantities for this test
Mock<IInventoryService> inventoryService = new Mock<IInventoryService>();
inventoryService.Setup(e => e.GetInventoryQuantity("ITEM-1")).Returns(10);
inventoryService.Setup(e => e.GetInventoryQuantity("ITEM-2")).Returns(20);
inventoryService.Setup(e => e.GetInventoryQuantity("ITEM-3")).Returns(30);
// Configure each item to be not on hold
inventoryService.Setup(e => e.IsItemOnHold("ITEM-1")).Returns(false);
inventoryService.Setup(e => e.IsItemOnHold("ITEM-2")).Returns(false);
inventoryService.Setup(e => e.IsItemOnHold("ITEM-3")).Returns(false);
// Setup the customer's credit limit
Mock<ICustomerService> customerService = new Mock<ICustomerService>();
customerService.Setup(e => e.GetCreditLimit("CUSTOMER-1")).Returns(1000m);
// Create the order being tested
Order order = new Order { CustomerId = "CUSTOMER-1" };
order.Lines.Add(new OrderLine { ItemId = "ITEM-1", QuantityOrdered = 10, UnitPrice = 1.00m });
order.Lines.Add(new OrderLine { ItemId = "ITEM-2", QuantityOrdered = 20, UnitPrice = 2.00m });
order.Lines.Add(new OrderLine { ItemId = "ITEM-3", QuantityOrdered = 30, UnitPrice = 3.00m });
OrderManager orderManager = new OrderManager(
customerService: customerService.Object,
inventoryService: inventoryService.Object);
bool isShippable = orderManager.IsOrderShippable(order);
Assert.IsTrue(isShippable);
}
}
This is an abbreviated example. My actual methods that I'm testing are similar in their structure, but they often have a few more service methods that they're calling or they have more setup code for the models (for instance, the Order object requires more properties to be assigned in order for the test to work).
Given that some of my methods have to do several things at once like this example (such as methods that are behind button-click events), is this the best way of dealing with writing unit tests for those methods?
You are already on the right path. And at some point, if a method under test is big (not complex), then your unit test is bound to be big (not complex) as well. I tend to differentiate between code that is 'big' and code that is 'complex'. A complex code snippet needs to be simplified; a big code snippet is sometimes clearer while staying simple.
In your case, your code is just big, not complex. So it is not a big deal if your unit tests are big as well.
Having said that, here is how we can make it crisper and more readable.
Option #1
The target code under test seems to be:
public bool IsOrderShippable(Order order)
As I can see, there are at least 4 unit test scenarios straight away:
// Scenario 1: Return false if the order's value is
// greater than the customer's credit limit
[TestMethod]
public void IsOrderShippable_OrderValueGreaterThanCustomerCreditLimit_ShouldReturnFalse()
{
// Setup the customer's credit limit
var customerService = new Mock<ICustomerService>();
customerService.Setup(e => e.GetCreditLimit(It.IsAny<string>())).Returns(1000m);
// Create an order whose calculated value (1 x 1001.00) exceeds the 1000.00 credit limit
var order = new Order { CustomerId = "CUSTOMER-1" };
order.Lines.Add(new OrderLine { ItemId = "ITEM-1", QuantityOrdered = 1, UnitPrice = 1001m });
var orderManager = new OrderManager(
customerService: customerService.Object,
inventoryService: new Mock<IInventoryService>().Object);
bool isShippable = orderManager.IsOrderShippable(order);
Assert.IsFalse(isShippable);
}
As you can see, this test is pretty compact. It doesn't bother setting up a lot of mocks that you don't expect the scenario's code path to hit.
Similarly, you can write compact tests for the other two scenarios as well.
And then finally, for the last scenario, you have the full happy-path unit test.
The only thing I would do is extract some private helper methods to make the actual unit test crisp and readable, as follows:
[TestMethod]
public void IsOrderShippable_OrderIsShippable_ShouldReturnTrue()
{
// you can parametrize this helper method as needed
var inventoryService = GetMockInventoryServiceWithItemsNotOnHold();
// You can parametrize this helper method with credit line, etc.
var customerService = GetMockCustomerService(1000m);
// parametrize this method with number of items and total price etc.
Order order = GetTestOrderWithItems();
OrderManager orderManager = new OrderManager(
customerService: customerService.Object,
inventoryService: inventoryService.Object);
bool isShippable = orderManager.IsOrderShippable(order);
Assert.IsTrue(isShippable);
}
As you can see, by using helper methods we make the test smaller and crisper, but we do lose some readability in terms of which parameters are being set up.
However, I tend to be very explicit about helper method names and parameter names, so that by reading the method name and its parameters a reader understands what sort of data is being arranged.
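For example, the helpers used above might look roughly like this (a sketch only: the hard-coded values are assumptions, and in practice you would parametrize the quantities, credit limit, item count and so on):

private static Mock<IInventoryService> GetMockInventoryServiceWithItemsNotOnHold()
{
    // Any item is in stock (quantity 100) and not on hold
    var inventoryService = new Mock<IInventoryService>();
    inventoryService.Setup(e => e.GetInventoryQuantity(It.IsAny<string>())).Returns(100);
    inventoryService.Setup(e => e.IsItemOnHold(It.IsAny<string>())).Returns(false);
    return inventoryService;
}
private static Mock<ICustomerService> GetMockCustomerService(decimal creditLimit)
{
    // Any customer gets the supplied credit limit
    var customerService = new Mock<ICustomerService>();
    customerService.Setup(e => e.GetCreditLimit(It.IsAny<string>())).Returns(creditLimit);
    return customerService;
}
private static Order GetTestOrderWithItems()
{
    // Order value is 10 x 1.00 + 20 x 2.00 = 50.00, well under the 1000.00 credit limit above
    var order = new Order { CustomerId = "CUSTOMER-1" };
    order.Lines.Add(new OrderLine { ItemId = "ITEM-1", QuantityOrdered = 10, UnitPrice = 1.00m });
    order.Lines.Add(new OrderLine { ItemId = "ITEM-2", QuantityOrdered = 20, UnitPrice = 2.00m });
    return order;
}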
Most of the time, the happy-path scenarios end up requiring the most setup code, since they need all the mocks set up properly with correlated items, quantities, prices and so on. In those cases I sometimes prefer to put all of that setup code in the test class's setup method (MSTest's [TestInitialize]), so that it is available to every test method by default.
The upside is that the tests get good mock values out of the box (your happy-path unit test can literally be just two lines, since you can keep a fully valid Order ready in the setup method).
The downside is that the happy-path scenario is typically only one unit test, but putting that setup in the setup method runs it for every unit test, even though most of them never need it.
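A rough sketch of that approach (MSTest attributes as in your test; the default values chosen here are assumptions, and individual tests would re-configure whatever they need):

private Mock<ICustomerService> customerService;
private Mock<IInventoryService> inventoryService;
private Order validOrder;
private OrderManager orderManager;

[TestInitialize]
public void TestSetup()
{
    // Defaults that describe a fully shippable order
    customerService = new Mock<ICustomerService>();
    customerService.Setup(e => e.GetCreditLimit(It.IsAny<string>())).Returns(1000m);
    inventoryService = new Mock<IInventoryService>();
    inventoryService.Setup(e => e.GetInventoryQuantity(It.IsAny<string>())).Returns(100);
    inventoryService.Setup(e => e.IsItemOnHold(It.IsAny<string>())).Returns(false);
    validOrder = new Order { CustomerId = "CUSTOMER-1" };
    validOrder.Lines.Add(new OrderLine { ItemId = "ITEM-1", QuantityOrdered = 10, UnitPrice = 1.00m });
    orderManager = new OrderManager(customerService.Object, inventoryService.Object);
}

[TestMethod]
public void IsOrderShippable_OrderIsShippable_ShouldReturnTrue()
{
    // The happy path really is just the act and the assert
    Assert.IsTrue(orderManager.IsOrderShippable(validOrder));
}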
Option #2
Here is another way.
You could break your IsOrderShippable method down into four private methods that each implement one of the four rules. You could then make those private methods internal and have your unit tests target them directly (via InternalsVisibleTo, etc.). It is still a bit clunky, since you are making private methods internal, and you still need to unit test the public method, which brings us more or less back to the original problem.
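For reference, a rough sketch of that approach (the test assembly name is hypothetical, and you would need using System.Linq and using System.Runtime.CompilerServices):

// In the production assembly, e.g. AssemblyInfo.cs; "OrderManager.Tests" is a made-up name
[assembly: InternalsVisibleTo("OrderManager.Tests")]

public class OrderManager
{
    // ...fields and constructor as before...

    public bool IsOrderShippable(Order order)
    {
        return IsWithinCreditLimit(order)
            && HasSufficientInventory(order)
            && HasNoItemsOnHold(order);
    }

    // internal rather than private, so the test assembly can exercise each rule directly
    internal bool IsWithinCreditLimit(Order order)
    {
        return this.customerService.GetCreditLimit(order.CustomerId) >= order.Value;
    }
    internal bool HasSufficientInventory(Order order)
    {
        return order.Lines.All(l => l.QuantityOrdered <= this.inventoryService.GetInventoryQuantity(l.ItemId));
    }
    internal bool HasNoItemsOnHold(Order order)
    {
        return order.Lines.All(l => !this.inventoryService.IsItemOnHold(l.ItemId));
    }
}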
I'm new to unit testing and need some help. This example is only for me to learn, I'm not actually counting the number of users in a static variable when I clearly could just use the count property on the List data structure. Help me figure out how to get my original assertion that there are 3 users. Here is the code:
Class User
namespace TestStatic
{
public class User
{
public string Name { get; set; }
public int Dollars { get; set; }
public static int Num_users { get; set; }
public User(string name)
{
this.Name = name;
Num_users++;
}
public int CalculateInterest(int interestRate)
{
return Dollars * interestRate;
}
}
}
Test using MSTest
namespace TestStaticUnitTest
{
[TestClass]
public class CalcInterest
{
[TestMethod]
public void UserMoney()
{
// arrange
User bob = new User("Bob");
bob.Dollars = 24;
// act
int result = bob.CalculateInterest(6);
// assert
Assert.AreEqual(144, result);
//cleanup?
}
[TestMethod]
public void UserCount()
{
// arrange
List<User> users = new List<User>(){ new User("Joe"), new User("Bob"), new User("Greg") };
// act
int userCount = User.Num_users;
// assert
Assert.AreEqual(3, userCount);
}
}
}
The UserCount test fails because a fourth user exists; the user from the UserMoney test is still in memory. What should I do to get three users? Should I garbage collect the first Bob?
Also, I would think that a test that depends on state left behind by another test isn't a good unit test. I know that's arguable, but I'll take any advice from the community on this code. Thanks for the help.
The obvious solution would be to remove the static counter. As you can see, when you enter the second unit test method UserCount(), the value of that counter is still 1 from the earlier execution of the first unit test method UserMoney().
If you want to keep the counter (for learning purposes, to see what's going on), you can use initialization methods that "reset" the environment before each unit test method. In this case you want to reset the counter to 0 for every unit test method execution. You do so by writing a method with the [TestInitialize] attribute:
[TestInitialize]
public void _Initialize() {
User.Num_users = 0;
}
That way, each unit test runs with a "clean" state where the counter will be reset to 0 before the actual unit test method is executed.
You might want to look at Why does TestInitialize get fired for every test in my Visual Studio unit tests? to see how these attributes work.
I'm trying to write some tests for a class using NSubstitute.
Class constructor is:
public class ClassToTest : IClassToTest
{
private IDatabase DB;
public ClassToTest(IDatabase DB)
{
this.DB = DB;
this.DB.Configuration.AutoDetectChangesEnabled = false;
}
Here is my UnitTests class:
[TestFixture]
public class ClassToTestUnitTests
{
private ClassToTest _testClass;
[SetUp]
public void SetUp()
{
var Db = Substitute.For<IDatabase>();
//Db.Configuration.AutoDetectChangesEnabled = false; <- I've tried to do it like this
var dummyData = Substitute.For<DbSet<Data>, IQueryable<Data>, IDbAsyncEnumerable<Data>>().SetupData(GetData());
Db.Data.Returns(dummyData);
_testClass = new ClassToTest(Db);
}
Whenever I try to test some method, the test fails and there is a NullReferenceException and it goes in StackTrace to the SetUp method.
When I commented out the
this.DB.Configuration.AutoDetectChangesEnabled = false; in ClassToTest constructor the tests work fine.
Edit:
public interface IInventoryDatabase
{
DbSet<NamesV> NamesV { get; set; }
DbSet<Adress> Adresses { get; set; }
DbSet<RandomData> Randomdata { get; set; }
// (...more DbSets)
System.Data.Entity.Database Database { get; }
DbContextConfiguration Configuration { get; }
int SaveChanges();
}
The reason for the NullReferenceException is that NSubstitute cannot automatically substitute for DbContextConfiguration (it can only do so for purely virtual classes).
Normally we could work around this by manually configuring this property, something like Db.Configuration.Returns(myConfiguration), but in this case DbContextConfiguration does not seem to have a public constructor, so we are unable to create an instance for myConfiguration.
At this stage I can think of two main options: wrap the problematic class in a more testable adapter class; or switch to testing this at a different level. (My preference is the latter which I'll explain below.)
The first option involves something like this:
public interface IDbContextConfiguration {
bool AutoDetectChangesEnabled { get; set; }
// ... any other required members here ...
}
public class DbContextConfigurationAdapter : IDbContextConfiguration {
DbContextConfiguration config;
public DbContextConfigurationAdapter(DbContextConfiguration config) {
this.config = config;
}
public bool AutoDetectChangesEnabled {
get { return config.AutoDetectChangesEnabled; }
set { config.AutoDetectChangesEnabled = value; }
}
}
Then update IInventoryDatabase to use the more testable IDbContextConfiguration type. My reservation about this approach is that it can end up requiring a lot of work for something that should be fairly simple. The approach can be very useful where we have behaviours that make sense grouped under a logical interface, but for working with an AutoDetectChangesEnabled property it seems like unnecessary work.
The other option is to test this at a different level. I think the friction in testing the current code is that we are trying to substitute for details of Entity Framework, rather than interfaces we've created for partitioning the logical details of our app. Search for "don't mock types you don't own" for more information on why this can be a problem (I've written about it before here).
One example of testing at a different level is to switch to an in-memory database for testing this part of the code. This will tell you much more valuable information: given a known state of the test database, you are demonstrating the queries return the expected information. This is in contrast to a test showing we are calling Entity Framework in the way we think is required.
To combine this approach with mocking (not necessarily required!), we can create a higher level interface and substitute for that for testing our application code, then make an implementation of that interface and test that using the in-memory database. We have then divided the application into two parts that we can test independently: first that our app uses data from the data access interface correctly, and secondly that our implementation of that interface works as expected.
So that would give us something like this:
public interface IAppDatabase {
// These members just for example. Maybe instead of something general like
// `GetAllNames()` we have operations specific to app operations such as
// `UpdateAddress(Guid id, Address newAddress)`, `GetNameFor(SomeParams p)` etc.
Task<List<Name>> GetAllNames();
Task<Address> LookupAddress(Guid id);
}
public class AppDatabase : IAppDatabase {
// ...
public AppDatabase(IInventoryDatabase db) { ... }
public Task<List<Name>> GetAllNames() {
// use `db` and Entity Framework to retrieve data...
}
// ...
}
The AppDatabase class we test with an in-memory database. The rest of the app we test with respect to a substitute IAppDatabase.
Note that we can skip the mocking step here by using the in-memory database for all relevant tests. Using mocking may be easier than setting up all the required data in the database, or may make tests run faster. Or maybe not -- I suggest considering both options.
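To make the first half of that split concrete, here is a minimal NUnit-style sketch of application code tested against a substituted IAppDatabase. AddressLabelPrinter, its PrintLabelFor method and the Address.City property are made-up names standing in for whatever consumer of the interface you are actually testing:

[Test]
public async Task Prints_label_using_address_from_the_data_access_interface()
{
    // Substitute our own app-level interface instead of Entity Framework types
    var db = Substitute.For<IAppDatabase>();
    var id = Guid.NewGuid();
    db.LookupAddress(id).Returns(Task.FromResult(new Address { City = "Oslo" }));

    var printer = new AddressLabelPrinter(db);   // hypothetical consumer of IAppDatabase

    string label = await printer.PrintLabelFor(id);

    Assert.That(label, Does.Contain("Oslo"));
}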
Hope this helps.
I recently started reading about rich domain model instead of anemic models. All the projects I worked on before, we followed service pattern. In my new new project I'm trying to implement rich domain model. One of the issues I'm running into is trying to decide where the behavior goes in (in which class). Consider this example -
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
}
public class Item
{
int OrderID;
int ItemID;
string ItemName;
}
So in this example, I have the AddItem method in the Item class. Before I add an Item to an Order, I need to make sure a valid order id is passed in, so I do that validation in the AddItem method. Am I on the right track with this? Or do I need to create validation in the Order class that tells whether the OrderID is valid?
Wouldn't the Order have the AddItem method? An Item is added to the Order, not the other way around.
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
bool AddItem(Item item)
{
//add item to the list
}
}
In which case, the Order is valid, because it has been created. Of course, the Order doesn't know the Item is valid, so there persists a potential validation issue. So validation could be added in the AddItem method.
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
public bool AddItem(Item item)
{
//if valid
if(IsValid(item))
{
//add item to the list
}
}
public bool IsValid(Item item)
{
//validate
}
}
All of this is in line with the original OOP concept of keeping the data and its behaviors together in a class. However, how is the validation performed? Does it have to make a database call? Check for inventory levels or other things outside the boundary of the class? If so, pretty soon the Order class is bloated with extra code not related to the order, but to check the validity of the Item, call external resources, etc. This is not exactly OOPy, and definitely not SOLID.
In the end, it depends. Are the behaviors' needs contained within the class? How complex are the behaviors? Can they be used elsewhere? Are they only needed in a limited part of the object's life-cycle? Can they be tested? In some cases it makes more sense to extract the behaviors into classes that are more focused.
So, build out the richer classes, make them work, and write the appropriate tests. Then see how they look and smell, and decide whether they meet your objectives, can be extended and maintained, or need to be refactored.
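As an illustration of that last point, the item validation could be pulled out into its own focused class; the IItemValidator/ItemValidator names below are made up for the example:

// Hypothetical extraction: the validation rules (and any external calls they need,
// such as inventory or credit checks) live in one small class that can be tested on its own.
public interface IItemValidator
{
    bool IsValid(Item item);
}
public class ItemValidator : IItemValidator
{
    public bool IsValid(Item item)
    {
        return item != null && item.ItemID > 0 && item.OrderID > 0;
    }
}
public class Order
{
    private readonly IItemValidator validator;
    private readonly List<Item> orderItems = new List<Item>();

    public Order(IItemValidator validator)
    {
        this.validator = validator;
    }
    public bool AddItem(Item item)
    {
        if (!validator.IsValid(item))
            return false;
        orderItems.Add(item);
        return true;
    }
}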
First of all, every item is responsible for its own state (information). In good OOP design an object can never be put into an invalid state. You should at least try to prevent it.
In order to do that, you cannot have public setters if one or more fields are required in combination.
In your example an Item is invalid if it's missing the orderId or the itemId. Without that information the order cannot be completed.
Thus you should implement that class like this:
public class Item
{
public Item(int orderId, int itemId)
{
if (orderId <= 0) throw new ArgumentException("Order is required");
if (itemId <= 0) throw new ArgumentException("ItemId is required");
OrderId = orderId;
ItemId = itemId;
}
public int OrderID { get; private set; }
public int ItemID { get; private set; }
public string ItemName { get; set; }
}
See what I did there? I ensured that the item is in a valid state from the beginning by forcing and validating the information directly in the constructor.
The ItemName is just a bonus; it's not required for you to be able to process an order.
If the property setters are public, it's easy to forget to specify both required fields, leading to bugs later when that information is processed. By forcing the information to be included and validating it, you catch bugs much earlier.
Order
The order object must ensure that its entire structure is valid. Thus it needs control over the information it carries, which also includes the order items.
If you have something like this:
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
}
You are basically saying: I have order items, but I do not really care how many there are or what they contain. That is an invitation to bugs later on in the development process.
Even if you say something like this:
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
public void AddItem(Item item);
public void ValidateItem(Item item);
}
You are communicating something like: please be nice, validate the item first and then add it through the Add method. However, if you have an order with id 1, someone could still do order.AddItem(new Item { OrderId = 2, ItemId = 1 }) or order.OrderItems.Add(new Item { OrderId = 2, ItemId = 1 }), thus making the order contain invalid information.
IMHO a ValidateItem method doesn't belong in Order but in Item, as it is the item's own responsibility to be in a valid state.
A better design would be:
public class Order
{
private List<Item> _items = new List<Item>();
public Order(int orderId)
{
if (orderId <= 0) throw new ArgumentException("OrderId must be specified");
OrderId = orderId;
}
public int OrderId { get; private set; }
public string OrderName { get; set; }
public IReadOnlyList<Item> OrderItems { get { return _items; } }
public void Add(Item item)
{
if (item == null) throw new ArgumentNullException("item");
//make sure that the item is for us
if (item.OrderId != OrderId) throw new InvalidOperationException("Item belongs to another order");
_items.Add(item);
}
}
Now you have gotten control over the entire order, if changes should be made to the item list, it has to be done directly in the order object.
However, an item can still be modified without the order knowing it. Someone could for instance call order.OrderItems.First(x => x.ItemId == 3).ApplyDiscount(10.0m), which would be fatal if the order had a cached Total field.
However, good design is not always about doing everything 100% properly; it is a tradeoff between code that we can work with and code that does everything right according to principles and patterns.
I would agree with the first part of dbugger's solution, but not with the part where the validation takes place.
You might ask: "Why not dbugger's code? It's simpler and has less methods to implement!"
Well, the reason is that the resulting code would be somewhat confusing.
Just imagine someone were to use dbugger's implementation.
He could possibly write code like this:
[...]
Order myOrder = ...;
Item myItem = ...;
[...]
bool isValid = myOrder.IsValid(myItem);
[...]
Someone who doesn't know the implementation details of dbugger's IsValid method would simply not understand what this code is supposed to do.
Worse than that, he or she might also guess that this is a comparison between an order and an item.
That is because this method has weak cohesion and violates the single responsibility principle.
Both classes should only be responsible for validating themselves.
If the validation also includes the validation of a referenced class (like item in Order), then the item could be asked if it is valid for a specific order:
public class Item
{
public int ItemID { get; set; }
public string ItemName { get; set; }
public bool IsValidForOrder(Order order)
{
// order-item validation code
}
}
If you want to use this approach, you might want to take care that you don't call a method that triggers an item validation from within the item validation method. The result would be an infinite loop.
[Update]
Now Trailmax has stated that accessing a DB from within the validation code of the application domain would be problematic, and that he uses a dedicated ItemOrderValidator class to do the validation.
I totally agree with that.
In my opinion you should never access the DB from within the application domain model.
I know there are some patterns, like Active Record, that promote such behaviour, but I find the resulting code always a tiny bit unclean.
So the core question is: how to integrate an external dependency in your rich domain model.
From my point of view there are just two valid solutions to this.
1) Don't. Just make it procedural. Write a service that lives on top of an anemic model. (I guess that is Trailmax's solution)
or
2) Include the (formerly) external information and logic in your domain model. The result will be a rich domain model.
Just like Yoda said: Do or do not. There is no try.
But the initial question was how to design a rich domain model instead of an anemic domain model.
Not how to design an anemic domain model instead of a rich domain model.
The resulting classes would look like this:
public class Item
{
public int ItemID { get; set; }
public int StockAmount { get; set; }
public string ItemName { get; set; }
public void Validate(bool validateStocks)
{
if (validateStocks && this.StockAmount <= 0) throw new Exception ("Out of stock");
// additional item validation code
}
}
public class Order
{
public int OrderID { get; set; }
public string OrderName { get; set; }
public List<Item> OrderItems { get; set; }
public void Validate(bool validateStocks)
{
if(!this.OrderItems.Any()) throw new Exception("Empty order.");
this.OrderItems.ForEach(item => item.Validate(validateStocks));
}
}
Before you ask: you will still need a (procedural) service method to load the data (order with items) from the DB and trigger the validation (of the loaded order-object).
But the difference to an anemic domain model is that this service does NOT contain the validation logic itself.
The domain logic is within the domain model, not within the service/manager/validator or whatever name you call your service classes.
Using a rich domain model means that the services just orchestrate different external dependencies, but they don't include domain logic.
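For illustration, such an orchestrating service might look like this; IOrderRepository and its LoadWithItems method are hypothetical names, not part of the original code:

public class OrderValidationService
{
    private readonly IOrderRepository _orders;

    public OrderValidationService(IOrderRepository orders)
    {
        _orders = orders;
    }

    public void ValidateOrder(int orderId, bool validateStocks)
    {
        // Load the aggregate (the order plus its items) from persistence...
        Order order = _orders.LoadWithItems(orderId);
        // ...and let the domain model itself enforce the domain rules
        order.Validate(validateStocks);
    }
}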
So what if you want to update your domain-data at a specific point within your domain logic, e.g. immediately after the "IsValidForOrder" method is called?
Well, that would be a problem.
If you really have such a transaction-oriented demand I would recommend not to use a rich domain model.
[Update: DB-related ID checks removed - persistence checks should be in a service]
[Update: Added conditional item stock checks, code cleanup]
If you go with a rich domain model, implement the AddItem method inside Order. But SOLID principles don't want you to put validation and all the other concerns inside this method.
Imagine you have an AddItem() method in Order that validates the item and recalculates the total order sum including taxes. Your next change is that validation depends on country, selected language and selected currency. The change after that is that taxes depend on country too. Further requirements might be translation checks, discounts, etc. Your code will become very complex and difficult to maintain. So I think it is better to have something like this inside AddItem:
public void AddItem(IOrderItemContext orderItemContext) {
var orderItem = _orderItemBuilder.BuildItem(_orderContext, orderItemContext);
_orderItems.Add(orderItem);
}
Now you can test item creation and adding an item to the order separately. Your IOrderItemBuilder.BuildItem() method can look like this for some country:
public IOrderItem BuildItem(IOrderContext orderContext, IOrderItemContext orderItemContext) {
var orderItem = Build(orderItemContext);
_orderItemVerifier.Verify(orderItem, orderContext);
totalTax = _orderTaxCalculator.Calculate(orderItem, orderContext);
...
return orderItem;
}
So you can test and use the code for each responsibility and each country separately. It is easy to mock each component, as well as change them at runtime depending on the user's choice.
To model a composite transaction, use two classes: a Transaction (Order) and a LineItem (OrderLineItem) class. Each LineItem is then associated with a particular Product.
When it comes to behavior adopt the following rule:
"An action on an object in the real world, becomes a service (method) of that object in an Object Oriented approach."
We are using Machine.Specifications (MSpec) as our test framework on my current project. This works well for most of what we are testing. However, we have a number of view models with 'formatted' properties that take some raw data, apply some logic, and return a formatted version of that data.
Since there is logic involved in the formatting (null checks, special cases for zero, etc.), I want to test a number of possible data values, including boundary conditions. To me, this doesn't feel like the right use case for MSpec, and I feel we should drop down into something like NUnit, where I can write a data-driven test using something like the [TestCase] attribute.
Is there a clean, simple way to write this kind of test in MSpec, or am I right in my feeling that we should be using a different tool for this kind of test?
View Model
public class DwellingInformation
{
public DateTime? PurchaseDate { get; set; }
public string PurchaseDateFormatted
{
get
{
if (PurchaseDate == null)
return "N/A";
return PurchaseDate.Value.ToShortDateString();
}
}
public int? ReplacementCost { get; set; }
public string ReplacementCostFormatted
{
get
{
if (ReplacementCost == null)
return "N/A";
if (ReplacementCost == 0)
return "Not Set";
return ReplacementCost.Value.ToString("C0");
}
}
// ... and so on...
}
MSpec Tests
public class When_ReplacementCost_is_null
{
private static DwellingInformation information;
Establish context = () =>
{
information = new DwellingInformation { ReplacementCost = null };
};
It ReplacementCostFormatted_should_be_Not_Available = () => information.ReplacementCostFormatted.ShouldEqual("N/A");
}
public class When_ReplacementCost_is_zero
{
private static DwellingInformation information;
Establish context = () =>
{
information = new DwellingInformation { ReplacementCost = 0 };
};
It ReplacementCostFormatted_should_be_Not_Set = () => information.ReplacementCostFormatted.ShouldEqual("Not Set");
}
public class When_ReplacementCost_is_a_non_zero_value
{
private static DwellingInformation information;
Establish context = () =>
{
information = new DwellingInformation { ReplacementCost = 200000 };
};
It ReplacementCostFormatted_should_be_formatted_as_currency = () => information.ReplacementCostFormatted.ShouldEqual("$200,000");
}
NUnit w/TestCase
[TestCase(null, "N/A")]
[TestCase(0, "Not Set")]
[TestCase(200000, "$200,000")]
public void ReplacementCostFormatted_Correctly_Formats_Values(int? inputVal, string expectedVal)
{
var information = new DwellingInformation { ReplacementCost = inputVal };
information.ReplacementCostFormatted.ShouldEqual(expectedVal);
}
Is there a better way to write the MSpec tests that I'm missing because I'm just not familiar enough with MSpec yet, or is MSpec really just the wrong tool for the job in this case?
NOTE: Another Dev on the team feels we should write all of our tests in MSpec because he doesn't want to introduce multiple testing frameworks into the project. While I understand his point, I want to make sure we are using the right tool for the right job, so if MSpec is not the right tool, I'm looking for points I can use to argue the case for introducing another framework.
Short answer: use NUnit or xUnit. Combinatorial testing is not MSpec's sweet spot and likely never will be. I've never minded having multiple test frameworks in a project, especially when a second tool works better for specific scenarios. MSpec works best for behavioural specifications; testing input variants is not one of them.
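If you go the xUnit route, the same data-driven test from the question looks like this (assuming the DwellingInformation class above and an en-US culture for the currency formatting):

public class DwellingInformationTests
{
    [Theory]
    [InlineData(null, "N/A")]
    [InlineData(0, "Not Set")]
    [InlineData(200000, "$200,000")]
    public void ReplacementCostFormatted_Correctly_Formats_Values(int? inputVal, string expectedVal)
    {
        var information = new DwellingInformation { ReplacementCost = inputVal };
        Assert.Equal(expectedVal, information.ReplacementCostFormatted);
    }
}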
I've been trying to wrap my head around unit testing, and I'm trying to deal with unit testing a function whose return value depends on a bunch of parameters. There's a lot of information out there, and it's a bit overwhelming.
Consider the following:
I have a class Article, which has a collection of prices. It has a method GetCurrentPrice which determines the current price based on a few rules:
public class Article
{
public string Id { get; set; }
public string Description { get; set; }
public List<Price> Prices { get; set; }
public Article()
{
Prices = new List<Price>();
}
public Price GetCurrentPrice()
{
if (Prices == null)
return null;
return (
from
price in Prices
where
price.Active &&
DateTime.Now >= price.Start &&
DateTime.Now <= price.End
select price)
.OrderByDescending(p => p.Type)
.FirstOrDefault();
}
}
The PriceType enum and Price class:
public enum PriceType
{
Normal = 0,
Action = 1
}
public class Price
{
public string Id { get; set; }
public string Description { get; set; }
public decimal Amount { get; set; }
public PriceType Type { get; set; }
public DateTime Start { get; set; }
public DateTime End { get; set; }
public bool Active { get; set; }
}
I want to create a unit test for the GetCurrentPrice method. Basically I want to test all combinations of rules that could possibly occur, so I would have to create multiple articles to contain various combinations of prices to get full coverage.
I'm thinking of a unit test such as this (pseudo):
[TestMethod()]
public void GetCurrentPriceTest()
{
var articles = getTestArticles();
foreach (var article in articles)
{
var price = article.GetCurrentPrice();
// somehow compare the gotten price to a predefined value
}
}
I've read that 'multiple asserts are evil', but don't I need them to test all conditions here? Or would I need a separate unit test per condition?
How would I go about providing the unit test with a set of test data? Should I mock a repository? And should that data also include the expected values?
You are not using a repository in this example, so there's no need to mock anything. What you can do is create multiple unit tests for the different possible inputs:
[TestMethod]
public void Foo()
{
// arrange
var article = new Article();
// TODO: go ahead and populate the Prices collection with dummy data
// act
var actual = article.GetCurrentPrice();
// assert
// TODO: assert on the actual price returned by the method
// depending on what you put in the arrange phase you know
}
And so on: you can add other unit tests where you only change the arrange and assert phases for each possible input.
You do not need multiple asserts. You need multiple tests with only a single assert each.
Write a new test for each starting condition with a single assert, e.g.
[Test]
public void GetCurrentPrice_PricesCollection1_ShouldReturnNormalPrice(){...}
[Test]
public void GetCurrentPrice_PricesCollection2_ShouldReturnActionPrice(){...}
Also test the boundary conditions (for example, prices that start or end exactly now, inactive prices, or an empty Prices collection).
For unit tests I use the naming pattern
MethodName_UsedData_ExpectedResult()
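Following that naming pattern, one of those tests might look like this (the dates and amounts are made up for the example):

[Test]
public void GetCurrentPrice_ActiveNormalAndActionPrice_ShouldReturnActionPrice()
{
    // Arrange: two active, currently valid prices; Action should win because of the descending sort on Type
    var article = new Article();
    article.Prices.Add(new Price { Id = "NORMAL", Type = PriceType.Normal, Amount = 10m, Active = true, Start = DateTime.Now.AddDays(-1), End = DateTime.Now.AddDays(1) });
    article.Prices.Add(new Price { Id = "ACTION", Type = PriceType.Action, Amount = 8m, Active = true, Start = DateTime.Now.AddDays(-1), End = DateTime.Now.AddDays(1) });

    // Act
    var current = article.GetCurrentPrice();

    // Assert: a single assert per test, as suggested above
    Assert.AreEqual("ACTION", current.Id);
}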
I think you need data-driven testing. In MSTest there is an attribute called DataSource; using it you can feed a test method multiple test cases. Make sure you don't use multiple asserts. Here is an MSDN link: http://msdn.microsoft.com/en-us/library/ms182527.aspx
Hope this will help you.