AllureFeature duplicates over multiple NUnit test cases - c#

I'd like to use the AllureFeature attribute across different test cases; however, in the report, each feature ends up containing all the tests from every test case.
In the example below, the tests relate to tax rules. We apply tax rules to shipping methods, directly to products, and to surcharges. I'd like the report to show a breakdown of all shipping-method-related tests, all tax-rule tests, and so on.
namespace Tests.Ecommerce
{
    [TestFixture]
    [AllureNUnit]
    [AllureEpic("Ecommerce")]
    [AllureFeature("TaxRule")]
    public class TaxRules : EcommerceBase
    {
        [TestCase("ProductVariant")]
        [TestCase("Shipping"), AllureFeature("Shipping")]
        [TestCase("Surcharge"), AllureFeature("Surcharge")]
        public static void MyRandomTest(string feature)
        {
            // Do stuff
        }
    }
}

After some further research, it doesn't look like it's possible to pass in alternate attributes via test cases.
[TestCase("ProductVariant")]
[TestCase("Shipping"), AllureFeature("Shipping")]
[TestCase("Surcharge"), AllureFeature("Surcharge")]
public static void MyRandomTest(string feature)
{
    // Do stuff
}
This should rather be:
[TestCase("ProductVariant")]
[TestCase("Shipping")]
[TestCase("Surcharge")]
[AllureFeature("Surcharge")]
public static void MyRandomTest(string feature)
{
    // Do stuff
}
Which, unfortunately, doesn't give me the desired outcome. The solution is to add the AllureFeature label programmatically using:
AllureLifecycle.Instance.UpdateTestCase(x => x.labels.Add(Label.Feature(arg)));
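Putting that call inside the test body lets each TestCase attach its own feature label at runtime. A minimal sketch, assuming the Allure.Commons `AllureLifecycle` and `Label` APIs shown above:

```csharp
[TestCase("ProductVariant")]
[TestCase("Shipping")]
[TestCase("Surcharge")]
public static void MyRandomTest(string feature)
{
    // Attach the feature label using the TestCase argument, so each case
    // is grouped under its own feature in the Allure report.
    AllureLifecycle.Instance.UpdateTestCase(
        x => x.labels.Add(Label.Feature(feature)));

    // Do stuff
}
```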

MSTest assertion is WRONG. I know why, but I can't think of how to fix it. Help! What can I do to get my assertion that 3 is the count of users?

I'm new to unit testing and need some help. This example is only for me to learn; I'm not actually going to count the number of users in a static variable when I could just use the Count property on the List. Help me figure out how to get my original assertion that there are 3 users. Here is the code:
Class User
namespace TestStatic
{
    public class User
    {
        public string Name { get; set; }
        public int Dollars { get; set; }
        public static int Num_users { get; set; }

        public User(string name)
        {
            this.Name = name;
            Num_users++;
        }

        public int CalculateInterest(int interestRate)
        {
            return Dollars * interestRate;
        }
    }
}
Test using MSTest
namespace TestStaticUnitTest
{
    [TestClass]
    public class CalcInterest
    {
        [TestMethod]
        public void UserMoney()
        {
            // arrange
            User bob = new User("Bob");
            bob.Dollars = 24;

            // act
            int result = bob.CalculateInterest(6);

            // assert
            Assert.AreEqual(144, result);

            // cleanup?
        }

        [TestMethod]
        public void UserCount()
        {
            // arrange
            List<User> users = new List<User>() { new User("Joe"), new User("Bob"), new User("Greg") };

            // act
            int userCount = User.Num_users;

            // assert
            Assert.AreEqual(3, userCount);
        }
    }
}
The UserCount test fails because a fourth user exists: the user from the UserMoney test is still counted. What should I do to get three users? Should I garbage-collect the first Bob?
Also, I would think that a test that reaches into another test's state wouldn't be a good unit test. I know that's arguable, but I'll take any advice from the community on this code. Thanks for the help.
The obvious solution would be to remove the static counter. As you can see, when you enter the second unit test method, UserCount(), the value of that counter is still 1 from the earlier execution of the first unit test method, UserMoney().
If you want to keep the counter (for learning purposes, to see what's going on), you can use initialization methods that "reset" the environment before each unit test method. In this case you want to reset the counter to 0 before every unit test method executes. You do so by writing a method with the [TestInitialize] attribute:
[TestInitialize]
public void _Initialize()
{
    User.Num_users = 0;
}
That way, each unit test runs with a "clean" state where the counter will be reset to 0 before the actual unit test method is executed.
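Applied to the test class from the question, that looks roughly like this (a sketch; the method name `_Initialize` is arbitrary, only the attribute matters):

```csharp
[TestClass]
public class CalcInterest
{
    [TestInitialize]
    public void _Initialize()
    {
        // Runs before EVERY [TestMethod] in this class,
        // so each test starts with a zeroed counter.
        User.Num_users = 0;
    }

    [TestMethod]
    public void UserCount()
    {
        // arrange
        var users = new List<User>() { new User("Joe"), new User("Bob"), new User("Greg") };

        // assert -- now passes regardless of which tests ran before
        Assert.AreEqual(3, User.Num_users);
    }
}
```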
You might want to look at Why does TestInitialize get fired for every test in my Visual Studio unit tests? to see how these attributes work.

How to test DB.Configuration.AutoDetectChangesEnabled = false

I'm trying to write some tests for a class using NSubstitute.
Class constructor is:
public class ClassToTest : IClassToTest
{
    private IDatabase DB;

    public ClassToTest(IDatabase DB)
    {
        this.DB = DB;
        this.DB.Configuration.AutoDetectChangesEnabled = false;
    }
}
Here is my UnitTests class:
[TestFixture]
public class ClassToTestUnitTests
{
    private ClassToTest _testClass;

    [SetUp]
    public void SetUp()
    {
        var Db = Substitute.For<IDatabase>();
        //Db.Configuration.AutoDetectChangesEnabled = false; <- I've tried to do it like this
        var dummyData = Substitute.For<DbSet<Data>, IQueryable<Data>, IDbAsyncEnumerable<Data>>().SetupData(GetData());
        Db.Data.Returns(dummyData);
        _testClass = new ClassToTest(Db);
    }
}
Whenever I try to test some method, the test fails with a NullReferenceException whose stack trace points to the SetUp method.
When I comment out this.DB.Configuration.AutoDetectChangesEnabled = false; in the ClassToTest constructor, the tests work fine.
Edit:
public interface IInventoryDatabase
{
    DbSet<NamesV> NamesV { get; set; }
    DbSet<Adress> Adresses { get; set; }
    DbSet<RandomData> Randomdata { get; set; }
    // (...more DbSets)

    System.Data.Entity.Database Database { get; }
    DbContextConfiguration Configuration { get; }
    int SaveChanges();
}
The reason for the NullReferenceException is that NSubstitute cannot automatically substitute for DbContextConfiguration (it can only do that automatically for interfaces and purely virtual classes).
Normally we could work around this by manually configuring this property, something like Db.Configuration.Returns(myConfiguration), but in this case DbContextConfiguration does not seem to have a public constructor, so we are unable to create an instance for myConfiguration.
At this stage I can think of two main options: wrap the problematic class in a more testable adapter class; or switch to testing this at a different level. (My preference is the latter which I'll explain below.)
The first option involves something like this:
public interface IDbContextConfiguration {
    bool AutoDetectChangesEnabled { get; set; }
    // ... any other required members here ...
}

public class DbContextConfigurationAdapter : IDbContextConfiguration {
    DbContextConfiguration config;

    public DbContextConfigurationAdapter(DbContextConfiguration config) {
        this.config = config;
    }

    public bool AutoDetectChangesEnabled {
        get { return config.AutoDetectChangesEnabled; }
        set { config.AutoDetectChangesEnabled = value; }
    }
}
Then update IInventoryDatabase to use the more testable IDbContextConfiguration type. My objection to this approach is that it can end up requiring a lot of work for something that should be fairly simple. The approach can be very useful where we have behaviours that make sense grouped under a logical interface, but for a single AutoDetectChangesEnabled property it seems like unnecessary work.
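With that change, the substitute setup becomes straightforward. A sketch, assuming the database interface now exposes an IDbContextConfiguration property as described:

```csharp
// NSubstitute has no trouble substituting for an interface.
var config = Substitute.For<IDbContextConfiguration>();

var db = Substitute.For<IDatabase>();
db.Configuration.Returns(config);

// The constructor's AutoDetectChangesEnabled assignment now just sets a
// property on the interface substitute -- no NullReferenceException.
var testClass = new ClassToTest(db);
```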
The other option is to test this at a different level. I think the friction in testing the current code is that we are trying to substitute for details of Entity Framework, rather than interfaces we've created for partitioning the logical details of our app. Search for "don't mock types you don't own" for more information on why this can be a problem (I've written about it before here).
One example of testing at a different level is to switch to an in-memory database for testing this part of the code. This will tell you much more valuable information: given a known state of the test database, you are demonstrating the queries return the expected information. This is in contrast to a test showing we are calling Entity Framework in the way we think is required.
To combine this approach with mocking (not necessarily required!), we can create a higher level interface and substitute for that for testing our application code, then make an implementation of that interface and test that using the in-memory database. We have then divided the application into two parts that we can test independently: first that our app uses data from the data access interface correctly, and secondly that our implementation of that interface works as expected.
So that would give us something like this:
public interface IAppDatabase {
    // These members just for example. Maybe instead of something general like
    // `GetAllNames()` we have operations specific to app operations such as
    // `UpdateAddress(Guid id, Address newAddress)`, `GetNameFor(SomeParams p)` etc.
    Task<List<Name>> GetAllNames();
    Task<Address> LookupAddress(Guid id);
}

public class AppDatabase : IAppDatabase {
    // ...
    public AppDatabase(IInventoryDatabase db) { ... }

    public Task<List<Name>> GetAllNames() {
        // use `db` and Entity Framework to retrieve data...
    }

    // ...
}
The AppDatabase class we test with an in-memory database. The rest of the app we test with respect to a substitute IAppDatabase.
Note that we can skip the mocking step here by using the in-memory database for all relevant tests. Using mocking may be easier than setting up all the required data in the database, or may make tests run faster. Or maybe not -- I suggest considering both options.
Hope this helps.

Unit Testing with Complex Setup and Logic

I'm trying to become better at unit testing and one of my biggest uncertainties is writing unit tests for methods that require quite a bit of setup code, and I haven't found a good answer. The answers that I find are generally along the lines of "break your tests down into smaller units of work" or "use mocks". I'm trying to follow all of those best practices. However, even with mocking (I'm using Moq) and trying to break down everything into the smallest unit of work, I eventually run into a method that has several inputs, makes calls to several mock services, and requires me to specify return values for those mock method calls.
Here's an example of the code under test:
public class Order
{
    public string CustomerId { get; set; }
    public string OrderNumber { get; set; }
    public List<OrderLine> Lines { get; set; }
    public decimal Value { get { /* return the order's calculated value */ } }

    public Order()
    {
        this.Lines = new List<OrderLine>();
    }
}

public class OrderLine
{
    public string ItemId { get; set; }
    public int QuantityOrdered { get; set; }
    public decimal UnitPrice { get; set; }
}
public class OrderManager
{
    private ICustomerService customerService;
    private IInventoryService inventoryService;

    public OrderManager(ICustomerService customerService, IInventoryService inventoryService)
    {
        // Guard clauses omitted to make example smaller
        this.customerService = customerService;
        this.inventoryService = inventoryService;
    }

    // This is the method being tested.
    // Return false if this order's value is greater than the customer's credit limit.
    // Return false if there is insufficient inventory for any of the items on the order.
    // Return false if any of the items on the order are on hold.
    public bool IsOrderShippable(Order order)
    {
        // Return false if the order's value is greater than the customer's credit limit
        decimal creditLimit = this.customerService.GetCreditLimit(order.CustomerId);
        if (creditLimit < order.Value)
        {
            return false;
        }

        // Return false if there is insufficient inventory for any of this order's items
        foreach (OrderLine orderLine in order.Lines)
        {
            if (orderLine.QuantityOrdered > this.inventoryService.GetInventoryQuantity(orderLine.ItemId))
            {
                return false;
            }
        }

        // Return false if any of the items on this order are on hold
        foreach (OrderLine orderLine in order.Lines)
        {
            if (this.inventoryService.IsItemOnHold(orderLine.ItemId))
            {
                return false;
            }
        }

        // If we are here, then the order is shippable
        return true;
    }
}
Here's a test:
[TestClass]
public class OrderManagerTests
{
    [TestMethod]
    public void IsOrderShippable_OrderIsShippable_ShouldReturnTrue()
    {
        // Setup inventory on-hand quantities for this test
        Mock<IInventoryService> inventoryService = new Mock<IInventoryService>();
        inventoryService.Setup(e => e.GetInventoryQuantity("ITEM-1")).Returns(10);
        inventoryService.Setup(e => e.GetInventoryQuantity("ITEM-2")).Returns(20);
        inventoryService.Setup(e => e.GetInventoryQuantity("ITEM-3")).Returns(30);

        // Configure each item to be not on hold
        inventoryService.Setup(e => e.IsItemOnHold("ITEM-1")).Returns(false);
        inventoryService.Setup(e => e.IsItemOnHold("ITEM-2")).Returns(false);
        inventoryService.Setup(e => e.IsItemOnHold("ITEM-3")).Returns(false);

        // Setup the customer's credit limit
        Mock<ICustomerService> customerService = new Mock<ICustomerService>();
        customerService.Setup(e => e.GetCreditLimit("CUSTOMER-1")).Returns(1000m);

        // Create the order being tested
        Order order = new Order { CustomerId = "CUSTOMER-1" };
        order.Lines.Add(new OrderLine { ItemId = "ITEM-1", QuantityOrdered = 10, UnitPrice = 1.00m });
        order.Lines.Add(new OrderLine { ItemId = "ITEM-2", QuantityOrdered = 20, UnitPrice = 2.00m });
        order.Lines.Add(new OrderLine { ItemId = "ITEM-3", QuantityOrdered = 30, UnitPrice = 3.00m });

        OrderManager orderManager = new OrderManager(
            customerService: customerService.Object,
            inventoryService: inventoryService.Object);

        bool isShippable = orderManager.IsOrderShippable(order);

        Assert.IsTrue(isShippable);
    }
}
This is an abbreviated example. My actual methods that I'm testing are similar in their structure, but they often have a few more service methods that they're calling or they have more setup code for the models (for instance, the Order object requires more properties to be assigned in order for the test to work).
Given that some of my methods have to do several things at once like this example (such as methods that are behind button-click events), is this the best way of dealing with writing unit tests for those methods?
You are already on the right path. At some point, if a method under test is big (not complex), then your unit test is bound to be big (not complex) as well. I tend to differentiate between code which is 'big' and code which is 'complex'. Complex code needs to be simplified; big code is sometimes clearer left simple and big.
In your case, your code is just big, not complex. So it is not a big deal if your unit tests are big as well.
Having said that, here is how we can make them crisper and more readable.
Option #1
The target code under test seems to be:
public bool IsOrderShippable(Order order)
As I can see, there are at least four unit test scenarios straightaway:
// Scenario 1: Return false if the order's value is
// greater than the customer's credit limit
[TestMethod]
public void IsOrderShippable_OrderValueGreaterThanCustomerCreditLimit_ShouldReturnFalse()
{
    // Setup the customer's credit limit
    var customerService = new Mock<ICustomerService>();
    customerService.Setup(e => e.GetCreditLimit(It.IsAny<string>())).Returns(1000m);

    // Create an order whose calculated value exceeds the credit limit
    // (Order.Value is computed from its lines, so it can't be assigned directly)
    var order = new Order();
    order.Lines.Add(new OrderLine { ItemId = "ITEM-1", QuantityOrdered = 1, UnitPrice = 1001m });

    var orderManager = new OrderManager(
        customerService: customerService.Object,
        inventoryService: new Mock<IInventoryService>().Object);

    bool isShippable = orderManager.IsOrderShippable(order);

    Assert.IsFalse(isShippable);
}
As you can see, this test is pretty compact. It doesn't bother to set up the mocks that you don't expect this scenario's code path to hit.
Similarly, you can write compact tests for the other two failure scenarios.
Then, for the last scenario, you have the full happy-path unit test.
The only thing I would do is extract some private helper methods to make the actual unit test crisp and readable, as follows:
[TestMethod]
public void IsOrderShippable_OrderIsShippable_ShouldReturnTrue()
{
    // You can parametrize this helper method as needed
    var inventoryService = GetMockInventoryServiceWithItemsNotOnHold();

    // You can parametrize this helper method with credit limit, etc.
    var customerService = GetMockCustomerService(1000m);

    // Parametrize this method with number of items, total price, etc.
    Order order = GetTestOrderWithItems();

    OrderManager orderManager = new OrderManager(
        customerService: customerService.Object,
        inventoryService: inventoryService.Object);

    bool isShippable = orderManager.IsOrderShippable(order);

    Assert.IsTrue(isShippable);
}
As you can see, by using helper methods you make the test smaller and crisper, though we do lose some readability in terms of what parameters are being set up.
However, I tend to be very explicit about helper method names and parameter names, so that by reading the method name and parameters a reader is clear about what sort of data is being arranged.
Most of the time, the happy-path scenario ends up requiring the most setup code, since it needs all the mocks set up properly with correlated items, quantities, prices, etc. In those cases, I sometimes prefer to put all the setup code in the test-setup method, so that it is available by default to every test method.
The upside is that every test gets a good mock setup out of the box (your happy-path unit test can literally be just two lines, since you can keep a fully valid Order ready in the test-setup method).
The downside is that the happy path is typically one unit test, but putting that setup in the test-setup method runs it for every unit test, even those that never need it.
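For illustration, the helper methods referenced above might look something like this. These are hypothetical sketches using Moq; the names and parameters are simply the ones used in the test:

```csharp
// Inventory service substitute where every item is plentiful and not on hold.
private static Mock<IInventoryService> GetMockInventoryServiceWithItemsNotOnHold()
{
    var inventoryService = new Mock<IInventoryService>();
    inventoryService.Setup(e => e.GetInventoryQuantity(It.IsAny<string>())).Returns(int.MaxValue);
    inventoryService.Setup(e => e.IsItemOnHold(It.IsAny<string>())).Returns(false);
    return inventoryService;
}

// Customer service substitute with a fixed credit limit for any customer.
private static Mock<ICustomerService> GetMockCustomerService(decimal creditLimit)
{
    var customerService = new Mock<ICustomerService>();
    customerService.Setup(e => e.GetCreditLimit(It.IsAny<string>())).Returns(creditLimit);
    return customerService;
}
```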
Option #2
Here is another way: you could break down your IsOrderShippable method into four private methods, one for each scenario, make them internal, and have your unit tests target those methods directly (via InternalsVisibleTo). It is still a bit clunky: you are widening the visibility of private methods, and you still need to unit test the public method, which brings us back to the original problem.

Does MSpec support "row tests" or data-driven tests, like NUnit TestCase?

We are using Machine.Specifications (MSpec) as our test framework on my current project. This works well for most of what we are testing. However, we have a number of view models with 'formatted' properties that take some raw data, apply some logic, and return a formatted version of that data.
Since there is logic involved in the formatting (null checks, special cases for zero, etc.), I want to test a number of possible data values, including boundary conditions. To me, this doesn't feel like the right use case for MSpec, and it seems we should drop down into something like NUnit, where I can write a data-driven test using something like the [TestCase] attribute.
Is there a clean, simple way to write this kind of test in MSpec, or am I right in my feeling that we should be using a different tool for this kind of test?
View Model
public class DwellingInformation
{
    public DateTime? PurchaseDate { get; set; }

    public string PurchaseDateFormatted
    {
        get
        {
            if (PurchaseDate == null)
                return "N/A";
            return PurchaseDate.Value.ToShortDateString();
        }
    }

    public int? ReplacementCost { get; set; }

    public string ReplacementCostFormatted
    {
        get
        {
            if (ReplacementCost == null)
                return "N/A";
            if (ReplacementCost == 0)
                return "Not Set";
            return ReplacementCost.Value.ToString("C0");
        }
    }

    // ... and so on...
}
MSpec Tests
public class When_ReplacementCost_is_null
{
    private static DwellingInformation information;

    Establish context = () =>
    {
        information = new DwellingInformation { ReplacementCost = null };
    };

    It ReplacementCostFormatted_should_be_Not_Available = () => information.ReplacementCostFormatted.ShouldEqual("N/A");
}

public class When_ReplacementCost_is_zero
{
    private static DwellingInformation information;

    Establish context = () =>
    {
        information = new DwellingInformation { ReplacementCost = 0 };
    };

    It ReplacementCostFormatted_should_be_Not_Set = () => information.ReplacementCostFormatted.ShouldEqual("Not Set");
}

public class When_ReplacementCost_is_a_non_zero_value
{
    private static DwellingInformation information;

    Establish context = () =>
    {
        information = new DwellingInformation { ReplacementCost = 200000 };
    };

    It ReplacementCostFormatted_should_be_formatted_as_currency = () => information.ReplacementCostFormatted.ShouldEqual("$200,000");
}
NUnit w/TestCase
[TestCase(null, "N/A")]
[TestCase(0, "Not Set")]
[TestCase(200000, "$200,000")]
public void ReplacementCostFormatted_Correctly_Formats_Values(int? inputVal, string expectedVal)
{
    var information = new DwellingInformation { ReplacementCost = inputVal };
    information.ReplacementCostFormatted.ShouldEqual(expectedVal);
}
Is there a better way to write the MSpec tests that I'm missing because I'm just not familiar enough with MSpec yet, or is MSpec really just the wrong tool for the job in this case?
NOTE: Another Dev on the team feels we should write all of our tests in MSpec because he doesn't want to introduce multiple testing frameworks into the project. While I understand his point, I want to make sure we are using the right tool for the right job, so if MSpec is not the right tool, I'm looking for points I can use to argue the case for introducing another framework.
Short answer: use NUnit or xUnit for this. Combinatorial testing is not the sweet spot of MSpec and likely never will be. I never cared for having multiple test frameworks in my projects either, but a second tool is justified when it works better for specific scenarios. MSpec works best for behavioural specifications; testing input variants is not one.
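For comparison, the same table of cases written as an xUnit theory. This is a sketch: xUnit's [Theory]/[InlineData] attributes are the rough equivalent of NUnit's [TestCase]:

```csharp
[Theory]
[InlineData(null, "N/A")]
[InlineData(0, "Not Set")]
[InlineData(200000, "$200,000")]
public void ReplacementCostFormatted_Correctly_Formats_Values(int? inputVal, string expectedVal)
{
    var information = new DwellingInformation { ReplacementCost = inputVal };

    // xUnit's assertion order is (expected, actual)
    Assert.Equal(expectedVal, information.ReplacementCostFormatted);
}
```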

Resolution for Model View Presenter Testing... Do I use DTO's or Domain objects or both?

The basic issue is how to test a presenter.
Take:
Domain object (will eventually be persisted to DB)
Base attributes are Id (DB ID, Int/GUID/Whatever) and TransientID (Local ID until saved, GUID)
DomainObject
namespace domain {
    public class DomainObject {
        private int _id;
        private Guid _transientId;

        public DomainObject()
        {
            _transientId = Guid.NewGuid();
        }
    }
}
PresenterTest:
var repository = Mock.StrictMock();
var view = Mock.StrictMock();
view.Save += null;
var saveEvent = LastCall.Ignore().GetEventRaiser();
var domainObject = new DomainObject() {Id = 0, Attribute = "Blah"};
Mock.ExpectCall(Repository.Save(domainObject)).Returns(True);
Mock.ReplayAll();
var sut = new Presenter(repository, view);
Save_Event.raise(view, EventArgs.Empty);
Mock.Verify()
So the problem here is that the domain object's identity is calculated from Id and, failing that, from TransientId. There's no way to know what the TransientId will be, so I can't have the mock repository check for equality.
The workarounds so far are:
1) Use LastCall.Ignore and content myself with just testing that the method got called, without testing the content of the call.
2) Write a DTO to test against and save to a service; the service then handles the mapping to the domain.
3) Write a fake test repository that uses custom logic to determine success.
Option 1 doesn't test the majority of the logic, option 2 is a lot of extra code for no good purpose, and option 3 seems potentially brittle.
Right now I'm leaning towards DTO's and a service in the theory that it gives the greatest isolation between tiers but is probably 75% unnecessary...
there's no way to know what the transientID will be so I can't have the mock repository check for equality.
Actually, I think there is an opportunity here.
Instead of calling Guid.NewGuid(), you could create your own GuidFactory class that generates GUIDs. By default, it would use Guid.NewGuid() internally, but you could take control of it for tests.
public static class GuidFactory
{
    static Func<Guid> _strategy = () => Guid.NewGuid();

    public static Guid Build()
    {
        return _strategy();
    }

    public static void SetStrategy(Func<Guid> strategy)
    {
        _strategy = strategy;
    }
}
In your constructor, you replace Guid.NewGuid() with GuidFactory.Build().
In your test setup, you override the strategy to suit your needs -- return a known Guid that you can use elsewhere in your tests or just output the default result to a field.
For example:
public class PseudoTest
{
    IList<Guid> GeneratedGuids = new List<Guid>();

    public void SetUpTest()
    {
        GuidFactory.SetStrategy(() =>
        {
            var result = Guid.NewGuid();
            GeneratedGuids.Add(result);
            return result;
        });
    }

    public void Test()
    {
        systemUnderTest.DoSomething();
        Assert.AreEqual(GeneratedGuids.Last(), someOtherGuid);
    }
}
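Alternatively, instead of recording generated GUIDs, you can pin the factory to a single known value so the identity is fully deterministic. A sketch against the GuidFactory above; remember to restore the default strategy in teardown so other tests get real random GUIDs:

```csharp
// Pin the factory to a known GUID for the duration of one test.
var knownGuid = Guid.Parse("11111111-1111-1111-1111-111111111111");
GuidFactory.SetStrategy(() => knownGuid);

// Any DomainObject constructed now receives knownGuid as its transient ID,
// so the mock repository can check equality against knownGuid.
var domainObject = new DomainObject();

// Restore the default behaviour for subsequent tests.
GuidFactory.SetStrategy(() => Guid.NewGuid());
```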
WPF has helped me realize that you really don't need to do much testing, if any, on the controller/presenter/view model. You should focus your tests on the models and services you use. All business logic should live there; the view model, presenter, or controller should be as light as possible, with its only role being to shuttle data between the model and the view.
What's the point of testing whether you call a service when a button command reaches the presenter? Or testing whether an event is wired properly?
Don't get me wrong, I still keep a very small test fixture for the view models or controllers, but the focus of the tests should be on the models; let integration tests cover the view and the presenter.
Skinny controllers/VMs/presenters.
Fat models.
I'm answering because I ran into the same issue trying to test view models. I wasted a lot of time trying to figure out how best to test them, and another developer gave a great talk on model-view patterns making this exact argument: don't spend too much time writing tests for these; focus on the models and services.
