I am just starting out with unit testing and TDD in general. I have dabbled before, but now I am determined to add it to my workflow and write better software.
I asked a question yesterday that kind of included this, but it seems to be a question on its own. I have sat down to start implementing a service class that I will use to abstract away the business logic from the controllers and map to specific models and data interactions using EF6.
The issue is that I have roadblocked myself already, because I didn't want to abstract EF away in a repository (it will still be available outside the services for specific queries, etc.) and I would like to test my services (the EF context will be used).
Here, I guess, is the question: is there a point to doing this? If so, how are people doing it in the wild, in light of the leaky abstractions caused by IQueryable and the many great posts by Ladislav Mrnka on the subject of unit testing not being straightforward because of the differences in LINQ providers when working with an in-memory implementation as opposed to a specific database?
The code I want to test seems pretty simple. (This is just dummy code to try and understand what I am doing; I want to drive the creation using TDD.)
Context
public interface IContext
{
IDbSet<Product> Products { get; set; }
IDbSet<Category> Categories { get; set; }
int SaveChanges();
}
public class DataContext : DbContext, IContext
{
public IDbSet<Product> Products { get; set; }
public IDbSet<Category> Categories { get; set; }
public DataContext(string connectionString)
: base(connectionString)
{
}
}
Service
public class ProductService : IProductService
{
private IContext _context;
public ProductService(IContext dbContext)
{
_context = dbContext;
}
public IEnumerable<Product> GetAll()
{
var query = from p in _context.Products
select p;
return query;
}
}
Currently I am in the mindset of doing a few things:
Mocking the EF context with something like this approach - Mocking EF When Unit Testing - or directly using a mocking framework like Moq on the interface, accepting that the unit tests may pass but not necessarily work end to end, and backing them up with integration tests?
Maybe using something like Effort to mock EF - I have never used it, and I'm not sure if anyone else is using it in the wild?
Not bothering to test anything that simply calls back to EF - so service methods that call EF directly (GetAll etc.) are not unit tested but just integration tested?
Is anyone out there actually doing this without a repo and having success?
This is a topic I'm very interested in. There are many purists who say that you shouldn't test technologies such as EF and NHibernate. They are right: they're already very stringently tested and, as a previous answer stated, it's often pointless to spend vast amounts of time testing what you don't own.
However, you do own the database underneath! This is where this approach, in my opinion, breaks down: you don't need to test that EF/NH is doing its job correctly, but you do need to test that your mappings/implementations work with your database. In my opinion this is one of the most important parts of a system you can test.
Strictly speaking, however, we're moving out of the domain of unit testing and into integration testing, but the principles remain the same.
The first thing you need to do is to be able to mock your DAL so your BLL can be tested independently of EF and SQL. These are your unit tests. Next you need to design your integration tests to prove your DAL; in my opinion these are every bit as important.
There are a couple of things to consider:
Your database needs to be in a known state with each test. Most systems use either a backup or create scripts for this.
Each test must be repeatable
Each test must be atomic
There are two main approaches to setting up your database. The first is to run a unit-test create-DB script. This ensures that your unit test database will always be in the same state at the beginning of each test (you may either reset it or run each test in a transaction to ensure this).
Your other option is what I do: run specific setups for each individual test. I believe this is the best approach for two main reasons:
Your database is simpler; you don't need an entire schema for each test
Each test is safer; if you change one value in your create script, it doesn't invalidate dozens of other tests
Unfortunately your compromise here is speed. It takes time to run all these tests and all these setup/teardown scripts.
One final point: it can be very hard work to write such a large amount of SQL to test your ORM. This is where I take a very nasty approach (the purists here will disagree with me): I use my ORM to create my test data! Rather than having a separate script for every DAL test in my system, I have a test setup phase which creates the objects, attaches them to the context and saves them. I then run my test.
This is far from the ideal solution, but in practice I find it a LOT easier to manage (especially when you have several thousand tests); otherwise you're creating massive numbers of scripts. Practicality over purity.
I will no doubt look back at this answer in a few years (months/days) and disagree with myself as my approaches have changed - however this is my current approach.
To try and sum up everything I've said above this is my typical DB integration test:
[Test]
public void LoadUser()
{
    this.RunTest(session => // the NH/EF session to attach the objects to
    {
        var user = new UserAccount("Mr", "Joe", "Bloggs");
        session.Save(user);
        return user.UserID;
    }, id => // the ID of the entity we need to load
    {
        var user = LoadMyUser(id); // load the entity
        Assert.AreEqual("Mr", user.Title); // test your properties
        Assert.AreEqual("Joe", user.Firstname);
        Assert.AreEqual("Bloggs", user.Lastname);
    });
}
The key thing to notice here is that the sessions of the two loops are completely independent. In your implementation of RunTest you must ensure that the context is committed and destroyed and your data can only come from your database for the second part.
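For illustration, here is a minimal sketch of what a RunTest helper along these lines might look like. The answer doesn't show its implementation, so the session factory, the NHibernate-style API and the TearDownDatabase helper are my assumptions:

private readonly ISessionFactory _sessionFactory; // assumed NHibernate factory

protected void RunTest<TId>(Func<ISession, TId> arrange, Action<TId> assert)
{
    TId id;
    // first part: create the test data in its own session and really commit it
    using (var session = _sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        id = arrange(session);
        transaction.Commit();
    }
    // second part: no session from the first part is alive any more, so the
    // entity loaded here can only come from the database, never from a cache
    assert(id);
    TearDownDatabase(); // assumed helper that removes the rows created above
}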
Edit 13/10/2014
I did say that I'd probably revise this model over the coming months. While I largely stand by the approach I advocated above, I've updated my testing mechanism slightly. I now tend to create the entities in the TestSetup and clean up in the TestTearDown.
[SetUp]
public void Setup()
{
this.SetupTest(session => // the NH/EF session to attach the objects to
{
var user = new UserAccount("Mr", "Joe", "Bloggs");
session.Save(user);
this.UserID = user.UserID;
});
}
[TearDown]
public void TearDown()
{
this.TearDownDatabase();
}
Then test each property individually
[Test]
public void TestTitle()
{
var user = LoadMyUser(this.UserID); // load the entity
Assert.AreEqual("Mr", user.Title);
}
[Test]
public void TestFirstname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Joe", user.Firstname);
}
[Test]
public void TestLastname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Bloggs", user.Lastname);
}
There are several reasons for this approach:
There are no additional database calls (one setup, one teardown)
The tests are far more granular, each test verifies one property
Setup/TearDown logic is removed from the Test methods themselves
I feel this makes the test class simpler and the tests more granular (single asserts are good)
Edit 5/3/2015
Another revision to this approach. While class-level setups are very helpful for tests such as loading properties, they are less useful where different setups are required. In this case setting up a new class for each case is overkill.
To help with this I now tend to have two base classes, SetupPerTest and SingleSetup, which expose the framework as required.
In SingleSetup we have a mechanism very similar to the one described in my first edit. An example would be:
public class TestProperties : SingleSetup
{
    public int UserID { get; set; }

    public override void DoSetup(ISession session)
    {
        var user = new User("Joe", "Bloggs");
        session.Save(user);
        this.UserID = user.UserID;
    }

    [Test]
    public void TestLastname()
    {
        var user = LoadMyUser(this.UserID); // load the entity
        Assert.AreEqual("Bloggs", user.Lastname);
    }

    [Test]
    public void TestFirstname()
    {
        var user = LoadMyUser(this.UserID);
        Assert.AreEqual("Joe", user.Firstname);
    }
}
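For context, a rough sketch of what the SingleSetup base class might look like (it is not shown in the answer, so NUnit's fixture-level attributes and the OpenSession/TearDownDatabase helpers are my assumptions):

public abstract class SingleSetup
{
    // derived fixtures create their entities here, once per fixture
    public abstract void DoSetup(ISession session);

    protected abstract ISession OpenSession();
    protected abstract void TearDownDatabase();

    [OneTimeSetUp]
    public void SetUpFixture()
    {
        using (var session = OpenSession())
        using (var transaction = session.BeginTransaction())
        {
            DoSetup(session);
            transaction.Commit();
        }
    }

    [OneTimeTearDown]
    public void TearDownFixture()
    {
        TearDownDatabase(); // wipe the data created in DoSetup
    }
}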
However, tests which ensure that only the correct entities are loaded may use a SetupPerTest approach:
public class TestReferences : SetupPerTest
{
    [Test]
    public void EnsureCorrectReferenceIsLoaded()
    {
        int friendID = 0;
        this.RunTest(session =>
        {
            var user = CreateUserWithFriend();
            session.Save(user);
            friendID = user.Friends.Single().FriendID;
        }, () =>
        {
            var user = GetUser();
            Assert.AreEqual(friendID, user.Friends.Single().FriendID);
        });
    }

    [Test]
    public void EnsureOnlyCorrectFriendsAreLoaded()
    {
        int userID = 0;
        this.RunTest(session =>
        {
            var user = CreateUserWithFriends(2);
            var user2 = CreateUserWithFriends(5);
            session.Save(user);
            session.Save(user2);
            userID = user.UserID;
        }, () =>
        {
            var user = GetUser(userID);
            Assert.AreEqual(2, user.Friends.Count());
        });
    }
}
In summary both approaches work depending on what you are trying to test.
Effort Experience Feedback here
After a lot of reading, I have been using Effort in my tests: during the tests the context is built by a factory that returns an in-memory version, which lets me test against a blank slate each time. Outside of the tests, the factory is resolved to one that returns the whole context.
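A minimal sketch of that factory arrangement, assuming the IContext/DataContext types from the question and Effort's transient connections (the factory names here are mine):

public interface IContextFactory
{
    IContext Create();
}

// resolved in production: the real database
public class SqlContextFactory : IContextFactory
{
    public IContext Create() => new DataContext("name=DefaultConnection");
}

// resolved in tests: a blank in-memory database on every create
public class InMemoryContextFactory : IContextFactory
{
    public IContext Create()
    {
        var connection = Effort.DbConnectionFactory.CreateTransient();
        // assumes DataContext gains an overload taking a DbConnection:
        //   public DataContext(DbConnection c) : base(c, true) { }
        return new DataContext(connection);
    }
}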
However, I have a feeling that testing against a full-featured mock of the database tends to drag the tests down; you realize you have to take care of setting up a whole bunch of dependencies in order to test one part of the system. You also tend to drift towards organizing together tests that may not be related, just because there is only one huge object that handles everything. If you don't pay attention, you may find yourself doing integration testing instead of unit testing.
I would have preferred testing against something more abstract rather than a huge DbContext, but I couldn't find the sweet spot between meaningful tests and bare-bones tests. Chalk it up to my inexperience.
So I find Effort interesting; if you need to hit the ground running, it is a good tool to quickly get started and get results. However, I think that something a bit more elegant and abstract should be the next step, and that is what I am going to investigate next. Favoriting this post to see where it goes next :)
Edit to add: Effort does take some time to warm up, so you're looking at approx. 5 seconds at test startup. This may be a problem for you if you need your test suite to be very fast.
Edited for clarification:
I used Effort to test a web service app. Each message M that enters is routed to an IHandlerOf<M> via Windsor. Castle.Windsor resolves the IHandlerOf<M>, which resolves the dependencies of the component. One of these dependencies is the DataContextFactory, which lets the handler ask for the factory.
In my tests, I instantiate the IHandlerOf component directly, mock all the sub-components of the SUT and hand the Effort-wrapped DataContextFactory to the handler.
It means that I don't unit test in a strict sense, since the DB is hit by my tests. However, as I said above, it let me hit the ground running and I could quickly test some points in the application.
If you want to unit test code then you need to isolate the code you want to test (in this case your service) from external resources (e.g. databases). You could probably do this with some sort of in-memory EF provider; however, a much more common way is to abstract away your EF implementation, e.g. with some sort of repository pattern. Without this isolation, any tests you write will be integration tests, not unit tests.
As for testing EF code - I write automated integration tests for my repositories that write various rows to the database during their initialization, and then call my repository implementations to make sure that they behave as expected (e.g. making sure that results are filtered correctly, or that they are sorted in the correct order).
These are integration tests, not unit tests, as the tests rely on a database connection being present and on the target database already having the latest, up-to-date schema installed.
I fumbled around for some time before reaching these considerations:
1- If my application accesses the database, why shouldn't the tests? What if there is something wrong with data access? The tests must discover it beforehand and alert me to the problem.
2- The repository pattern is somewhat hard and time consuming.
So I came up with this approach, which I don't think is the best, but it fulfilled my expectations:
Use TransactionScope in the tests methods to avoid changes in the database.
To do it, it's necessary to:
1- Install Entity Framework into the test project.
2- Put the connection string into the app.config file of the test project.
3- Reference the System.Transactions dll in the test project.
The only side effect is that the identity seed will increment when trying to insert, even when the transaction is aborted. But since the tests are run against a development database, this should be no problem.
Sample code:
[TestClass]
public class NameValueTest
{
[TestMethod]
public void Edit()
{
NameValueController controller = new NameValueController();
using(var ts = new TransactionScope()) {
Assert.IsNotNull(controller.Edit(new Models.NameValue()
{
NameValueId = 1,
name1 = "1",
name2 = "2",
name3 = "3",
name4 = "4"
}));
//no complete, automatically abort
//ts.Complete();
}
}
[TestMethod]
public void Create()
{
NameValueController controller = new NameValueController();
using (var ts = new TransactionScope())
{
Assert.IsNotNull(controller.Create(new Models.NameValue()
{
name1 = "1",
name2 = "2",
name3 = "3",
name4 = "4"
}));
//no complete, automatically abort
//ts.Complete();
}
}
}
I would not unit test code I don't own. What are you testing here - that the MSFT compiler works?
That said, to make this code testable, you almost HAVE to make your data access layer separate from your business logic code. What I do is take all of my EF stuff and put it in one (or multiple) DAO or DAL classes, each of which has a corresponding interface. Then I write my service, which has the DAO or DAL object injected as a dependency (constructor injection preferably), referenced by its interface. Now the part that needs to be tested (your code) can easily be tested by mocking out the DAO interface and injecting that into your service instance inside your unit test.
//this is testable just inject a mock of IProductDAO during unit testing
public class ProductService : IProductService
{
private IProductDAO _productDAO;
public ProductService(IProductDAO productDAO)
{
_productDAO = productDAO;
}
public List<Product> GetAllProducts()
{
return _productDAO.GetAll();
}
...
}
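To make that concrete, here is a sketch of the matching DAO interface and a unit test using Moq; both are my additions, not the answer's code:

public interface IProductDAO
{
    List<Product> GetAll();
}

[TestClass]
public class ProductServiceTests
{
    [TestMethod]
    public void GetAllProducts_ReturnsWhateverTheDAOProvides()
    {
        // arrange: mock the DAO, so no EF and no database are involved
        var dao = new Mock<IProductDAO>();
        dao.Setup(d => d.GetAll()).Returns(new List<Product> { new Product() });
        var service = new ProductService(dao.Object);

        // act
        var products = service.GetAllProducts();

        // assert
        Assert.AreEqual(1, products.Count);
    }
}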
I would consider live data access layers to be part of integration testing, not unit testing. I have seen guys run verifications on how many trips to the database Hibernate makes, but they were on a project that involved billions of records in their datastore, and those extra trips really mattered.
So here's the thing: Entity Framework is an implementation, so despite the fact that it abstracts the complexity of database interaction, interacting with it directly is still tight coupling, and that's why it's confusing to test.
Unit testing is about testing the logic of a function and each of its potential outcomes in isolation from any external dependencies, which in this case is the data store. In order to do that, you need to be able to control the behavior of the data store. For example, if you want to assert that your function returns false if the fetched user doesn't meet some set of criteria, then your [mocked] data store should be configured to always return a user that fails to meet the criteria, and vice versa for the opposite assertion.
With that said, and accepting the fact that EF is an implementation, I would likely favor the idea of abstracting a repository. Seems a bit redundant? It's not, because you are solving a real problem: isolating your code from the data implementation.
In DDD, the repositories only ever return aggregate roots, not DAOs. That way, the consumer of the repository never has to know about the data implementation (as it shouldn't), and we can use that as an example of how to solve this problem. In this case, the object that is generated by EF is a DAO and, as such, should be hidden from your application. This is another benefit of the repository that you define: you can declare a business object as its return type instead of the EF object. Now what the repo does is hide the calls to EF and map the EF response to the business object defined in the repo's signature. Now you can use that repo in place of the DbContext dependency that you inject into your classes, and consequently you can mock that interface to give you the control that you need in order to test your code in isolation.
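As a rough sketch of such a repository, reusing the IContext from the question (ProductModel and the Product properties are my illustrative assumptions):

// the business object the application sees; the EF entity never escapes
public class ProductModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    IReadOnlyList<ProductModel> GetAll();
}

public class EfProductRepository : IProductRepository
{
    private readonly IContext _context;

    public EfProductRepository(IContext context)
    {
        _context = context;
    }

    public IReadOnlyList<ProductModel> GetAll()
    {
        // hide the EF call and map the EF entity to the business object
        return _context.Products
            .Select(p => new ProductModel { Id = p.Id, Name = p.Name })
            .ToList();
    }
}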
It's a bit more work and many thumb their noses at it, but it solves a real problem. There's an in-memory provider that was mentioned in a different answer that could be an option (I have not tried it), and its very existence is evidence of the need for the practice.
I completely disagree with the top answer, because it sidesteps the real issue, which is isolating your code, and then goes off on a tangent about testing your mapping. By all means test your mapping if you want to, but address the actual issue here and get some real code coverage.
In short, I would say no: the juice is not worth the squeeze to test a service method with a single line that retrieves model data. In my experience, people who are new to TDD want to test absolutely everything. The old chestnut of abstracting a facade to a third-party framework just so you can create a mock of that framework's API, which you bastardise/extend so that you can inject dummy data, is of little value in my mind. Everyone has a different view of how much unit testing is best. I tend to be more pragmatic these days and ask myself if my test is really adding value to the end product, and at what cost.
I want to share an approach that was commented on and briefly discussed in other answers, but show an actual example that I am currently using to help unit test EF-based services.
First, I would love to use the in-memory provider from EF Core, but this is about EF 6. Furthermore, for other storage systems like RavenDB, I'd also be a proponent of testing via the in-memory database provider. Again--this is specifically to help test EF-based code without a lot of ceremony.
Here are the goals I had when coming up with a pattern:
It must be simple for other developers on the team to understand
It must isolate the EF code at the barest possible level
It must not involve creating weird multi-responsibility interfaces (such as a "generic" or "typical" repository pattern)
It must be easy to configure and setup in a unit test
I agree with previous statements that EF is still an implementation detail and it's okay to feel like you need to abstract it in order to do a "pure" unit test. I also agree that ideally, I would want to ensure the EF code itself works--but this involves a sandbox database, in-memory provider, etc. My approach solves both problems--you can safely unit test EF-dependent code and create integration tests to test your EF code specifically.
The way I achieved this was through simply encapsulating EF code into dedicated Query and Command classes. The idea is simple: just wrap any EF code in a class and depend on an interface in the classes that would've originally used it. The main issue I needed to solve was to avoid adding numerous dependencies to classes and setting up a lot of code in my tests.
This is where a useful, simple library comes in: MediatR. It allows for simple in-process messaging, and it does it by decoupling "requests" from the handlers that implement the code. This has an added benefit of decoupling the "what" from the "how". For example, by encapsulating the EF code into small chunks, it allows you to replace the implementations with another provider or a totally different mechanism, because all you are doing is sending a request to perform an action.
Utilizing dependency injection (with or without a framework--your preference), we can easily mock the mediator and control the request/response mechanisms to enable unit testing EF code.
First, let's say we have a service that has business logic we need to test:
public class FeatureService {
private readonly IMediator _mediator;
public FeatureService(IMediator mediator) {
_mediator = mediator;
}
public async Task ComplexBusinessLogic() {
// retrieve relevant objects
var results = await _mediator.Send(new GetRelevantDbObjectsQuery());
// normally, this would have looked like...
// var results = _myDbContext.DbObjects.Where(x => foo).ToList();
// perform business logic
// ...
}
}
Do you start to see the benefit of this approach? Not only are you explicitly encapsulating all EF-related code into descriptive classes, you are allowing extensibility by removing the implementation concern of "how" this request is handled--this class doesn't care if the relevant objects come from EF, MongoDB, or a text file.
Now for the request and handler, via MediatR:
public class GetRelevantDbObjectsQuery : IRequest<DbObject[]> {
// no input needed for this particular request,
// but you would simply add plain properties here if needed
}
public class GetRelevantDbObjectsEFQueryHandler : IRequestHandler<GetRelevantDbObjectsQuery, DbObject[]> {
private readonly IDbContext _db;
public GetRelevantDbObjectsEFQueryHandler(IDbContext db) {
_db = db;
}
public DbObject[] Handle(GetRelevantDbObjectsQuery message) {
return _db.DbObjects.Where(foo => bar).ToArray(); // ToArray, to match the DbObject[] response type
}
}
As you can see, the abstraction is simple and encapsulated. It's also absolutely testable because in an integration test, you could test this class individually--there are no business concerns mixed in here.
So what does a unit test of our feature service look like? It's way simple. In this case, I'm using Moq to do mocking (use whatever makes you happy):
[TestClass]
public class FeatureServiceTests {
// mock of Mediator to handle request/responses
private Mock<IMediator> _mediator;
// subject under test
private FeatureService _sut;
[TestInitialize]
public void Setup() {
// set up Mediator mock
_mediator = new Mock<IMediator>(MockBehavior.Strict);
// inject mock as dependency
_sut = new FeatureService(_mediator.Object);
}
[TestCleanup]
public void Teardown() {
// ensure we have called or expected all calls to Mediator
_mediator.VerifyAll();
}
[TestMethod]
public void ComplexBusinessLogic_Does_What_I_Expect() {
var dbObjects = new List<DbObject>() {
// set up any test objects
new DbObject() { }
};
// arrange
// setup Mediator to return our fake objects when it receives a message to perform our query
// in practice, I find it better to create an extension method that encapsulates this setup here
_mediator.Setup(x => x.Send(It.IsAny<GetRelevantDbObjectsQuery>(), default(CancellationToken))).ReturnsAsync(dbObjects.ToArray()).Callback(
(GetRelevantDbObjectsQuery message, CancellationToken token) => {
// using Moq Callback functionality, you can make assertions
// on expected request being passed in
Assert.IsNotNull(message);
});
// act
_sut.ComplexBusinessLogic().GetAwaiter().GetResult(); // block on the async call in this synchronous test
// assertions
}
}
You can see all we need is a single setup, and we don't even need to configure anything extra - it's a very simple unit test. Let's be clear: this is totally possible to do without something like MediatR (you would simply implement an interface and mock it for tests, e.g. IGetRelevantDbObjectsQuery), but in practice for a large codebase with many features and queries/commands, I love the encapsulation and innate DI support MediatR offers.
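For completeness, a sketch of that no-MediatR variant; the interface name comes from the sentence above, and the rest is my illustration, reusing the answer's placeholder filter:

public interface IGetRelevantDbObjectsQuery
{
    Task<DbObject[]> ExecuteAsync();
}

public class GetRelevantDbObjectsEFQuery : IGetRelevantDbObjectsQuery
{
    private readonly IDbContext _db;

    public GetRelevantDbObjectsEFQuery(IDbContext db)
    {
        _db = db;
    }

    public Task<DbObject[]> ExecuteAsync()
    {
        // Where(foo => bar) stands in for the real filter, as in the handler above
        return Task.FromResult(_db.DbObjects.Where(foo => bar).ToArray());
    }
}

// FeatureService would then take IGetRelevantDbObjectsQuery instead of IMediator,
// and a test would mock this one interface directly.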
If you're wondering how I organize these classes, it's pretty simple:
- MyProject
- Features
- MyFeature
- Queries
- Commands
- Services
- DependencyConfig.cs (Ninject feature modules)
Organizing by feature slices is beside the point, but this keeps all relevant/dependent code together and easily discoverable. Most importantly, I separate the Queries vs. Commands--following the Command/Query Separation principle.
This meets all my criteria: it's low-ceremony, it's easy to understand, and there are extra hidden benefits. For example, how do you handle saving changes? Now you can simplify your Db Context by using a role interface (IUnitOfWork.SaveChangesAsync()) and mock calls to the single role interface or you could encapsulate committing/rolling back inside your RequestHandlers--however you prefer to do it is up to you, as long as it's maintainable. For example, I was tempted to create a single generic request/handler where you'd just pass an EF object and it would save/update/remove it--but you have to ask what your intention is and remember that if you wanted to swap out the handler with another storage provider/implementation, you should probably create explicit commands/queries that represent what you intend to do. More often than not, a single service or feature will need something specific--don't create generic stuff before you have a need for it.
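To illustrate the role-interface idea, a minimal sketch (IUnitOfWork.SaveChangesAsync is named above; the wiring is my assumption, and MyDbContext is a stand-in name):

public interface IUnitOfWork
{
    Task<int> SaveChangesAsync();
}

// EF6's DbContext already exposes a matching SaveChangesAsync, so a concrete
// context only needs to declare the role interface
public class MyDbContext : DbContext, IUnitOfWork
{
}

// in a unit test the role interface is then trivial to mock:
//   var uow = new Mock<IUnitOfWork>();
//   uow.Setup(u => u.SaveChangesAsync()).ReturnsAsync(1);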
There are of course caveats to this pattern--you can go too far with a simple pub/sub mechanism. I've limited my implementation to only abstracting EF-related code, but adventurous developers could start using MediatR to go overboard and message-ize everything--something good code review practices and peer reviews should catch. That's a process issue, not an issue with MediatR, so just be cognizant of how you're using this pattern.
You wanted a concrete example of how people are unit testing/mocking EF and this is an approach that's working successfully for us on our project--and the team is super happy with how easy it is to adopt. I hope this helps! As with all things in programming, there are multiple approaches and it all depends on what you want to achieve. I value simplicity, ease of use, maintainability, and discoverability--and this solution meets all those demands.
In order to unit test code that relies on your database, you need to set up a database or mock for each and every test.
Having a database (real or mocked) with a single state for all your tests will bite you quickly; you cannot test that all records are valid and that some aren't from the same data.
Setting up an in-memory database in a OneTimeSetup will run into issues where the old database is not cleared down before the next test starts. This will show up as tests working when you run them individually but failing when you run them all.
A unit test should ideally only set up what affects the test.
I am working on an application that has a lot of tables with a lot of connections and some massive LINQ blocks. These need testing. A simple missed grouping, or a join that results in more than one row, will affect results.
To deal with this I have set up a heavy unit test helper. It is a lot of work to set up, but it enables us to reliably mock the database in any state, and running 48 tests against 55 interconnected tables, with the entire database set up 48 times, takes 4.7 seconds.
Here's how:
In the Db context class, ensure each table property is declared virtual:
public virtual DbSet<Branch> Branches { get; set; }
public virtual DbSet<Warehouse> Warehouses { get; set; }
In a UnitTestHelper class, create a method to set up your database. Each table class is an optional parameter; if not supplied, it will be created through a Make method:
internal static Db Bootstrap(bool onlyMockPassedTables = false, List<Branch> branches = null, List<Product> products = null, List<Warehouse> warehouses = null)
{
if (onlyMockPassedTables == false) {
branches ??= new List<Branch> { MakeBranch() };
warehouses ??= new List<Warehouse> { MakeWarehouse() };
}
For each table class, each object in it is mapped to the other lists
branches?.ForEach(b => {
b.Warehouse = warehouses.FirstOrDefault(w => w.ID == b.WarehouseID);
});
warehouses?.ForEach(w => {
w.Branches = branches.Where(b => b.WarehouseID == w.ID);
});
And add them to the DbContext:
var context = new Db(new DbContextOptionsBuilder<Db>().UseInMemoryDatabase(Guid.NewGuid().ToString()).Options);
context.Branches.AddRange(branches);
context.Warehouses.AddRange(warehouses);
context.SaveChanges();
return context;
}
Define a list of IDs to make it easier to reuse them and to make sure joins are valid:
internal const int BranchID = 1;
internal const int WarehouseID = 2;
Create a Make method for each table to set up the most basic, but connected, version it can be:
internal static Branch MakeBranch(int id = BranchID, string code = "The branch", int warehouseId = WarehouseID) => new Branch { ID = id, Code = code, WarehouseID = warehouseId };
internal static Warehouse MakeWarehouse(int id = WarehouseID, string code = "B", string name = "My Big Warehouse") => new Warehouse { ID = id, Code = code, Name = name };
It's a lot of work, but it only needs doing once, and then your tests can be very focused because the rest of the database will be set up for them.
[Test]
[TestCase(new string [] {"ABC", "DEF"}, "ABC", ExpectedResult = 1)]
[TestCase(new string [] {"ABC", "BCD"}, "BC", ExpectedResult = 2)]
[TestCase(new string [] {"ABC"}, "EF", ExpectedResult = 0)]
[TestCase(new string[] { "ABC", "DEF" }, "abc", ExpectedResult = 1)]
public int Given_SearchingForBranchByName_Then_ReturnCount(string[] codesInDatabase, string searchString)
{
// Arrange
var branches = codesInDatabase.Select(x => UnitTestHelpers.MakeBranch(code: $"qqqq{x}qqq")).ToList();
var db = UnitTestHelpers.Bootstrap(branches: branches);
var service = new BranchService(db);
// Act
var result = service.SearchByName(searchString);
// Assert
return result.Count();
}
There is Effort, which is an in-memory Entity Framework database provider. I've not actually tried it... Ha, I just spotted that this was mentioned in the question!
Alternatively you could switch to EntityFrameworkCore, which has an in-memory database provider built in.
https://blog.goyello.com/2016/07/14/save-time-mocking-use-your-real-entity-framework-dbcontext-in-unit-tests/
https://github.com/tamasflamich/effort
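As a minimal sketch of the EF Core in-memory option (MyContext as used below; the Product entity is a stand-in, and the provider lives in the Microsoft.EntityFrameworkCore.InMemory package):

var options = new DbContextOptionsBuilder<MyContext>()
    .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString()) // unique name per test = blank slate
    .Options;

using (var context = new MyContext(options))
{
    context.Products.Add(new Product { Name = "Widget" });
    context.SaveChanges();
}

using (var context = new MyContext(options))
{
    // a fresh context with the same database name sees the saved data
    Assert.AreEqual(1, context.Products.Count());
}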
I used a factory to get a context, so I can create the context close to its use. This seems to work locally in Visual Studio but not on my TeamCity build server; I'm not sure why yet.
return new MyContext(@"Server=(localdb)\mssqllocaldb;Database=EFProviders.InMemory;Trusted_Connection=True;");
I like to separate my filters from other portions of the code and test those, as I outline on my blog here: http://coding.grax.com/2013/08/testing-custom-linq-filter-operators.html
That being said, the filter logic being tested is not identical to the filter logic executed when the program runs, due to the translation between the LINQ expression and the underlying query language, such as T-SQL. Still, this allows me to validate the logic of the filter. I don't worry too much about the translation issues, such as case sensitivity and null handling, until I test the integration between the layers.
It is important to test what you are expecting Entity Framework to do (i.e. validate your expectations). One way to do this that I have used successfully is using Moq, as shown in this example (too long to copy into this answer):
https://learn.microsoft.com/en-us/ef/ef6/fundamentals/testing/mocking
However, be careful... A SQL context is not guaranteed to return things in a specific order unless you have an appropriate OrderBy in your LINQ query, so it's possible to write things that pass when you test using an in-memory list (LINQ to Objects) but fail in your UAT/live environment when LINQ to Entities is used.
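A tiny illustration of that trap (Blog here is just a stand-in entity):

// may pass against an in-memory List<Blog>, which happens to preserve insertion
// order, but SQL gives no ordering guarantee without an ORDER BY
var risky = context.Blogs.Select(b => b.Name).ToList();

// deterministic against both LINQ to Objects and the real database
var safe = context.Blogs.OrderBy(b => b.Name).Select(b => b.Name).ToList();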
Related
How do I create a unit test that partially makes use of database unit tests but is called from a regular unit test?
Yes, perhaps they may not be unit tests; you may wish to call them integration tests. Regardless of what you would like to label them, they are tests.
Within my NUnit tests, I am using helper constants:
private const string NumericSerial = "123123";
private const string NonNumericSerial = "123123 bad serialnumber";
private const int ValidPatientId = 123;
private const int InvalidPatientId = -1;
private const int PatientIdThatDoesNotExistInppp = 111291;
private const string SerialNumberThatDoesNotExistInshmk = "123123";
private const string SerialNumberThatDoesExistInshmk = "1015873";
private const byte InvalidFirmwareVersion = 0;
private const int FacilityThatDoesntExistInAAA = Int32.MaxValue;
Prior to running any tests that make use of these constants, I would like to verify that these constants are correctly defined.
For example, I can assert that NumericSerial is indeed numeric without doing any mocking or injecting of dependencies - I would simply TryParse it.
However, other constants, such as PatientIdThatDoesNotExistInppp, need to be verified to confirm that the value indeed does not exist in the ppp database. In order to do this, I can follow several routes:
Implement a linq query within entity framework
Explicitly send a select statement to the database
Or I could create a database unit test by first creating the necessary record (in our case, making sure that 111291 does not exist in the database).
Unless you advise strongly against option #3, I am inclined to implement that. How do I create a unit test that partially makes use of database unit tests but is called from a regular unit test?
I am looking for something like the following:
[Test]
public void response_from_database_unit_test_equals_111291()
{
    //call the database unit test
    //retrieve value from database unit test
}
And here is the additional answer you requested based on purely Entity Framework
[TestMethod]
public void GetAllBlogs_Orders_By_Name()
{
var data = new List<Blog>
{
new Blog { Name = "BBB" },
new Blog { Name = "ZZZ" },
new Blog { Name = "AAA" },
}.AsQueryable();
var mockSet = new Mock<DbSet<Blog>>();
mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());
// we tell EF to treat our data list as the Blogs table
var mockContext = new Mock<BloggingContext>();
mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);
// make EF query the blogs table (i.e. our list)
var context = mockContext.Object;
// EF has queried our list here
// you could run Select, OrderBy, Where etc. as well
var blogs = context.Blogs.ToList();
Assert.AreEqual(3, blogs.Count);
Assert.AreEqual("AAA", blogs[0].Name);
Assert.AreEqual("BBB", blogs[1].Name);
Assert.AreEqual("ZZZ", blogs[2].Name);
}
I didn't write this sample; it is straight from https://msdn.microsoft.com/en-us/library/dn314429.aspx (so all credit to MSDN).
The point is that, with 0% involvement of a database, we have let Entity Framework know: "Hey EF! Treat my List as the table."
EF then runs all the actual queries (Where, Select, OrderBy, GroupBy etc.) on your list. And it is trivial to set up this list (including nested links).
Your data layer code will always be covered by this kind of unit testing.
And you can trust EF to make the right SQL calls, when the actual table is involved.
P.S. The only two things I look out for are Include clauses and DbFunctions. These two behave differently between local lists and actual tables. But once you are careful about those two, it is a delightful thing to unit test your data layer to this extent, without worrying about a real database at all.
If you want to do database-related testing, then there are a couple of options that make it easier for you, sketched after the list below (it won't be 100% wrinkle-free, since DB testing is not trivial):
in the [Setup] method of your test class, set up a new TransactionScope and any pre-requisites you need (presence or absence of a record)
in the actual test case, do your verification using Linq2Sql or EF LINQ; it is very productive and avoids nasty SQL/stored procedures
in the [Teardown] method, complete the scope to commit the transaction
on any exception, the transaction will be naturally rolled back
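A skeletal version of that base-class arrangement might look like the following. This is my sketch, not the answer's code; I dispose the scope without completing it, as in the earlier TransactionScope answer, so each test's changes roll back and the database stays clean:

public abstract class DatabaseTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUpScope()
    {
        _scope = new TransactionScope();
        // create or remove whatever records this fixture's tests depend on here
    }

    [TearDown]
    public void TearDownScope()
    {
        // disposing without calling Complete() rolls the transaction back,
        // leaving the database exactly as it was before the test
        _scope.Dispose();
    }
}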
And instead of one unit test calling database unit tests, extract the common code into shared methods and call them from both your database tests and these tests (e.g. GetRecordWithId(5678)).
That way, your tests don't really have a dependency on database unit tests, but share data access code.
And as you mentioned correctly, it doesn't matter what you call these... they may be data tests or integration tests etc. (not necessarily unit tests).
We can do our best to keep the state of the database clean using transaction scope, but ultimately connectivity issues, parallel test execution, test execution on dev servers connecting to a common data server (or local SQL Server prerequisites), test execution on build servers etc. create issues when it comes to database testing, when an actual database is involved.
Many teams employ a strategy of standing up a new database specific to a test run [prefix+timestamp], so that it doesn't collide with other runs, and tearing the DB down at the end. (Worst case, there is a background process which monitors databases with a specific prefix and cleans them up every midnight, based on the timestamp.) As you can see, there is a lot of peripheral work to do for the value of testing a data layer.
First, let's address terminology here. Everything I search for says, "Unit tests don't touch the database!" I don't want a unit test. I want a test that verifies that when I send data to a database, it is correctly saved (and the same for the other CRUD operations). I have a repository layer that essentially accepts a DTO, maps that DTO to an Entity Framework model, then saves that model to the database.
I need to be able to ensure that sending a DTO to these methods is in fact saving to the database.
An example method signature on the repository is:
public bool Save(SomeObjectDTO someObject)
I just need to test against whether or not this method call returns true.
What is the best way to set up tests where my methods being called are ones that save to the database?
Furthermore, is there a standard way to set up a blank testing database? It would be great if when I hit "Run tests" it constructs an empty database, fills it with initial data that is necessary, and then performs all the CRUD operations (all of my repository calls) to see that they all are saving like they should be.
I apologize if this is already answered, but everything I have searched either has someone saying you shouldn't be testing database calls, or people talking about mocking which is not really useful here.
I just want an example and/or the standard practice on how these types of tests should be set up.
What you're looking for is called integration testing, and it is just as important as writing unit tests. There are a lot of potential bugs exposed by your underlying data provider that mocking your repository won't necessarily find (invalid foreign keys, null data for something marked as not null, etc.).
I think it's also important that you test against the same database provider as your production system, otherwise there's a risk of missing implementation specific behavior. I use Azure SQL for a project, and instead of creating an in-memory SQL CE instance, I have a separate database on Azure that's only used for my integration tests.
If you use xUnit (and I'm sure equivalents exist for other testing frameworks), there's a handy attribute, [AutoRollback], that will automatically roll back your transaction after each test runs.
[Fact]
[AutoRollback]
public void AddProductTest_AddsProductAndRetrievesItFromTheDatabase()
{
    // connect to your test db
    YourDbContext dbContext = new YourDbContext("TestConnectionString");
    dbContext.Products.Add(new Product(...));
    dbContext.SaveChanges(); // persist, so the query below reads it back
    // get the recently added product (or whatever your query is)
    var result = dbContext.Products.Single();
    // assert everything saved correctly
    Assert.Equal(...);
}
After the test is finished, your database will be at a blank slate again (or whatever it was before you ran the test).
For testing against a database when using Entity Framework, here is how I roll:
First of all, I define the class that will access the ObjectContext, with a factory for the ObjectContext if needed. In my case I work in an NT service, so the context doesn't live for the duration of a request or some other scope. YMMV, but if you are testing a component you could work in complete isolation without too much hassle, since your factory for the context on the web would certainly fetch the context from the request; just don't initialize/close it in your DAL class.
public class DataAccessClass : IWorkOnStuff
{
    public Func<DataEntities> DataAccessFactory { get; internal set; }
    private string ConnectionString;

    public DataAccessClass(string connectionString)
    {
        ConnectionString = connectionString;
        DataAccessFactory = () => { return new DataEntities(ConnectionString); };
    }

    /* interface methods */
    public IEnumerable<Stuff> GetTheStuff(SomeParameters parameters)
    {
        using (var context = DataAccessFactory())
        {
            // materialize before the context is disposed; otherwise the deferred
            // query would fail when the caller enumerates it
            return context.Stuff.Where(stuff => parameters.Match(stuff)).ToList();
        }
    }
}
Now, what's interesting is that when you want to test this, you can use a library called Effort, which lets you run the database in memory. To do it, just create your class, and in the test setup tell Effort to take over from there:
public class TestDataAccessClass
{
    public DataAccessClass Target { get; set; }
    protected int Calls = 0;
    protected int FullAccessCalls = 0; // declared here; the original snippet used it without declaring it
    protected DataEntities DE;

    [SetUp]
    public void before_each_test()
    {
        Target = new DataAccessClass(string.Empty);
        Calls = 0;
        FullAccessCalls = 0;
        var fakeConnection = "metadata=res://*/bla.csdl|res://*/bla.ssdl|res://*/bla.msl;provider=System.Data.SqlClient";
        DE = Effort.ObjectContextFactory.CreateTransient<DataEntities>(fakeConnection);
        Target.DataAccessFactory = () => { Calls++; return DE; };
        SetupSomeTestData(DE);
    }
}
In SetupSomeTestData, just add the entities you want (references, etc.), and now you can call your methods to ensure that your data really does come from the ObjectContext as defined in your setup.
Funnily enough, just as mfanto notes, this is an integration test, not a unit test, but as he says himself:
This does not sound like unit but integration testing for me!
You are right, I use the term "unit testing" in the title because of
SEO reasons :) Also most people don't seem to know about the
differences between them.
I don't know if this is the best way to test against an Entity Framework DAL; it took me some time to arrive at this solution, and I find it is not without merit, but I will be watching this question to see what other solutions are proposed.
I am just starting out with Unit testings and TDD in general. I have dabbled before but now I am determined to add it to my workflow and write better software.
I asked a question yesterday that kind of included this, but it seems to be a question on its own. I have sat down to start implementing a service class that I will use to abstract away the business logic from the controllers and map to specific models and data interactions using EF6.
The issue is I have roadblocked myself already because I didn't want to abstract EF away in a repository (it will still be available outside the services for specific queries, etc) and would like to test my services (EF Context will be used).
Here I guess is the question, is there a point to doing this? If so, how are people doing it in the wild in light of the leaky abstractions caused by IQueryable and the many great posts by Ladislav Mrnka on the subject of unit testing not being straightforward because of the differences in Linq providers when working with an in memory implementation as apposed to a specific database.
The code I want to test seems pretty simple. (this is just dummy code to try and understand what i am doing, I want to drive the creation using TDD)
Context
public interface IContext
{
IDbSet<Product> Products { get; set; }
IDbSet<Category> Categories { get; set; }
int SaveChanges();
}
public class DataContext : DbContext, IContext
{
public IDbSet<Product> Products { get; set; }
public IDbSet<Category> Categories { get; set; }
public DataContext(string connectionString)
: base(connectionString)
{
}
}
Service
public class ProductService : IProductService
{
private IContext _context;
public ProductService(IContext dbContext)
{
_context = dbContext;
}
public IEnumerable<Product> GetAll()
{
var query = from p in _context.Products
select p;
return query;
}
}
Currently I am in the mindset of doing a few things:
Mocking EF Context with something like this approach- Mocking EF When Unit Testing or directly using a mocking framework on the interface like moq - taking the pain that the unit tests may pass but not necessarily work end to end and back them up with Integration tests?
Maybe using something like Effort to mock EF - I have never used it and not sure if anyone else is using it in the wild?
Not bother testing anything that simply calls back to EF - so essentially service methods that call EF directly (getAll etc) are not unit tested but just integration tested?
Anyone out there actually doing this out there without a Repo and having success?
This is a topic I'm very interested in. There are many purists who say that you shouldn't test technologies such as EF and NHibernate. They are right, they're already very stringently tested and as a previous answer stated it's often pointless to spend vast amounts of time testing what you don't own.
However, you do own the database underneath! This is where this approach in my opinion breaks down, you don't need to test that EF/NH are doing their jobs correctly. You need to test that your mappings/implementations are working with your database. In my opinion this is one of the most important parts of a system you can test.
Strictly speaking however we're moving out of the domain of unit testing and into integration testing but the principles remain the same.
The first thing you need to do is to be able to mock your DAL so your BLL can be tested independently of EF and SQL. These are your unit tests. Next you need to design your Integration Tests to prove your DAL, in my opinion these are every bit as important.
There are a couple of things to consider:
Your database needs to be in a known state with each test. Most systems use either a backup or create scripts for this.
Each test must be repeatable
Each test must be atomic
There are two main approaches to setting up your database, the first is to run a UnitTest create DB script. This ensures that your unit test database will always be in the same state at the beginning of each test (you may either reset this or run each test in a transaction to ensure this).
Your other option is what I do, run specific setups for each individual test. I believe this is the best approach for two main reasons:
Your database is simpler, you don't need an entire schema for each test
Each test is safer, if you change one value in your create script it doesn't invalidate dozens of other tests.
Unfortunately your compromise here is speed. It takes time to run all these tests, to run all these setup/tear down scripts.
One final point, it can be very hard work to write such a large amount of SQL to test your ORM. This is where I take a very nasty approach (the purists here will disagree with me). I use my ORM to create my test! Rather than having a separate script for every DAL test in my system I have a test setup phase which creates the objects, attaches them to the context and saves them. I then run my test.
This is far from the ideal solution however in practice I find it's a LOT easier to manage (especially when you have several thousand tests), otherwise you're creating massive numbers of scripts. Practicality over purity.
I will no doubt look back at this answer in a few years (months/days) and disagree with myself as my approaches have changed - however this is my current approach.
To try and sum up everything I've said above this is my typical DB integration test:
[Test]
public void LoadUser()
{
this.RunTest(session => // the NH/EF session to attach the objects to
{
var user = new UserAccount("Mr", "Joe", "Bloggs");
session.Save(user);
return user.UserID;
}, id => // the ID of the entity we need to load
{
var user = LoadMyUser(id); // load the entity
Assert.AreEqual("Mr", user.Title); // test your properties
Assert.AreEqual("Joe", user.Firstname);
Assert.AreEqual("Bloggs", user.Lastname);
}
}
The key thing to notice here is that the sessions of the two loops are completely independent. In your implementation of RunTest you must ensure that the context is committed and destroyed and your data can only come from your database for the second part.
Edit 13/10/2014
I did say that I'd probably revise this model over the upcoming months. While I largely stand by the approach I advocated above I've updated my testing mechanism slightly. I now tend to create the entities in in the TestSetup and TestTearDown.
[SetUp]
public void Setup()
{
this.SetupTest(session => // the NH/EF session to attach the objects to
{
var user = new UserAccount("Mr", "Joe", "Bloggs");
session.Save(user);
this.UserID = user.UserID;
});
}
[TearDown]
public void TearDown()
{
this.TearDownDatabase();
}
Then test each property individually
[Test]
public void TestTitle()
{
var user = LoadMyUser(this.UserID); // load the entity
Assert.AreEqual("Mr", user.Title);
}
[Test]
public void TestFirstname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Joe", user.Firstname);
}
[Test]
public void TestLastname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Bloggs", user.Lastname);
}
There are several reasons for this approach:
There are no additional database calls (one setup, one teardown)
The tests are far more granular, each test verifies one property
Setup/TearDown logic is removed from the Test methods themselves
I feel this makes the test class simpler and the tests more granular (single asserts are good)
Edit 5/3/2015
Another revision on this approach. While class level setups are very helpful for tests such as loading properties they are less useful where the different setups are required. In this case setting up a new class for each case is overkill.
To help with this I now tend to have two base classes SetupPerTest and SingleSetup. These two classes expose the framework as required.
In the SingleSetup we have a very similar mechanism as described in my first edit. An example would be
public TestProperties : SingleSetup
{
public int UserID {get;set;}
public override DoSetup(ISession session)
{
var user = new User("Joe", "Bloggs");
session.Save(user);
this.UserID = user.UserID;
}
[Test]
public void TestLastname()
{
var user = LoadMyUser(this.UserID); // load the entity
Assert.AreEqual("Bloggs", user.Lastname);
}
[Test]
public void TestFirstname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Joe", user.Firstname);
}
}
However references which ensure that only the correct entites are loaded may use a SetupPerTest approach
public TestProperties : SetupPerTest
{
[Test]
public void EnsureCorrectReferenceIsLoaded()
{
int friendID = 0;
this.RunTest(session =>
{
var user = CreateUserWithFriend();
session.Save(user);
friendID = user.Friends.Single().FriendID;
} () =>
{
var user = GetUser();
Assert.AreEqual(friendID, user.Friends.Single().FriendID);
});
}
[Test]
public void EnsureOnlyCorrectFriendsAreLoaded()
{
int userID = 0;
this.RunTest(session =>
{
var user = CreateUserWithFriends(2);
var user2 = CreateUserWithFriends(5);
session.Save(user);
session.Save(user2);
userID = user.UserID;
} () =>
{
var user = GetUser(userID);
Assert.AreEqual(2, user.Friends.Count());
});
}
}
In summary both approaches work depending on what you are trying to test.
Effort Experience Feedback here
After a lot of reading I have been using Effort in my tests: during the tests the Context is built by a factory that returns a in memory version, which lets me test against a blank slate each time. Outside of the tests, the factory is resolved to one that returns the whole Context.
However i have a feeling that testing against a full featured mock of the database tends to drag the tests down; you realize you have to take care of setting up a whole bunch of dependencies in order to test one part of the system. You also tend to drift towards organizing together tests that may not be related, just because there is only one huge object that handles everything. If you don't pay attention, you may find yourself doing integration testing instead of unit testing
I would have prefered testing against something more abstract rather than a huge DBContext but i couldn't find the sweet spot between meaningful tests and bare-bone tests. Chalk it up to my inexperience.
So i find Effort interesting; if you need to hit the ground running it is a good tool to quickly get started and get results. However i think that something a bit more elegant and abstract should be the next step and that is what I am going to investigate next. Favoriting this post to see where it goes next :)
Edit to add: Effort do take some time to warm up, so you're looking at approx. 5 seconds at test start up. This may be a problem for you if you need your test suite to be very efficient.
Edited for clarification:
I used Effort to test a webservice app. Each message M that enters is routed to a IHandlerOf<M> via Windsor. Castle.Windsor resolves the IHandlerOf<M> which resovles the dependencies of the component. One of these dependencies is the DataContextFactory, which lets the handler ask for the factory
In my tests I instantiate the IHandlerOf component directly, mock all the sub-components of the SUT and handles the Effort-wrapped DataContextFactory to the handler.
It means that I don't unit test in a strict sense, since the DB is hit by my tests. However as I said above it let me hit the ground running and I could quickly test some points in the application
If you want to unit test code then you need to isolate your code you want to test (in this case your service) from external resources (e.g. databases). You could probably do this with some sort of in-memory EF provider, however a much more common way is to abstract away your EF implementation e.g. with some sort of repository pattern. Without this isolation any tests you write will be integration tests, not unit tests.
As for testing EF code - I write automated integration tests for my repositories that write various rows to the database during their initialization, and then call my repository implementations to make sure that they behave as expected (e.g. making sure that results are filtered correctly, or that they are sorted in the correct order).
These are integration tests not unit tests, as the tests rely on having a database connection present, and that the target database already has the latest up-to-date schema installed.
I have fumbled around sometime to reach these considerations:
1- If my application access the database, why the test should not? What if there is something wrong with data access? The tests must know it beforehand and alert myself about the problem.
2- The Repository Pattern is somewhat hard and time consuming.
So I came up with this approach, that I don't think is the best, but fulfilled my expectations:
Use TransactionScope in the tests methods to avoid changes in the database.
To do it it's necessary:
1- Install the EntityFramework into the Test Project.
2- Put the connection string into the app.config file of Test Project.
3- Reference the dll System.Transactions in Test Project.
The unique side effect is that identity seed will increment when trying to insert, even when the transaction is aborted. But since the tests are made against a development database, this should be no problem.
Sample code:
[TestClass]
public class NameValueTest
{
[TestMethod]
public void Edit()
{
NameValueController controller = new NameValueController();
using(var ts = new TransactionScope()) {
Assert.IsNotNull(controller.Edit(new Models.NameValue()
{
NameValueId = 1,
name1 = "1",
name2 = "2",
name3 = "3",
name4 = "4"
}));
//no complete, automatically abort
//ts.Complete();
}
}
[TestMethod]
public void Create()
{
NameValueController controller = new NameValueController();
using (var ts = new TransactionScope())
{
Assert.IsNotNull(controller.Create(new Models.NameValue()
{
name1 = "1",
name2 = "2",
name3 = "3",
name4 = "4"
}));
//no complete, automatically abort
//ts.Complete();
}
}
}
I would not unit test code I don't own. What are you testing here, that the MSFT compiler works?
That said, to make this code testable, you almost HAVE to make your data access layer separate from your business logic code. What I do is take all of my EF stuff and put it in a (or multiple) DAO or DAL class which also has a corresponding interface. Then I write my service which will have the DAO or DAL object injected in as a dependency (constructor injection preferably) referenced as the interface. Now the part that needs to be tested (your code) can easily be tested by mocking out the DAO interface and injecting that into your service instance inside your unit test.
//this is testable: just inject a mock of IProductDAO during unit testing
public class ProductService : IProductService
{
    private IProductDAO _productDAO;

    public ProductService(IProductDAO productDAO)
    {
        _productDAO = productDAO;
    }

    public List<Product> GetAllProducts()
    {
        return _productDAO.GetAll();
    }
    ...
}
I would consider live data access layers to be part of integration testing, not unit testing. I have seen developers verify how many trips Hibernate makes to the database, but they were on a project involving billions of records in the datastore, where those extra trips really mattered.
So here's the thing: Entity Framework is an implementation, so despite the fact that it abstracts away the complexity of database interaction, interacting with it directly is still tight coupling, and that's why it's confusing to test.
Unit testing is about testing the logic of a function and each of its potential outcomes in isolation from any external dependencies, which in this case is the data store. In order to do that, you need to be able to control the behavior of the data store. For example, if you want to assert that your function returns false if the fetched user doesn't meet some set of criteria, then your [mocked] data store should be configured to always return a user that fails to meet the criteria, and vice versa for the opposite assertion.
With that said, and accepting the fact that EF is an implementation, I would likely favor the idea of abstracting a repository. Does that seem a bit redundant? It's not, because you are solving a real problem: isolating your code from the data implementation.
In DDD, repositories only ever return aggregate roots, not DAOs. That way, the consumer of the repository never has to know about the data implementation (as it shouldn't), and we can use that as an example of how to solve this problem. In this case, the object generated by EF is a DAO and, as such, should be hidden from your application. This is another benefit of the repository that you define: you can declare a business object as its return type instead of the EF object. Now the repo hides the calls to EF and maps the EF response to the business object defined in the repo's signature. You can use that repo in place of the DbContext dependency that you inject into your classes, and consequently you can mock that interface to get the control you need in order to test your code in isolation.
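As a rough sketch of that mapping (ProductModel and the member names are illustrative assumptions, not from the original):
// The repository's signature exposes a business object, not the EF DAO.
public class ProductModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    ProductModel GetById(int id);
}

public class ProductRepository : IProductRepository
{
    private readonly IContext _context;

    public ProductRepository(IContext context)
    {
        _context = context;
    }

    public ProductModel GetById(int id)
    {
        // The EF-generated entity (the DAO) never escapes this method;
        // it is mapped to the business object declared in the signature.
        var dao = _context.Products.Single(p => p.Id == id);
        return new ProductModel { Id = dao.Id, Name = dao.Name };
    }
}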
It's a bit more work and many thumb their nose at it, but it solves a real problem. There's an in-memory provider that was mentioned in a different answer that could be an option (I have not tried it), and its very existence is evidence of the need for the practice.
I completely disagree with the top answer because it sidesteps the real issue which is isolating your code and then goes on a tangent about testing your mapping. By all means test your mapping if you want to, but address the actual issue here and get some real code coverage.
In short I would say no, the juice is not worth the squeeze to test a service method with a single line that retrieves model data. In my experience, people who are new to TDD want to test absolutely everything. The old chestnut of abstracting a facade to a 3rd-party framework just so you can create a mock of that framework's API, which you then bastardise/extend so that you can inject dummy data, is of little value in my mind. Everyone has a different view of how much unit testing is best. I tend to be more pragmatic these days and ask myself if my test is really adding value to the end product, and at what cost.
I want to share an approach that was commented on and briefly discussed here, and show an actual example that I am currently using to help unit test EF-based services.
First, I would love to use the in-memory provider from EF Core, but this is about EF 6. Furthermore, for other storage systems like RavenDB, I'd also be a proponent of testing via the in-memory database provider. Again--this is specifically to help test EF-based code without a lot of ceremony.
Here are the goals I had when coming up with a pattern:
It must be simple for other developers on the team to understand
It must isolate the EF code at the barest possible level
It must not involve creating weird multi-responsibility interfaces (such as a "generic" or "typical" repository pattern)
It must be easy to configure and setup in a unit test
I agree with previous statements that EF is still an implementation detail and it's okay to feel like you need to abstract it in order to do a "pure" unit test. I also agree that ideally, I would want to ensure the EF code itself works--but this involves a sandbox database, in-memory provider, etc. My approach solves both problems--you can safely unit test EF-dependent code and create integration tests to test your EF code specifically.
The way I achieved this was through simply encapsulating EF code into dedicated Query and Command classes. The idea is simple: just wrap any EF code in a class and depend on an interface in the classes that would've originally used it. The main issue I needed to solve was to avoid adding numerous dependencies to classes and setting up a lot of code in my tests.
This is where a useful, simple library comes in: MediatR. It allows for simple in-process messaging, and it does so by decoupling "requests" from the handlers that implement the code. This has the added benefit of decoupling the "what" from the "how". For example, by encapsulating the EF code into small chunks, it allows you to replace the implementations with another provider or a totally different mechanism, because all you are doing is sending a request to perform an action.
Utilizing dependency injection (with or without a framework--your preference), we can easily mock the mediator and control the request/response mechanisms to enable unit testing EF code.
First, let's say we have a service that has business logic we need to test:
public class FeatureService {
    private readonly IMediator _mediator;

    public FeatureService(IMediator mediator) {
        _mediator = mediator;
    }

    public async Task ComplexBusinessLogic() {
        // retrieve relevant objects
        var results = await _mediator.Send(new GetRelevantDbObjectsQuery());

        // normally, this would have looked like...
        // var results = _myDbContext.DbObjects.Where(x => foo).ToList();

        // perform business logic
        // ...
    }
}
Do you start to see the benefit of this approach? Not only are you explicitly encapsulating all EF-related code into descriptive classes, you are allowing extensibility by removing the implementation concern of "how" this request is handled--this class doesn't care if the relevant objects come from EF, MongoDB, or a text file.
Now for the request and handler, via MediatR:
public class GetRelevantDbObjectsQuery : IRequest<DbObject[]> {
    // no input needed for this particular request,
    // but you would simply add plain properties here if needed
}

public class GetRelevantDbObjectsEFQueryHandler : IRequestHandler<GetRelevantDbObjectsQuery, DbObject[]> {
    private readonly IDbContext _db;

    public GetRelevantDbObjectsEFQueryHandler(IDbContext db) {
        _db = db;
    }

    public DbObject[] Handle(GetRelevantDbObjectsQuery message) {
        // ToArray() so the result matches the declared DbObject[] response type
        return _db.DbObjects.Where(foo => bar).ToArray();
    }
}
As you can see, the abstraction is simple and encapsulated. It's also absolutely testable because in an integration test, you could test this class individually--there are no business concerns mixed in here.
So what does a unit test of our feature service look like? It's way simple. In this case, I'm using Moq to do mocking (use whatever makes you happy):
[TestClass]
public class FeatureServiceTests {
    // mock of Mediator to handle request/responses
    private Mock<IMediator> _mediator;

    // subject under test
    private FeatureService _sut;

    [TestInitialize]
    public void Setup() {
        // set up Mediator mock
        _mediator = new Mock<IMediator>(MockBehavior.Strict);

        // inject mock as dependency
        _sut = new FeatureService(_mediator.Object);
    }

    [TestCleanup]
    public void Teardown() {
        // ensure we have called or expected all calls to Mediator
        _mediator.VerifyAll();
    }

    [TestMethod]
    public async Task ComplexBusinessLogic_Does_What_I_Expect() {
        // arrange
        var dbObjects = new List<DbObject>() {
            // set up any test objects
            new DbObject() { }
        };

        // setup Mediator to return our fake objects when it receives a message to perform our query
        // in practice, I find it better to create an extension method that encapsulates this setup here
        _mediator.Setup(x => x.Send(It.IsAny<GetRelevantDbObjectsQuery>(), default(CancellationToken)))
            .ReturnsAsync(dbObjects.ToArray())
            .Callback<GetRelevantDbObjectsQuery, CancellationToken>((message, token) => {
                // using Moq Callback functionality, you can make assertions
                // on the expected request being passed in
                Assert.IsNotNull(message);
            });

        // act
        await _sut.ComplexBusinessLogic();

        // assertions
    }
}
You can see all we need is a single setup and we don't even need to configure anything extra--it's a very simple unit test. Let's be clear: this is totally possible to do without something like MediatR (you would simply implement an interface and mock it for tests, e.g. IGetRelevantDbObjectsQuery), but in practice for a large codebase with many features and queries/commands, I love the encapsulation and innate DI support MediatR offers.
If you're wondering how I organize these classes, it's pretty simple:
- MyProject
- Features
- MyFeature
- Queries
- Commands
- Services
- DependencyConfig.cs (Ninject feature modules)
Organizing by feature slices is beside the point, but this keeps all relevant/dependent code together and easily discoverable. Most importantly, I separate the Queries vs. Commands--following the Command/Query Separation principle.
This meets all my criteria: it's low-ceremony, it's easy to understand, and there are extra hidden benefits. For example, how do you handle saving changes? Now you can simplify your Db Context by using a role interface (IUnitOfWork.SaveChangesAsync()) and mock calls to the single role interface or you could encapsulate committing/rolling back inside your RequestHandlers--however you prefer to do it is up to you, as long as it's maintainable. For example, I was tempted to create a single generic request/handler where you'd just pass an EF object and it would save/update/remove it--but you have to ask what your intention is and remember that if you wanted to swap out the handler with another storage provider/implementation, you should probably create explicit commands/queries that represent what you intend to do. More often than not, a single service or feature will need something specific--don't create generic stuff before you have a need for it.
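For instance, the role-interface idea might look like this (a sketch; IUnitOfWork is an assumed name, and the point is that EF6's DbContext already exposes a matching SaveChangesAsync):
// Role interface: the single persistence concern a service depends on.
public interface IUnitOfWork
{
    Task<int> SaveChangesAsync(CancellationToken cancellationToken);
}

// The inherited DbContext.SaveChangesAsync satisfies the interface,
// so no extra code is needed on the context itself.
public class DataContext : DbContext, IUnitOfWork { }
In a test, a Mock<IUnitOfWork> with a single Setup on SaveChangesAsync is then all the persistence wiring a service needs.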
There are of course caveats to this pattern--you can go too far with a simple pub/sub mechanism. I've limited my implementation to only abstracting EF-related code, but adventurous developers could start using MediatR to go overboard and message-ize everything--something good code review practices and peer reviews should catch. That's a process issue, not an issue with MediatR, so just be cognizant of how you're using this pattern.
You wanted a concrete example of how people are unit testing/mocking EF and this is an approach that's working successfully for us on our project--and the team is super happy with how easy it is to adopt. I hope this helps! As with all things in programming, there are multiple approaches and it all depends on what you want to achieve. I value simplicity, ease of use, maintainability, and discoverability--and this solution meets all those demands.
In order to unit test code that relies on your database, you need to set up a database or a mock for each and every test.
Having a database (real or mocked) with a single state for all your tests will bite you quickly; you cannot test that all records are valid and that some aren't against the same data.
Setting up an in-memory database in a OneTimeSetup will run into problems where the old database is not cleared down before the next test starts. This shows up as tests that work when you run them individually but fail when you run them all together.
A unit test should ideally set up only what affects the test.
I am working on an application with a lot of tables, a lot of connections between them, and some massive Linq blocks. These need testing. A missed grouping, or a join that results in more than one row, will affect results.
To deal with this I have set up a heavy Unit Test Helper that takes a lot of work to build, but enables us to reliably mock the database in any state; running 48 tests against 55 interconnected tables, with the entire database set up 48 times, takes 4.7 seconds.
Here's how:
In the Db context class, ensure each table's DbSet is declared virtual:
public virtual DbSet<Branch> Branches { get; set; }
public virtual DbSet<Warehouse> Warehouses { get; set; }
In a UnitTestHelper class, create a method to set up your database. Each table class is an optional parameter; if not supplied, it will be created through a Make method:
internal static Db Bootstrap(bool onlyMockPassedTables = false, List<Branch> branches = null, List<Product> products = null, List<Warehouse> warehouses = null)
{
    if (onlyMockPassedTables == false)
    {
        branches ??= new List<Branch> { MakeBranch() };
        warehouses ??= new List<Warehouse> { MakeWarehouse() };
    }
For each table class, map each object in it to the other lists:
    branches?.ForEach(b =>
    {
        b.Warehouse = warehouses.FirstOrDefault(w => w.ID == b.WarehouseID);
    });

    warehouses?.ForEach(w =>
    {
        w.Branches = branches.Where(b => b.WarehouseID == w.ID);
    });
And add it to the DbContext
    var context = new Db(new DbContextOptionsBuilder<Db>()
        .UseInMemoryDatabase(Guid.NewGuid().ToString())
        .Options);
    context.Branches.AddRange(branches);
    context.Warehouses.AddRange(warehouses);
    context.SaveChanges();
    return context;
}
Define a list of IDs to make it easier to reuse them and to make sure joins are valid:
internal const int BranchID = 1;
internal const int WarehouseID = 2;
Create a Make method for each table to set up the most basic, but connected, version it can be:
internal static Branch MakeBranch(int id = BranchID, string code = "The branch", int warehouseId = WarehouseID) => new Branch { ID = id, Code = code, WarehouseID = warehouseId };
internal static Warehouse MakeWarehouse(int id = WarehouseID, string code = "B", string name = "My Big Warehouse") => new Warehouse { ID = id, Code = code, Name = name };
It's a lot of work, but it only needs doing once, and then your tests can be very focused because the rest of the database will be set up for them.
[Test]
[TestCase(new string[] { "ABC", "DEF" }, "ABC", ExpectedResult = 1)]
[TestCase(new string[] { "ABC", "BCD" }, "BC", ExpectedResult = 2)]
[TestCase(new string[] { "ABC" }, "EF", ExpectedResult = 0)]
[TestCase(new string[] { "ABC", "DEF" }, "abc", ExpectedResult = 1)]
public int Given_SearchingForBranchByName_Then_ReturnCount(string[] codesInDatabase, string searchString)
{
    // Arrange
    var branches = codesInDatabase.Select(x => UnitTestHelpers.MakeBranch(code: $"qqqq{x}qqq")).ToList();
    var db = UnitTestHelpers.Bootstrap(branches: branches);
    var service = new BranchService(db);

    // Act
    var result = service.SearchByName(searchString);

    // Assert
    return result.Count();
}
There is Effort, which is an in-memory Entity Framework database provider. I've not actually tried it... Ha, just spotted this was mentioned in the question!
Alternatively you could switch to Entity Framework Core, which has an in-memory database provider built in.
https://blog.goyello.com/2016/07/14/save-time-mocking-use-your-real-entity-framework-dbcontext-in-unit-tests/
https://github.com/tamasflamich/effort
I used a factory to get a context, so I can create the context close to its use. This seems to work locally in Visual Studio but not on my TeamCity build server; not sure why yet.
return new MyContext(#"Server=(localdb)\mssqllocaldb;Database=EFProviders.InMemory;Trusted_Connection=True;");
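For reference, a minimal sketch of the EF Core in-memory provider mentioned above (MyContext stands in for your own context type, and the Microsoft.EntityFrameworkCore.InMemory package is assumed):
// A unique database name per test keeps tests isolated from each other.
var options = new DbContextOptionsBuilder<MyContext>()
    .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString())
    .Options;

using (var context = new MyContext(options))
{
    context.Products.Add(new Product { Id = 1, Name = "Widget" });
    context.SaveChanges();
}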
I like to separate my filters from other portions of the code and test those on their own, as I outline on my blog here: http://coding.grax.com/2013/08/testing-custom-linq-filter-operators.html
That being said, the filter logic being tested is not identical to the filter logic executed when the program runs, due to the translation between the LINQ expression and the underlying query language, such as T-SQL. Still, this allows me to validate the logic of the filter. I don't worry too much about the translations that happen, or about things such as case-sensitivity and null-handling, until I test the integration between the layers.
It is important to test what you are expecting Entity Framework to do (i.e. validate your expectations). One way to do this that I have used successfully is using Moq, as shown in this example (too long to copy into this answer):
https://learn.microsoft.com/en-us/ef/ef6/fundamentals/testing/mocking
However, be careful... A SQL context is not guaranteed to return things in a specific order unless you have an appropriate OrderBy in your LINQ query, so it's possible to write things that pass when you test using an in-memory list (where LINQ to Objects happens to preserve order) but fail in your UAT/live environment when the query is actually translated to SQL.
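For example (a hypothetical query; the property names are made up), pinning the order explicitly keeps the in-memory and SQL behaviours aligned:
// Without the OrderBy, SQL Server may return these rows in any order,
// while an in-memory list happens to preserve insertion order.
var products = context.Products
    .Where(p => p.CategoryId == categoryId)
    .OrderBy(p => p.Name)
    .ToList();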
I've been trying to find a good solution to this with no luck, so either I'm not searching for the right keywords, or we're doing things wrong from the start and the problem shouldn't really exist.
Update for clarification: I would like this to work as a unit test rather than as an integration test, so I don't want this to hit the database, but I want to mock the associations made when EF persists changes in my unit test.
Original question:
Say you are testing a service method like so:
[Test]
public void AssignAuthorToBook_NewBookNewAuthor_SuccessfullyAssigned()
{
    IBookService service = new BookService();
    var book = new Book();
    var author = new Author() { Id = 123 };

    service.AssignAuthorToBook(book, author);

    Assert.AreEqual(book.AuthorId, 123);
}
Now let's say that this test fails because AssignAuthorToBook actually works using the code book.Author = author;, so it is not assigning the AuthorId, it is assigning the entity. When this is persisted using Entity Framework's SaveChanges() method on the context, it will associate the entities and the IDs will correlate. However, in my example above the logic of Entity Framework will not have been applied. What I am saying is that the code will work once SaveChanges() has been called, but the unit test will fail.
In this simple example, you'd probably know straight away why your test had failed, as you had just written the test immediately before the code and could easily fix it. However, for more complicated operations, and for future changes that may alter the way entities are associated (which will break tests but may not break functionality), how is unit testing best approached?
My thoughts are:
The service layer should be ignorant of the persistence layer - should we mock the data context in the unit tests to mimic the way it works? Is there an easy way to do this that will automatically tie up the associations (i.e. assign the correct entity if the Id is used, or assign the correct Id if the entity is used)?
Or should the tests be structured in a slightly different manner?
The tests that exist in the current project I have inherited work as in my example above, but it niggles at me that there is something wrong with the approach and that I haven't managed to find a simple solution to a possibly common problem. I believe the data context should be mocked, but it seems like a lot of code would need to be added to the mock to dynamically create the associations - surely this has already been solved?
Update: These are the closest answers I've found so far, but they're not quite what I'm after. I don't want to test EF as such; I just wondered what best practice is for testing service methods that access repositories (either directly or via navigation properties through other repositories sharing the same context).
How Do I Mock Entity Framework's Navigational Property Intelligence?
Mocked datacontext and foreign keys/navigation properties
Fake DbContext of Entity Framework 4.1 to Test
Navigation properties not set when using ADO.NET Mocking Context Generator
Conclusion so far: this is not possible using unit testing, and only possible using integration testing with a real DB. You can get close, and probably code something to dynamically associate the navigation properties, but your mock data context will never quite replicate the real context. I would be happy with any solution that enabled me to automatically associate the navigation properties, which would make my unit tests better, if not perfect (the nature of a successful unit test doesn't by any means guarantee functionality anyway). The ADO.NET Mocking Context Generator comes close, but it appears that I'll have to maintain a mock version of every entity, which will not work for me if functionality is added to them using partial classes in my implementation.
I'd argue that you are expecting a result from your test that implies the use of several dependencies, arguably not qualifying it as a unit test, especially because of an implied dependency on EF.
The idea here is that if you acknowledge that your BookService has a dependency on EF, you should use a mock to assert that it interacts correctly with it. Unfortunately EF doesn't like to be mocked, so we can always put it behind a repository. Here's an example of how that test could be written using Moq:
[Test]
public void AssignAuthorToBook_NewBookNewAuthor_CreatesNewBookAndAuthorAndAssociatesThem()
{
    var bookRepositoryMock = new Mock<IBookRepository>(MockBehavior.Loose);
    IBookService service = new BookService(bookRepositoryMock.Object);
    var book = new Book() { Id = 0 };
    var author = new Author() { Id = 0 };

    service.AssignAuthorToBook(book, author);

    bookRepositoryMock.Verify(repo => repo.AddNewBook(book));
    bookRepositoryMock.Verify(repo => repo.AddNewAuthor(author));
    bookRepositoryMock.Verify(repo => repo.AssignAuthorToBook(book, author));
}
The Id being set is something you would use an integration test for, but I'd argue that you shouldn't worry about EF failing to set the Id; I say this for the same reason you should not worry about testing whether the .NET Framework does what it's supposed to do.
I've written about interaction testing in the past (which I think is the right way to go in this scenario, you are testing the interaction between the BookService and the Repository), hope it helps: http://blinkingcaret.wordpress.com/2012/11/20/interaction-testing-fakes-mocks-and-stubs/
I was having the same problem as you and came across your post. What I found afterwards was an in-memory database called Effort.
Take a look at Effort
The following test works correctly
EntityConnection conn = Effort.EntityConnectionFactory.CreateTransient("name=MyEntities");
MyEntities ctx = new MyEntities(conn);

JobStatus js = new JobStatus();
js.JobStatusId = 1;
js.Description = "New";
ctx.JobStatuses.Add(js);

Job j = new Job();
j.JobId = 1;
j.JobStatus = js;
ctx.Jobs.Add(j);

ctx.SaveChanges();

Assert.AreEqual(j.JobStatusId, 1);
Where MyEntities is a DbContext created with an Effort connection string.
You still need to create your in-memory objects, but SaveChanges on the context sets the object associations just as a database would.
That will never work the way you have it. The data context is wrapped up behind the scenes in your service layer, and the ID association is never going to be updated on your local variable.
When I have done TDD on something using EF, I generally wrap up all my EF logic into some kind of DAO and have CRUD methods for the entities.
Then you could do something like this:
[Test]
public void AssignAuthorToBook_NewBookNewAuthor_SuccessfullyAssigned()
{
    IBookService service = new BookService();
    var book = new Book();
    var bookId = 122;
    book.ID = bookId;
    var author = new Author() { Id = 123 };

    service.AssignAuthorToBook(book, author);

    // ask the service for the book, which uses EF to get the book and populate the navigational properties, etc...
    book = service.GetBook(bookId);

    Assert.AreEqual(book.AuthorId, 123);
}
Your question is: if you mock the database operations, you cannot test the correct functioning of AssignAuthorToBook, because this class is highly coupled to Entity Framework and the behaviour will change.
So my solution:
Decouple the classes (using a DAO: an interface with all the database operations). Now AssignAuthorToBook is easy to test, because it uses the SetBook / GetBookId functions.
And write a test for your operations (SetBook / GetBookId) against the database (OK, it's not pure unit testing, but your question is exactly this: how can I test a database operation? So: test it, but in a separate test).
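A rough sketch of that decoupling (the IBookDao interface is assumed from the method names above):
// Assumed DAO interface, named after the operations mentioned above.
public interface IBookDao
{
    void SetBook(Book book);
    int GetBookId(Book book);
}

// The service is now testable against a mock of IBookDao; the DAO
// implementation itself is covered by a separate database test.
public class BookService
{
    private readonly IBookDao _dao;

    public BookService(IBookDao dao)
    {
        _dao = dao;
    }

    public void AssignAuthorToBook(Book book, Author author)
    {
        book.Author = author;
        _dao.SetBook(book);
    }
}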
The general solution is to split it into layers.
The persistence layer / repository pattern is in charge of writing information out to, and reading it in from, whatever store you choose. ORMs are supposed to be boxed inside the persistence layer; above this layer, there should be no trace of the ORM. This layer returns entities/value objects as defined in the DDD book (I don't mean anything related to EF).
Next, the service layer calls into the Repository interface to obtain these POCO entities/value objects and is in charge of domain/business logic.
As for the testing,
The basic purpose of the persistence layer is to persist the desired information, e.g. write customer information to a file or DB. Hence integration testing this layer against the implementation-specific tech (e.g. SQL) makes the most sense. If the tests pass but the layer isn't able to read/write to the actual SQL DB, they are useless.
The service layer tests can mock the persistence layer entry interface (e.g. Repository interface) and verify the domain logic without any dependency on the persistence tech - which is the way it should be. Unit tests here.
If you want to test that AssignAuthorToBook works as expected, you can do it while being completely DB-ignorant - you use a mock of the Book object and verify that the correct setter was called with the correct value. Everything persistence-related should be stubbed. To verify that a setter or a method was called, you can use Moq or RhinoMocks.
An example can be found here: Rhino Mocks: AAA Synax: Assert property was set with a given type
Here is an example of how to verify alternative expectations: Using RhinoMocks, how can I assert that one of several methods was called?
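With Moq, for instance, the setter assertion could look like this (a sketch that assumes Book's Author property is virtual so the mock can intercept it):
var book = new Mock<Book>();
var author = new Author { Id = 123 };
var service = new BookService();

service.AssignAuthorToBook(book.Object, author);

// Verify the correct setter was called with the correct value;
// nothing persistence-related is involved.
book.VerifySet(b => b.Author = author);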
I may be wrong, but as far as I know your domain model should be consistent even without a persistence layer.
So, in this case, your service should ensure that the AuthorId property is equal to the assigned Author's Id even before persisting to the database - or your Book's AuthorId property getter gets that value from the Id of the inner Author instance assigned by the service:
public class Book {
    public Author Author { get; set; }
    public int AuthorId {
        get { return Author.Id; }
        set { Author.Id = value; }
    }
}

public class Author {
    public int Id { get; set; }
}

public class BookService {
    public void AssignAuthorToBook(Book book, Author author)
    {
        book.Author = author;
    }
}
I have a Save() method and am not really sure how to test it. Below is my code.
public interface IRepository<T>
{
    T Get(int id);
    void Save(T item);
    void Delete(int id);
}
The Save method doesn't return any value, so I cannot compare values. However, I already have 4 users; after adding another one, I only check the total number of users. Is that enough to test it?
[Test]
public void Add_a_new_smoothie_user_should_return_total_5_users()
{
    // Arrange
    var totalUsers = _users.Count();
    _mockUserRepository.Setup(s => s.Save(It.IsAny<User>()))
        .Callback((User user) => _users.Add(user));

    var newUser = new User
    {
        Id = 3,
        Email = "newuser#test.com",
        Password = "1234567".Hash(),
        Firstname = "",
        Lastname = "",
        CreatedDate = DateTime.Now,
        LastLogin = DateTime.Now,
        AccountType = AccountType.Smoothie,
        DisplayName = "",
        Avatar = "",
        ThirdPartyId = "",
        Status = Status.Approved,
        Ip = "127.0.0.1"
    };

    // Act
    _mockUserRepository.Object.Save(newUser);

    // Assert
    Assert.AreEqual(5, _users.Count());
    Assert.AreEqual(1, _users.Count() - totalUsers);
}
You are mocking the very functionality you are trying to test. These tests prove nothing other than that the Add() method of the collection holding your users works; in the end they give you no idea whether your repository is working.
You should try to implement a database sandbox for testing your repository's functionality; a rough sketch follows.
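A minimal sandbox sketch (all names assumed; it presumes a UserRepository : IRepository<User> implementation and reuses the TransactionScope trick from an earlier answer so every test rolls back):
[TestFixture]
public class UserRepositorySandboxTests
{
    private TransactionScope _scope;
    private UserRepository _repository; // the real, EF-backed repository

    [SetUp]
    public void SetUp()
    {
        _scope = new TransactionScope(); // sandbox the test
        _repository = new UserRepository(new DataContext("name=TestDb"));
    }

    [TearDown]
    public void TearDown()
    {
        _scope.Dispose(); // roll everything back
    }

    [Test]
    public void Save_PersistsUser()
    {
        _repository.Save(new User { Id = 3, Status = Status.Approved });
        Assert.IsNotNull(_repository.Get(3));
    }
}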
Never write tests for mocked code, because those tests actually do not test anything (well, except mocking framework implementation).
How do you create interfaces with a test-first approach? It's easy. Consider that you have some FooController which requires some data. At some point (while writing tests for the controller) you decide that there will be some dependency which provides that data to the controller (yep, a repository). Your current controller test requires some functionality to get some Bar object from data storage. So, you write a test:
Mock<IBarRepository> repositoryMock = new Mock<IBarRepository>();
repositoryMock.Setup(r => r.GetById(It.IsAny<int>())).Returns(new Bar());

FooController controller = new FooController(repositoryMock.Object);
controller.Exercise();
This test will not compile, because at this point you don't have the IBarRepository interface needed by the controller. So you create the interface, and you also add the GetById method to it. After that you implement the controller.
Good news - when the controller is finished, you will have an IBarRepository interface definition with an API that is very convenient for the controller.
The next step is creating the IBarRepository implementation. I rarely write tests for repositories, but you can do it in several ways:
If you have data access code which is used by the repository (an ORM framework, ADO.NET classes, etc.), you can mock these dependencies and verify that your repository implementation makes all the required calls to the underlying data access code. These tests are pretty brittle and do not give you much benefit, because repositories rarely contain complex business logic.
You can do integration testing with a real database (e.g. in-memory SQLite) and verify that data is really CRUD-ed in the database tables (a sketch follows below). Those tests are also brittle and very time-consuming, but in this case you will be sure that the repository works as it should.
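As a sketch of the second option, with EF Core an in-memory SQLite database gives you a real relational engine per test (FooContext, Bar, and BarRepository are illustrative; the Microsoft.Data.Sqlite and Microsoft.EntityFrameworkCore.Sqlite packages are assumed):
// SQLite's in-memory database lives only as long as this connection.
using (var connection = new SqliteConnection("DataSource=:memory:"))
{
    connection.Open();

    var options = new DbContextOptionsBuilder<FooContext>()
        .UseSqlite(connection)
        .Options;

    using (var context = new FooContext(options))
    {
        context.Database.EnsureCreated(); // build the schema

        // exercise the real repository against a real SQL engine
        var repository = new BarRepository(context);
        repository.Save(new Bar { Id = 1 });

        Assert.IsNotNull(repository.Get(1));
    }
}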
Where is your repository saving to? If it saves to a file, you can compare that file against a model file (a "gold" file) where everything has been manually checked and is known to be OK. If it's a database, you should mock your database interface, log all insert queries, and then compare the log with an ideal log.