Repository and Unit of Work patterns - How to save changes - C#

I'm struggling to understand the relationship between the Repository and Unit of Work patterns despite this kind of question being asked so many times. Essentially I still don't understand which part would save/commit data changes - the repository or the unit of work?
Since every example I've seen relates to using these patterns in conjunction with a database/OR mapper, let's make a more interesting example: let's persist the data to the file system in data files. According to the patterns, I should be able to do this, because where the data goes is irrelevant.
So for a basic entity:
public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }
}
I imagine the following interfaces would be used:
public interface IAccountRepository
{
    Account Get(int id);
    void Add(Account account);
    void Update(Account account);
    void Remove(Account account);
}

public interface IUnitOfWork
{
    void Save();
}
And I think in terms of usage it would look like this:
IUnitOfWork unitOfWork = // Create concrete implementation here
IAccountRepository repository = // Create concrete implementation here
// Add a new account
Account account = new Account() { Name = "Test" };
repository.Add(account);
// Commit changes
unitOfWork.Save();
Bearing in mind that all data will be persisted to files, where does the logic go to actually add/update/remove this data?
Does it go in the repository via the Add(), Update() and Remove() methods? It sounds logical to me to have all the code which reads/writes files in one place, but then what is the point of the IUnitOfWork interface?
Does it go in the IUnitOfWork implementation, which for this scenario would also be responsible for data change tracking? To me this suggests that the repository can read files while the unit of work has to write them, but then the logic is split across two places.

A Repository can work without a Unit of Work, so it can also have a Save method.
public interface IRepository<T>
{
    T Get(int id);
    void Add(T entity);
    void Update(T entity);
    void Remove(T entity);
    void Save();
}
A Unit of Work is used when you have multiple repositories (possibly with different data contexts). It keeps track of all changes in a transaction until you call the Commit method to persist all changes to the database (a file in this case).
So, when you call Add/Update/Remove on the Repository, it only changes the status of the entity, marking it as Added, Removed or Dirty. When you call Commit, the Unit of Work loops through the repositories and performs the actual persistence:
If the repositories share the same data context, the Unit of Work can work directly with the data context for higher performance (opening and writing the file in this case).
If the repositories have different data contexts (different databases or files), the Unit of Work will call each repository's Save method within the same TransactionScope.
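To make this concrete, here is a minimal sketch of a file-backed Unit of Work for the Account example (the design is my own illustration, not a standard library): repositories register state changes, and nothing touches the file system until Save() runs.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public enum EntityState { Added, Modified, Removed }

public class FileUnitOfWork : IUnitOfWork
{
    private readonly string _dataDirectory;
    private readonly Dictionary<Account, EntityState> _tracked = new Dictionary<Account, EntityState>();

    public FileUnitOfWork(string dataDirectory)
    {
        _dataDirectory = dataDirectory;
    }

    // Repositories call this from Add/Update/Remove instead of writing files.
    public void RegisterChange(Account entity, EntityState state)
    {
        _tracked[entity] = state;
    }

    // The actual persistence happens here, in one place, for all tracked entities.
    public void Save()
    {
        foreach (var entry in _tracked)
        {
            var path = Path.Combine(_dataDirectory, $"account-{entry.Key.Id}.json");
            if (entry.Value == EntityState.Removed)
                File.Delete(path);
            else
                File.WriteAllText(path, JsonSerializer.Serialize(entry.Key));
        }
        _tracked.Clear();
    }
}
With this shape the repository's Add/Update/Remove shrink to one-line calls to RegisterChange, while Get can read files directly; the split the question worries about disappears because all write logic sits behind the unit of work.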

I'm actually quite new to this but as nobody wiser has posted:
The code which performs the CRUD lives in the repositories, as you would expect, but when repository.Add(account) (for example) is called, all that happens is that the Account object is added to the list of things to be added later (the change is tracked).
When unitOfWork.Save() is called, the repositories are allowed to look through their list of what has changed, or the UoW's list of what has changed (depending on how you choose to implement the pattern), and act appropriately. In your case there might be a List<Account> NewItemsToAdd field that has been tracking what to add based on calls to .Add(). When the UoW says it's OK to save, the repository can actually persist the new items as files, and if successful, clear the list of new items to add.
AFAIK the point of the UoW is to manage the Save across multiple repositories (which combined are the logical unit of work that we want to commit).
I really like your question.
I've used the UoW/Repository patterns with Entity Framework, and it shows how much EF actually does (how the context tracks the changes until SaveChanges is finally called). To implement this design pattern in your example you need to write quite a bit of code to manage the changes.

Things get tricky. Imagine this scenario: one repo saves something in a database, another on the file system, and a third in the cloud. How do you commit that?
As a guideline, the UoW should commit things; however, in the above scenario, Commit is just an illusion, as you have 3 very different things to update. Enter eventual consistency, which means that all things will be consistent eventually (not at the same moment, as you're used to with an RDBMS).
That UoW is called a Saga in a message-driven architecture. The point is that every bit of the saga can be executed at a different time. The saga completes only when all 3 repositories are updated.
You don't see this approach as often, because most of the time you'll work with an RDBMS, but nowadays NoSql is quite common, so a classic transactional approach is very limited.
So, if you're sure you work ONLY with ONE RDBMS, use a transaction with the UoW and pass the associated connection to each repository. At the end, the UoW will call Commit.
If you know or expect you might have to work with more than one RDBMS or a storage that doesn't support transactions, try to familiarize yourself with a message-driven architecture and with the saga concept.
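A heavily simplified sketch of the idea (the types are invented for illustration; real sagas live in messaging frameworks such as NServiceBus or MassTransit and persist their progress):
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class EventuallyConsistentUnitOfWork
{
    private readonly List<Func<Task>> _steps = new List<Func<Task>>();

    // Each step persists to one store: a database, a file, cloud storage...
    public void Enlist(Func<Task> persistStep) => _steps.Add(persistStep);

    // There is no single atomic commit: each step is retried independently,
    // and the unit is complete only once every store has been updated.
    public async Task CommitEventuallyAsync(int maxAttempts = 3)
    {
        foreach (var step in _steps)
        {
            for (var attempt = 1; ; attempt++)
            {
                try { await step(); break; }
                catch when (attempt < maxAttempts)
                {
                    await Task.Delay(TimeSpan.FromSeconds(attempt)); // crude backoff
                }
            }
        }
    }
}
A real saga would also persist which steps have already completed, so it can resume after a crash instead of holding everything in memory.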

Using the file system can complicate things quite a bit if you want to do it yourself.
Only write when the UoW is committed.
What you have to do is to let the repositories enqueue all IO operations in the UnitOfWork. Something like:
public class UserFileRepository : IUserRepository
{
    private readonly IEnquableUnitOfWork _uow;

    public UserFileRepository(IUnitOfWork unitOfWork)
    {
        _uow = unitOfWork as IEnquableUnitOfWork;
        if (_uow == null) throw new NotSupportedException("This repository only works with IEnquableUnitOfWork implementations.");
    }

    public void Add(User user)
    {
        _uow.Append(() => AppendToFile(user));
    }

    public void Update(User user)
    {
        _uow.Append(() => ReplaceInFile(user));
    }
}
By doing so you can get all changes written to the file(s) at the same time.
The reason that you don't need to do that with DB repositories is that the transaction support is built into the DB. Hence you can tell the DB to start a transaction directly and then just use it to fake a Unit Of Work.
Transaction support will be complex, as you have to be able to roll back changes in the files and also prevent different threads/transactions from accessing the same files during simultaneous transactions.
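The IEnquableUnitOfWork assumed by the repository above is never shown; a minimal sketch of what it might look like (my assumption, matching the question's IUnitOfWork):
using System;
using System.Collections.Generic;

public interface IEnquableUnitOfWork : IUnitOfWork
{
    void Append(Action ioOperation);
}

public class EnquableUnitOfWork : IEnquableUnitOfWork
{
    private readonly Queue<Action> _pendingOperations = new Queue<Action>();

    // Repositories queue their IO here instead of executing it immediately.
    public void Append(Action ioOperation)
    {
        _pendingOperations.Enqueue(ioOperation);
    }

    // All queued file operations execute together when the unit is saved.
    public void Save()
    {
        while (_pendingOperations.Count > 0)
            _pendingOperations.Dequeue()();
    }
}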

Normally, repositories handle all reads and the unit of work handles all writes, but you can certainly handle all reads and writes with only one of the two
(though if you use only the repository pattern, it becomes tedious to maintain maybe 10 repositories and, worse, may result in inconsistent reads and writes being overwritten).
The advantage of mixing both is ease of tracking status changes and ease of handling concurrency and consistency problems.
For a better understanding, you can refer to these links: Repository Pattern with Entity Framework 4.1 and Parent/Child Relationships
and
https://softwareengineering.stackexchange.com/questions/263502/unit-of-work-concurrency-how-is-it-handled

How are people unit testing with Entity Framework 6, should you bother?

I am just starting out with unit testing and TDD in general. I have dabbled before but now I am determined to add it to my workflow and write better software.
I asked a question yesterday that kind of included this, but it seems to be a question on its own. I have sat down to start implementing a service class that I will use to abstract away the business logic from the controllers and map to specific models and data interactions using EF6.
The issue is I have roadblocked myself already because I didn't want to abstract EF away in a repository (it will still be available outside the services for specific queries, etc.) and would like to test my services (the EF context will be used).
Here, I guess, is the question: is there a point to doing this? If so, how are people doing it in the wild in light of the leaky abstractions caused by IQueryable and the many great posts by Ladislav Mrnka on the subject of unit testing not being straightforward because of the differences in LINQ providers when working with an in-memory implementation as opposed to a specific database?
The code I want to test seems pretty simple. (This is just dummy code to try and understand what I am doing; I want to drive the creation using TDD.)
Context
public interface IContext
{
    IDbSet<Product> Products { get; set; }
    IDbSet<Category> Categories { get; set; }
    int SaveChanges();
}

public class DataContext : DbContext, IContext
{
    public IDbSet<Product> Products { get; set; }
    public IDbSet<Category> Categories { get; set; }

    public DataContext(string connectionString)
        : base(connectionString)
    {
    }
}
Service
public class ProductService : IProductService
{
    private IContext _context;

    public ProductService(IContext dbContext)
    {
        _context = dbContext;
    }

    public IEnumerable<Product> GetAll()
    {
        var query = from p in _context.Products
                    select p;
        return query;
    }
}
Currently I am in the mindset of doing a few things:
Mocking the EF context with something like this approach - Mocking EF When Unit Testing - or directly using a mocking framework on the interface like Moq, accepting that the unit tests may pass but not necessarily work end to end, and backing them up with integration tests?
Maybe using something like Effort to mock EF - I have never used it and am not sure if anyone else is using it in the wild?
Not bothering to test anything that simply calls back to EF - so essentially service methods that call EF directly (GetAll etc.) are not unit tested but just integration tested?
Is anyone out there actually doing this without a repo and having success?
This is a topic I'm very interested in. There are many purists who say that you shouldn't test technologies such as EF and NHibernate. They are right: they're already very stringently tested, and, as a previous answer stated, it's often pointless to spend vast amounts of time testing what you don't own.
However, you do own the database underneath! This is where this approach, in my opinion, breaks down: you don't need to test that EF/NH is doing its job correctly, you need to test that your mappings/implementations are working with your database. In my opinion this is one of the most important parts of a system you can test.
Strictly speaking, however, we're moving out of the domain of unit testing and into integration testing, but the principles remain the same.
The first thing you need to do is to be able to mock your DAL so your BLL can be tested independently of EF and SQL. These are your unit tests. Next you need to design your Integration Tests to prove your DAL, in my opinion these are every bit as important.
There are a couple of things to consider:
Your database needs to be in a known state with each test. Most systems use either a backup or create scripts for this.
Each test must be repeatable
Each test must be atomic
There are two main approaches to setting up your database. The first is to run a UnitTest create-DB script. This ensures that your unit test database will always be in the same state at the beginning of each test (you may either reset this or run each test in a transaction to ensure this).
Your other option is what I do: run specific setups for each individual test. I believe this is the best approach for two main reasons:
Your database is simpler, you don't need an entire schema for each test
Each test is safer: if you change one value in your create script, it doesn't invalidate dozens of other tests.
Unfortunately your compromise here is speed. It takes time to run all these tests, to run all these setup/tear down scripts.
One final point, it can be very hard work to write such a large amount of SQL to test your ORM. This is where I take a very nasty approach (the purists here will disagree with me). I use my ORM to create my test! Rather than having a separate script for every DAL test in my system I have a test setup phase which creates the objects, attaches them to the context and saves them. I then run my test.
This is far from the ideal solution; however, in practice I find it's a LOT easier to manage (especially when you have several thousand tests), otherwise you're creating massive numbers of scripts. Practicality over purity.
I will no doubt look back at this answer in a few years (months/days) and disagree with myself as my approaches have changed - however this is my current approach.
To try and sum up everything I've said above this is my typical DB integration test:
[Test]
public void LoadUser()
{
    this.RunTest(session => // the NH/EF session to attach the objects to
    {
        var user = new UserAccount("Mr", "Joe", "Bloggs");
        session.Save(user);
        return user.UserID;
    }, id => // the ID of the entity we need to load
    {
        var user = LoadMyUser(id); // load the entity
        Assert.AreEqual("Mr", user.Title); // test your properties
        Assert.AreEqual("Joe", user.Firstname);
        Assert.AreEqual("Bloggs", user.Lastname);
    });
}
The key thing to notice here is that the sessions of the two blocks are completely independent. In your implementation of RunTest you must ensure that the context is committed and destroyed, so that your data can only come from your database for the second part.
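RunTest itself is left to the reader; one possible minimal implementation, assuming an NHibernate-style session factory (the base-class shape is illustrative, not code from the answer):
using System;
using NHibernate;

public abstract class DatabaseTestBase
{
    protected ISessionFactory SessionFactory; // configured against the test database

    protected void RunTest<TId>(Func<ISession, TId> arrange, Action<TId> assert)
    {
        TId id;

        // First session: create and commit the test data, then discard the session.
        using (var session = SessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            id = arrange(session);
            tx.Commit();
        }

        // The second phase runs with no shared session or first-level cache,
        // so whatever it loads must come from the database itself.
        assert(id);
    }
}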
Edit 13/10/2014
I did say that I'd probably revise this model over the upcoming months. While I largely stand by the approach I advocated above, I've updated my testing mechanism slightly. I now tend to create the entities in the TestSetup and clean up in the TestTearDown.
[SetUp]
public void Setup()
{
    this.SetupTest(session => // the NH/EF session to attach the objects to
    {
        var user = new UserAccount("Mr", "Joe", "Bloggs");
        session.Save(user);
        this.UserID = user.UserID;
    });
}

[TearDown]
public void TearDown()
{
    this.TearDownDatabase();
}
Then test each property individually
[Test]
public void TestTitle()
{
    var user = LoadMyUser(this.UserID); // load the entity
    Assert.AreEqual("Mr", user.Title);
}

[Test]
public void TestFirstname()
{
    var user = LoadMyUser(this.UserID);
    Assert.AreEqual("Joe", user.Firstname);
}

[Test]
public void TestLastname()
{
    var user = LoadMyUser(this.UserID);
    Assert.AreEqual("Bloggs", user.Lastname);
}
There are several reasons for this approach:
There are no additional database calls (one setup, one teardown)
The tests are far more granular, each test verifies one property
Setup/TearDown logic is removed from the Test methods themselves
I feel this makes the test class simpler and the tests more granular (single asserts are good)
Edit 5/3/2015
Another revision on this approach. While class-level setups are very helpful for tests such as loading properties, they are less useful where different setups are required per test. In that case, setting up a new class for each case is overkill.
To help with this I now tend to have two base classes: SetupPerTest and SingleSetup. These two classes expose the framework as required.
In the SingleSetup we have a very similar mechanism to the one described in my first edit. An example would be
public class TestProperties : SingleSetup
{
    public int UserID { get; set; }

    public override void DoSetup(ISession session)
    {
        var user = new User("Joe", "Bloggs");
        session.Save(user);
        this.UserID = user.UserID;
    }

    [Test]
    public void TestLastname()
    {
        var user = LoadMyUser(this.UserID); // load the entity
        Assert.AreEqual("Bloggs", user.Lastname);
    }

    [Test]
    public void TestFirstname()
    {
        var user = LoadMyUser(this.UserID);
        Assert.AreEqual("Joe", user.Firstname);
    }
}
However, tests which ensure that only the correct entities are loaded may use a SetupPerTest approach:
public class TestProperties : SetupPerTest
{
    [Test]
    public void EnsureCorrectReferenceIsLoaded()
    {
        int friendID = 0;
        this.RunTest(session =>
        {
            var user = CreateUserWithFriend();
            session.Save(user);
            friendID = user.Friends.Single().FriendID;
        }, () =>
        {
            var user = GetUser();
            Assert.AreEqual(friendID, user.Friends.Single().FriendID);
        });
    }

    [Test]
    public void EnsureOnlyCorrectFriendsAreLoaded()
    {
        int userID = 0;
        this.RunTest(session =>
        {
            var user = CreateUserWithFriends(2);
            var user2 = CreateUserWithFriends(5);
            session.Save(user);
            session.Save(user2);
            userID = user.UserID;
        }, () =>
        {
            var user = GetUser(userID);
            Assert.AreEqual(2, user.Friends.Count());
        });
    }
}
In summary, both approaches work, depending on what you are trying to test.
Effort experience feedback here
After a lot of reading I have been using Effort in my tests: during the tests the context is built by a factory that returns an in-memory version, which lets me test against a blank slate each time. Outside of the tests, the factory is resolved to one that returns the whole context.
However, I have a feeling that testing against a full-featured mock of the database tends to drag the tests down; you realize you have to take care of setting up a whole bunch of dependencies in order to test one part of the system. You also tend to drift towards organizing together tests that may not be related, just because there is only one huge object that handles everything. If you don't pay attention, you may find yourself doing integration testing instead of unit testing.
I would have preferred testing against something more abstract rather than a huge DbContext, but I couldn't find the sweet spot between meaningful tests and bare-bones tests. Chalk it up to my inexperience.
So I find Effort interesting; if you need to hit the ground running it is a good tool to quickly get started and get results. However, I think that something a bit more elegant and abstract should be the next step, and that is what I am going to investigate next. Favoriting this post to see where it goes next :)
Edit to add: Effort does take some time to warm up, so you're looking at approx. 5 seconds at test start up. This may be a problem for you if you need your test suite to be very efficient.
Edited for clarification:
I used Effort to test a web service app. Each message M that enters is routed to an IHandlerOf<M> via Windsor. Castle.Windsor resolves the IHandlerOf<M>, which resolves the dependencies of the component. One of these dependencies is the DataContextFactory, which lets the handler ask for a context.
In my tests I instantiate the IHandlerOf component directly, mock all the sub-components of the SUT, and hand the Effort-wrapped DataContextFactory to the handler.
It means that I don't unit test in a strict sense, since the DB is hit by my tests. However, as I said above, it lets me hit the ground running and I could quickly test some points in the application.
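For reference, a sketch of the kind of factory described above. Effort.DbConnectionFactory.CreateTransient() is Effort's real API; the factory interface is illustrative, and the sketch assumes DataContext also exposes EF6's DbContext(DbConnection, bool) constructor:
using System.Data.Common;

public interface IDataContextFactory
{
    DataContext Create();
}

// Resolved in tests; production code resolves a SQL Server-backed factory instead.
public class EffortDataContextFactory : IDataContextFactory
{
    public DataContext Create()
    {
        // Every transient connection is a fresh, empty in-memory database.
        DbConnection connection = Effort.DbConnectionFactory.CreateTransient();
        return new DataContext(connection, contextOwnsConnection: true);
    }
}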
If you want to unit test code then you need to isolate the code you want to test (in this case your service) from external resources (e.g. databases). You could probably do this with some sort of in-memory EF provider, however a much more common way is to abstract away your EF implementation, e.g. with some sort of repository pattern. Without this isolation any tests you write will be integration tests, not unit tests.
As for testing EF code - I write automated integration tests for my repositories that write various rows to the database during their initialization, and then call my repository implementations to make sure that they behave as expected (e.g. making sure that results are filtered correctly, or that they are sorted in the correct order).
These are integration tests not unit tests, as the tests rely on having a database connection present, and that the target database already has the latest up-to-date schema installed.
I fumbled around for some time before reaching these considerations:
1- If my application accesses the database, why shouldn't the tests? What if there is something wrong with data access? The tests must find out beforehand and alert me to the problem.
2- The Repository Pattern is somewhat hard and time-consuming.
So I came up with this approach, which I don't think is the best, but it fulfilled my expectations:
Use TransactionScope in the test methods to avoid changes to the database.
To do this it's necessary to:
1- Install Entity Framework into the test project.
2- Put the connection string into the app.config file of the test project.
3- Reference the dll System.Transactions in the test project.
The only side effect is that the identity seed will increment when trying to insert, even when the transaction is aborted. But since the tests are made against a development database, this should be no problem.
Sample code:
[TestClass]
public class NameValueTest
{
    [TestMethod]
    public void Edit()
    {
        NameValueController controller = new NameValueController();
        using (var ts = new TransactionScope())
        {
            Assert.IsNotNull(controller.Edit(new Models.NameValue()
            {
                NameValueId = 1,
                name1 = "1",
                name2 = "2",
                name3 = "3",
                name4 = "4"
            }));
            //no complete, automatically abort
            //ts.Complete();
        }
    }

    [TestMethod]
    public void Create()
    {
        NameValueController controller = new NameValueController();
        using (var ts = new TransactionScope())
        {
            Assert.IsNotNull(controller.Create(new Models.NameValue()
            {
                name1 = "1",
                name2 = "2",
                name3 = "3",
                name4 = "4"
            }));
            //no complete, automatically abort
            //ts.Complete();
        }
    }
}
I would not unit test code I don't own. What are you testing here, that the MSFT compiler works?
That said, to make this code testable, you almost HAVE to make your data access layer separate from your business logic code. What I do is take all of my EF stuff and put it in a (or multiple) DAO or DAL class which also has a corresponding interface. Then I write my service which will have the DAO or DAL object injected in as a dependency (constructor injection preferably) referenced as the interface. Now the part that needs to be tested (your code) can easily be tested by mocking out the DAO interface and injecting that into your service instance inside your unit test.
//this is testable just inject a mock of IProductDAO during unit testing
public class ProductService : IProductService
{
    private IProductDAO _productDAO;

    public ProductService(IProductDAO productDAO)
    {
        _productDAO = productDAO;
    }

    public List<Product> GetAllProducts()
    {
        return _productDAO.GetAll();
    }

    ...
}
I would consider live Data Access Layers to be part of integration testing, not unit testing. I have seen guys run verifications on how many trips to the database hibernate makes before, but they were on a project that involved billions of records in their datastore and those extra trips really mattered.
So here's the thing: Entity Framework is an implementation, so despite the fact that it abstracts the complexity of database interaction, interacting with it directly is still tight coupling, and that's why it's confusing to test.
Unit testing is about testing the logic of a function and each of its potential outcomes in isolation from any external dependencies, which in this case is the data store. In order to do that, you need to be able to control the behavior of the data store. For example, if you want to assert that your function returns false if the fetched user doesn't meet some set of criteria, then your [mocked] data store should be configured to always return a user that fails to meet the criteria, and vice versa for the opposite assertion.
With that said, and accepting the fact that EF is an implementation, I would likely favor the idea of abstracting a repository. Seems a bit redundant? It's not, because you are solving a problem, which is isolating your code from the data implementation.
In DDD, the repositories only ever return aggregate roots, not DAOs. That way, the consumer of the repository never has to know about the data implementation (as it shouldn't), and we can use that as an example of how to solve this problem. In this case, the object that is generated by EF is a DAO and, as such, should be hidden from your application. This is another benefit of the repository that you define: you can define a business object as its return type instead of the EF object. Now what the repo does is hide the calls to EF and map the EF response to the business object defined in the repo's signature. Now you can use that repo in place of the DbContext dependency that you inject into your classes, and consequently you can mock that interface to give you the control that you need in order to test your code in isolation.
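A minimal sketch of such a repository, reusing the IContext from the question (the ProductInfo business object and the Product properties are assumptions for illustration):
using System.Linq;

// The business object the application sees; no EF types leak out.
public class ProductInfo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    ProductInfo Get(int id);
}

public class EfProductRepository : IProductRepository
{
    private readonly IContext _context;

    public EfProductRepository(IContext context)
    {
        _context = context;
    }

    public ProductInfo Get(int id)
    {
        // The EF entity (the DAO) stays hidden behind the boundary;
        // only the mapped business object is returned.
        var entity = _context.Products.Single(p => p.Id == id);
        return new ProductInfo { Id = entity.Id, Name = entity.Name };
    }
}
Consumers now depend on IProductRepository instead of the DbContext, and mocking that interface in a unit test is trivial.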
It's a bit more work and many thumb their nose at it, but it solves a real problem. There's an in-memory provider that was mentioned in a different answer that could be an option (I have not tried it), and its very existence is evidence of the need for the practice.
I completely disagree with the top answer because it sidesteps the real issue which is isolating your code and then goes on a tangent about testing your mapping. By all means test your mapping if you want to, but address the actual issue here and get some real code coverage.
In short I would say no, the juice is not worth the squeeze to test a service method with a single line that retrieves model data. In my experience, people who are new to TDD want to test absolutely everything. The old chestnut of abstracting a facade to a third-party framework just so you can create a mock of that framework's API, which you then bastardise/extend so that you can inject dummy data, is of little value in my mind. Everyone has a different view of how much unit testing is best. I tend to be more pragmatic these days and ask myself if my test is really adding value to the end product, and at what cost.
I want to share an approach that has been commented on and briefly discussed above, but show an actual example that I am currently using to help unit test EF-based services.
First, I would love to use the in-memory provider from EF Core, but this is about EF 6. Furthermore, for other storage systems like RavenDB, I'd also be a proponent of testing via the in-memory database provider. Again--this is specifically to help test EF-based code without a lot of ceremony.
Here are the goals I had when coming up with a pattern:
It must be simple for other developers on the team to understand
It must isolate the EF code at the barest possible level
It must not involve creating weird multi-responsibility interfaces (such as a "generic" or "typical" repository pattern)
It must be easy to configure and setup in a unit test
I agree with previous statements that EF is still an implementation detail and it's okay to feel like you need to abstract it in order to do a "pure" unit test. I also agree that ideally, I would want to ensure the EF code itself works--but this involves a sandbox database, in-memory provider, etc. My approach solves both problems--you can safely unit test EF-dependent code and create integration tests to test your EF code specifically.
The way I achieved this was through simply encapsulating EF code into dedicated Query and Command classes. The idea is simple: just wrap any EF code in a class and depend on an interface in the classes that would've originally used it. The main issue I needed to solve was to avoid adding numerous dependencies to classes and setting up a lot of code in my tests.
This is where a useful, simple library comes in: MediatR. It allows for simple in-process messaging, and it does it by decoupling "requests" from the handlers that implement the code. This has an added benefit of decoupling the "what" from the "how". For example, by encapsulating the EF code into small chunks, it allows you to replace the implementations with another provider or a totally different mechanism, because all you are doing is sending a request to perform an action.
Utilizing dependency injection (with or without a framework--your preference), we can easily mock the mediator and control the request/response mechanisms to enable unit testing EF code.
First, let's say we have a service that has business logic we need to test:
public class FeatureService {
    private readonly IMediator _mediator;

    public FeatureService(IMediator mediator) {
        _mediator = mediator;
    }

    public async Task ComplexBusinessLogic() {
        // retrieve relevant objects
        var results = await _mediator.Send(new GetRelevantDbObjectsQuery());

        // normally, this would have looked like...
        // var results = _myDbContext.DbObjects.Where(x => foo).ToList();

        // perform business logic
        // ...
    }
}
Do you start to see the benefit of this approach? Not only are you explicitly encapsulating all EF-related code into descriptive classes, you are allowing extensibility by removing the implementation concern of "how" this request is handled--this class doesn't care if the relevant objects come from EF, MongoDB, or a text file.
Now for the request and handler, via MediatR:
public class GetRelevantDbObjectsQuery : IRequest<DbObject[]> {
    // no input needed for this particular request,
    // but you would simply add plain properties here if needed
}

public class GetRelevantDbObjectsEFQueryHandler : IRequestHandler<GetRelevantDbObjectsQuery, DbObject[]> {
    private readonly IDbContext _db;

    public GetRelevantDbObjectsEFQueryHandler(IDbContext db) {
        _db = db;
    }

    public DbObject[] Handle(GetRelevantDbObjectsQuery message) {
        return _db.DbObjects.Where(foo => bar).ToArray();
    }
}
As you can see, the abstraction is simple and encapsulated. It's also absolutely testable because in an integration test, you could test this class individually--there are no business concerns mixed in here.
So what does a unit test of our feature service look like? It's quite simple. In this case, I'm using Moq to do the mocking (use whatever makes you happy):
[TestClass]
public class FeatureServiceTests {
    // mock of Mediator to handle request/responses
    private Mock<IMediator> _mediator;

    // subject under test
    private FeatureService _sut;

    [TestInitialize]
    public void Setup() {
        // set up Mediator mock
        _mediator = new Mock<IMediator>(MockBehavior.Strict);

        // inject mock as dependency
        _sut = new FeatureService(_mediator.Object);
    }

    [TestCleanup]
    public void Teardown() {
        // ensure we have called or expected all calls to Mediator
        _mediator.VerifyAll();
    }

    [TestMethod]
    public void ComplexBusinessLogic_Does_What_I_Expect() {
        // arrange
        var dbObjects = new List<DbObject>() {
            // set up any test objects
            new DbObject() { }
        };

        // setup Mediator to return our fake objects when it receives a message to perform our query
        // in practice, I find it better to create an extension method that encapsulates this setup here
        _mediator.Setup(x => x.Send(It.IsAny<GetRelevantDbObjectsQuery>(), default(CancellationToken)))
            .ReturnsAsync(dbObjects.ToArray())
            .Callback((GetRelevantDbObjectsQuery message, CancellationToken token) => {
                // using Moq Callback functionality, you can make assertions
                // on expected request being passed in
                Assert.IsNotNull(message);
            });

        // act
        _sut.ComplexBusinessLogic();

        // assertions
    }
}
You can see all we need is a single setup and we don't even need to configure anything extra--it's a very simple unit test. Let's be clear: this is totally possible to do without something like MediatR (you would simply implement an interface and mock it for tests, e.g. IGetRelevantDbObjectsQuery), but in practice for a large codebase with many features and queries/commands, I love the encapsulation and innate DI support MediatR offers.
If you're wondering how I organize these classes, it's pretty simple:
- MyProject
- Features
- MyFeature
- Queries
- Commands
- Services
- DependencyConfig.cs (Ninject feature modules)
Organizing by feature slices is beside the point, but this keeps all relevant/dependent code together and easily discoverable. Most importantly, I separate the Queries vs. Commands--following the Command/Query Separation principle.
This meets all my criteria: it's low-ceremony, it's easy to understand, and there are extra hidden benefits. For example, how do you handle saving changes? Now you can simplify your Db Context by using a role interface (IUnitOfWork.SaveChangesAsync()) and mock calls to the single role interface or you could encapsulate committing/rolling back inside your RequestHandlers--however you prefer to do it is up to you, as long as it's maintainable. For example, I was tempted to create a single generic request/handler where you'd just pass an EF object and it would save/update/remove it--but you have to ask what your intention is and remember that if you wanted to swap out the handler with another storage provider/implementation, you should probably create explicit commands/queries that represent what you intend to do. More often than not, a single service or feature will need something specific--don't create generic stuff before you have a need for it.
There are of course caveats to this pattern--you can go too far with a simple pub/sub mechanism. I've limited my implementation to only abstracting EF-related code, but adventurous developers could start using MediatR to go overboard and message-ize everything--something good code review practices and peer reviews should catch. That's a process issue, not an issue with MediatR, so just be cognizant of how you're using this pattern.
You wanted a concrete example of how people are unit testing/mocking EF and this is an approach that's working successfully for us on our project--and the team is super happy with how easy it is to adopt. I hope this helps! As with all things in programming, there are multiple approaches and it all depends on what you want to achieve. I value simplicity, ease of use, maintainability, and discoverability--and this solution meets all those demands.
In order to unit test code that relies on your database, you need to set up a database or a mock for each and every test.
Having a database (real or mocked) with a single state for all your tests will bite you quickly; you cannot test that all records are valid and that some aren't from the same data.
Setting up an in-memory database in a OneTimeSetUp will have issues where the old database is not cleared down before the next test starts up. This will show as tests working when you run them individually, but failing when you run them all.
A unit test should ideally only set up what affects the test.
I am working in an application that has a lot of tables with a lot of connections and some massive Linq blocks. These need testing. A simple grouping missed, or a join that results in more than one row, will affect results.
To deal with this I have set up a heavy Unit Test Helper that is a lot of work to set up, but enables us to reliably mock the database in any state; running 48 tests against 55 interconnected tables, with the entire database set up 48 times, takes 4.7 seconds.
Here's how:
In the DbContext class, ensure each table's DbSet is declared virtual:
public virtual DbSet<Branch> Branches { get; set; }
public virtual DbSet<Warehouse> Warehouses { get; set; }
In a UnitTestHelper class, create a method to set up your database. Each table list is an optional parameter; if not supplied, it will be created through a Make method:
internal static Db Bootstrap(bool onlyMockPassedTables = false, List<Branch> branches = null,
    List<Product> products = null, List<Warehouse> warehouses = null)
{
    if (onlyMockPassedTables == false)
    {
        branches ??= new List<Branch> { MakeBranch() };
        products ??= new List<Product> { MakeProduct() };
        warehouses ??= new List<Warehouse> { MakeWarehouse() };
    }
For each table, each object in it is linked to the matching objects in the other lists:
    branches?.ForEach(b =>
    {
        b.Warehouse = warehouses.FirstOrDefault(w => w.ID == b.WarehouseID);
    });

    warehouses?.ForEach(w =>
    {
        w.Branches = branches.Where(b => b.WarehouseID == w.ID);
    });
And add it to the DbContext
    var context = new Db(new DbContextOptionsBuilder<Db>().UseInMemoryDatabase(Guid.NewGuid().ToString()).Options);
    context.Branches.AddRange(branches);
    context.Warehouses.AddRange(warehouses);
    context.SaveChanges();
    return context;
}
Define ID constants to make it easier to reuse them and make sure joins are valid:
internal const int BranchID = 1;
internal const int WarehouseID = 2;
Create a Make method for each table to set up the most basic, but connected, version it can be:
internal static Branch MakeBranch(int id = BranchID, string code = "The branch", int warehouseId = WarehouseID) => new Branch { ID = id, Code = code, WarehouseID = warehouseId };
internal static Warehouse MakeWarehouse(int id = WarehouseID, string code = "B", string name = "My Big Warehouse") => new Warehouse { ID = id, Code = code, Name = name };
It's a lot of work, but it only needs doing once, and then your tests can be very focused because the rest of the database will be set up for them.
[Test]
[TestCase(new string[] { "ABC", "DEF" }, "ABC", ExpectedResult = 1)]
[TestCase(new string[] { "ABC", "BCD" }, "BC", ExpectedResult = 2)]
[TestCase(new string[] { "ABC" }, "EF", ExpectedResult = 0)]
[TestCase(new string[] { "ABC", "DEF" }, "abc", ExpectedResult = 1)]
public int Given_SearchingForBranchByName_Then_ReturnCount(string[] codesInDatabase, string searchString)
{
    // Arrange
    var branches = codesInDatabase.Select(x => UnitTestHelpers.MakeBranch(code: $"qqqq{x}qqq")).ToList();
    var db = UnitTestHelpers.Bootstrap(branches: branches);
    var service = new BranchService(db);

    // Act
    var result = service.SearchByName(searchString);

    // Assert
    return result.Count();
}
There is Effort, which is an in-memory Entity Framework database provider. I've not actually tried it... Ha, just spotted this was mentioned in the question!
Alternatively you could switch to EntityFrameworkCore, which has an in-memory database provider built in.
https://blog.goyello.com/2016/07/14/save-time-mocking-use-your-real-entity-framework-dbcontext-in-unit-tests/
https://github.com/tamasflamich/effort
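A minimal sketch of the EF Core in-memory option mentioned above (UseInMemoryDatabase is the real EF Core API; MyContext is assumed to have a constructor taking DbContextOptions):
using System;
using Microsoft.EntityFrameworkCore;

// A unique name per test gives each test an isolated in-memory database.
var options = new DbContextOptionsBuilder<MyContext>()
    .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString())
    .Options;

using (var context = new MyContext(options))
{
    // seed test data here, then run the code under test against this context
    context.SaveChanges();
}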
I used a factory to get a context, so I can create the context close to its use. This seems to work locally in Visual Studio but not on my TeamCity build server; not sure why yet.
return new MyContext(@"Server=(localdb)\mssqllocaldb;Database=EFProviders.InMemory;Trusted_Connection=True;");
I like to separate my filters from other portions of the code and test those as I outline on my blog here http://coding.grax.com/2013/08/testing-custom-linq-filter-operators.html
That being said, the filter logic being tested is not identical to the filter logic executed when the program is run due to the translation between the LINQ expression and the underlying query language, such as T-SQL. Still, this allows me to validate the logic of the filter. I don't worry too much about the translations that happen and things such as case-sensitivity and null-handling until I test the integration between the layers.
It is important to test what you are expecting Entity Framework to do (i.e. validate your expectations). One way to do this that I have used successfully is using Moq, as shown in this example (too long to copy into this answer):
https://learn.microsoft.com/en-us/ef/ef6/fundamentals/testing/mocking
However be careful... A SQL context is not guaranteed to return things in a specific order unless you have an appropriate OrderBy in your LINQ query, so it's possible to write things that pass when you test using an in-memory list (LINQ to Objects) but fail in your UAT/live environment when the query is translated to SQL.
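A small illustration of that pitfall, assuming the Product entity and a context from the examples earlier: the first query has no defined order when translated to SQL, while the second is deterministic in both environments:
// Risky: an in-memory list happens to preserve insertion order,
// but SQL Server may return these rows in any order.
var risky = context.Products.Where(p => p.Name.StartsWith("A")).ToList();

// Safe: the explicit OrderBy makes the ordering part of the query.
var safe = context.Products
    .Where(p => p.Name.StartsWith("A"))
    .OrderBy(p => p.Id)
    .ToList();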

Creating generic DbContext Factory in Entity Framework

I'm using .NET Core 2.1 with more than one DbContext, and I'm creating a DbContextFactory for every context. But I want to do this in a generic way: I want to create only one DbContextFactory. How can I achieve this?
MyDbContextFactory.cs
public interface IDbContextFactory<TContext> where TContext : DbContext
{
    DbContext Create();
}

public class MyDbContextFactory : IDbContextFactory<MyDbContext>
{
    public IJwtHelper JwtHelper { get; set; }

    public MyDbContext Create()
    {
        return new MyDbContext(this.JwtHelper);
    }

    DbContext IDbContextFactory<MyDbContext>.Create()
    {
        throw new NotImplementedException();
    }
}
UnitOfWork.cs
public class UnitOfWork<TContext> : IUnitOfWork<TContext> where TContext : DbContext
{
    public static Func<TContext> CreateDbContextFunction { get; set; }

    protected readonly DbContext DataContext;

    public UnitOfWork()
    {
        DataContext = CreateDbContextFunction();
    }
}
MyDbContext.cs
public class MyDbContext : DbContext
{
    private readonly IJwtHelper jwtHelper;

    public MyDbContext(IJwtHelper jwtHelper) : base()
    {
        this.jwtHelper = jwtHelper;
    }
}
So you have a database, and a class that represents this database: your DbContext. It should represent the tables and the relations between the tables that are in your database, nothing more.
You decided to separate the operations on your database from the database itself. That is a good thing, because if several users of your database want to do the same thing, they can re-use the code to do it.
For instance, if you want to create "an Order for a Customer with several OrderLines, containing ordered Products, agreed Prices, amount, etc", you'll need to do several things with your database: check if the customer already exists, check if all products already exist, check if there are enough items, etc.
These things are typically things that you should not implement in your DbContext, but in a separate class.
If you add a function like CreateOrder, then several users can re-use this function. You'll only have to test it once, and if you decide to change something in your order model, there is only one place where you'll have to change the creation of an Order.
Another advantage of separating the class that represents your database (DbContext) from the class that handles this data is that it will be easier to change the internal structure without having to change the users of your database. You can even decide to change from Dapper to Entity Framework without having to change usage. This also makes it easier to mock the database for test purposes.
Functions like CreateOrder, QueryOrder, UpdateOrder already indicate that they are not generic database actions, they are meant for an Ordering database, not for a School database.
This might mean that unit-of-work is not a proper name for the functionality you want in the separate class. A few years ago, unit-of-work was mainly meant to represent actions on a database, not really on a specific database. I'm not really sure about this, because I saw fairly soon that a real unit-of-work class would not enhance the functionality of my DbContext.
Quite often you see the following:
A DbContext class that represents your Database: the database that you created, not any generic idea of databases
A Repository class that represents the idea of storing your data somewhere; how this is stored could be a DbContext, but also a CSV file, or a collection of Dictionary classes created for test purposes. This Repository has IQueryables, and functions to Add / Update / Remove (as far as needed).
A class that represents your problem: the ordering model: Add Order / Update Order / Get Order of a Customer: this class really knows everything about an Order, for instance that it has an OrderTotal, which is probably nowhere to be found in your Ordering database.
Outside the DbContext you sometimes may need SQL, for instance to improve the efficiency of a call. Outside the Repository it should not be visible that you are using SQL.
Consider separating the concerns: how to save your data (DbContext), how to CRUD (create, fetch, update, etc.) the data (Repository), and how to use the data (combine the tables).
I think what you want to do in your unit-of-work should be done inside the repository. Your Ordering class should create the Repository (which creates the DbContext), query several items to check the data it has to Add / Update, do the Add and Update and save the changes. After that your ordering class should Dispose the Repository, which in turn will Dispose the DbContext.
The Repository class will look very similar to the DbContext class. It has several sets that represent the tables. Every set will implement IQueryable<...> and allow to Add / Update / Remove, whatever is needed.
Because of this similarity in functions you could omit the Repository class and let your Ordering class use the DbContext directly. However, keep in mind that changes will be bigger if in the future you decide that you don't want to use Entity Framework anymore but some newer concept, or maybe to return to Dapper, or something even more low-level. SQL will seep through into your Ordering class.
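A minimal sketch of such a Repository, using the Ordering names from this answer (the exact shape is my assumption):
using System;
using System.Linq;

// Wraps the DbContext: exposes the same "sets", but only the queries and
// mutations this group of users is allowed to perform.
public class OrderingRepository : IDisposable
{
    private readonly ProductDbContext _dbContext;

    public OrderingRepository(ProductDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public IQueryable<Order> Orders => _dbContext.Orders;
    public IQueryable<Customer> Customers => _dbContext.Customers;

    public void AddOrder(Order order) => _dbContext.Orders.Add(order);

    public void SaveChanges() => _dbContext.SaveChanges();

    public void Dispose() => _dbContext.Dispose();
}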
What to choose
I think you should answer several questions for yourself:
Is there really only one database that should be represented by your DbContext? Could it be that the same DbContext should be used with a second database with the same layout? Think of a test database, or a development database. Wouldn't it be easier / more testable / more changeable to let your program create the DbContext that is to be used?
Is there really one group of users of your DbContext? Should everyone have the possibility to Delete? To Create? Could it be that some programs only want to query data (the program that e-mails the orders), and that order programs need to add Customers? And maybe another program needs to Add and Update Products, and the amount of Products in the warehouse? Consider creating different Repository classes for them. Each Repository gets the same DbContext, because they are all accessing the same database.
Similarly: is there only one data processing class (the above-mentioned ordering class)? Should the process that handles Orders be able to change product prices and add items to the stock?
The reason that you need the factories is that you don't want to let your "main" program decide what items it should create for the purpose it is running right now. Code would be much easier if you created the items yourself:
Creation sequence for an Ordering Process:
IJwtHelper jwtHelper = ...;

// The product database: all functionality to do everything with Products and Orders
ProductDbContext dbContext = new ProductDbContext(...)
{
    JwtHelper = jwtHelper,
    ...
};

// The Ordering repository: everything to place Orders.
// It can't change ProductPrices, nor can it stock the warehouse,
// so no AddProduct, no AddProductCount;
// of course it has TakeNrOfProducts, to decrease the stock of ordered Products
OrderingRepository repository = new OrderingRepository(...) { DbContext = dbContext };

// The ordering process: Create Order, find Order, ...
// when an order is created, it checks if items are in stock,
// the prices of the items, if the Customer exists, etc.
using (OrderingProcess process = new OrderingProcess(...) { Repository = repository })
{
    ... // check if Customer exists, check if all items in stock, create the Order
    process.SaveChanges();
}
When the Process is Disposed, the Repository is Disposed, which in turn Disposes the DbContext.
Something similar goes for the process that e-mails the Orders: it does not have to check the products, nor create customers; it only has to fetch data, and maybe update that an order has been e-mailed, or that e-mailing failed.
IJwtHelper jwtHelper = ...;

// The product database: all functionality to do everything with Products and Orders
ProductDbContext dbContext = new ProductDbContext(...) { JwtHelper = jwtHelper };

// The e-mail order repository: everything to fetch order data.
// It can't change ProductPrices, nor can it stock the warehouse;
// it can't add Customers, nor create Orders.
// However it can query a lot: customers, orders, ...
EmailOrderRepository repository = new EmailOrderRepository(...) { DbContext = dbContext };

// The e-mail order process: fetch non-emailed orders,
// e-mail them and update the e-mail result
using (EmailOrderProcess process = new EmailOrderProcess(...) { Repository = repository })
{
    ... // fetch the orders that are not e-mailed yet
    // e-mail the orders
    // warn about orders that can't be e-mailed
    // update the successfully e-mailed orders
    repository.SaveChanges();
}
See how much easier you make the creation process, and how much more versatile it becomes: give the DbContext a different JwtHelper, and the data is logged differently; give the Repository a different DbContext, and the data is saved in a different database; give the Process a different Repository, and you'll use Dapper to execute your queries.
Testing will be easier: create a Repository that uses Lists to save the tables, and testing your process with test data will be easy
Changes in databases will be easier. If for instance you later decide to separate your databases into one for your stock and stock prices and one for Customers and Orders, only one Repository needs to change. None of the Processes will notice this change.
Conclusion
Don't let the classes decide which objects they need. Let the creator of the class say: "hey, you need a DbContext? Use this one!" This removes the need for factories and such.
Separate your actual database (DbContext) from the concept of storing and retrieving data (Repository), and use a separate class that handles the data without knowing how this data is stored or retrieved (the process class).
Create several Repositories that can only access the data they need to perform their task (plus data access that can be foreseen after expected future changes). Don't make too many Repositories, but also not one that can do everything.
Create process classes that do only what they are meant to do. Don't create one process class with 20 different tasks; it will only make it more difficult to describe what it should do, more difficult to test, and more difficult to change.
If you want to reuse the existing implementation, EF Core 5 provides DbContext Factory out of the box now:
https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-5.0/whatsnew#dbcontextfactory
Make sure you dispose of it correctly, as its instances are not managed by the application's service provider.
See Microsoft documentation
Using a DbContext factory
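A short sketch of the built-in factory (AddDbContextFactory and IDbContextFactory<TContext>.CreateDbContext are the real EF Core 5 APIs; note this IDbContextFactory replaces the hand-written interface from the question, and the context is assumed to have a constructor taking DbContextOptions):
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// Registration: one generic factory per context type.
var services = new ServiceCollection();
services.AddDbContextFactory<MyDbContext>(options =>
    options.UseSqlServer("connection string here"));

// Consumption: the factory creates contexts that you must dispose yourself.
var provider = services.BuildServiceProvider();
var factory = provider.GetRequiredService<IDbContextFactory<MyDbContext>>();

using (var context = factory.CreateDbContext())
{
    // use the context; it is not tracked by the service provider
}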

Using DbContext SaveChanges with Transactions

As MSDN confirms, in EF 5 and on, the DbContext class is "a combination of the Unit-Of-Work and Repository patterns." In the web applications I build, I tend to implement the Repository and Unit-Of-Work patterns on top of the existing DbContext class. Lately, like many others out there, I've found that this is overkill in my scenario. I am not worried about the underlying storage mechanism ever changing from SQL Server, and while I appreciate the benefits that unit testing would bring, I still have a lot to learn about it before actually implementing it in a live application.
Thus, my solution is to use the DbContext class directly as the Repository and Unit of Work, and then use StructureMap to inject one instance per request into individual service classes, allowing them to do work on the context. Then in my controllers, I inject each service I need and call the methods necessary for each action accordingly. Also, each request is wrapped in a transaction created off of the DbContext at the beginning of the request, which is either rolled back if any type of exception occurs (whether it be an EF error or an application error) or committed if all is well. A sample code scenario is below.
This sample uses the Territory and Shipper tables from the Northwind sample database. In this sample admin controller, a territory and a shipper are being added at the same time.
Controller
public class AdminController : Controller
{
    private readonly TerritoryService _territoryService;
    private readonly ShipperService _shipperService;

    public AdminController(TerritoryService territoryService, ShipperService shipperService)
    {
        _territoryService = territoryService;
        _shipperService = shipperService;
    }

    // all other actions omitted...

    [HttpPost]
    public ActionResult Insert(AdminInsertViewModel viewModel)
    {
        if (!ModelState.IsValid)
            return View(viewModel);

        var newTerritory = // omitted code to map from viewModel
        var newShipper = // omitted code to map from viewModel

        _territoryService.Insert(newTerritory);
        _shipperService.Insert(newShipper);

        return RedirectToAction("SomeAction");
    }
}
Territory Service
public class TerritoryService
{
    private readonly NorthwindDbContext _dbContext;

    public TerritoryService(NorthwindDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Insert(Territory territory)
    {
        _dbContext.Territories.Add(territory);
    }
}
Shipper Service
public class ShipperService
{
    private readonly NorthwindDbContext _dbContext;

    public ShipperService(NorthwindDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Insert(Shipper shipper)
    {
        _dbContext.Shippers.Add(shipper);
    }
}
Creation of Transaction on Application_BeginRequest()
// _dbContext is an injected instance per request just like in services
HttpContext.Items["_Transaction"] = _dbContext.Database.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
Rollback or Commit of Transaction on Application_EndRequest
var transaction = (DbContextTransaction)HttpContext.Items["_Transaction"];

if (HttpContext.Items["_Error"] != null) // populated on Application_Error() in global
{
    transaction.Rollback();
}
else
{
    transaction.Commit();
}
Now this all seems to work well, but the only question I have now is where is it best to call the SaveChanges() function on the DbContext? Should I call it in each Service layer method?
public class TerritoryService
{
    // omitted code minus changes to Insert() method below

    public void Insert(Territory territory)
    {
        _dbContext.Territories.Add(territory);
        _dbContext.SaveChanges(); // <== Call it here?
    }
}

public class ShipperService
{
    // omitted code minus changes to Insert() method below

    public void Insert(Shipper shipper)
    {
        _dbContext.Shippers.Add(shipper);
        _dbContext.SaveChanges(); // <== Call it here?
    }
}
Or should I leave the service class Insert() methods as is and just call SaveChanges() right before the transaction is committed?
var transaction = (DbContextTransaction)HttpContext.Items["_Transaction"];

// HttpContext.Items["_Error"] populated on Application_Error() in global
if (HttpContext.Items["_Error"] != null)
{
    transaction.Rollback();
}
else
{
    // _dbContext is an injected instance per request just like in services
    _dbContext.SaveChanges(); // <== Call it here?
    transaction.Commit();
}
Is either way okay? Is it safe to call SaveChanges() more than once since it is wrapped in a transaction? Are there any issues I may run into by doing so? Or is it best to call SaveChanges() just once, right before the transaction is actually committed? I personally would rather just call it at the end, right before the transaction is committed, but I want to be sure I am not missing any gotchas with transactions or doing something wrong. If you read this far, thanks for taking the time to help. I know this was a long question.
You would call SaveChanges() when it's time to commit a single, atomic persistence operation. Since your services don't really know about each other or depend on each other, internally they have no way to guarantee one or the other is going to commit the changes. So in this setup I imagine they would each have to commit their changes.
This of course leads to the problem that these operations might not be individually atomic. Consider this scenario:
_territoryService.Insert(newTerritory); // success
_shipperService.Insert(newShipper); // error
In this case you've partially committed the data, leaving the system in a bit of an unknown state.
Which object in this scenario is in control over the atomicity of the operation? In web applications I think that's usually the controller. The operation, after all, is the request made by the user. In most scenarios (there are exceptions, of course) I imagine one would expect the entire request to succeed or fail.
If this is the case and your atomicity belongs at the request level then what I would recommend is getting the DbContext from the IoC container at the controller level and passing it to the services. (They already require it on their constructors, so not a big change there.) Those services can operate on the context, but never commit the context. The consuming code (the controller) can then commit it (or roll it back, or abandon it, etc.) once all of the services have completed their operations.
While different business objects, services, etc. should each internally maintain their own logic, I find that usually the objects which own the atomicity of operations are at the application level, governed by the business processes being invoked by the users.
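A sketch of that recommendation applied to the controller from the question (same types; only the commit point moves, and the mapping code stays elided as in the original):
public class AdminController : Controller
{
    private readonly NorthwindDbContext _dbContext;
    private readonly TerritoryService _territoryService;
    private readonly ShipperService _shipperService;

    public AdminController(NorthwindDbContext dbContext,
        TerritoryService territoryService, ShipperService shipperService)
    {
        _dbContext = dbContext; // the same per-request instance the services receive
        _territoryService = territoryService;
        _shipperService = shipperService;
    }

    [HttpPost]
    public ActionResult Insert(AdminInsertViewModel viewModel)
    {
        if (!ModelState.IsValid)
            return View(viewModel);

        var newTerritory = // omitted code to map from viewModel
        var newShipper = // omitted code to map from viewModel

        // services only stage changes; they never commit
        _territoryService.Insert(newTerritory);
        _shipperService.Insert(newShipper);

        // the controller owns the atomicity of the request
        _dbContext.SaveChanges();

        return RedirectToAction("SomeAction");
    }
}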
You're basically creating a repository here, rather than a service.
To answer your question, you could just ask yourself another question: "How will I be using this functionality?"
You're adding a couple of records, removing some records, updating some records. We could say that you're calling your various methods about 30 times. If you call SaveChanges 30 times you're making 30 round-trips to the database, causing a lot of traffic and overhead which COULD be avoided.
I usually recommend doing as few database round-trips as possible and limiting the number of calls to SaveChanges(). Therefore I recommend that you add a Save() method to your repository/service layer and call it in the layer which calls your repository/service layer.
Unless it is absolutely required to save something before doing something else, you shouldn't call it 30 times; you should call it one single time. If it is necessary to save something before doing something else, you can still call SaveChanges at that exact moment of need in the layer calling your repository/service layer.
Summary/TL;DR: Make a Save() method in your repository/service layer instead of calling SaveChanges() in each repository/service method. This will boost your performance and spare you the unnecessary overhead.
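As a minimal sketch of that recommendation (the service and entity names are illustrative, assuming an EF-backed layer):
public class CustomerService
{
    private readonly DbContext _context;

    public CustomerService(DbContext context)
    {
        _context = context;
    }

    // these only stage changes in the context's change tracker
    public void Add(Customer customer) => _context.Set<Customer>().Add(customer);
    public void Remove(Customer customer) => _context.Set<Customer>().Remove(customer);

    // one explicit round-trip, called by the consuming layer when it is done
    public void Save() => _context.SaveChanges();
}
The consuming layer can then queue its 30 mutations and finish with a single Save(), turning 30 round-trips into one.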

How are people unit testing with Entity Framework 6, should you bother?

I am just starting out with unit testing and TDD in general. I have dabbled before, but now I am determined to add it to my workflow and write better software.
I asked a question yesterday that kind of included this, but it seems to be a question on its own. I have sat down to start implementing a service class that I will use to abstract away the business logic from the controllers and map to specific models and data interactions using EF6.
The issue is that I have already roadblocked myself because I didn't want to abstract EF away in a repository (it will still be available outside the services for specific queries, etc.) and would still like to test my services (where the EF context will be used).
Here, I guess, is the question: is there a point to doing this? If so, how are people doing it in the wild, in light of the leaky abstractions caused by IQueryable and the many great posts by Ladislav Mrnka on the subject of unit testing not being straightforward because of the differences in LINQ providers when working with an in-memory implementation as opposed to a specific database?
The code I want to test seems pretty simple. (This is just dummy code to help me understand what I am doing; I want to drive the real creation using TDD.)
Context
public interface IContext
{
IDbSet<Product> Products { get; set; }
IDbSet<Category> Categories { get; set; }
int SaveChanges();
}
public class DataContext : DbContext, IContext
{
public IDbSet<Product> Products { get; set; }
public IDbSet<Category> Categories { get; set; }
public DataContext(string connectionString)
: base(connectionString)
{
}
}
Service
public class ProductService : IProductService
{
private IContext _context;
public ProductService(IContext dbContext)
{
_context = dbContext;
}
public IEnumerable<Product> GetAll()
{
var query = from p in _context.Products
select p;
return query;
}
}
Currently I am in the mindset of doing a few things:
Mocking the EF context with something like this approach - Mocking EF When Unit Testing - or directly using a mocking framework on the interface, like Moq; accepting the pain that the unit tests may pass but not necessarily work end to end, and backing them up with integration tests?
Maybe using something like Effort to mock EF - I have never used it and am not sure whether anyone else is using it in the wild?
Not bothering to test anything that simply calls through to EF - so service methods that call EF directly (GetAll etc.) are not unit tested, only integration tested?
Is anyone out there actually doing this without a repository and having success?
This is a topic I'm very interested in. There are many purists who say that you shouldn't test technologies such as EF and NHibernate. They are right: they're already very stringently tested and, as a previous answer stated, it's often pointless to spend vast amounts of time testing what you don't own.
However, you do own the database underneath! This is where, in my opinion, that approach breaks down: you don't need to test that EF/NH are doing their jobs correctly; you need to test that your mappings/implementations are working with your database. In my opinion this is one of the most important parts of a system you can test.
Strictly speaking, however, we're moving out of the domain of unit testing and into integration testing, but the principles remain the same.
The first thing you need to do is to be able to mock your DAL so your BLL can be tested independently of EF and SQL. These are your unit tests. Next you need to design your integration tests to prove your DAL; in my opinion these are every bit as important.
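For the unit-test half, here is a minimal sketch of mocking the DAL with Moq, using the IContext and IDbSet<Product> interfaces from the question (the list-backed seeding is illustrative):
// back IDbSet<Product> with an in-memory list; IDbSet<T> extends
// IQueryable<T>, so redirecting its LINQ members is enough for simple queries
var data = new List<Product> { new Product() /* set properties as needed */ }.AsQueryable();

var mockSet = new Mock<IDbSet<Product>>();
mockSet.Setup(m => m.Provider).Returns(data.Provider);
mockSet.Setup(m => m.Expression).Returns(data.Expression);
mockSet.Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

var mockContext = new Mock<IContext>();
mockContext.Setup(c => c.Products).Returns(mockSet.Object);

// the service now runs its query against the list, not a database
var service = new ProductService(mockContext.Object);
var all = service.GetAll().ToList();
The integration-test half then proves the real DAL against the real database.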
There are a couple of things to consider:
Your database needs to be in a known state with each test. Most systems use either a backup or create scripts for this.
Each test must be repeatable
Each test must be atomic
There are two main approaches to setting up your database. The first is to run a unit-test DB creation script, which ensures that your unit-test database will always be in the same state at the beginning of each test (you may either reset it or run each test in a transaction to ensure this).
Your other option is what I do, run specific setups for each individual test. I believe this is the best approach for two main reasons:
Your database is simpler, you don't need an entire schema for each test
Each test is safer, if you change one value in your create script it doesn't invalidate dozens of other tests.
Unfortunately your compromise here is speed. It takes time to run all these tests, to run all these setup/tear down scripts.
One final point, it can be very hard work to write such a large amount of SQL to test your ORM. This is where I take a very nasty approach (the purists here will disagree with me). I use my ORM to create my test! Rather than having a separate script for every DAL test in my system I have a test setup phase which creates the objects, attaches them to the context and saves them. I then run my test.
This is far from the ideal solution; however, in practice I find it's a LOT easier to manage (especially when you have several thousand tests), since otherwise you're creating massive numbers of scripts. Practicality over purity.
I will no doubt look back at this answer in a few years (months/days) and disagree with myself as my approaches have changed - however this is my current approach.
To try and sum up everything I've said above, this is my typical DB integration test:
[Test]
public void LoadUser()
{
this.RunTest(session => // the NH/EF session to attach the objects to
{
var user = new UserAccount("Mr", "Joe", "Bloggs");
session.Save(user);
return user.UserID;
}, id => // the ID of the entity we need to load
{
var user = LoadMyUser(id); // load the entity
Assert.AreEqual("Mr", user.Title); // test your properties
Assert.AreEqual("Joe", user.Firstname);
Assert.AreEqual("Bloggs", user.Lastname);
});
}
The key thing to notice here is that the sessions of the two loops are completely independent. In your implementation of RunTest you must ensure that the context is committed and destroyed and your data can only come from your database for the second part.
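One possible shape for RunTest under those constraints is sketched below (the session factory field and the NHibernate-style ISession are assumptions; adapt to your ORM):
protected void RunTest(Func<ISession, int> arrange, Action<int> assert)
{
    int id;
    using (var session = _sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        id = arrange(session); // create and persist the test data
        tx.Commit();           // commit so a later session can see it
    }

    // the arrange session is now disposed, so anything the assert step loads
    // can only have come back from the database, never the session cache
    assert(id);
}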
Edit 13/10/2014
I did say that I'd probably revise this model over the upcoming months. While I largely stand by the approach I advocated above, I've updated my testing mechanism slightly. I now tend to create the entities in the TestSetup and clean up in the TestTearDown.
[SetUp]
public void Setup()
{
this.SetupTest(session => // the NH/EF session to attach the objects to
{
var user = new UserAccount("Mr", "Joe", "Bloggs");
session.Save(user);
this.UserID = user.UserID;
});
}
[TearDown]
public void TearDown()
{
this.TearDownDatabase();
}
Then test each property individually
[Test]
public void TestTitle()
{
var user = LoadMyUser(this.UserID); // load the entity
Assert.AreEqual("Mr", user.Title);
}
[Test]
public void TestFirstname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Joe", user.Firstname);
}
[Test]
public void TestLastname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Bloggs", user.Lastname);
}
There are several reasons for this approach:
There are no additional database calls (one setup, one teardown)
The tests are far more granular, each test verifies one property
Setup/TearDown logic is removed from the Test methods themselves
I feel this makes the test class simpler and the tests more granular (single asserts are good)
Edit 5/3/2015
Another revision of this approach. While class-level setups are very helpful for tests such as loading properties, they are less useful where different setups are required. In that case, setting up a new class for each case is overkill.
To help with this I now tend to have two base classes: SetupPerTest and SingleSetup. These two classes expose the framework as required.
In the SingleSetup we have a mechanism very similar to the one described in my first edit. An example would be:
public class TestProperties : SingleSetup
{
public int UserID {get;set;}
public override void DoSetup(ISession session)
{
var user = new User("Joe", "Bloggs");
session.Save(user);
this.UserID = user.UserID;
}
[Test]
public void TestLastname()
{
var user = LoadMyUser(this.UserID); // load the entity
Assert.AreEqual("Bloggs", user.Lastname);
}
[Test]
public void TestFirstname()
{
var user = LoadMyUser(this.UserID);
Assert.AreEqual("Joe", user.Firstname);
}
}
However, tests which need to ensure that only the correct entities are loaded may use a SetupPerTest approach:
public class TestProperties : SetupPerTest
{
[Test]
public void EnsureCorrectReferenceIsLoaded()
{
int friendID = 0;
this.RunTest(session =>
{
var user = CreateUserWithFriend();
session.Save(user);
friendID = user.Friends.Single().FriendID;
}, () =>
{
var user = GetUser();
Assert.AreEqual(friendID, user.Friends.Single().FriendID);
});
}
[Test]
public void EnsureOnlyCorrectFriendsAreLoaded()
{
int userID = 0;
this.RunTest(session =>
{
var user = CreateUserWithFriends(2);
var user2 = CreateUserWithFriends(5);
session.Save(user);
session.Save(user2);
userID = user.UserID;
}, () =>
{
var user = GetUser(userID);
Assert.AreEqual(2, user.Friends.Count());
});
}
}
In summary both approaches work depending on what you are trying to test.
Effort Experience Feedback here
After a lot of reading I have been using Effort in my tests: during the tests the context is built by a factory that returns an in-memory version, which lets me test against a blank slate each time. Outside of the tests, the factory is resolved to one that returns the whole context.
However, I have a feeling that testing against a full-featured mock of the database tends to drag the tests down; you realize you have to take care of setting up a whole bunch of dependencies in order to test one part of the system. You also tend to drift towards organizing tests together that may not be related, just because there is only one huge object that handles everything. If you don't pay attention, you may find yourself doing integration testing instead of unit testing.
I would have preferred testing against something more abstract rather than a huge DbContext, but I couldn't find the sweet spot between meaningful tests and bare-bones tests. Chalk it up to my inexperience.
So I find Effort interesting; if you need to hit the ground running it is a good tool to quickly get started and get results. However, I think that something a bit more elegant and abstract should be the next step, and that is what I am going to investigate next. Favoriting this post to see where it goes next :)
Edit to add: Effort does take some time to warm up, so you're looking at approx. 5 seconds at test start-up. This may be a problem for you if you need your test suite to be very efficient.
Edited for clarification:
I used Effort to test a web service app. Each message M that enters is routed to an IHandlerOf<M> via Windsor. Castle.Windsor resolves the IHandlerOf<M>, which resolves the dependencies of the component. One of these dependencies is the DataContextFactory, which lets the handler ask for the factory.
In my tests I instantiate the IHandlerOf component directly, mock all the sub-components of the SUT, and hand the Effort-wrapped DataContextFactory to the handler.
It means that I don't unit test in a strict sense, since the DB is hit by my tests. However, as I said above, it let me hit the ground running and I could quickly test some points in the application.
If you want to unit test code then you need to isolate your code you want to test (in this case your service) from external resources (e.g. databases). You could probably do this with some sort of in-memory EF provider, however a much more common way is to abstract away your EF implementation e.g. with some sort of repository pattern. Without this isolation any tests you write will be integration tests, not unit tests.
As for testing EF code - I write automated integration tests for my repositories that write various rows to the database during their initialization, and then call my repository implementations to make sure that they behave as expected (e.g. making sure that results are filtered correctly, or that they are sorted in the correct order).
These are integration tests not unit tests, as the tests rely on having a database connection present, and that the target database already has the latest up-to-date schema installed.
I fumbled around for some time before reaching these conclusions:
1- If my application accesses the database, why shouldn't the tests? What if there is something wrong with data access? The tests must find out beforehand and alert me to the problem.
2- The repository pattern is somewhat hard and time-consuming.
So I came up with this approach, which I don't think is the best, but which fulfilled my expectations:
Use TransactionScope in the test methods to avoid changes to the database.
To do this it's necessary to:
1- Install Entity Framework in the test project.
2- Put the connection string in the app.config file of the test project.
3- Reference the System.Transactions dll in the test project.
The only side effect is that the identity seed will increment on each attempted insert, even when the transaction is aborted. But since the tests run against a development database, this should be no problem.
Sample code:
[TestClass]
public class NameValueTest
{
[TestMethod]
public void Edit()
{
NameValueController controller = new NameValueController();
using(var ts = new TransactionScope()) {
Assert.IsNotNull(controller.Edit(new Models.NameValue()
{
NameValueId = 1,
name1 = "1",
name2 = "2",
name3 = "3",
name4 = "4"
}));
//no complete, automatically abort
//ts.Complete();
}
}
[TestMethod]
public void Create()
{
NameValueController controller = new NameValueController();
using (var ts = new TransactionScope())
{
Assert.IsNotNull(controller.Create(new Models.NameValue()
{
name1 = "1",
name2 = "2",
name3 = "3",
name4 = "4"
}));
//no complete, automatically abort
//ts.Complete();
}
}
}
I would not unit test code I don't own. What are you testing here, that the MSFT compiler works?
That said, to make this code testable, you almost HAVE to make your data access layer separate from your business logic code. What I do is take all of my EF stuff and put it in one (or multiple) DAO or DAL classes, each of which has a corresponding interface. Then I write my service, which has the DAO or DAL object injected as a dependency (constructor injection, preferably), referenced as the interface. Now the part that needs to be tested (your code) can easily be tested by mocking out the DAO interface and injecting that into your service instance inside your unit test.
//this is testable just inject a mock of IProductDAO during unit testing
public class ProductService : IProductService
{
private IProductDAO _productDAO;
public ProductService(IProductDAO productDAO)
{
_productDAO = productDAO;
}
public List<Product> GetAllProducts()
{
return _productDAO.GetAll();
}
...
}
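A matching unit test might look like this sketch (using Moq; the test name and seed data are illustrative):
[TestMethod]
public void GetAllProducts_ReturnsWhatTheDaoProvides()
{
    // arrange: the DAO is mocked, so no EF and no database are involved
    var mockDao = new Mock<IProductDAO>();
    mockDao.Setup(d => d.GetAll()).Returns(new List<Product> { new Product() });

    var service = new ProductService(mockDao.Object);

    // act
    var result = service.GetAllProducts();

    // assert: only the service's own logic is under test
    Assert.AreEqual(1, result.Count);
}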
I would consider live Data Access Layers to be part of integration testing, not unit testing. I have seen guys run verifications on how many trips to the database hibernate makes before, but they were on a project that involved billions of records in their datastore and those extra trips really mattered.
So here's the thing: Entity Framework is an implementation, so despite the fact that it abstracts away the complexity of database interaction, interacting with it directly is still tight coupling, and that's why it's confusing to test.
Unit testing is about testing the logic of a function and each of its potential outcomes in isolation from any external dependencies, which in this case is the data store. In order to do that, you need to be able to control the behavior of the data store. For example, if you want to assert that your function returns false if the fetched user doesn't meet some set of criteria, then your [mocked] data store should be configured to always return a user that fails to meet the criteria, and vice versa for the opposite assertion.
With that said, and accepting the fact that EF is an implementation, I would likely favor the idea of abstracting a repository. Does that seem a bit redundant? It's not, because you are solving one specific problem: isolating your code from the data implementation.
In DDD, the repositories only ever return aggregate roots, not DAOs. That way, the consumer of the repository never has to know about the data implementation (as it shouldn't), and we can use that as an example of how to solve this problem. In this case, the object that is generated by EF is a DAO and, as such, should be hidden from your application. This is another benefit of the repository that you define: you can declare a business object as its return type instead of the EF object. Now the repo hides the calls to EF and maps the EF response to the business object defined in the repo's signature. Now you can use that repo in place of the DbContext dependency that you inject into your classes and, consequently, you can mock that interface to give you the control you need in order to test your code in isolation.
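A minimal sketch of that shape (all names here are illustrative): the repository signature exposes a business type, and the EF entity never leaves the data layer.
// the application codes against this; Customer is a plain business object,
// not the EF-generated entity
public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class EfCustomerRepository : ICustomerRepository
{
    private readonly MyDbContext _db; // EF stays hidden behind the interface

    public EfCustomerRepository(MyDbContext db)
    {
        _db = db;
    }

    public Customer GetById(int id)
    {
        var row = _db.Customers.Find(id); // the EF DAO
        if (row == null) return null;

        // map the DAO to the business object declared in the signature
        return new Customer { Id = row.Id, Name = row.Name };
    }
}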
It's a bit more work and many thumb their nose at it, but it solves a real problem. There's an in-memory provider that was mentioned in a different answer that could be an option (I have not tried it), and its very existence is evidence of the need for the practice.
I completely disagree with the top answer, because it sidesteps the real issue, which is isolating your code, and then goes off on a tangent about testing your mapping. By all means test your mapping if you want to, but address the actual issue here and get some real code coverage.
In short I would say no, the juice is not worth the squeeze to test a service method with a single line that retrieves model data. In my experience, people who are new to TDD want to test absolutely everything. The old chestnut of abstracting a facade to a third-party framework just so you can create a mock of that framework's API, which you then bastardise/extend so that you can inject dummy data, is of little value in my mind. Everyone has a different view of how much unit testing is best. I tend to be more pragmatic these days and ask myself if my test is really adding value to the end product, and at what cost.
I want to share an approach that was commented on and briefly discussed above, but then show an actual example that I am currently using to help unit test EF-based services.
First, I would love to use the in-memory provider from EF Core, but this is about EF 6. Furthermore, for other storage systems like RavenDB, I'd also be a proponent of testing via the in-memory database provider. Again--this is specifically to help test EF-based code without a lot of ceremony.
Here are the goals I had when coming up with a pattern:
It must be simple for other developers on the team to understand
It must isolate the EF code at the barest possible level
It must not involve creating weird multi-responsibility interfaces (such as a "generic" or "typical" repository pattern)
It must be easy to configure and setup in a unit test
I agree with previous statements that EF is still an implementation detail and it's okay to feel like you need to abstract it in order to do a "pure" unit test. I also agree that ideally, I would want to ensure the EF code itself works--but this involves a sandbox database, in-memory provider, etc. My approach solves both problems--you can safely unit test EF-dependent code and create integration tests to test your EF code specifically.
The way I achieved this was through simply encapsulating EF code into dedicated Query and Command classes. The idea is simple: just wrap any EF code in a class and depend on an interface in the classes that would've originally used it. The main issue I needed to solve was to avoid adding numerous dependencies to classes and setting up a lot of code in my tests.
This is where a useful, simple library comes in: MediatR. It allows for simple in-process messaging, and it does so by decoupling "requests" from the handlers that implement the code. This has the added benefit of decoupling the "what" from the "how". For example, by encapsulating the EF code into small chunks, it allows you to replace the implementations with another provider or a totally different mechanism, because all you are doing is sending a request to perform an action.
Utilizing dependency injection (with or without a framework--your preference), we can easily mock the mediator and control the request/response mechanisms to enable unit testing EF code.
First, let's say we have a service that has business logic we need to test:
public class FeatureService {
private readonly IMediator _mediator;
public FeatureService(IMediator mediator) {
_mediator = mediator;
}
public async Task ComplexBusinessLogic() {
// retrieve relevant objects
var results = await _mediator.Send(new GetRelevantDbObjectsQuery());
// normally, this would have looked like...
// var results = _myDbContext.DbObjects.Where(x => foo).ToList();
// perform business logic
// ...
}
}
Do you start to see the benefit of this approach? Not only are you explicitly encapsulating all EF-related code into descriptive classes, you are allowing extensibility by removing the implementation concern of "how" this request is handled--this class doesn't care if the relevant objects come from EF, MongoDB, or a text file.
Now for the request and handler, via MediatR:
public class GetRelevantDbObjectsQuery : IRequest<DbObject[]> {
// no input needed for this particular request,
// but you would simply add plain properties here if needed
}
public class GetRelevantDbObjectsEFQueryHandler : IRequestHandler<GetRelevantDbObjectsQuery, DbObject[]> {
private readonly IDbContext _db;
public GetRelevantDbObjectsEFQueryHandler(IDbContext db) {
_db = db;
}
public DbObject[] Handle(GetRelevantDbObjectsQuery message) {
return _db.DbObjects.Where(x => true /* your filter here */).ToArray();
}
}
As you can see, the abstraction is simple and encapsulated. It's also absolutely testable because in an integration test, you could test this class individually--there are no business concerns mixed in here.
So what does a unit test of our feature service look like? It's way simple. In this case, I'm using Moq to do mocking (use whatever makes you happy):
[TestClass]
public class FeatureServiceTests {
// mock of Mediator to handle request/responses
private Mock<IMediator> _mediator;
// subject under test
private FeatureService _sut;
[TestInitialize]
public void Setup() {
// set up Mediator mock
_mediator = new Mock<IMediator>(MockBehavior.Strict);
// inject mock as dependency
_sut = new FeatureService(_mediator.Object);
}
[TestCleanup]
public void Teardown() {
// ensure we have called or expected all calls to Mediator
_mediator.VerifyAll();
}
[TestMethod]
public void ComplexBusinessLogic_Does_What_I_Expect() {
var dbObjects = new List<DbObject>() {
// set up any test objects
new DbObject() { }
};
// arrange
// setup Mediator to return our fake objects when it receives a message to perform our query
// in practice, I find it better to create an extension method that encapsulates this setup here
_mediator.Setup(x => x.Send(It.IsAny<GetRelevantDbObjectsQuery>(), default(CancellationToken))).ReturnsAsync(dbObjects.ToArray()).Callback(
(GetRelevantDbObjectsQuery message, CancellationToken token) => {
// using Moq Callback functionality, you can make assertions
// on expected request being passed in
Assert.IsNotNull(message);
});
// act
_sut.ComplexBusinessLogic().GetAwaiter().GetResult(); // block on the async call in this synchronous test
// assertions
}
}
You can see all we need is a single setup and we don't even need to configure anything extra--it's a very simple unit test. Let's be clear: this is totally possible to do without something like MediatR (you would simply implement an interface and mock it for tests, e.g. IGetRelevantDbObjectsQuery), but in practice for a large codebase with many features and queries/commands, I love the encapsulation and innate DI support MediatR offers.
If you're wondering how I organize these classes, it's pretty simple:
- MyProject
- Features
- MyFeature
- Queries
- Commands
- Services
- DependencyConfig.cs (Ninject feature modules)
Organizing by feature slices is beside the point, but this keeps all relevant/dependent code together and easily discoverable. Most importantly, I separate the Queries vs. Commands--following the Command/Query Separation principle.
This meets all my criteria: it's low-ceremony, it's easy to understand, and there are extra hidden benefits. For example, how do you handle saving changes? Now you can simplify your Db Context by using a role interface (IUnitOfWork.SaveChangesAsync()) and mock calls to the single role interface or you could encapsulate committing/rolling back inside your RequestHandlers--however you prefer to do it is up to you, as long as it's maintainable. For example, I was tempted to create a single generic request/handler where you'd just pass an EF object and it would save/update/remove it--but you have to ask what your intention is and remember that if you wanted to swap out the handler with another storage provider/implementation, you should probably create explicit commands/queries that represent what you intend to do. More often than not, a single service or feature will need something specific--don't create generic stuff before you have a need for it.
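As a quick sketch of that role-interface option (only the IUnitOfWork.SaveChangesAsync() member comes from the paragraph above; the EF-backed implementation is an assumption):
public interface IUnitOfWork
{
    Task SaveChangesAsync();
}

// a trivial EF-backed implementation; tests mock IUnitOfWork instead of the DbContext
public class EfUnitOfWork : IUnitOfWork
{
    private readonly DbContext _db;

    public EfUnitOfWork(DbContext db)
    {
        _db = db;
    }

    public Task SaveChangesAsync() => _db.SaveChangesAsync();
}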
There are of course caveats to this pattern--you can go too far with a simple pub/sub mechanism. I've limited my implementation to only abstracting EF-related code, but adventurous developers could start using MediatR to go overboard and message-ize everything--something good code review practices and peer reviews should catch. That's a process issue, not an issue with MediatR, so just be cognizant of how you're using this pattern.
You wanted a concrete example of how people are unit testing/mocking EF and this is an approach that's working successfully for us on our project--and the team is super happy with how easy it is to adopt. I hope this helps! As with all things in programming, there are multiple approaches and it all depends on what you want to achieve. I value simplicity, ease of use, maintainability, and discoverability--and this solution meets all those demands.
In order to unit test code that relies on your database, you need to set up a database or a mock for each and every test.
Having a database (real or mocked) with a single state for all your tests will bite you quickly; you cannot test both "all records are valid" and "some records are invalid" against the same data.
Setting up an in-memory database in a OneTimeSetup will run into issues where the old database is not cleared down before the next test starts. This shows up as tests passing when you run them individually but failing when you run them all.
A unit test should ideally set up only what affects the test.
I am working in an application that has a lot of tables with a lot of connections and some massive LINQ blocks. These need testing. A missed grouping, or a join that results in more than one row, will affect results.
To deal with this I have set up a heavy unit-test helper that is a lot of work to set up, but enables us to reliably mock the database in any state; running 48 tests against 55 interconnected tables, with the entire database set up 48 times, takes 4.7 seconds.
Here's how:
In the DbContext class, ensure each table property is declared virtual:
public virtual DbSet<Branch> Branches { get; set; }
public virtual DbSet<Warehouse> Warehouses { get; set; }
In a UnitTestHelper class, create a method to set up your database. Each table class is an optional parameter; if not supplied, it will be created through a Make method:
internal static Db Bootstrap(bool onlyMockPassedTables = false, List<Branch> branches = null, List<Product> products = null, List<Warehouse> warehouses = null)
{
if (onlyMockPassedTables == false) {
branches ??= new List<Branch> { MakeBranch() };
warehouses ??= new List<Warehouse>{ MakeWarehouse() };
}
For each table class, each object in it is linked to the related objects in the other lists:
branches?.ForEach(b => {
b.Warehouse = warehouses.FirstOrDefault(w => w.ID == b.WarehouseID);
});
warehouses?.ForEach(w => {
w.Branches = branches.Where(b => b.WarehouseID == w.ID).ToList();
});
And add it to the DbContext
var context = new Db(new DbContextOptionsBuilder<Db>().UseInMemoryDatabase(Guid.NewGuid().ToString()).Options);
context.Branches.AddRange(branches);
context.Warehouses.AddRange(warehouses);
context.SaveChanges();
return context;
}
Define a set of constant IDs to make it easier to reuse them and to make sure joins are valid:
internal const int BranchID = 1;
internal const int WarehouseID = 2;
Create a Make method for each table to set up the most basic, but still connected, version of the entity:
internal static Branch MakeBranch(int id = BranchID, string code = "The branch", int warehouseId = WarehouseID) => new Branch { ID = id, Code = code, WarehouseID = warehouseId };
internal static Warehouse MakeWarehouse(int id = WarehouseID, string code = "B", string name = "My Big Warehouse") => new Warehouse { ID = id, Code = code, Name = name };
It's a lot of work, but it only needs doing once, and then your tests can be very focused because the rest of the database will be set up for them.
[Test]
[TestCase(new string [] {"ABC", "DEF"}, "ABC", ExpectedResult = 1)]
[TestCase(new string [] {"ABC", "BCD"}, "BC", ExpectedResult = 2)]
[TestCase(new string [] {"ABC"}, "EF", ExpectedResult = 0)]
[TestCase(new string[] { "ABC", "DEF" }, "abc", ExpectedResult = 1)]
public int Given_SearchingForBranchByName_Then_ReturnCount(string[] codesInDatabase, string searchString)
{
// Arrange
var branches = codesInDatabase.Select(x => UnitTestHelpers.MakeBranch(code: $"qqqq{x}qqq")).ToList();
var db = UnitTestHelpers.Bootstrap(branches: branches);
var service = new BranchService(db);
// Act
var result = service.SearchByName(searchString);
// Assert
return result.Count();
}
There is Effort, which is an in-memory Entity Framework database provider. I've not actually tried it... Ha, I just spotted this was mentioned in the question!
Alternatively you could switch to Entity Framework Core, which has an in-memory database provider built in.
https://blog.goyello.com/2016/07/14/save-time-mocking-use-your-real-entity-framework-dbcontext-in-unit-tests/
https://github.com/tamasflamich/effort
I used a factory to get a context, so I can create the context close to its use. This seems to work locally in Visual Studio but not on my TeamCity build server; I'm not sure why yet.
return new MyContext(#"Server=(localdb)\mssqllocaldb;Database=EFProviders.InMemory;Trusted_Connection=True;");
I like to separate my filters from other portions of the code and test those, as I outline on my blog here: http://coding.grax.com/2013/08/testing-custom-linq-filter-operators.html
That being said, the filter logic being tested is not identical to the filter logic executed when the program is run due to the translation between the LINQ expression and the underlying query language, such as T-SQL. Still, this allows me to validate the logic of the filter. I don't worry too much about the translations that happen and things such as case-sensitivity and null-handling until I test the integration between the layers.
It is important to test what you are expecting Entity Framework to do (i.e. validate your expectations). One way to do this that I have used successfully is to use Moq, as shown in this example (too long to copy into this answer):
https://learn.microsoft.com/en-us/ef/ef6/fundamentals/testing/mocking
However, be careful... A SQL context is not guaranteed to return rows in a specific order unless you have an appropriate OrderBy in your LINQ query, so it's possible to write things that pass when you test using an in-memory list (LINQ to Objects) but fail in your UAT/live environment when the query is translated to SQL (LINQ to Entities).
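For example, a sketch (the entity and property names are illustrative):
// An in-memory list preserves insertion order; SQL Server guarantees no
// order without an ORDER BY. Making the ordering explicit keeps the unit
// test and the live query in agreement.
var products = context.Products
    .Where(p => p.CategoryId == categoryId)
    .OrderBy(p => p.Name)
    .ToList();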

Mixing Repository implementations for different data sources

A Repository as defined by Martin Fowler is supposed to act like an in-memory domain object collection. This allows the application (in theory) to be ignorant of the persistence mechanism.
So under normal circumstances you'd have something like this:
public void MyBusinessLogicMethod () {
...
IRepository<Customer> repository = myIocContainer.Resolve<IRepository<Customer>>();
repository.Add(customer);
}
If however you have a series of inserts/updates that you wish to do and want a mechanism to roll back should any of them fail you'd need some sort of UnitOfWork implementation:
public void MyBusinessLogicMethod () {
...
using (IUnitOfWork uow = new UnitOfWork()){
IRepository<Customer> customerRepo = myIocContainer.Resolve<IRepository<Customer>>(uow);
customerRepo.Add(customer);
IRepository<Order> orderRepo = myIocContainer.Resolve<IRepository<Order>>(uow);
orderRepo.Add(order);
IRepository<Invoice> invoiceRepo = myIocContainer.Resolve<IRepository<Invoice>>(uow);
invoiceRepo.Update(invoice);
uow.Save();
}
}
However if you had some bizarre requirement that your Customer Repository was acting against a SqlServer database, your Order Repository against a MySql database and your Invoice Repository against a PostgreSQL database, how would you go about handling the Transactions for each database session?
Now this is a bit of a contrived example, for sure, but every Repository implementation I've come across seems to know at some level that it's really a particular database and ORM being used.
Imagine another scenario where you have two repositories, one going to a database and the other calling a web service. The whole point of repositories is that the application shouldn't care what data source it is going to, but without jumping through some massive hoops I don't see how these scenarios can be accounted for without the application knowing at some level, "FYI, this is going to data source x, so we'd better treat it differently."
Is there a pattern or implementation that addresses this issue? It seems to me that if you are using database x and ORM y for your entire application then repositories work splendidly, but if, due to technical debt, that course deviates, then the benefits of repositories are greatly reduced.
In your UnitOfWork, as suggested, you should use a TransactionScope transaction.
In your case it escalates to MSDTC and ensures all enlisted operations are executed correctly before committing, or otherwise rolls them all back.
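A minimal sketch of that suggestion, reusing the IUnitOfWork shape from the question (the enlistment mechanism and the System.Transactions usage are illustrative):
public class UnitOfWork : IUnitOfWork, IDisposable
{
    // each repository registers the action that persists its pending changes
    private readonly List<Action> _pendingSaves = new List<Action>();

    public void Enlist(Action saveAction)
    {
        _pendingSaves.Add(saveAction);
    }

    public void Save()
    {
        // one ambient transaction spanning every enlisted data source; with
        // more than one durable resource this escalates to MSDTC
        using (var scope = new TransactionScope())
        {
            foreach (var save in _pendingSaves)
                save(); // SQL Server, MySQL, PostgreSQL each write here

            scope.Complete(); // commit all, or roll back together on exception
        }
        _pendingSaves.Clear();
    }

    public void Dispose()
    {
        // nothing held open between Save calls in this sketch
    }
}
Whether each provider's connection actually enlists in the ambient transaction depends on the driver; MSDTC escalation is what makes the cross-database commit atomic when they do.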
