I am trying to use the unit-of-work and repository patterns, and I have the following "update" method, which works fine if I am replacing all the elements in a table row (id, color, year).
public virtual void Update(TEntity entityToUpdate)
{
dbSet.Attach(entityToUpdate);
context.Entry(entityToUpdate).State = EntityState.Modified;
}
But I want to update just the specific columns I pass in (id & color); as it stands, the method overwrites the other columns (year).
So for example, I have a database record in my Cars table:
Id = 1,
color = "red"
year = 2010
if I update it like so...
var car = new Car
{
Id = 1,
color = "blue"
};
unitOfWork.CarRepository.Update(car);
the record is now:
Id = 1,
color = "blue"
year = null
How can I rewrite my generic repository method to just change what I feed it? (ie keep the year value)
You will not be able to do so reasonably with the generic repository pattern. You really have no need to use this pattern: Entity Framework is already a generic repository, so why would you need to wrap it in another generic repository? This type of abstraction adds negative value.
You do want to encapsulate your database usage away from your controller (there shouldn't ever be a DbContext in an MVC controller), but you don't need any special patterns to do so. Just inject the DbContext into a class that does the work.
Also, the unit-of-work pattern is for the most part an anti-pattern if you pass the UOW around. This creates some really insane coupling issues in your application, where wholly unrelated code is able to impact vastly different segments of code.
Dropping the generic repository and using EF directly inside your service/DAL/resource class (whatever you want to call it) will allow you the full functionality of EF. This will allow doing partial updates very trivially.
To do partial updates with a generic repository you would need some heavy-duty dynamic code for dealing with mapping. Honestly, I could theoretically write this, but I know enough to know not to. The more abstract you make mapping, the more brittle it becomes; it is next to impossible to predict the future of how mapping will need to be done. This is why there are entire libraries like AutoMapper for dealing with the infinite number of combinations in which mapping can be done. AutoMapper is also somewhat of a misnomer: while it can do basic auto-mapping, for the most part its use case is still static mapping, not dynamic mapping. You would need to create dynamic mapping, aka crystal-ball mapping.
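For comparison, here is roughly what a partial update looks like when you use EF directly (a sketch; `CarsContext` is an assumed context name, `Car`/`Color` come from the question):

```csharp
using (var context = new CarsContext())
{
    // Stub entity carrying only the key and the columns we want to change
    var car = new Car { Id = 1, Color = "blue" };
    context.Cars.Attach(car);

    // Mark only Color as modified; Year is left untouched in the database
    context.Entry(car).Property(c => c.Color).IsModified = true;

    context.SaveChanges();
}
```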
The update method you've written assumes you have started with an existing entity object with properties set. The repository assumes that your business logic works like this:
var car = repo.GetCar(id);
car.prop1 = "new value";
car.prop2 = "another new value";
repo.Update(car);
This will keep any previous values you had set.
I am maintaining an application which uses EF Core to persist data to a SQL database.
I am trying to implement a new feature which requires me to retrieve an object from the database (let's pretend it's an order), manipulate it and some of the order lines attached to it, and save it back into the database. This wouldn't be a problem, but I have inherited some of this code, so I need to try to stick to the existing way of doing things.
The basic process for data access is :
UI -> API -> Service -> Repository -> DataContext
The methods in the repo follow this pattern (Though I have simplified it for the purposes of this question)
public Order GetOrder(int id)
{
return _context.Orders.Include(o=>o.OrderLines).FirstOrDefault(x=>x.Id == id);
}
The service is where business logic and mapping to DTOs are applied; this is what the GetOrder method looks like:
public OrderDTO GetOrder(int id)
{
var ord = _repo.GetOrder(id);
return _mapper.Map<OrderDTO>(ord);
}
So to retrieve and manipulate an order my code would look something like this
public void ManipulateAnOrder()
{
// Get the order DTO from the service
var order = _service.GetOrder(3);
// Manipulate the order
order.UpdatedBy = "Daneel Olivaw";
order.OrderLines.ForEach(ol=>ol.UpdatedBy = "Daneel Olivaw");
_service.SaveOrder(order);
}
And the method in the service which allows this to be saved back to the DB would look something like this:
public void SaveOrder(OrderDTO order)
{
// Get the original item from the database
var original = _repo.GetOrder(order.Id);
// Merge the original and the new DTO together
_mapper.Map(order, original);
_repo.Save(original);
}
Finally, the repository's Save method looks like this:
public void Save(Order order)
{
    _context.Update(order);
    _context.SaveChanges();
}
The problem that I am encountering is that mapping the entities from the context into DTOs and back again causes the nested objects (in this instance the OrderLines) to be changed (or recreated) by AutoMapper in such a way that EF no longer recognises them as the entities it has just given us.
This results in errors when updating along the lines of
InvalidOperationException the instance of ProductLine cannot be tracked because another instance with the same key value for {'Id'} is already being tracked.
Now, to me it's not that there is ANOTHER instance of the object being tracked; it's the same one. But I understand that the mapping process has broken that link, and EF can no longer determine that they are the same object.
So I have been looking for ways to rectify this. Two approaches have jumped out at me as promising:
the answer mentioned here EF & Automapper. Update nested collections
Automapper.Collection
AutoMapper.Collection seems to be the better route, but I can't find a good working example of it in use, and the implementation I have done doesn't seem to work.
So, I'm looking for advice from anyone who has either used automapper collections before successfully or anyone that has any suggestions as to how best to approach this.
Edit: I have knocked up a quick console app as an example. Note that when I say quick I mean... horrible: there is no DI or anything like that, and I have done away with the repositories and services to keep it simple.
I have also left in a commented-out mapper profile which does work, but isn't ideal. You will see what I mean when you look at it.
Repo is here https://github.com/DavidDBD/AutomapperExample
OK, after examining every scenario, and counting on the fact that I did what you're trying to do in a previous project and it worked out of the box: updating your Entity Framework Core NuGet packages to the latest stable version (3.1.8) solved the issue without modifying your code.
AutoMapper in fact "has broken that link", and the mapped entities you are trying to save are a set of new objects not previously tracked by your DbContext. If the mapped entities were the same objects, you wouldn't have gotten this error.
In fact, it has nothing to do with AutoMapper and the mapping process, but how the DbContext is being used and how the entity states are being managed.
In your ManipulateAnOrder method after getting the mapped entities -
var order = _service.GetOrder(3);
your DbContext instance is still alive and at the repository layer it is tracking the entities you just retrieved, while you are modifying the mapped entities -
order.UpdatedBy = "Daneel Olivaw";
order.OrderLines.ForEach(ol=>ol.UpdatedBy = "Daneel Olivaw");
Then, when you are trying to save the modified entities -
_service.SaveOrder(order);
these mapped entities reach the repository layer, and the DbContext tries to add them to its tracking list, but finds that it already has entities of the same type with the same Ids in the list (the previously fetched ones). EF can track only one instance of a given type with a given key; hence the complaining message.
One way to solve this is, when fetching the Order, to tell EF not to track it at the repository layer:
public Order GetOrder(int id, bool tracking = true) // optional parameter
{
if(!tracking)
{
return _context.Orders.Include(o=>o.OrderLines).AsNoTracking().FirstOrDefault(x=>x.Id == id);
}
return _context.Orders.Include(o=>o.OrderLines).FirstOrDefault(x=>x.Id == id);
}
(or you can add a separate method for handling NoTracking calls) and then at your Service layer -
var order = _repo.GetOrder(id, false); // for this operation tracking is false
I'm struggling a little bit with the following problem. Let's say I want to manage dependencies in my project so that my domain won't depend on any external stuff; in this problem, on the repository. In this example, let's say my domain is in project.Domain.
To do so, I declared an interface for my repository in project.Domain, which I implement in project.Infrastructure. Reading the DDD Red Book by Vernon, I noticed that he suggests the method for creating a new ID for an aggregate should be placed in the repository, like:
public class EntityRepository
{
public EntityId NextIdentity()
{
// create new instance of EntityId
}
}
Inside this EntityId object would be a GUID, but I want to model my ID explicitly, which is why I'm not using plain GUIDs. I also know I could skip this problem entirely and generate the GUID on the database side, but for the sake of this argument let's assume that I really want to generate it inside my application.
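For context, an explicitly modelled ID along those lines usually ends up as a small wrapper like this (a sketch; the EntityId name comes from the question, the members are assumptions):

```csharp
public sealed class EntityId
{
    public Guid Value { get; }

    public EntityId(Guid value)
    {
        if (value == Guid.Empty)
            throw new ArgumentException("Identity must not be empty.", nameof(value));
        Value = value;
    }
}
```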
Right now I'm just wondering: are there any specific reasons for this method to be placed inside the repository, as Vernon suggests, or could I implement identity creation inside the entity itself, for example:
public class Entity
{
public static EntityId NextIdentity()
{
// create new instance of EntityId
}
}
You could place it in the repository as Vernon says, but another idea would be to pass a factory into the constructor of your base entity that creates the identifier. This way you have identifiers before you even interact with repositories, and you can define one implementation per ID-generation strategy. A repository may involve a connection to something like a web service or a database, which can be costly or unavailable.
There are good strategies (especially with GUID) that allow good handling of identifiers. This also makes your application fully independent of the outside world.
This also enables you to have different identifier types throughout your application if the need arises.
For example:
public abstract class Entity<TKey>
{
public TKey Id { get; }
protected Entity() { }
protected Entity(IIdentityFactory<TKey> identityFactory)
{
if (identityFactory == null)
throw new ArgumentNullException(nameof(identityFactory));
Id = identityFactory.CreateIdentity();
}
}
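A matching factory abstraction and a GUID-based implementation might look like this (a sketch; IIdentityFactory is taken from the snippet above, GuidIdentityFactory is an assumed name):

```csharp
public interface IIdentityFactory<TKey>
{
    TKey CreateIdentity();
}

public sealed class GuidIdentityFactory : IIdentityFactory<Guid>
{
    public Guid CreateIdentity() => Guid.NewGuid();
}
```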
Yes, you could bypass the call to the repository and just generate the identity on the Entity. The problem, however, is that you've broken the core idea behind the repository: keeping everything related to entity storage isolated from the entity itself.
I would say keep the NextIdentity method in the repository, and still use it even if you are only generating the GUIDs client-side. The benefit is that in some future where you want to change how the identities are seeded, you can support that through the repository. Whereas if you go with the approach directly on the Entity, you would have to refactor later to support such a change.
Also, consider scenarios where you would use different repositories, such as in testing. You might want to generate two identities with the same ID and perform clash testing, or "does this fail properly" testing. Having the repository handle generation gives you the opportunity to get creative in such ways, without writing completely artificial test cases that don't mimic actual production calls.
TLDR; Keep it in the repository, even if your identifier can be client-side generated.
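In that spirit, the repository-owned version stays trivial today while leaving room to change the seeding strategy later (a sketch; it assumes an EntityId type wrapping a Guid):

```csharp
public class EntityRepository
{
    // Client-side GUID today; could later become a HiLo block,
    // a database sequence, or a fixed value for clash testing.
    public EntityId NextIdentity() => new EntityId(Guid.NewGuid());
}
```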
So I am currently extending the classes that Entity Framework automatically generated for each of the tables in my database. I placed some helpful methods for processing data inside these partial classes that do the extending.
My question, however, is concerning the insertion of rows in the database. Would it be good form to include a method in my extended classes to handle this?
For example, in the Product controller's Create method have something like this:
[HttpPost]
public ActionResult Create(Product p)
{
p.InsertThisProductIntoTheDatabase(); //my custom method for inserting into db
return View();
}
Something about this feels wrong to me, but I can't put my finger on it. It feels like this functionality should instead be placed inside a generic MyHelpers.cs class, or something, and then just do this:
var h = new MyHelpers();
h.InsertThisProductIntoTheDatabase(p);
What do you guys think? I would prefer to do this the "correct" way.
MVC 5, EF 6
Edit: the InsertThisProductIntoTheDatabase method might look something like:
public partial class Product
{
    public void InsertThisProductIntoTheDatabase()
    {
        var context = new MyEntities();
        this.CreatedDate = DateTime.Now;
        this.CreatedByID = SomeUserClass.ID;
        // some additional transformation/preparation of the object's data would be done
        // here too; my goal is to bring all of this out of the controller
        context.Products.Add(this);
        context.SaveChanges();
    }
}
One of the problems I see is that the Entity Framework DbContext is a unit of work. If you create a unit of work on Application_BeginRequest and pass it into the controller constructor, it acts as a unit of work for the entire request. Maybe it's only updating one entity in your scenario, but you could be writing more information to your database. Unless you are wrapping everything in a TransactionScope, all these saves are going to be independent, which could leave your database in an inconsistent state. And even if you are wrapping everything in a TransactionScope, I'm pretty sure the transaction is going to be promoted to the DTC, because you are making multiple physical connections in a single controller, and SQL Server isn't that smart.
Going the BeginRequest route seems like less work than adding methods to all of your entities so they can save themselves. Another issue here is that an EF entity is supposed to be a POCO that doesn't really know anything about its own persistence. That's what the DbContext is for. So putting a reference back to the DbContext breaks this isolation.
Your second reason, adding audit information to the entity: again, adding this to each entity is a lot of work. You could override SaveChanges on the context and do it once for every entity. See this SO answer.
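That override might be sketched like this (an illustration; the IAuditable interface and its members are assumptions, not part of the question's code):

```csharp
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries())
    {
        // Stamp audit fields in one place instead of on every entity
        if (entry.State == EntityState.Added && entry.Entity is IAuditable auditable)
        {
            auditable.CreatedDate = DateTime.Now;
            auditable.CreatedByID = SomeUserClass.ID;
        }
    }
    return base.SaveChanges();
}
```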
By going down this road I think you are breaking SOLID design principles, because your entities violate SRP. You introduce a bunch of coupling and end up writing more code than you need. So I'd advocate against doing it your way.
Why don't you simply use:
db.Products.Add(p);
db.SaveChanges();
Your code would be much cleaner, and it will certainly be easier for you to manage and to get help with in the future. Most samples available on the internet use this pattern. Extension methods on entities do not look pleasant.
BTW: Isn't InsertThisProductIntoTheDatabase() method name too long?
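Applied to the Create action from the question, that might look like this (a sketch; it assumes a `db` context instance is available to the controller):

```csharp
[HttpPost]
public ActionResult Create(Product p)
{
    // Preparation that used to live in the entity method
    p.CreatedDate = DateTime.Now;
    p.CreatedByID = SomeUserClass.ID;

    db.Products.Add(p);
    db.SaveChanges();

    return View();
}
```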
I've been trying to find a good solution to this but with no luck, so either I'm not searching for the right keywords, or we're doing things wrong from the start so the problem shouldn't really exist.
Update for clarification: I would like this to work as a unit test rather than as an integration test, so I don't want this to hit the database, but I want to mock the associations made when EF persists changes in my unit test.
Original question:
Say you are testing a service method like so:
[Test]
public void AssignAuthorToBook_NewBookNewAuthor_SuccessfullyAssigned()
{
IBookService service = new BookService();
var book = new Book();
var author = new Author() { Id = 123 };
service.AssignAuthorToBook(book, author);
Assert.AreEqual(book.AuthorId, 123);
}
Now let's say that this test fails because AssignAuthorToBook actually works using the code book.Author = author; it is not assigning the AuthorId, it is assigning the entity. When this is persisted using Entity Framework's SaveChanges() method on the context, it will associate the entities and the IDs will correlate. However, in my example above the logic of Entity Framework would not have been applied. What I am saying is that the code will work once SaveChanges() has been called, but the unit test will fail.
In this simple example, you'd probably know straight away why your test had failed, as you had just written the test immediately before the code and could easily fix it. However, for more complicated operations, and for operations where future changes may alter the way entities are associated (breaking tests without breaking functionality), how is unit testing best approached?
My thoughts are:
The service layer should be ignorant of the persistence layer. Should we mock the data context in the unit tests to mimic the way it works? Is there an easy way to do this that will automatically tie up the associations (i.e. assign the correct entity if the Id is used, or assign the correct Id if the entity is used)?
Or should the tests be structured in a slightly different manner?
The tests that exist in the current project I have inherited work as in my example above, but it niggles me that there is something wrong with the approach and that I haven't managed to find a simple solution to a possibly common problem. I believe the data context should be mocked, but it seems like a lot of code would need to be added to the mock to dynamically create the associations. Surely this has already been solved?
Update: these are the closest answers I've found so far, but they're not quite what I'm after. I don't want to test EF as such; I just wondered what best practice is for testing service methods that access repositories (either directly or via navigation properties through other repositories sharing the same context).
How Do I Mock Entity Framework's Navigational Property Intelligence?
Mocked datacontext and foreign keys/navigation properties
Fake DbContext of Entity Framework 4.1 to Test
Navigation properties not set when using ADO.NET Mocking Context Generator
Conclusion so far: this is not possible using unit testing, and only possible using integration testing with a real DB. You can get close, and probably code something to dynamically associate the navigation properties, but your mock data context will never quite replicate the real context. I would be happy with any solution that enabled me to automatically associate the navigation properties, making my unit tests better, if not perfect (a passing unit test by no means guarantees functionality anyway). The ADO.NET Mocking Context Generator comes close, but it appears I would need a mock version of every entity, which will not work for me if functionality is added to them using partial classes in my implementation.
I'd argue that you are expecting a result from your test that implies the use of several dependencies, arguably not qualifying it as a unit test, especially because of an implied dependency on EF.
The idea here is that if you acknowledge that your BookService has a dependency on EF, you should use a mock to assert that it interacts correctly with it. Unfortunately EF doesn't like to be mocked, so we can always put it behind a repository. Here's an example of how that test could be written using Moq:
[Test]
public void AssignAuthorToBook_NewBookNewAuthor_CreatesNewBookAndAuthorAndAssociatesThem()
{
var bookRepositoryMock = new Mock<IBookRepository>(MockBehavior.Loose);
IBookService service = new BookService(bookRepositoryMock.Object);
var book = new Book() { Id = 0 };
var author = new Author() { Id = 0 };
service.AssignAuthorToBook(book, author);
bookRepositoryMock.Verify(repo => repo.AddNewBook(book));
bookRepositoryMock.Verify(repo => repo.AddNewAuthor(author));
bookRepositoryMock.Verify(repo => repo.AssignAuthorToBook(book, author));
}
The Id being set is something you would use an integration test for, but I'd argue that you shouldn't worry about EF failing to set the Id. I say this for the same reason you should not worry about testing whether the .NET Framework does what it's supposed to do.
I've written about interaction testing in the past (which I think is the right way to go in this scenario, you are testing the interaction between the BookService and the Repository), hope it helps: http://blinkingcaret.wordpress.com/2012/11/20/interaction-testing-fakes-mocks-and-stubs/
I was having the same problem as you and came across your post. What I found afterwards is an in-memory database called Effort.
Take a look at Effort
The following test works correctly
EntityConnection conn = Effort.EntityConnectionFactory.CreateTransient("name=MyEntities");
MyEntities ctx = new MyEntities(conn);
JobStatus js = new JobStatus();
js.JobStatusId = 1;
js.Description= "New";
ctx.JobStatuses.Add(js);
Job j = new Job();
j.JobId = 1;
j.JobStatus = js;
ctx.Jobs.Add(j);
ctx.SaveChanges();
Assert.AreEqual(j.JobStatusId, 1);
Where MyEntities is a DbContext created with a Effort connection string.
You still need to create your in-memory objects, but calling SaveChanges on the context sets up the object associations the way a database does.
That will never work the way you have it. The data context is wrapped up behind the scenes in your service layer, and the AuthorId association is never going to get updated on your local variable.
When I have done TDD on something using EF, I generally wrap up all my EF logic into some kind of DAO and have CRUD methods for the entities.
Then you could do something like this:
[Test]
public void AssignAuthorToBook_NewBookNewAuthor_SuccessfullyAssigned()
{
IBookService service = new BookService();
var book = new Book();
var bookID = 122;
book.ID = bookID;
var author = new Author() { Id = 123 };
service.AssignAuthorToBook(book, author);
//ask the service for the book, which uses EF to get the book and populate the navigational properties, etc...
book = service.GetBook(bookID);
Assert.AreEqual(book.AuthorId, 123);
}
Your question is: if you mock the database operations, you cannot test the correct functioning of AssignAuthorToBook, because this class is highly coupled to Entity Framework and the behaviour will change.
So my solution:
Decouple the classes (using a DAO, an interface with all database operations); now AssignAuthorToBook is easy to test because it uses the SetBook / GetBookId functions.
And write a test for your operations (SetBook / GetBookId) against the database. OK, it's not pure unit testing, but your question is exactly this: how can I test a database operation? So: test it, but in a separate test.
The general solution is to split it into layers.
The persistence layer / repository pattern is in charge of writing out and reading in information from whatever store you choose. ORMs are supposed to be boxed inside the persistence layer. Above this layer, there should be no trace of the ORM. This layer returns entities/value objects as defined in the DDD book (I don't mean anything related to EF entities).
Next the service layer calls onto the Repository interface to obtain these POCO entities/value objects and is in charge of domain/business logic.
As for the testing,
the basic purpose of the persistence layer is to persist the desired information, e.g. write customer information to a file or DB. Hence, integration-testing this layer with the implementation-specific tech (e.g. SQL) makes the most sense. If the tests pass but the layer isn't able to read/write to the actual SQL DB, they are useless.
The service layer tests can mock the persistence layer entry interface (e.g. Repository interface) and verify the domain logic without any dependency on the persistence tech - which is the way it should be. Unit tests here.
If you want to test that AssignAuthorToBook works as expected, you can do it while being completely DB-ignorant: use a mock of the Book object and verify that the correct setter was called with the correct value. Everything persistence-related should be stubbed. To verify that a setter or a method was called, you can use Moq or RhinoMocks.
An example can be found here: Rhino Mocks: AAA Synax: Assert property was set with a given type
Here is an example of how to verify alternative expectations: Using RhinoMocks, how can I assert that one of several methods was called?
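With Moq, that setter verification might look like this (a sketch; it assumes Book.Author is virtual so Moq can intercept the setter):

```csharp
[Test]
public void AssignAuthorToBook_SetsAuthorOnBook()
{
    var bookMock = new Mock<Book>();
    var author = new Author { Id = 123 };
    IBookService service = new BookService();

    service.AssignAuthorToBook(bookMock.Object, author);

    // Verify the Author property setter was invoked with our author instance
    bookMock.VerifySet(b => b.Author = author);
}
```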
I may be wrong, but as far as I know your domain model should be consistent even without a persistence layer.
So, in this case, your service should ensure that the AuthorId property matches the assigned Author instance even before persisting to the database. Or your Book's AuthorId property getter could get that info from the inner Author instance assigned by the service:
public class Book {
public Author Author { get; set; }
public int AuthorId {
get { return Author.Id; }
set { Author.Id = value; }
}
}
public class Author {
public int Id { get; set; }
}
public class BookService {
public void AssignAuthorToBook(Book book, Author author)
{
book.Author = author;
}
}
I read this post about mocking Entity Framework(EF).
Shouldn't we abstract the entities' types as well? In order to preserve decoupling between the Data Access Layer (DAL) and the Business Layer (BL)?
In the above post, he used EF's concrete generated entity types:
[TestMethod]
public void GetCustomer()
{
ContextContainerMock container = new ContextContainerMock();
IMyEntities en = container.Current;
Customer c = new Customer { ID = 1, FirstName = "John", LastName = "Doe" };
en.Customers.AddObject(c);
CustomerService service = new CustomerService(container);
var a = service.GetCustomer(1);
Assert.AreEqual(c.FirstName, a.FirstName);
Assert.AreEqual(c.LastName, a.LastName);
}
Personally, I don't mock these. I create, test and clean up directly. This has helped me catch problems in more real-world scenarios when dealing with the database. Mocking is fantastic for testing integrations where you may not have access to a resource like a DB. If that is your case, then you may have no choice. Hope that helps.
The short answer is no.
The entities, if done correctly, don't depend on any other code, except other entities and value objects (such as string, int, or your own value object). Therefore, there's no need to mock them.
Also, the entities are part of the core of your system. They are what your system is all about. You'd typically want to test how the system behaves when operating on these classes, rather than how it behaves when operating on whatever the tests say they behave like.
(As a side note, your entities ought to look and behave like something that exists in the real world. That is, you should be able to reason about the behaviour with a non-technical person from the business side of the organisation. From your example I see that it is possible to create a Customer without a name. Is that OK from a business point of view? If not, I'd say you should have the constructor take the first and last names as arguments.)