Reusable model in repository pattern - C#

I've recently come across the problem of needing to support multiple database types that should be swappable. My solution for this would be the repository pattern.
Say I have models like these:
class Book {
    public string Title { get; set; }
    public virtual Author Author { get; set; }
}

class Author {
    public string Name { get; set; }
    public virtual ICollection<Book> Books { get; set; }
}
These two classes make up my model, and my repository contains the following method:
class AuthorRepository {
    public IEnumerable<Author> GetAll() {
        return Context.Set<Author>().ToList();
    }
}
Now I've got a few problems. The first shows up when I use the repository like this:
using (var unitOfWork = new UnitOfWork(new MyContext())) {
    MyObservableCollection = new ObservableCollection<Author>(unitOfWork.Authors.GetAll());
}
If I then try to access the books inside the author model, I get an ObjectDisposedException. That makes sense, since the Books collection is lazy-loaded and can only be accessed while the DbContext is alive, so the property should really only be used inside the repository and not outside it.
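For reference, one way to avoid the exception is to load the collection eagerly inside the repository, before the context is disposed; a minimal sketch, assuming Entity Framework's Include extension:

using System.Data.Entity; // EF6; for EF Core use Microsoft.EntityFrameworkCore

class AuthorRepository {
    // Materialize Books while the context is still alive
    public IEnumerable<Author> GetAllWithBooks() {
        return Context.Set<Author>().Include(a => a.Books).ToList();
    }
}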
My second issue is that when I want to switch from Entity Framework to another persistence framework, the virtual navigation properties would serve no purpose, since (again, as far as I am aware) they only exist to enable lazy loading in Entity Framework.
The setup shown above is how I've seen the repository pattern implemented just about everywhere, but I don't see the use of the pattern when I need to change my model whenever I want to change the persistence framework.
My fix would be the following:

class Author {
    public string Name { get; set; }
}

class EntityFrameworkAuthor : Author {
    public virtual ICollection<Book> Books { get; set; }
}
The EntityFrameworkAuthor would only be used inside the repositories, and the plain Author would be returned to the business layer.
Now to my questions:
Is the method shown above the right way to use the repository pattern if I want to be able to switch frameworks easily (which I assumed the repository pattern was for)?
Is my fix a good way to improve my current model, or does it break the pattern in some way?
If not, how would I go about making my model reusable across different persistence frameworks?

I don't think breaking your model in two is a good idea. Personally, I always think in terms of keeping the business logic completely unaware of the persistence logic.
You can do that via the repository pattern, which should be used this way: you create the repository class with just the basic methods, and then you add new methods as you need them.
The reason for this is that you want each repository to be an abstraction of the mechanism used by your persistence layer to load the data.
(here's a link that I found useful about the repository pattern)
What I mean is that, from the business layer's point of view, the method used to load, for example, "all Authors who wrote a book in 1974" could be implemented in your code, as a stored procedure, or in any other way; as long as the business object that requested that set of data gets what it wants, it doesn't (and shouldn't) care.
With your solution, however, you're making domain objects aware of the way data is accessed.
Personally, if the need is to be able to change ORM in the future, I would prefer something like a FacadeActorRepository that uses a concrete EfActorRepository implementation, which you'll be able to switch out as you change your ORM.
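For illustration, a minimal sketch of that arrangement using the Author model from the question (the PublishedYear property and the method names are assumptions for the example):

// What the business layer sees
public interface IAuthorRepository {
    IEnumerable<Author> GetAll();
    IEnumerable<Author> GetAuthorsWhoWroteIn(int year); // e.g. 1974
}

// EF-specific implementation; swap this class when changing ORM
public class EfAuthorRepository : IAuthorRepository {
    private readonly MyContext _context;

    public EfAuthorRepository(MyContext context) {
        _context = context;
    }

    public IEnumerable<Author> GetAll() {
        return _context.Set<Author>().ToList();
    }

    public IEnumerable<Author> GetAuthorsWhoWroteIn(int year) {
        // Could equally be a stored procedure; callers never know
        return _context.Set<Author>()
                       .Where(a => a.Books.Any(b => b.PublishedYear == year))
                       .ToList();
    }
}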
As for the lazy loading issue, as plalx pointed out in the comments (quote):
"If the data is not used in business logic conditions to protect the
state of the aggregate (data cluster) to which it belongs and this
data could exist on its own then it shouldn't be aggregated"
which means that if Author really needs a list of Books, you shouldn't allow it to be lazy-loaded; the list should be there as soon as the Author object has finished being created.
I hope this helps :)

Related

Using interfaces in models with SQLite

Let's say I have an interface like this:
public interface IUser
{
    int Id { get; }
    string Name { get; }
    List<IMonthlyBudget> MonthlyBudget { get; }
}
and then I have a model that implements this:
public class User : IUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<IMonthlyBudget> MonthlyBudget { get; set; }
}
and here I have the IMonthlyBudget:
public interface IMonthlyBudget
{
    int Id { get; }
    float MonthlyMax { get; }
    float CurrentSpending { get; }
    float MonthlyIncome { get; }
}
Now I have my models. But the issue comes with using SQLite. SQLite can't understand what the real implementation of IMonthlyBudget is. I understand why, but I really don't want to remove the interface and expose the real implementation to all the clients that use these models. In my project structure I have a Core project that holds all the model interfaces, and the model implementations are in a data access project.
Is there something wrong with how I'm approaching this problem? I assume I'm not the first one to run into an issue like this. Isn't it completely normal practice to keep model interfaces (which repositories etc. then use as their return types, parameters and so on) and implement the actual concrete models in a data access project?
And can someone explain why I can't do this:
public class User : IUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<MonthlyBudget> MonthlyBudget { get; set; }
}
MonthlyBudget implements IMonthlyBudget; shouldn't it be completely fine to use the concrete model as the type instead of the interface, given that the concrete model actually implements the interface?
A few questions here, so I'll break it down into sections:
Use of Interfaces
It is definitely good practice to interface classes that perform operations. For example, you may have a data service (i.e. data access layer) interface that allows you to do operations to read and modify data in your persistent store. However, you may have several implementations of that data service. One implementation may save to the file system, another to a DBMS, another is a mock for unit testing, etc.
However, in many cases you do not need to interface your model classes. If you're using an anemic business object approach (as opposed to rich business objects), then model classes in general should just be containers for data, or Plain Old CLR Objects (POCOs), meaning these objects don't have any real functionality to speak of and don't reference any special libraries or classes. The only "functionality" I would put in a POCO is functionality that depends only on the object itself. For example, if you have a User object with FirstName and LastName properties, you could create a read-only property called FullName that returns a concatenation of the two.
POCOs are agnostic as to how they are populated and therefore can be utilized in any implementation of your data service.
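For example, a minimal POCO along the lines just described (FullName is the only "functionality", and it depends solely on the object's own data):

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Read-only convenience property; no external dependencies
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}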
This should be your default direction when using an anemic business object approach, but there is at least one exception I can think of where you may want to interface your models. You may want to support, for example, both a SQLite data service and a Realm (NoSQL) data service. Realm objects happen to require your models to derive from RealmObject. So if you wanted to switch your data access layer between SQLite and Realm, you would have to interface your models as you are doing. I'm just using Realm as an example, but the same holds true if you want to utilize your models across other platforms, like creating an observable base class in a UWP app.
The key litmus test to determining whether you should create interfaces for your models is to ask yourself this question:
"Will I need to consume these models in various consumers and will those consumers require me to define a specific base class for my models to work properly in those consumers?"
If the answer to this is "yes", then you should make interfaces for your models. If the answer is "no", then creating model interfaces is extraneous work and you can forego it and let your data service implementations deal with the specifics of their underlying data stores.
SQLite Issue
Whether you continue to use model interfaces or not, you should still have a data access implementation for SQLite that knows it's dealing with SQLite-specific models; then you can do all your CRUD operations directly against those specific implementations of your model. Since you're referring to a concrete model implementation, SQLite will work as usual.
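As a rough sketch of that shape, assuming the sqlite-net library (its SQLiteConnection, CreateTable, Insert and Table APIs) and the concrete MonthlyBudget model; treat the details as illustrative:

using System.Collections.Generic;
using System.Linq;
using SQLite; // sqlite-net

public class SqliteBudgetService
{
    private readonly SQLiteConnection _db;

    public SqliteBudgetService(string databasePath)
    {
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<MonthlyBudget>(); // concrete type, so SQLite can map it
    }

    public void Add(MonthlyBudget budget)
    {
        _db.Insert(budget); // again the concrete type
    }

    // Work with the concrete model internally, expose the interface outward
    public IReadOnlyList<IMonthlyBudget> GetAll()
    {
        return _db.Table<MonthlyBudget>().ToList();
    }
}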
Type Compatibility
To answer your final question, the type system does not see this...
List<IMonthlyBudget> MonthlyBudget
as being type-compatible with this...
List<MonthlyBudget> MonthlyBudget
In our minds it seems that if I have a list of apples, it should be type-compatible with a list of fruit. The compiler sees an apple as a kind of fruit, but it does not see a list of apples as a kind of list of fruit. So you can't cast between them like this...
List<IMonthlyBudget> myMonthlyBudget = (List<IMonthlyBudget>) new List<MonthlyBudget>();
but you CAN add a MonthlyBudget object to a list of IMonthlyBudget objects like this...
List<IMonthlyBudget> myMonthlyBudget = new List<IMonthlyBudget>();
myMonthlyBudget.Add(new MonthlyBudget());
Also, you can use LINQ's .Cast<T>() method if you want to convert an entire list at once.
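For example (a small sketch; the last line compiles because IEnumerable<T> is covariant):

var concrete = new List<MonthlyBudget> { new MonthlyBudget() };

// Copy into a new list typed to the interface
List<IMonthlyBudget> copies = concrete.Cast<IMonthlyBudget>().ToList();

// Or view the concrete list covariantly, without copying
IEnumerable<IMonthlyBudget> view = concrete;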
The reason behind this has to do with type variance. There's a good article on it here that can shed some light as to why:
Covariance and Contravariance
I hope that helps! :-)

Factory Pattern: where should this live in DDD?

I have debated this for a while now and still have not come to a conclusion.
While most examples I see have the factory code in the application layer, I tend to think it should be in the domain layer.
Reasons for this:
I sometimes have initial validation done in my factory, through which I want all object creation to go.
I want this code to be used for all instantiations of my object.
Sometimes an operation requires parameter information which feels unnatural to pass to a constructor.
And a few more not as important reasons.
Are there reasons why this is a bad practice?
Does this break other patterns?
A factory in DDD is just an instance of the factory pattern, and as such it should be used where it makes the most sense. Another principle to consider is the information expert pattern, which essentially states that behavior should be assigned to the classes closest to the information. Therefore, if you have some domain-specific rules and logic you would like to enforce, place the factory in the domain layer; after all, the factory creates domain objects. Note, however, that you may have other types of factories in other layers.
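As a small illustration, a domain-layer factory that funnels all creation through one place so a domain rule is enforced on every instantiation (the Person example and its rule are hypothetical):

using System;

public class Person
{
    public string Name { get; private set; }

    // Restricted constructor: clients must go through the factory
    internal Person(string name) { Name = name; }
}

public static class PersonFactory
{
    public static Person Create(string name)
    {
        // Domain rule enforced once, for all creation paths
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A person must have a name.", nameof(name));
        return new Person(name.Trim());
    }
}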
From memory, Eric Evans' book has examples where object factories are very much part of the domain layer.
For me, it makes perfect sense to locate your factories here.
+1 for doing that. Accessibility would be a good reason: I would keep the creational code at least close to the domain model layer. Otherwise, users of the domain model will simply get confused about how to instantiate it, especially when they find restricted-access constructors. Actually, one sound reason to separate it would be having several valid ways to create the same thing, which is usually the case when employing an Abstract Factory.
If I had to separate it, I would put it in, e.g., a package (in the case of Java) at the same level as the domain model and always ship it along with it, e.g.:
upper
--> domain
--> domain_factory
I prefer Factories in the Application Layer.
If you keep the Factories in the Domain Layer, they will not help you when you need complex types as parameters (C# code example):
Application Layer:
// this Factory resides in the Domain Layer and cannot reference anything else outside it
Person person = PersonAggregateFactory.CreateDeepAndLargeAggregate(
    string name, string code, string streetName, ...
    and lots of other parameters...);

// these ones reside in the Application Layer, thus can be much simpler and more readable:
Person person = PersonAggregateFactory.CreateDeepAndLargeAggregate(CreatePersonCommand);
Person person = PersonAggregateFactory.CreateDeepAndLargeAggregate(PersonDTO);
Domain Layer:
public class Person : Entity<Person>
{
    public Address Address { get; private set; }
    public Account Account { get; private set; }
    public Contact Contact { get; private set; }
    public string Name { get; private set; }

    public Person(string name, Address address, Account account, Contact contact)
    {
        // some validations & assigning values...
        this.Address = address;
        // and so on...
    }
}

public class Address : Entity<Address>
{
    public string Code { get; private set; }
    public string StreetName { get; private set; }
    public int Number { get; private set; }
    public string Complement { get; private set; }

    public Address(string code, string streetName, int number, string complement)
    {
        // some validations & assigning values...
        this.Code = code;
    }
}

public class Account : Entity<Account>
{
    public int Number { get; private set; }

    public Account(int number)
    {
        // some validations & assigning values...
        this.Number = number;
    }
}

// you get the idea:
// public class Contact...
Also, there is no obligation to keep Factories inside the Domain Layer (from Domain-Driven Design Quickly):
Therefore, shift the responsibility for creating instances of complex objects and Aggregates to a separate object, which may itself have no responsibility in the domain model but is still part of the domain design. Provide an interface that encapsulates all complex assembly and that does not require the client to reference the concrete classes of the objects being instantiated. Create entire Aggregates as a unit, enforcing their invariants.
As I don't use Factories to load persisted objects into memory, they don't have to be accessible from any layer other than the Application layer. Here's why (from Domain-Driven Design Quickly):
Another observation is that Factories need to create new objects from scratch, or they are required to reconstitute objects which previously existed, but have probably been persisted to a database. Bringing Entities back into memory from their resting place in a database involves a completely different process than creating a new one. One obvious difference is that the new object does not need a new identity. The object already has one. Violations of the invariants are treated differently. When a new object is created from scratch, any violation of invariants ends up in an exception. We can't do that with objects recreated from a database. The objects need to be repaired somehow, so they can be functional, otherwise there is data loss.
If builders/factories only have dependencies on domain classes and primitives, place them in the domain layer, otherwise place them outside the domain layer.
Be CAREFUL with placing 'implementation' in the Domain Layer.
Your domain code shouldn't have dependencies, so you are in trouble if you need complex factories.
For example:
// DOMAIN LAYER
public interface IAggregateFactory<TAgg, in TInput>
{
    Task<TAgg> CreateAsync(TInput input);
}

public class GamePredictorFactoryParameters
{
    public string SomeInputParameter { get; set; }
    public string ZipCode { get; set; }
}

// INFRASTRUCTURE/APPLICATION LAYER
public class AvailabilityFactory : IAggregateFactory<GamePredictorAggregate,
    GamePredictorFactoryParameters>
{
    private readonly HttpClient _httpClient;

    public AvailabilityFactory(IHttpClientFactory factory)
    {
        _httpClient = factory.CreateClient("weatherApi");
    }

    public async Task<GamePredictorAggregate> CreateAsync(GamePredictorFactoryParameters input)
    {
        var weather = await _httpClient.GetFromJsonAsync<WeatherDto>($"/weather/{input.ZipCode}");
        return new GamePredictorAggregate(weather.CurrentTemperature, input.SomeInputParameter);
    }
}

public class WeatherDto
{
    public double CurrentTemperature { get; set; }
}
As you can see, you now have a myriad of objects and dependencies available to enrich your factory experience.
So when you use it in your Application Service, it is easy:
public class GamePredictionService : ApplicationService
{
    private readonly IAggregateFactory<GamePredictorAggregate, GamePredictorFactoryParameters> _factory;

    public GamePredictionService(IAggregateFactory<GamePredictorAggregate, GamePredictorFactoryParameters> factory)
    {
        _factory = factory;
    }

    public async Task CreateNewPredictor(string zipCode, int someOtherParameter)
    {
        var input = new GamePredictorFactoryParameters();
        input.ZipCode = zipCode;
        input.SomeInputParameter = someOtherParameter.ToString();

        var aggregate = await _factory.CreateAsync(input);
        // Do your biz operations
        // Persist using repository
    }
}
Now your application service doesn't need to worry about the internals, and your domain objects don't need to understand how the factory gives them 'birth.'
Summary: having your implementation in the Domain Layer only makes sense if your factory needs nothing but primitive types. In cases where you need to gather data from external services or from other application services' DTOs, you want to move the implementation outside.
The only 'drawback' is that you need to 'inject' the factory into your application service, but that's not a big deal.
I hope this answer helps to clarify 'where to place Factories.'

Using DI with a shared library across applications

I'm facing a design challenge that I just can't seem to solve in a satisfactory way. I've got a class library assembly that contains all of my shared ORM objects (using the EntitySpaces framework). These objects are used in two or more different applications, which is why they are in their own assembly. This setup has worked fine for 4+ years.
I also have a couple of applications built on the Composite Application Block (CAB) from Microsoft's Patterns & Practices (P&P) group. Yes, I know this is really old, but I'm a part-time, one-man-shop developer and can't afford to update to whatever the current framework is.
Here is where my problem comes in: I have been exercising my OO design skills, and whenever I do a substantial refactoring I try to shift from a procedural approach to a more OO one. Of course, a major aspect of OO design is placing operations close to the data they work with, which means my ORM objects need to have functionality added to them where appropriate. This is proving a real head-scratcher when I also consider that I'm using P&P's Object Builder DI container within CAB, and that much of the functionality I would move into my ORM objects needs access to the services exposed by my applications.
In other words, let's say I have a shared business object called "Person" (original, I know) and two applications that do ENTIRELY different things with a person. Application A provides a set of services that would need to be DI'ed into the Person object in order for it to take on some of the methods currently littered throughout my service layers. Application B also has a different set of services that IT needs to have DI'ed into the Person object.
Considering how the P&P Object Builder resolves dependencies using attribute decoration and type reflection, I don't see how I can accomplish this. In a nutshell, I have a shared object into which, depending on the application using it, I need to inject different dependencies so that it can perform operations specific to that application.
The only approach I can come up with is to inherit a new type from the Person object in Applications A and B. I would then add my non-shared functionality and DI code to this application-specific, specialized Person object. Now that I write it out, it seems obvious, but it's still the only solution I can come up with, so I wanted to ask here whether anyone has a different solution to propose.
One problem I have with my solution is that I can see myself getting caught up on naming the inherited type - I mean... it's a person, so what else would you call it? Anyway, hopefully you will have some ideas for me.
Also, I'm not hip to the current technologies that are out there and, to be honest, only barely grasp the ones I'm currently using. So if I've said something contradictory or confusing, I hope you can understand enough from the rest of the post to get what I'm asking.
It sounds like you're breaking the Single Responsibility Principle.
A Person object should just hold the data for a person record. The services would then take in a Person object and manipulate it, rather than the Person object having methods that do that manipulation.
A classic example of this would be populating the Person object. Let's say app A grabs the data from a web service, and app B grabs it from a database. In these cases I'd have some sort of storage service that you call to get your Person object. The implementation of that storage can then be specific to each application and be put into your IoC container by the app, rather than trying to have a common interface in your shared assembly.
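A rough sketch of that arrangement (all names here are illustrative):

// Shared assembly: the POCO plus a storage contract
public interface IPersonStorage
{
    Person Load(int id);
    void Save(Person person);
}

// App A registers a web-service-backed implementation in its container
public class WebServicePersonStorage : IPersonStorage
{
    public Person Load(int id) { /* call the web service */ return new Person(); }
    public void Save(Person person) { /* post to the web service */ }
}

// App B registers a database-backed implementation instead
public class DatabasePersonStorage : IPersonStorage
{
    public Person Load(int id) { /* query the database */ return new Person(); }
    public void Save(Person person) { /* write to the database */ }
}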
I agree with Cameron MacFarland on this: You are breaking SRP.
Of course a major aspect of OO design is placing the operations close to the data they work with, this means that my ORM objects need to have functionality added to them where appropriate
Placing data AND functionality from A AND functionality from B in one class is two responsibilities too many. Adhering to SRP will almost always result in separating data and functionality into separate classes (data structures and objects). Thus, following Cameron MacFarland's suggestion is probably the best way to go.
I can think of a couple of approaches to address this.
The first is to separate out the behavior specific to each application and perform dependency injection using a setter in the application itself.
Approach 1:
public interface IPerson
{
    IPerson1 Person1 { get; set; }
    IPerson2 Person2 { get; set; }
}

class Person : IPerson
{
    public IPerson1 Person1 { get; set; }
    public IPerson2 Person2 { get; set; }
}

public interface IPerson1
{
    // App1-specific behavior here
    void App1SpecificMethod1();
}

class Person1 : IPerson1
{
    public void App1SpecificMethod1()
    {
        // implementation
    }
}

class App1
{
    IPerson objPerson;

    // Dependency injection using framework
    App1(IPerson objPerson)
    {
        this.objPerson = objPerson;
        // Dependency injection using setter
        this.objPerson.Person1 = new Person1();
    }
}
The second is to separate out the behavior the same way, but perform dependency injection in the Person constructor.
Approach 2:
class Person : IPerson
{
    public IPerson1 Person1 { get; private set; }
    public IPerson2 Person2 { get; private set; }

    // DI through constructor. If the types IPerson1 or IPerson2 are not registered, they will be set to null.
    public Person(IPerson1 objPerson1, IPerson2 objPerson2)
    {
        this.Person1 = objPerson1;
        this.Person2 = objPerson2;
    }
}
The Person interface project needs references to IPerson1 and IPerson2, or you can declare IPerson1 and IPerson2 in the Person interface project itself.

DI: Associating entities with repository

I'm pretty new to the concept. What I'm trying to do is create a factory that returns an object used for repository functions. No problems there. So I create an instance of a concrete factory in main() and store it in a static property of App, but my entities are in a separate dll. Does it make sense to pass the repository to each entity class in the constructor? This doesn't feel right. My question is: what is the best way to make my entities aware of which repository they should be using?
My App partial class looks like this:

public partial class App : Application
{
    private static ICalDataAccess _daqFactory;
    public static ICalDataAccess DataAccessFactory
    {
        set { _daqFactory = value; }
        get { return _daqFactory; }
    }
}
Maybe a little more code is in order.
public class Widget
{
    public string Description { get; set; }
    public int ID { get; set; }

    private IWidgetRepository _widgetRepository;

    public Widget(IWidgetRepository widgetRepository)
    {
        _widgetRepository = widgetRepository;
    }

    public void Save()
    {
        _widgetRepository.Save(this);
    }
}
Am I doing anything egregious here?
I think the general recommendation is to keep your entities free of persistence concerns. That is, you have some code that retrieves the entities and uses them to perform whatever work needs to be done, resulting in new, deleted or modified entities, which the calling code then submits to the appropriate repository (or asks to be saved, if you have something that tracks or detects modified entities, like EF or NHibernate).
That way your entities do not need to know about repositories at all.
I usually create a UnitOfWork helper class which exposes all of my repositories through a "public RepositoryFactory Repositories { get; }" property, so that simply by supplying an instance of the UnitOfWork class I have access to all of my data sources. The UnitOfWork can then be injected via IoC into whatever class needs data access.
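A minimal sketch of that arrangement (MyContext and RepositoryFactory are placeholders for whatever your data layer provides):

using System;

public class UnitOfWork : IDisposable
{
    private readonly MyContext _context;

    // Single entry point to all repositories
    public RepositoryFactory Repositories { get; private set; }

    public UnitOfWork(MyContext context)
    {
        _context = context;
        Repositories = new RepositoryFactory(_context);
    }

    public void Commit() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}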
Some recommended reading on this topic:
Persistence Patterns
Discussion on this same topic elsewhere
Your description sounds more like the service locator pattern than dependency injection. With dependency injection, any object that needs a service object (such as data access) to do its work typically receives that service as a parameter to its constructor.
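To make the contrast concrete (a sketch; GetWidgetRepository is hypothetical):

// Service locator: the object reaches out to a static registry itself
public class Widget
{
    public void Save()
    {
        var repository = App.DataAccessFactory.GetWidgetRepository(); // hidden dependency
        repository.Save(this);
    }
}

// Dependency injection: the dependency arrives from outside
public class WidgetService
{
    private readonly IWidgetRepository _repository;

    public WidgetService(IWidgetRepository repository)
    {
        _repository = repository;
    }

    public void Save(Widget widget)
    {
        _repository.Save(widget);
    }
}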

Is it right to use IoC for the extensibility of my entities or domain model?

I've come across a dilemma which I think is worth discussing here.
I have a set of domain objects (you can also call them entities, if you like) which get their data from a separate DAL that is resolved with an IoC container.
I was thinking about making my system very extensible, and I'm wondering whether it is right to also resolve these entities with the IoC container.
Let me present a dumb example.
Let's say I have a web site for which I have the following interface:
public interface IArticleData
{
    int ID { get; }
    string Text { get; set; }
}
The concept is that the DAL implements such interfaces, and also a generic IDataProvider<TData> interface, after which the DAL becomes easily replaceable. And there is the following class, which uses it:
public class Article
{
    private IArticleData Data { get; set; }

    public int ID
    {
        get { return Data.ID; }
    }

    public string Text
    {
        get { return Data.Text; }
        set { Data.Text = value; }
    }

    private Article(IArticleData data)
    {
        Data = data;
    }

    public static Article FindByID(int id)
    {
        IDataProvider<IArticleData> provider = IoC.Resolve<IDataProvider<IArticleData>>();
        return new Article(provider.FindByID(id));
    }
}
This makes the entire system independent of the actual DAL implementation (which, in the example, is whatever implements IDataProvider<IArticleData>).
Then imagine a situation in which this functionality is not enough and I'd like to extend it. In the above example I don't have any option to do so, but I can make the class implement an interface:
public interface IArticle
{
    int ID { get; }
    string Text { get; set; }
}

public class Article : IArticle
{
    ...
}
And then I remove all dependencies on the Article class and start resolving it as a transient IArticle component with the IoC container.
For example, in Castle: <component id="ArticleEntity" service="IArticle" type="Article" lifestyle="transient" />
After this, if I have to extend it, that would be this simple:
public class MyArticle : Article
{
    public string MyProperty { ..... }
}
And all I have to do is change the configuration to this: <component id="ArticleEntity" service="IArticle" type="MyArticle" lifestyle="transient" />
So anyone using the system would be able to replace any class simply by rewriting a line in the configuration. All the other entities would keep working correctly, because the new one implements the same functionality as the old one.
By the way, this seems to fit well with the "separation of concerns" philosophy.
My question is, is this the right thing to do?
After some serious thinking, I couldn't figure out a better way to do this. I also considered MEF, but it seems to be oriented toward building plugins rather than replacing or extending already-complete parts of a system like this.
I read many SO questions (and also other sources) about the topic, the most notable are these:
How should I handle my Entity/Domain Objects using IoC/Dependency Injection? and
IoC, Where do you put the container?
And I'm also afraid that I'm falling to the problems described on the following pages:
http://martinfowler.com/bliki/AnemicDomainModel.html and
http://hendryluk.wordpress.com/2008/05/10/should-domain-entity-be-managed-by-ioc/
And one more thing: this would increase the testability of the entire system, wouldn't it?
What do you think?
EDIT:
Another option would be to create a Factory for these entities, but IoC.Resolve<IArticle>() is way simpler than IoC.Resolve<IArticleFactory>().CreateInstance().
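For comparison, the factory variant might look roughly like this (the interface is hypothetical and simply wraps the lookup shown earlier):

public interface IArticleFactory
{
    IArticle FindByID(int id);
}

public class ArticleFactory : IArticleFactory
{
    // Callers depend on IArticleFactory/IArticle, never on the concrete Article
    public IArticle FindByID(int id)
    {
        return Article.FindByID(id);
    }
}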
I think you may be overcomplicating things. Would you ever have a need to replace Article with another type that implemented IArticle?
IoC containers are best used when you have a higher-level component that depends on a lower-level component, and you want the higher-level component to depend on an abstraction of it, because the lower-level component performs operations internally that make the higher-level component difficult to test, e.g. database access. Or the lower-level component might represent a particular strategy in your application that is interchangeable with other strategies, e.g. a database gateway that abstracts away the details of vendor-specific database APIs.
As Article is a simple, POCO-style class, it's unlikely that you would gain any benefit from creating instances of it through an IoC container.
