I've got problems combining DDD and EF Core.
I'm building a project using a DDD architecture. As the data access layer I use the generic Unit of Work pattern taken from here.
public interface IUnitOfWork
{
IRepository<TDomain> Repository<TDomain>() where TDomain : class;
}
public interface IRepository<TDomain>
{
TDomain Get(Expression<Func<TDomain, bool>> predicate);
}
I implement these interfaces using EF Core.
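For reference, a minimal EF Core realization of those interfaces might look like the sketch below (the class name and DbContext wiring are my own illustration, not from the linked article):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;
using Microsoft.EntityFrameworkCore;

// Hypothetical EF Core realization of IRepository<TDomain>
public class EfRepository<TDomain> : IRepository<TDomain> where TDomain : class
{
    private readonly DbContext context;

    public EfRepository(DbContext context)
    {
        this.context = context ?? throw new ArgumentNullException(nameof(context));
    }

    public TDomain Get(Expression<Func<TDomain, bool>> predicate)
    {
        // The predicate is translated to SQL; only the matching row is loaded
        return context.Set<TDomain>().FirstOrDefault(predicate);
    }
}
```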
I've got some domain model with 2 classes
public class MainClass
{
public int Id { get; set; }
public List<RelatedItem> Items { get; set; }
}
public class RelatedItem
{
public int Id { get; set; }
public MainClass Parent { get; set; }
public DateTime Date { get; set; }
public string SomeProperty { get; set; }
}
In the real life of my project, MainClass has a collection with hundreds of RelatedItems. In order to perform some operations I need only one RelatedItem per request, for a given date. It can be found by searching through the Items property.
Encapsulating the EF Core implementation inside the unit of work, I have to explicitly load entities from the DB together with their related items, because the business logic layer doesn't know anything about the implementation of the UnitOfWork's repository. But this operation is very slow.
So I decided to create a MainClassService which takes the unitOfWork in its constructor and has a method which returns only one RelatedItem, and it works fine.
public class MainClassService
{
IUnitOfWork unitOfWork;
public MainClassService(IUnitOfWork unitOfWork)
{
this.unitOfWork = unitOfWork ?? throw new ArgumentNullException();
}
public RelatedItem GetRelatedItemByDate(int mainClassId, DateTime date)
{
return unitOfWork.Repository<RelatedItem>().Get(c => c.Parent.Id == mainClassId && c.Date == date);
}
}
So I've got a situation where I cannot use the Items property directly because of EF Core, but I should use it because of the DDD architecture.
And my question is: is it OK to use such a construction?
From what it seems from your question, MainClass is an Aggregate root and RelatedItem is a nested entity. This design decision should be based on the business rules/invariants that must be protected. When an Aggregate needs to mutate, it must be fully loaded from the repository; that is, the Aggregate root and all its nested entities and value objects must be in memory before it executes the mutating command, no matter how big it is.
Also, it is not good practice to inject infrastructure services into Aggregates (or into nested entities). If you need to do this, then you should think again about your architecture.
So, from what I wrote you can see that the problem manifests itself only when you try to mutate the Aggregate. If you only need to read it or find it, you can create dedicated services that find the data using infrastructure components. Your MainClassService seems to be such a case, where you only need to read/find some RelatedItem entities.
In order to be clear that the purpose is only reading, the MainClassService needs to return a readonly representation of the RelatedItem entities.
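For example (a sketch; the read-model type is my own naming, not from the question), the service could copy the entity into an immutable DTO so callers cannot mutate the tracked entity:

```csharp
// Hypothetical read-only representation of RelatedItem
public sealed class RelatedItemReadModel
{
    public RelatedItemReadModel(int id, DateTime date, string someProperty)
    {
        Id = id;
        Date = date;
        SomeProperty = someProperty;
    }

    public int Id { get; }
    public DateTime Date { get; }
    public string SomeProperty { get; }
}

public class MainClassService
{
    private readonly IUnitOfWork unitOfWork;

    public MainClassService(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork ?? throw new ArgumentNullException(nameof(unitOfWork));
    }

    public RelatedItemReadModel GetRelatedItemByDate(int mainClassId, DateTime date)
    {
        var item = unitOfWork.Repository<RelatedItem>()
            .Get(c => c.Parent.Id == mainClassId && c.Date == date);
        // Project into the read model; null when no item matches
        return item == null
            ? null
            : new RelatedItemReadModel(item.Id, item.Date, item.SomeProperty);
    }
}
```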
So, you have just made your first steps towards CQRS, where the models are split in two: a READ model and a WRITE model.
Related
I have seen a few articles, but I need some suggestions/improvements, if any, based on my current architecture.
I have created a repository layer with a generic repository pattern; underneath, it calls DynamoDB.
The DynamoDB models define the names and structures that correspond to the table names and structures.
My service layer references the Contract (domain) layer for DTOs and the repository layer for calling the repo methods.
However, the repository layer does not reference the Contract layer; it would be required only if I needed to map from DTOs to models (entities).
Considering the current design, for me the correct place to map models to DTOs is the service layer. However, I'm confused about where it really belongs: my peers asked me to make a decoupled architecture, and they leaned towards doing it in the repository layer, so that if the repository layer changes it does not affect the other layers.
My question is: is my architecture correct, and secondly, where should the DTO conversion happen? The repository layer or the service layer?
My repository layer:
public interface IDbContext<T> where T : class
{
Task CreateBatchWriteAsync(IEnumerable<T> entities, DynamoDBOperationConfig dynamoDBOperationConfig = null);
Task<List<T>> GetAllItemsAsync(DynamoDBOperationConfig dynamoDBOperationConfig = null);
}
public class DbContext<T> : IDbContext<T> where T : class
{
private readonly Amazon.DynamoDBv2.DataModel.IDynamoDBContext context;
public DbContext(IDynamoDBFactory dynamoDBFactory)
{
//
}
public async Task CreateBatchWriteAsync(IEnumerable<T> entities, DynamoDBOperationConfig dynamoDBOperationConfig = null)
{
// connect to dynamodb
}
public async Task<List<T>> GetAllItemsAsync(DynamoDBOperationConfig dynamoDBOperationConfig = null)
{
// connect to dynamodb
}
}
public interface IStoreRepository: IDbContext<Store>
{
}
public class StoreRepository : IStoreRepository
{
private readonly IDbContext<Store> _dbContext;
public StoreRepository(IDbContext<Store> dbContext)
{
_dbContext = dbContext;
}
public async Task CreateBatchWriteAsync(IEnumerable<Store> entities, DynamoDBOperationConfig dynamoDBOperationConfig = null)
{
await _dbContext.CreateBatchWriteAsync(entities,dynamoDBOperationConfig);
}
public async Task<List<Store>> GetAllItemsAsync(DynamoDBOperationConfig dynamoDBOperationConfig = null)
{
return await _dbContext.GetAllItemsAsync(dynamoDBOperationConfig);
}
}
Here is my model in the repository layer:
[DynamoDBTable("Store")]
public class Store
{
[DynamoDBProperty("Code")]
public string Code { get; set; }
[DynamoDBProperty("Details")]
public Details Details { get; set; }
}
public class Details
{
[DynamoDBProperty("ClientName")]
public string ClientName { get; set; }
[DynamoDBProperty("RequestedBy")]
public string RequestedBy { get; set; }
[DynamoDBProperty("CreateDate")]
public string CreateDate { get; set; }
}
Please remember that this is an individual assumption for each project.
IMO, the service layer will be the best place to do this in your architecture.
To make your code cleaner, you can create extension methods like ToEntityModel and ToDTOModel, so you can hide object creation.
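A sketch of what those extension methods could look like (StoreDto is a hypothetical DTO of my own; this assumes the Store.Details property uses the Details class shown in the question):

```csharp
// Hypothetical flat DTO exposed by the service layer
public class StoreDto
{
    public string Code { get; set; }
    public string ClientName { get; set; }
    public string RequestedBy { get; set; }
}

public static class StoreMappingExtensions
{
    // Entity -> DTO: hides the nested Details object from callers
    public static StoreDto ToDTOModel(this Store entity) => new StoreDto
    {
        Code = entity.Code,
        ClientName = entity.Details?.ClientName,
        RequestedBy = entity.Details?.RequestedBy
    };

    // DTO -> Entity: rebuilds the nested structure stored in DynamoDB
    public static Store ToEntityModel(this StoreDto dto) => new Store
    {
        Code = dto.Code,
        Details = new Details
        {
            ClientName = dto.ClientName,
            RequestedBy = dto.RequestedBy
        }
    };
}
```

The service layer then calls `store.ToDTOModel()` after fetching from the repository, keeping the object creation out of sight.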
The repository layer is the worst place to do this because of the single responsibility principle: the repository should handle communication with the database, not map one model to another.
There is no single agreed way to do this. Individual (per person, per organisation) styles matter a lot here.
Here are two things to think about:
The smaller the objects a method exposes and accepts the easier it is to refactor. In other words, if you don't expose field X you don't need to worry how it's used.
If the repository returns the full db model the contract changes when the db model changes. If you expose a tailored dto then you have to change the dto if you want to expose more/less information. The 1st requires less work but gives less control and you may end up exposing more than you want.
The repository layer should return the raw data only, not DTOs.
The main reason to use the repository pattern is to abstract communication with the database. If you return DTOs from the repository layer, you violate both the single responsibility of the repository pattern and the purpose of DTOs.
A common approach to DTO conversion is "convert it when you need it", so in your case the best layer to do the conversion would be the service layer, since the service layer is where your business needs reside.
I'm in the process of learning C# & .NET and EF (with aspnetboilerplate), and I came up with the idea of creating a dummy project so I can practice. But for the last 4 hours I've been stuck on this error and hope someone here can help me.
What I created (well, at least I think I created it correctly) are 2 classes called "Ingredient" and "Master".
I want to use the "Master" class to categorize Ingredients.
For example, ingredients like
Chicken breast
chicken drumstick
both belong to Meat (which is stored in the "Master" table), and here is my code
Ingredient.cs
public class Ingrident : Entity
{
public string Name { get; set; }
public Master Master { get; set; }
public int MasterId { get; set; }
}
Master.cs
public class Master : Entity
{
public string Name { get; set; }
public List<Ingrident> Ingridents { get; set; } = new();
}
IngridientAppService.cs
public List<IngridientDto> GetIngWithParent()
{
var result = _ingRepository.GetAllIncluding(x => x.Master);
// I also tried this, but it doesn't work
// var result = _ingRepository.GetAll().Where(x => x.MasterId == x.Master.Id);
return ObjectMapper.Map<List<IngridientDto>>(result);
}
IngridientDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Ingrident))]
public class IngridientDto : EntityDto
{
public string Name { get; set; }
public List<MasterDto> Master { get; set; }
public int MasterId { get; set; }
}
MasterDto.cs
[AutoMap(typeof(IndexIngrident.Entities.Master))]
public class MasterDto : EntityDto
{
public string Name { get; set; }
}
When I created (for a previous practice project) an M -> M relationship, this approach with .GetAllIncluding worked, but now that I have One -> Many it won't work.
Hope someone will be able to help me or at least give me a good hint.
Have a nice day!
Straight up: the examples you are probably referring to (regarding the repository etc.) are overcomplicated and, for most cases, not what you'd want to implement.
The first issue I see is that while your entities are set up for a 1-to-many relationship from Master to Ingredients, your DTOs are set up from Ingredient to Masters which definitely won't map properly.
Start with the simplest thing. Get rid of the Repository and get rid of the DTOs. I'm not sure what the base class "Entity" does, but I'm guessing it exposes a common key property called "Id". For starters I'd probably ditch that as well. When it comes to primary keys there are typically two naming approaches, every table uses a PK called "Id", or each table uses a PK with the TableName suffixed with "Id". I.e. "Id" vs. "IngredientId". Personally I find the second option makes it very clear when pairing FKs and PKs given they'd have the same name.
When it comes to representing relationships through navigation properties one important detail is ensuring navigation properties are linked to their respective FK properties if present, or better, use shadow properties for the FKs.
For example with your Ingredient table, getting rid of the Entity base class:
[Table("Ingredients")]
public class Ingredient
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int IngredientId { get; set; }
public string Name { get; set; }
public int MasterId { get; set; }
[ForeignKey("MasterId")]
public virtual Master Master { get; set; }
}
This example uses EF attributes to aid in telling EF how to resolve the entity properties to respective tables and columns, as well as the relationship between Ingredient and Master. EF can work much of this out by convention, but it's good to understand and apply it explicitly because eventually you will come across situations where convention doesn't work as you expect.
Identifying the (Primary)Key and indicating it is an Identity column also tells EF to expect that the database will populate the PK automatically. (Highly recommended)
On the Master side we do something similar:
[Table("Masters")]
public class Master
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int MasterId { get; set; }
public string Name { get; set; }
[InverseProperty("Master")]
public virtual ICollection<Ingredient> Ingredients { get; set; } = new List<Ingredient>();
}
Again we denote the Primary Key, and for our Ingredients collection, we tell EF what property on the other side (Ingredient) it should use to associate to this Master's list of Ingredients using the InverseProperty attribute.
Attributes are just one option for setting up the relationships etc. The other options are to use configuration classes that implement IEntityTypeConfiguration&lt;TEntity&gt; (EF Core), or to configure them in the OnModelCreating override in the DbContext. That last option I would only recommend for very small projects, as it can quickly start to become a bit of a God method. You can split it up into calls to various private methods, but then you may as well just use IEntityTypeConfiguration classes.
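For illustration, the same Ingredient mapping expressed as a configuration class (a sketch based on the entities above):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// Equivalent of the attribute-based mapping, kept in its own class
public class IngredientConfiguration : IEntityTypeConfiguration<Ingredient>
{
    public void Configure(EntityTypeBuilder<Ingredient> builder)
    {
        builder.ToTable("Ingredients");
        builder.HasKey(i => i.IngredientId);
        builder.Property(i => i.IngredientId).ValueGeneratedOnAdd();
        builder.HasOne(i => i.Master)
            .WithMany(m => m.Ingredients)
            .HasForeignKey(i => i.MasterId);
    }
}
```

The configuration is registered in the DbContext with `modelBuilder.ApplyConfiguration(new IngredientConfiguration());`, or `ApplyConfigurationsFromAssembly` to pick up all such classes at once.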
Now when you go to fetch Ingredients with their Master, or a Master with its Ingredients:
using (var context = new AppDbContext())
{
var ingredients = context.Ingredients
.Include(x => x.Master)
.Where(x => x.Master.Name.Contains("chicken"))
.ToList();
// or
var masters = context.Masters
.Include(x => x.Ingredients)
.Where(x => x.Name.Contains("chicken"))
.ToList();
// ...
}
Repository patterns are a more advanced concept that has a few good reasons to be implemented, but for the most part they are not necessary and are an anti-pattern within EF implementations. I consider generic repositories, i.e. Repository&lt;Ingredient&gt;, to always be an anti-pattern for EF implementations. The main reason not to use repositories, especially generic repositories, with EF is that you are automatically increasing the complexity of your implementation and/or crippling the capabilities that EF can bring to your solution. As you saw from working with your example, simply getting an eager load through to the repository means writing complex Expression&lt;Func&lt;TEntity, object&gt;&gt; parameters, and that just covers eager loading. Supporting projection, pagination, sorting, etc. adds even more boilerplate complexity, or limits your solution and its performance without these capabilities that EF provides out of the box.
Some good reasons to consider studying up on repository implementations /w EF:
Facilitate unit testing. (Repositories are easier to mock than DbContexts/DbSets)
Centralizing low-level data rules such as tenancy, soft deletes, and authorization.
Some bad (albeit very common) reasons to consider repositories:
Abstracting code from references or knowledge of the dependency on EF.
Abstracting the code so that EF could be substituted out.
Projecting to DTOs or ViewModels is an important aspect of building efficient and secure solutions with EF. It's not clear what "ObjectMapper" is, whether it is an Automapper Mapper instance or something else. I would highly recommend starting to grasp projection by using Linq's Select syntax to fill in a desired DTO from the models. The first key difference when using projection properly is that when you project an object graph, you do not need to worry about eager loading related entities. Any related entity / property referenced in your projection (Select) will automatically be loaded as necessary.

Later, if you want to leverage a tool like Automapper to help remove the clutter of Select statements, you will want to configure your mappings and then use Automapper's ProjectTo method rather than Map. ProjectTo works with EF's IQueryable implementation to resolve your mapping down to SQL just like Select does, whereas Map would need everything eager loaded in order to populate related data. ProjectTo and Select can result in more efficient queries that take better advantage of indexing than eager loading entire object graphs. (Less data over the wire between database and server/app.) Map is still very useful in scenarios such as copying values back from a DTO into a loaded entity.
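To make the projection idea concrete, here is a sketch using Select with the entities above (the DTO shape is illustrative, not from the question):

```csharp
// Illustrative DTO for a list screen
public class IngredientListItemDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string MasterName { get; set; }
}

// No Include needed: referencing i.Master.Name in the Select
// makes EF join to the Masters table and fetch only the columns used.
var items = context.Ingredients
    .Select(i => new IngredientListItemDto
    {
        Id = i.IngredientId,
        Name = i.Name,
        MasterName = i.Master.Name
    })
    .ToList();
```

With Automapper, the equivalent would be `context.Ingredients.ProjectTo<IngredientListItemDto>(mapper.ConfigurationProvider).ToList()`, which composes into the SQL the same way.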
Do it like this
public class Ingrident : Entity
{
public string Name { get; set; }
[ForeignKey(nameof(MasterId))]
public Master Master { get; set; }
public int MasterId { get; set; }
}
For my thesis I decided to create something in MVC, and to challenge myself I added a DAL and a BL layer. I created "services" in the BL that allow me to work with my entities.
I am really wondering if I understood the pattern correctly, because I am having issues dealing with many-to-many relationships, and especially how to use them properly.
This is my current implementation (simplified, to get the general idea):
PersonService: this class is my abstraction for using my entities (I have several entity factories as well). Whenever I need to add a Person to my DB, I use my service. I just noticed that mPersonRepository should probably be named differently.
public class PersonService : IService<Person> {
private UnitOfWork mPersonRepository;
public PersonService() => mPersonRepository = new UnitOfWork();
public void Add(Person aPerson) {
mPersonRepository.PersonRepository.Insert(aPerson);
mPersonRepository.Safe();
}
public void Delete(Guid aGuid) {
mPersonRepository.PersonRepository.Delete(aGuid);
mPersonRepository.Safe();
}
public Person Find(Expression<Func<Person, bool>> aFilter = null) {
var lPerson = mPersonRepository.PersonRepository.Get(aFilter).FirstOrDefault();
return lPerson;
}
public void Update(Person aPerson) {
mPersonRepository.PersonRepository.Update(aPerson);
mPersonRepository.Safe();
}
}
public interface IService<TEntity> where TEntity : class {
void Add(TEntity aEntity);
void Update(TEntity aEntity);
void Delete(Guid aGuid);
TEntity Find(Expression<Func<TEntity, bool>> aExpression);
TEntity FindByOid(Guid aGuid);
IEnumerable<TEntity> FindAll(Expression<Func<TEntity, bool>> aExpression);
int Count();
}
UnitOfWork: pretty much similar as the way Microsoft implemented it.
public class UnitOfWork : IUnitOfWork {
private readonly DbContextOptions<PMDContext> mDbContextOptions = new DbContextOptions<PMDContext>();
public PMDContext mContext;
public UnitOfWork() => mContext = new PMDContext(mDbContextOptions);
public void Safe() => mContext.SaveChanges();
private bool mDisposed = false;
protected virtual void Dispose(bool aDisposed) {
if (!mDisposed)
if (aDisposed) mContext.Dispose();
mDisposed = true;
}
public void Dispose() {
Dispose(true);
GC.SuppressFinalize(this);
}
private GenericRepository<Person> mPersonRepository;
private GenericRepository<Project> mProjectRepository;
public GenericRepository<Person> PersonRepository => mPersonRepository ??= new GenericRepository<Person>(mContext);
public GenericRepository<Project> ProjectRepository => mProjectRepository ??= new GenericRepository<Project>(mContext);
}
GenericRepository: just as before, it is very similar.
public class GenericRepository<TEntity> : IGenericRepository<TEntity> where TEntity : class {
internal PMDContext mContext;
internal DbSet<TEntity> mDbSet;
public GenericRepository(PMDContext aContext) {
mContext = aContext;
mDbSet = aContext.Set<TEntity>();
}
public virtual IEnumerable<TEntity> Get(
Expression<Func<TEntity, bool>> aFilter = null,
Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> aOrderBy = null,
string aProperties = "") {
var lQuery = (IQueryable<TEntity>)mDbSet;
if (aFilter != null) lQuery = lQuery.Where(aFilter);
foreach (var lProperty in aProperties.Split
(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries)) {
lQuery = lQuery.Include(lProperty);
}
return aOrderBy != null ? aOrderBy(lQuery).ToList() : lQuery.ToList();
}
public virtual TEntity GetById(object aId) => mDbSet.Find(aId);
public virtual void Insert(TEntity aEntity) => mDbSet.Add(aEntity);
public virtual void Delete(object aId) {
var lEntity = mDbSet.Find(aId);
Delete(lEntity);
}
public virtual void Delete(TEntity aEntity) {
if (mContext.Entry(aEntity).State == EntityState.Detached) mDbSet.Attach(aEntity);
mDbSet.Remove(aEntity);
}
public virtual void Update(TEntity aEntity) {
mDbSet.Attach(aEntity);
mContext.Entry(aEntity).State = EntityState.Modified;
}
}
PMDContext: an implementation of DbContext.
public class PMDContext : DbContext {
public PMDContext(DbContextOptions<PMDContext> aOptions) : base(aOptions) { }
public DbSet<Person> Persons { get; set; }
public DbSet<Project> Projects { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder aOptions) {
if (!aOptions.IsConfigured) aOptions.UseSqlServer("<snip>");
}
}
Entities
public class Person {
public Person(<args>) {}
public Guid Oid { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
}
public class Project {
public Project(<args>) {}
public Guid Oid { get; set; }
public string Name { get; set; }
}
I use it all like the following:
var lPerson = Factory.CreatePerson(<args>);
var lPersonService = new PersonService();
lPersonService.Add(lPerson);
<..do some work..>
lPersonService.Update(lPerson)
Now I do not need to worry about calling Safe, or whatever. It works just fine, but now I ran into an issue: how do I deal with many-to-many relations in my Entities. For example my Person can have multiple Projects and my Project can have multiple Persons.
I updated my PMDContext to get a link table:
protected override void OnModelCreating(ModelBuilder aModelBuilder) {
aModelBuilder.Entity<PersonProject>().HasKey(x => new { x.PersonOid, x.ProjectOid });
}
Link table
public class PersonProject {
public Guid PersonOid { get; set; }
public Guid ProjectOid { get; set; }
}
And updated both my entities with the following property.
public ICollection<PersonProject> PersonProjects { get; } = new List<PersonProject>();
Now I am confused on how to use my linked table. I thought I could follow a similar approach like this:
var lPerson = PersonService.FindByOid(aPersonOid);
var lProject = ProjectService.FindByOid(aProjectOid);
var lPersonProject = new PersonProject() { PersonOid = aPersonOid,
ProjectOid = aProjectOid };
lPerson.PersonProjects.Add(lPersonProject);
lProject.PersonProjects.Add(lPersonProject);
PersonService.Update(lPerson);
ProjectService.Update(lProject);
But this ends up not doing anything to the PersonProject table in my DB. My guess is that I lack the code to actually write to that table, since I do not have a PersonProject service that handles this. I am confused.
How would I advance using my current approach, or what do I have to change? I am only a beginner w/ entity frameworks and already happy I got this far.
Any input is appreciated especially on the services -> pattern implementation. I must be doing something wrong.
Thanks!
You're not really using a service layer pattern. Your "service" is just a repository, which then uses your unit of work to access another repository. In short, you've got multiple layers of meaningless abstraction here, which will absolutely kill you in an app you have to maintain for any amount of time.
In general, you should not use the unit of work / repository patterns with ORMs like Entity Framework. The reason why is simple: these ORMs already implement these patterns. In the case of EF, the DbContext is your unit of work and each DbSet is a repository.
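To make that concrete for the asker's many-to-many scenario, here is a sketch of using the context directly (names taken from the question; it assumes Oid is mapped as the primary key and both entities are loaded by the same context instance):

```csharp
// One DbContext instance = one unit of work
using (var context = new PMDContext(new DbContextOptions<PMDContext>()))
{
    var person = context.Persons.Find(aPersonOid);
    var project = context.Projects.Find(aProjectOid);

    // Add the link row through one navigation property only;
    // EF tracks it and writes it on SaveChanges
    person.PersonProjects.Add(new PersonProject
    {
        PersonOid = person.Oid,
        ProjectOid = project.Oid
    });

    context.SaveChanges(); // inserts the row into the PersonProject table
}
```

One likely reason the original code did nothing is that each service created its own UnitOfWork (and thus its own PMDContext), so no single SaveChanges call was tracking the new PersonProject row.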
If you're going to use something like Entity Framework, my best advice is to just use it. Reference it in your app, inject your context into your controllers and such, and actually use the EF API to do the things you need to do. Isn't this creating a tight coupling? Yes. Yes it is. However, the point so many miss (even myself for a long time) is that coupling is already there. Even if you abstract everything, you're still dealing with a particular domain that you can never fully abstract. If you change your database, that will bubble up to your application at some point, even if it's DTOs you're changing instead of entities. And, of course you'll still have to change those entities as well. The layers just add more maintenance and entropy to your application, which is actually the antithesis of the "clean code" architecture abstractions are supposed to be about.
But what if you need to switch out EF with something else? Won't you have to rewrite a bunch of code? Well, yeah. However, that pretty much never happens. Making a choice on something like an ORM has enough momentum that you're not likely to be able to reverse that course no matter what you do, regardless of how many layers of abstractions you use. It's simply going to require too much time and effort and will never be a business priority. And, importantly, a bunch of code will have to be rewritten regardless. It's only a matter of what layer it's going to be done in.
Now, all that said, there is value in certain patterns like CQRS (Command Query Responsibility Segregation), which is an abstraction (and not a meaningless one, that). However, that only makes sense in large projects or domains where you need clear cut separation between things like reads and writes and/or event sourcings (which goes naturally with CQRS). It's overkill for the majority of applications.
What I would recommend beyond anything else, if you want to abstract EF from your main application, is to actually create microservices. These microservices are basically just little APIs (though they don't have to be) that deal with a single unit of functionality for your application. Your application then makes requests to, or otherwise accesses, the microservices to get the data it needs. A microservice would just use EF directly, while the application would have no dependency on EF at all (the holy grail developers think they want).
With a microservice architecture, you can actually check all the boxes you think this faux abstraction is getting you. Want to switch out EF with something else? No problem. Since each microservice only works with a limited subset of the domain, there's not a ton of code typically. Even using EF directly, it would be relatively trivial to rewrite those portions. Better yet, each microservice is completely independent, so you can switch EF out on one, but continue using EF on another. Everything keeps working and the application couldn't care less. This gives you the ability to handle migrations over time and at a pace that is manageable.
Long and short, don't over-engineer. That's the bane of even developers who've been in the business for a while, but especially of new developers, fresh out of the gates with visions of code patterns dancing in their heads. Remember that the patterns are there as recommended ways to solve specific problems. First, you need to ensure that you actually have the problem, then you need to focus on whether that pattern is actually the best way to solve that problem your specific circumstance. This is a skill - one you'll learn over time. The best way to get there is to start small. Build the bare minimum functionality in the most straight-forward way possible. Then, refactor. Test, profile, throw it to the wolves and drag back the blood-soaked remains. Then, refactor. Eventually, you might end up implementing all kinds of different layers and patterns, but you also might not. It's those "might not" times that matter, because in those cases, you've got simple, effortlessly maintainable code that just works and that didn't waste a ton of development time.
I'm a junior web developer trying to learn more every day.
What is the best practice for you guys to implement the MVC repository pattern with Linq?
The one I use:
Create extra classes with the exact names of my .tt files, with CRUD methods like getAll(), getOne(), Update(), Delete(), filling my own class from Entity Framework and returning it, or using the Entity Framework CRUD directly.
This is an example of what I'm actually doing.
This is the getAll method of my class, for example User:
public class CEmployee : CResult
{
public string name{get;set;}
public string lastname{get;set;}
public string address{get;set;}
//Extracode
public string Fullname // this code is not in the .tt or database
{
get
{
return name + lastname;
}
}
public List<CEmployee> getAll()
{
try
{
var result = (from n in db.Employee
select new CEmployee // this is my own class I fill it using the entity
{
name = n.name,
lastname = n.lastname,
address = n.address
}).ToList();
if (result.Count > 0)
{
return result;
}
else
{
return new List<CEmployee>
{
new CEmployee
{
has_Error = true,
msg_Error = "Element not found!!!!"
}
};
}
}
catch (Exception ex)
{
return new List<CEmployee>
{
new CEmployee { has_Error = true, msg_Error = ex.Message }
};
}
}
}
That's the way I do everything: I return a filled list of my own type. But on the web I see that people normally return the entity type. I do this so I can shape my response, and if I want to return extra information I just nest a list, for example. What's the best way, guys: return my type, or return the entity type?
PS: I also use this class as my ViewModel, and I do this for all my classes.
One of the projects I am currently on uses dependency injection to set up the DAL (Data Access Layer). We are also using an n-tier approach; this separates the concerns of the repository from the business logic and the front end.
So we start with 4 or so base projects in the application that link to each other. One of them handles the data access; this would be your repository (read up on Ninject for more info on this). Our next tier is our Domain, which houses the entities built by the t4 templates (.tt files) and also our DTOs (data transfer objects, which are flat objects for moving data between layers). Then we have a service layer; the service layer, or business logic layer, holds service objects that handle CRUD operations and any data manipulation needed. Lastly we have our front end, which is the Model-View-ViewModel layer and handles the controllers and page building.
The MVVM layer calls the services, the service objects call the data access layer, and Entity Framework works with Ninject to access the data, which is stored in the DTOs as it moves across layers.
Now this may seem overly complex depending on the application you are writing, this is built for a highly scalable and expandable web application.
I would highly recommend going with a generic repository implementation. The layers between your repository and the controller vary depending on a number of factors (which is kind of a broader/bigger topic) but the generic repository gets you going on a good implementation that is lightweight. Check out this article for a good description of the approach:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
Ideally in an MVC application, you will want to put the repositories in a different layer, like a separate project; let's call it the Data layer.
You will have an IRepository interface that contains generic method signatures like GetAll, GetById, Create or UpdateById. You will also have an abstract RepositoryBase class that contains shared implementation such as Add, Update, Delete, GetById, etc.
The reason you use an IRepository interface is that it is a contract for which your inherited repository class, such as EmployeeRepository in your case, needs to provide a concrete implementation. The abstract class serves as a common place for your shared implementation (which you can override as needed).
So in your case, what you are doing using LINQ with your DbContext is basically correct, but implementation like your GetAll method should be part of the generic/shared implementation in your abstract class RepositoryBase:
public abstract class RepositoryBase<T> where T : class
{
private YourEntities dataContext;
private readonly IDbSet<T> dbset;
protected RepositoryBase(IDatabaseFactory databaseFactory)
{
DatabaseFactory = databaseFactory;
dbset = DataContext.Set<T>();
}
protected IDatabaseFactory DatabaseFactory
{
get;
private set;
}
protected YourEntities DataContext
{
get { return dataContext ?? (dataContext = DatabaseFactory.Get()); }
}
public virtual T GetById(long id)
{
return dbset.Find(id);
}
public virtual T GetById(string id)
{
return dbset.Find(id);
}
public virtual IEnumerable<T> GetAll()
{
return dbset.ToList();
}
}
I would suggest you think about whether or not to return an error result object like CResult, and whether your CEmployee and CResult should exist in this parent-child relationship. Also think about what you want your CResult class to do. It seems to me your CEmployee handles too many tasks in this case.
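One way to split those responsibilities (a sketch; the Result&lt;T&gt; wrapper is my own suggestion, not from the question) is to keep the error state out of CEmployee entirely:

```csharp
// Generic result wrapper so the employee class no longer carries error state
public class Result<T>
{
    public bool HasError { get; private set; }
    public string ErrorMessage { get; private set; }
    public T Value { get; private set; }

    public static Result<T> Ok(T value) =>
        new Result<T> { Value = value };

    public static Result<T> Fail(string message) =>
        new Result<T> { HasError = true, ErrorMessage = message };
}

// Usage in getAll(): return Result<List<CEmployee>>.Fail("Element not found")
// instead of smuggling an error element into the employee list.
```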
Let's say we have a project that will handle lots of data (employees, schedules, calendars... and lots more). The client is a Windows app, the server side is WCF, and the database is MS SQL Server. I am confused about which approach to use. I read a few articles and blogs; they all seem nice, but I am still confused. I don't want to start with one approach and then regret not choosing the other. The project will have around 30-35 different object types and a lot of data retrieval to populate different reports, etc.
Approach 1:
// classes that hold data
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    .....
}

public class Assignment
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    .....
}
.....
Then Helper classes to deal with data saving and retrieving:
public static class Employees
{
    public static int Save(Employee emp)
    {
        // save the employee
    }

    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
    .....
}

public static class Assignments
{
    public static int Save(Assignment ass)
    {
        // save the Assignment
    }
    .....
}
FYI, the object classes like Employee and Assignment will be in a separate assembly to be shared between server and client.
Anyway, with this approach I will have cleaner objects. The helper classes will do most of the work.
Approach 2:
// classes that hold data and methods for saving and retrieving
public class Employee
{
    // constructors
    public Employee()
    {
        // Construct a new Employee
    }

    public Employee(int Id)
    {
        // Construct a new Employee and fill the data from the DB
    }

    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    .....

    public int Save()
    {
        // save the Employee
    }
    .....
}

public class Assignment
{
    // constructors
    public Assignment()
    {
        // Construct a new Assignment
    }

    public Assignment(int Id)
    {
        // Construct a new Assignment and fill the data from the DB
    }

    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    .....

    public int Save()
    {
        // save the Assignment
    }
    .....
}
.....
With this approach, each object does its own job. Data can still be transferred from WCF to the client easily, since WCF will only share the properties.
Approach 3:
Using Entity Framework. Besides the fact that I have never worked with it (which is nice, since I get to learn something new), I will need to create POCOs to transfer data between the client and WCF.
Now, which is better? More options?
Having persistence logic in the object itself is always a bad idea.
I would use the first approach. It looks like the Repository pattern. This way, you can easily debug the persisting of data, because it is clearly separated from the rest of the object's logic.
I would suggest using Entity Framework + the Repository pattern. This way your entities are simple objects without any logic in them. All retrieve/save logic stays in the repository. I have had some success using a generic repository typed with the entity; something similar is described here (the generic repository part of the article). This way you write the repository code only once and can reuse it for all your entities. E.g.:
interface IRepository<T>
{
    T GetById(long id);
    bool Save(T entity);
}

public class Repository<T> : IRepository<T> { ... }

var repository = new Repository<MyEntity>();
var myEntity = repository.GetById(1);

var repository2 = new Repository<MySecondEntity>();
var mySecondEntity = repository2.GetById(1);
Whenever an entity needs some very specific operation, you can add this operation to a concrete typed implementation of IRepository:
interface IMySuperRepository : IRepository<MySuperEntity>
{
    MySuperEntity GetBySuperProperty(SuperProperty superProperty);
}

public class MySuperEntityRepository : Repository<MySuperEntity>, IMySuperRepository
{ ... }
To create repositories it is nice to use a factory, based for example on a configuration file. This way you can switch the implementation of the repositories, e.g. for unit testing, when you do not want to use a repository that really accesses the DB:
public class RepositoryFactory
{
    public IRepository<T> GetRepository<T>()
    {
        if (config == production)
            return new Repository<T>(); // this is implemented with DB access through EF
        if (config == test)
            return new TestRepository<T>(); // this is implemented with test values without DB access
    }
}
You can add validation rules for saving and further elaborate on this. EF also lets you add simple methods or properties to the generated entities, because all of them are partial classes.
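For example, because the generated entities are partial classes, a derived property or a validation check can live in a separate file alongside the generated code. This is a sketch; the FullName and IsValidForSave members are illustrative, not from the answer:

```csharp
// Sketch: extending a generated partial entity in your own file.
public partial class Employee
{
    // A computed property added on top of the generated FirstName/LastName.
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }

    // A simple validation rule checked before saving.
    public bool IsValidForSave()
    {
        return !string.IsNullOrEmpty(FirstName) && !string.IsNullOrEmpty(LastName);
    }
}
```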
Furthermore, using POCOs or STEs (see below) it is possible to have the EDMX DB model in one project and all your entities in another project, and thus distribute this DLL to the client (which will contain ONLY your entities). As I understand it, that's also what you want to achieve.
Also seriously consider using self-tracking entities (and not just POCOs). In my opinion they are great for use with WCF. When you get an entity from the DB and pass it to the client, the client changes it and gives it back; you need to know whether the entity was changed and what was changed. STEs handle all this work for you and were designed specifically for WCF. You get the entity from the client, call ApplyChanges and Save, and that's it.
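On the service side, that round trip might look roughly like this. This is a sketch: the service method, entity set, and context names are assumptions, and ApplyChanges is the method the STE T4 template generates for replaying the entity's tracked changes into the context:

```csharp
// Sketch: a WCF service operation persisting changes from a self-tracking entity.
public void UpdateEmployee(Employee employee) // entity returned from the client
{
    using (var context = new YourEntities())
    {
        // ApplyChanges reads the change-tracking state the STE carried
        // across the wire and replays inserts/updates/deletes into the context.
        context.Employees.ApplyChanges(employee);
        context.SaveChanges();
    }
}
```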
What about implementing Save as an extension method? That way your classes are as clean as in the first option, but the methods can be called on the object as in the second option.
public static class EmployeeExtensions // a static class cannot share the name Employee with the entity
{
    public static int Save(this Employee emp)
    {
        // save the employee
    }

    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
}
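With that in place, calling code reads the same as in the second approach. A usage sketch, assuming the static class holding the extension is named something like EmployeeExtensions (it cannot share the name Employee with the entity it extends):

```csharp
var emp = EmployeeExtensions.Get(42); // Get stays a plain static helper
emp.FirstName = "Jane";
emp.Save(); // resolves to the Save(this Employee) extension method
```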
You're overthinking this. Trying to apply technologies and patterns "just because" or "that's what they say" only makes the solution complicated. The key is designing the application so that it can easily adapt to change. That's probably an ambiguous answer, but it's what it all comes down to: how much effort is required to maintain and/or modify the code base.
Currently it sounds like the patterns and practices are the end result, instead of a means to an end.
Entity Framework is a great tool, but it is not necessarily the best choice in all cases. It will depend on how much you expect to read/write from the database versus how much you expect to read/write to your WCF services. Perhaps someone better versed in the wonderful world of EF will be able to help you. Speaking from experience, I have used LINQ to SQL in an application that features WCF service endpoints and had no issues (and in fact came to love LINQ to SQL as an ORM).
That said, if you decide that EF is not the right choice for you, it looks like you're on the right track with Approach 1. However, I would recommend implementing a data access layer. That is, implement a Persist method in your business classes that then calls methods in a separate DAO (Data Access Object, a class used to persist data from a business object) to actually save it to your database.
A sample implementation might look like this:
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public void Persist()
    {
        EmployeeDAO.Persist(this);
    }
}

public class Assignment
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }

    public void Persist()
    {
        AssignmentDAO.Persist(this);
    }
}

public static class EmployeeDAO
{
    public static int Persist(Employee emp)
    {
        // insert if new, else update
    }

    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
    .....
}

public static class AssignmentDAO
{
    public static int Persist(Assignment ass)
    {
        // insert if new, else update
    }
    .....
}
The benefit of a pattern like this is that you keep your business classes clean and your data-access logic separate, while still giving the objects the easy syntax of being able to write new Employee(...).Persist(); in your code.
If you really want to go nuts, you could even consider implementing interfaces on your persistable classes and having your DAO(s) accept those IPersistable instances as arguments.
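That last idea might be sketched like this. The IPersistable interface and the PersistenceHelper class are hypothetical names, not from the answer above:

```csharp
// Sketch: a persistence interface shared by the business classes.
public interface IPersistable
{
    int Id { get; }
    void Persist();
}

public class Employee : IPersistable
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public void Persist()
    {
        EmployeeDAO.Persist(this);
    }
}

// A helper that can operate on any persistable object, regardless of its type.
public static class PersistenceHelper
{
    public static void PersistAll(IEnumerable<IPersistable> items)
    {
        foreach (var item in items)
        {
            item.Persist();
        }
    }
}
```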