Mixed architectural approach between layered and shared entities - C#

We are developing an application with the following layers:
UI
Business Layer (BL)
Data Layer (DL): Contains generic CRUD queries and custom queries
Physical Data Layer (PDL): e.g. Entity Framework
We are looking for a way to share the entities of the physical data layer with the DL and the BL.
These points are important in deciding the best architecture:
Reusability: the database fields should be propagated to the other layers as easily as possible
Fast implementation: adding a field to the database should not result in mapping entities between all layers
Extensibility: a BL entity can be extended with properties specific to the BL (likewise for a DL entity)
I've come across architectures that share entities for all layers (+ fast implementation, - extensibility) or architectures with an entity (DTO) per layer (+ extensibility, - fast implementation/reusability).
This blog post describes these two architectures.
Is there an approach that combines these architectures and takes our requirements into account?
For now we've come up with the following classes and interfaces:
Interfaces:
// Contains properties shared by all entities
public interface I_DL
{
    bool Active { get; set; }
}

// Contains properties specific to a customer
public interface I_DL_Customer : I_DL
{
    string Name { get; set; }
}
PDL
// Generated by EF or mocking object
public partial class Customer
{
    public bool Active { get; set; }
    public string Name { get; set; }
}
DL
// Extend the generated entity with custom behaviour
public partial class Customer : I_DL_Customer
{
}
BL
// Store a reference to the DL entity and define the properties shared for all entities
public abstract class BL_Entity<T> where T : I_DL
{
    private T _entity;

    public BL_Entity(T entity)
    {
        _entity = entity;
    }

    protected T entity
    {
        get { return _entity; }
        set { _entity = value; }
    }

    public bool Active
    {
        get { return entity.Active; }
        set { entity.Active = value; }
    }
}
// The BL customer maps directly to the DL customer
public class BL_Customer : BL_Entity<I_DL_Customer>
{
    public BL_Customer(I_DL_Customer o) : base(o) { }

    public string Name
    {
        get { return entity.Name; }
        set { entity.Name = value; }
    }
}

The DTO-per-layer design is the most flexible and modular. Hence, it is also the most reusable: don't confuse the convenience of reusing the same entities with the reusability of the different modules which is the main concern at the architectural level. However, as you pointed out, this approach is neither the fastest to develop nor the most agile if your entities change often.
If you want to share entities among the layers I wouldn't go through the hassle of specifying a hierarchy through the different layers; I'd either let all layers use the EF entities directly, or define those entities in a different assembly shared by all the layers -- the physical data layer included, which may directly persist those entities through EF code-first, or translate to/from those shared entities to the EF ones.
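As a minimal sketch of the shared-assembly option (the names Shared.Entities and AppDbContext are illustrative, not from the question), the entities live in one project referenced by every layer, and the PDL persists them directly via EF code-first:

// Shared.Entities assembly, referenced by UI, BL, DL and PDL
public class Customer
{
    public int Id { get; set; }
    public bool Active { get; set; }
    public string Name { get; set; }
}

// PDL: an EF code-first context persisting the shared entity directly
public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}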

Which layer is the correct place to convert Model to Dto?

I have seen a few articles, but I need some suggestions/improvements, if any, based on my current architecture.
I have created a repository layer with a generic repository pattern; underneath, it calls DynamoDB.
The DynamoDB layer deals with model names and structures that effectively mirror the table names and structures.
My service layer references the Contract (domain) layer for DTOs and the repository layer for calling the repo methods.
However, the repository layer does not reference the Contract layer; it would only need to if I did the mapping from DTOs to models (entities) there.
Considering the current design, to me the correct place to map models to DTOs is the service layer. However, I'm unsure, because my peers asked me to make a decoupled architecture and were inclined to do the mapping in the repository layer, so that if the repository layer changes it does not affect the other layers.
My questions are: is my architecture correct, and where should the DTO conversion happen, the repository layer or the service layer?
My repository layer:
public interface IDbContext<T> where T : class
{
    Task CreateBatchWriteAsync(IEnumerable<T> entities, DynamoDBOperationConfig dynamoDBOperationConfig = null);
    Task<List<T>> GetAllItemsAsync(DynamoDBOperationConfig dynamoDBOperationConfig = null);
}

public class DbContext<T> : IDbContext<T> where T : class
{
    private readonly Amazon.DynamoDBv2.DataModel.IDynamoDBContext context;

    public DbContext(IDynamoDBFactory dynamoDBFactory)
    {
        //
    }

    public async Task CreateBatchWriteAsync(IEnumerable<T> entities, DynamoDBOperationConfig dynamoDBOperationConfig = null)
    {
        // connect to dynamodb
    }

    public async Task<List<T>> GetAllItemsAsync(DynamoDBOperationConfig dynamoDBOperationConfig = null)
    {
        // connect to dynamodb
    }
}
public interface IStoreRepository : IDbContext<Store>
{
}

public class StoreRepository : IStoreRepository
{
    private readonly IDbContext<Store> _dbContext;

    public StoreRepository(IDbContext<Store> dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task CreateBatchWriteAsync(IEnumerable<Store> entities, DynamoDBOperationConfig dynamoDBOperationConfig = null)
    {
        await _dbContext.CreateBatchWriteAsync(entities, dynamoDBOperationConfig);
    }

    public async Task<List<Store>> GetAllItemsAsync(DynamoDBOperationConfig dynamoDBOperationConfig = null)
    {
        return await _dbContext.GetAllItemsAsync(dynamoDBOperationConfig);
    }
}
Here is my model in the repository layer:
[DynamoDBTable("Store")]
public class Store
{
    [DynamoDBProperty("Code")]
    public string Code { get; set; }

    [DynamoDBProperty("Details")]
    public TransitDetails Details { get; set; }
}

public class TransitDetails
{
    [DynamoDBProperty("ClientName")]
    public string ClientName { get; set; }

    [DynamoDBProperty("RequestedBy")]
    public string RequestedBy { get; set; }

    [DynamoDBProperty("CreateDate")]
    public string CreateDate { get; set; }
}
Please remember that this is an individual decision for each project.
IMO the service layer is the best place to do this in your architecture.
To make your code cleaner, you can create extension methods like ToEntityModel and ToDTOModel so you can hide the object creation (sketched below).
The repository layer is the worst place to do this because of the single responsibility principle: the repository should handle communication with the database, not map one model to another.
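A minimal sketch of that extension-method approach; the StoreDto shape and the chosen properties are assumptions for illustration:

// Hypothetical DTO living in the Contract (domain) layer
public class StoreDto
{
    public string Code { get; set; }
    public string ClientName { get; set; }
}

// Mapping extensions used by the service layer
public static class StoreMappingExtensions
{
    public static StoreDto ToDTOModel(this Store entity) =>
        new StoreDto
        {
            Code = entity.Code,
            ClientName = entity.Details?.ClientName
        };

    public static Store ToEntityModel(this StoreDto dto) =>
        new Store
        {
            Code = dto.Code,
            Details = new TransitDetails { ClientName = dto.ClientName }
        };
}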
There is not one agreed way to do this. Individual (per person, per organisation) styles matter a lot here.
Here are two things to think about:
The smaller the objects a method exposes and accepts the easier it is to refactor. In other words, if you don't expose field X you don't need to worry how it's used.
If the repository returns the full DB model, the contract changes whenever the DB model changes. If you expose a tailored DTO, you have to change the DTO whenever you want to expose more or less information. The first requires less work but gives less control, and you may end up exposing more than you want; a small sketch of the contrast follows.
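To make the trade-off concrete, a sketch under assumed names, where the first contract leaks the whole persistence model and the second exposes only a narrow read DTO:

// Option 1: the contract is the persistence model itself
public interface IStoreRepositoryFull
{
    Task<List<Store>> GetAllItemsAsync();
}

// Option 2: the contract is a tailored, read-only DTO
public class StoreListItem
{
    public string Code { get; set; }
}

public interface IStoreRepositoryTailored
{
    Task<List<StoreListItem>> GetAllItemsAsync();
}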
The repository layer should only return raw data, not DTOs.
The main reason to use the repository pattern is to abstract communication with the database. If you return DTOs from the repository layer, you violate the single responsibility of the repository pattern and the purpose of DTOs.
A common approach to DTO conversion is "convert it when you need it", so in your case the best layer for the conversion would be the service layer, since the service layer is where your business needs reside.

EF Core encapsulation in the Unit of Work pattern

I've got problems combining DDD and EF Core.
I'm building a project using a DDD architecture. As the data access level I use the generic Unit of Work pattern taken from here.
public interface IUnitOfWork
{
    IRepository<TDomain> Repository<TDomain>() where TDomain : class;
}

public interface IRepository<TDomain>
{
    TDomain Get(Expression<Func<TDomain, bool>> predicate);
}
I implement these interfaces using EF Core.
I've got a domain model with two classes:
public class MainClass
{
    public int Id { get; set; }
    public List<RelatedItem> Items { get; set; }
}

public class RelatedItem
{
    public int Id { get; set; }
    public MainClass Parent { get; set; }
    public DateTime Date { get; set; }
    public string SomeProperty { get; set; }
}
In the real project, MainClass has a collection with hundreds of RelatedItems. To perform some operations I need only one RelatedItem per request, for a given date; it can be found by searching through the Items property.
Because EF Core is encapsulated behind the unit of work, I have to explicitly load entities from the DB together with their related items, since the business logic layer doesn't know anything about the implementation of the UnitOfWork's repository. But this operation is very slow.
So I decided to create a MainClassService that gets the unitOfWork injected in its constructor and has a method that returns only one RelatedItem, and it works fine.
public class MainClassService
{
    IUnitOfWork unitOfWork;

    public MainClassService(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork ?? throw new ArgumentNullException();
    }

    public RelatedItem GetRelatedItemByDate(int mainClassId, DateTime date)
    {
        return unitOfWork.Repository<RelatedItem>().Get(c => c.Parent.Id == mainClassId && c.Date == date);
    }
}
So I've got a situation where I cannot use the Items property directly because of EF Core, but I should use it because of the DDD architecture.
And my question is: is it OK to use such a construction?
From your question it seems that MainClass is an Aggregate root and RelatedItem is a nested entity. This design decision should be based on the business rules/invariants that must be protected. When an Aggregate needs to mutate, it must be fully loaded from the repository; that is, the Aggregate root and all its nested entities and value objects must be in memory before it executes the mutating command, no matter how big it is.
Also, it is not good practice to inject infrastructure services into Aggregates (or into nested entities). If you need to do this, then you must rethink your architecture.
So, from what I wrote you can see that the problem manifests itself only when you try to mutate the Aggregate. If you only need to read or find it, you could create dedicated services that find the data using infrastructure components. Your MainClassService seems to be such a case, where you only need to read/find some RelatedItem entities.
To make it clear that the purpose is only reading, the MainClassService should return a read-only representation of the RelatedItem entities, as in the sketch below.
So you have just made some first steps towards CQRS, where the model is split in two: a READ model and a WRITE model.
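A minimal sketch of such a read-only representation, assuming the property names of RelatedItem from the question; the finder in MainClassService would then return the read model instead of the mutable entity:

// Read-side model: an immutable snapshot of a RelatedItem
public class RelatedItemReadModel
{
    public RelatedItemReadModel(int id, DateTime date, string someProperty)
    {
        Id = id;
        Date = date;
        SomeProperty = someProperty;
    }

    public int Id { get; }
    public DateTime Date { get; }
    public string SomeProperty { get; }
}

// In MainClassService:
public RelatedItemReadModel GetRelatedItemByDate(int mainClassId, DateTime date)
{
    var item = unitOfWork.Repository<RelatedItem>()
                         .Get(c => c.Parent.Id == mainClassId && c.Date == date);
    return new RelatedItemReadModel(item.Id, item.Date, item.SomeProperty);
}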

Mapping Domain Objects to persistent objects

This is not a question with a clear answer, but I need some advice for my architecture. There might be a lot of different opinions about this topic.
I am trying to move my architecture from stupid entities to rich domain objects. In my current version I have abstract domain objects with readonly properties and methods that represent the business logic:
abstract class Project
{
    public string PropertyName { get; protected set; }

    public void Setup(SetupData data)
    {
        ...
        Save();
    }

    protected abstract void Save();
}
I derive from them to implement the mapping to the persistence entities and to implement the save logic:
class MongoProject : Project
{
    private readonly ProjectDocument document;
    private readonly Action<ProjectDocument> save;

    public MongoProject(ProjectDocument document, Action<ProjectDocument> save)
    {
        this.document = document;
        this.save = save;
        MapFrom(document);
    }

    protected override void Save()
    {
        MapTo(document);
        save(document);
    }
}
This works very easily: the project is always valid because it has no public setters, and it can be tested, even the mapping to the document.
But I also noticed some problems:
I always forget to map some properties; there is no way to tell the MongoProject what it must serialize.
Sometimes it is relatively complex to implement the mapping because inside the save method you don't know what has changed, especially when the project is a complex aggregate root. In my situation it was very easy to implement persistence with MongoDB, but it was a nightmare with Entity Framework.
How do you solve persistence in your application, and are there other solutions for the mapping problem?
This is a bit of a broad question.
I don't ever implement my domain objects and my data access layer (saving to a db, file, cloud, etc.) in the same class. There needs to be separation of concerns. Your domain objects don't have to know how to save themselves to the database.
What I would do is create my domain objects in one class and a separate data access layer, which is responsible for saving my data to whichever destination I want.
That way, each time you want to save your entities to a different location, your object model doesn't care and you don't have to change it at all.
For mapping POCO objects to DB entities, use AutoMapper; it will save you a lot of boilerplate code, and you'll only have to configure it once (a configuration sketch follows the example below).
For example:
// Domain object
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Data Access Layer
public class MongoAccessLayer : IDal
{
    public void SaveEntity<T>(T entity)
    {
        // Save logic here
    }

    public void LoadEntity<T>(T entity)
    {
        // Load logic here
    }
}

// Interface defining what the access layer should look like
public interface IDal
{
    void SaveEntity<T>(T entity);
    void LoadEntity<T>(T entity);
}
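As a rough sketch of the one-time AutoMapper configuration mentioned above (PersonDocument is an assumed persistence type, not part of the answer's example):

using AutoMapper;

// Hypothetical persistence entity
public class PersonDocument
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class Mapping
{
    // Configure once at startup and reuse the mapper everywhere
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Person, PersonDocument>();
        cfg.CreateMap<PersonDocument, Person>();
    }).CreateMapper();
}

// Usage: var document = Mapping.Mapper.Map<PersonDocument>(person);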

What's the best practice for the repository pattern with LINQ in ASP.NET MVC?

I'm a junior web developer trying to learn more every day.
What is, in your opinion, the best practice for implementing the MVC repository pattern with LINQ?
The one I use:
Create extra classes with the exact names of my .tt files, with CRUD methods like getAll(), getOne(), Update(), Delete(), filling my own class from Entity Framework and returning it, or use the Entity Framework CRUD directly.
This is an example of what I'm actually doing.
This is the getAll method of my class, for example User:
public class CEmployee : CResult
{
    public string name { get; set; }
    public string lastname { get; set; }
    public string address { get; set; }

    //Extra code
    public string Fullname // this code is not in the .tt or database
    {
        get { return name + lastname; }
    }

    public List<CEmployee> getAll()
    {
        try
        {
            var result = (from n in db.Employee
                          select new CEmployee // this is my own class, filled from the entity
                          {
                              name = n.name,
                              lastname = n.lastname,
                              address = n.address
                          }).ToList();

            if (result.Count > 0)
            {
                return result;
            }
            else
            {
                return new List<CEmployee>
                {
                    new CEmployee
                    {
                        has_Error = true,
                        msg_Error = "Element not found!!!!"
                    }
                };
            }
        }
        catch (Exception ex)
        {
            return new List<CEmployee>
            {
                new CEmployee { has_Error = true, msg_Error = ex.Message }
            };
        }
    }
}
That's the way I do everything: I return a filled object of my own type. But on the web I see that people normally return the entity type. I do this to shape my response, and if I want to return extra information I just have to nest a list, for example. What's the best way: return my own type or return the entity type?
PS: I also use this class as my ViewModel, and I do this for all my classes.
One of the projects I am currently on uses dependency injection to set up the DAL (data access layer). We also use an n-tier approach; this separates the concern of the repository from the business logic and the front end.
So we start with four or so base projects in the application that reference each other. One handles data access; this would be your repository (read up on Ninject for more info on this). Our next tier is our Domain, which houses the entities built by the T4 templates (.tt files) and also our DTOs (data transfer objects, which are flat objects for moving data between layers). Then we have a service layer; the service layer, or business logic layer, holds service objects that handle CRUD operations and any data manipulation needed. Lastly we have our front end, which is the Model-View-ViewModel layer and handles the controllers and page building.
The MVVM layer calls the services, the service objects call the data access layer, Entity Framework works with Ninject to access the data, and the data is carried in the DTOs as it moves across layers (a small sketch follows below).
Now this may seem overly complex depending on the application you are writing; this is built for a highly scalable and expandable web application.
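A small sketch of that wiring, with assumed names (EmployeeDto, IEmployeeRepository and EmployeeService are illustrative, and the Ninject binding shown in the comments is just one way to register them):

// Domain tier: flat DTO moved between layers
public class EmployeeDto
{
    public string Name { get; set; }
    public string LastName { get; set; }
}

// Data access tier: abstraction over the EF entities
public interface IEmployeeRepository
{
    IEnumerable<Employee> GetAll();
}

// Service tier: maps entities to DTOs for the MVVM layer
public class EmployeeService
{
    private readonly IEmployeeRepository _repository;

    public EmployeeService(IEmployeeRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<EmployeeDto> GetAllEmployees() =>
        _repository.GetAll()
                   .Select(e => new EmployeeDto { Name = e.name, LastName = e.lastname });
}

// Composition root: Ninject binds the abstractions to implementations, e.g.
// var kernel = new StandardKernel();
// kernel.Bind<IEmployeeRepository>().To<EfEmployeeRepository>();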
I would highly recommend going with a generic repository implementation. The layers between your repository and the controller vary depending on a number of factors (which is a broader/bigger topic), but the generic repository gets you going with a good, lightweight implementation. Check out this article for a good description of the approach:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
Ideally, in an MVC application you will want to put the repositories in a different layer, such as a separate project; let's call it the data layer.
You will have an IRepository interface that contains generic method signatures like GetAll, GetById, Create or UpdateById (sketched below). You will also have an abstract RepositoryBase class that contains shared implementation such as Add, Update, Delete, GetById, etc.
The reason you use an IRepository interface is that it is the contract for which your inherited repository classes, such as an EmployeeRepository in your case, need to provide concrete implementations. The abstract class serves as a common place for your shared implementation (which you can override as needed).
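A rough sketch of such an interface; the exact method set here is an assumption and should be adapted to what your application actually needs:

public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T GetById(long id);
    void Add(T entity);
    void Update(T entity);
    void Delete(T entity);
}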
So in your case, what you are doing with LINQ and your DbContext is basically correct, but an implementation like your GetAll method should be part of the generic/shared implementation in your abstract RepositoryBase class:
public abstract class RepositoryBase<T> where T : class
{
    private YourEntities dataContext;
    private readonly IDbSet<T> dbset;

    protected RepositoryBase(IDatabaseFactory databaseFactory)
    {
        DatabaseFactory = databaseFactory;
        dbset = DataContext.Set<T>();
    }

    protected IDatabaseFactory DatabaseFactory
    {
        get;
        private set;
    }

    protected YourEntities DataContext
    {
        get { return dataContext ?? (dataContext = DatabaseFactory.Get()); }
    }

    public virtual T GetById(long id)
    {
        return dbset.Find(id);
    }

    public virtual T GetById(string id)
    {
        return dbset.Find(id);
    }

    public virtual IEnumerable<T> GetAll()
    {
        return dbset.ToList();
    }
}
I would suggest you think about whether or not to return an error result object like CResult, and whether your CEmployee and CResult should exist in this parent-child relationship. Also think about what you want to do with your CResult class. It seems to me your CEmployee handles too many tasks in this case.

How to implement 3 tier approach using Entity Framework?

I know this question has been asked many times, but I couldn't get a clear picture of what I need.
I have a WPF application which I need to redo using a 3-tier approach.
I have used Entity Framework for creating the data model, and I use LINQ queries for querying the data.
objCustomer = dbContext.Customers.Where(c => c.CustCode == oLoadDtl.CustNo).First();
I use LINQ queries wherever I need them in the program to get records from the database.
So, I would just like to know what belongs in the DAL, business logic and UI layers.
Also, how do I separate them?
Can the entity data model be considered a DAL?
Is it a better idea to put the entity model in a separate class library?
It's better to create a special class called DataAccess that encapsulates the Entity Framework calls. For the business logic you can create model classes; they will use the DAL if needed. Other details depend on what your application should do.
For example:
//DAL
public class DataAccess
{
    public static Customer GetCustomerByNumber(int number)
    {
        var objCustomer = dbContext.Customers.Where(c => c.CustCode == number).First();
        return objCustomer;
    }
}
}
//Models
public class Customer
{
    public string Name { get; set; }
    public int Number { get; set; }

    public Customer GetCustomerByNumber(int number)
    {
        return DataAccess.GetCustomerByNumber(number);
    }

    public void ChangeProfile(ProfileInfo profile)
    {
        //...
    }
}
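A possible usage from the WPF/UI layer, under the assumption that the UI only ever talks to the model classes and never to dbContext directly:

// UI layer (e.g. inside a WPF view model): the UI depends only on the model,
// never on the DbContext, so the DAL can change without touching this code.
Customer customer = new Customer().GetCustomerByNumber(42);
string display = customer.Name;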
The main things are the extensibility, reusability and efficiency of your solutions.
