How to create a BLL (Business Logic Layer) when using Entity Framework? - C#

I'm learning Entity Framework and was a bit confused about the difference between the BLL and the DAL; according to my research, Entity Framework is the DAL.
Below are two ways to create the BLL and DAL:
First approach: write a separate DAO for each object (including add, remove, findAll, ...). The BLL then calls the DAO to get or modify the necessary data.
I have StudentManagement, which inherits from DbContext and is placed in the DAL.
public partial class StudentManagement : DbContext
{
public StudentManagement()
: base("name=StudentManagement")
{
}
public virtual DbSet<LOP> LOP { get; set; }
public virtual DbSet<STUDENT> STUDENT { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<LOP>()
.HasMany(e => e.STUDENT)
.WithOptional(e => e.LOP)
.HasForeignKey(e => e.CLASS_ID);
}
}
StudentDAO: queries and modifies data as needed.
class StudentDAO
{
public StudentManagement context { get; set; }
public StudentDAO()
{
context = new StudentManagement();
}
public IQueryable<STUDENT> findAll()
{
return context.STUDENT;
}
public void add(STUDENT student)
{
context.STUDENT.Add(student);
context.SaveChanges();
}
public void remove(int id)
{
STUDENT student = context.STUDENT.Find(id);
if (student != null)
{
context.STUDENT.Remove(student);
context.SaveChanges();
}
}
}
StudentBLL: calls the StudentDAO to handle the business logic and then returns data to the view.
class StudentBLL
{
public List<STUDENT> getStudentInClass(int ClassID)
{
return new StudentDAO().findAll().Where(student => student.CLASS_ID == ClassID).ToList();
}
public List<STUDENT> findAll()
{
return new StudentDAO().findAll().ToList();
}
public STUDENT find(int id)
{
return new StudentDAO().findAll().FirstOrDefault(student => student.ID == id);
}
public void add(STUDENT student)
{
new StudentDAO().add(student);
}
public void remove(int id)
{
new StudentDAO().remove(id);
}
}
Second approach: I don't create a DAO for each object; instead, the BLL uses the context directly and queries it with LINQ.
class LopSH_BLL
{
public StudentManagement context { get; set; }
public LopSH_BLL()
{
context = new StudentManagement();
}
public List<LOP> findAll()
{
return context.LOP.ToList();
}
public LOP find(int id)
{
return context.LOP.Find(id);
}
public void add(LOP lop)
{
context.LOP.Add(lop);
context.SaveChanges();
}
public void remove(int id)
{
LOP lop = context.LOP.Find(id);
context.LOP.Remove(lop);
context.SaveChanges();
}
}
Which is better and does it follow the rules of 3 layers?

Although there is nothing wrong with the way you are accessing data, there are better approaches available as best practices. However, you should always consider the type of project before planning any specific software architecture.
Ask yourself a few questions:
Is this project going to grow over time, or is it just a simple project applying some simple logic?
How many developers are going to work on the project?
I believe these two simple questions can make it much easier to decide on the architecture of your project.
Now regarding your question:
Which is better, and does it follow the rules of 3 layers?
Nothing wrong with the way you are accessing data, but:
Program to the interfaces.
Using interfaces is a crucial factor in making your code easily testable and removing unnecessary couplings between your classes.
Check this post: What does it mean to "program to an interface"?
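Applied to your first approach, that could look like the following sketch (the interface name here is an assumption; its members simply mirror your StudentDAO, with idiomatic PascalCase names):
    // A hypothetical abstraction over your StudentDAO.
    public interface IStudentRepository
    {
        IQueryable<STUDENT> FindAll();
        void Add(STUDENT student);
        void Remove(int id);
    }

    // The existing DAO then becomes just one implementation of it:
    // class StudentDAO : IStudentRepository { ... }
Nothing in the DAO's body has to change beyond the method casing; the point is that callers now depend on the interface rather than on the concrete class.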
Dependency Inversion & Dependency Injection
Understanding the meaning of these two and knowing the differences can help you so much down the road.
Check this post: Difference between dependency injection and dependency inversion
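As a minimal sketch of what dependency injection means for your StudentBLL (assuming the IStudentRepository interface sketched above), the BLL depends on the abstraction and the concrete DAO is handed in from outside:
    class StudentBLL
    {
        private readonly IStudentRepository repository;

        // The dependency is passed in instead of being created with "new" inside the class.
        public StudentBLL(IStudentRepository repository)
        {
            this.repository = repository;
        }

        public List<STUDENT> GetStudentsInClass(int classId)
        {
            return repository.FindAll().Where(s => s.CLASS_ID == classId).ToList();
        }
    }
At the composition root (for example in Main or your form's startup code) you would write new StudentBLL(new StudentDAO()), or let a DI container do that wiring for you; the BLL itself never references the concrete DAO.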
Repository Pattern
The Repository Design Pattern in C# (or any OOP-supported language) mediates between the domain and the data mapping layers using a collection-like interface for accessing the domain objects. In other words, we can say that a repository design pattern acts as a "middle layer" between the rest of the application and the data access logic.
I believe this is an excellent example to check and learn: The Repository Pattern Example in C#
Last but not least, there are some well-proven architecture patterns in general that are good to know if you are serious about this journey:
Domain Driven Design (DDD)
Microservices Architecture Pattern

Related

How to deal with many-to-many relationships in the general repository, unit of work pattern?

For my thesis I decided to create something in MVC and to challenge myself I added a DAL and BL layer. I created "services" in BL that allow me to work with my Entities.
I am really wondering if I understood the pattern correctly, because I am having issues dealing with many-to-many relationships - and especially how to use them properly.
This is my current implementation (simplified, to get the general idea):
PersonService: this class is my abstraction for using my entities (I have several entity factories as well). Whenever I need to add a Person to my DB, I use my service. I just noticed that mPersonRepository should probably be named differently.
public class PersonService : IService<Person> {
private UnitOfWork mPersonRepository;
public PersonService() => mPersonRepository = new UnitOfWork();
public void Add(Person aPerson) {
mPersonRepository.PersonRepository.Insert(aPerson);
mPersonRepository.Safe();
}
public void Delete(Guid aGuid) {
mPersonRepository.PersonRepository.Delete(aGuid);
mPersonRepository.Safe();
}
public Person Find(Expression<Func<Person, bool>> aFilter = null) {
var lPerson = mPersonRepository.PersonRepository.Get(aFilter).FirstOrDefault();
return lPerson;
}
public void Update(Person aPerson) {
mPersonRepository.PersonRepository.Update(aPerson);
mPersonRepository.Safe();
}
}
public interface IService<TEntity> where TEntity : class {
void Add(TEntity aEntity);
void Update(TEntity aEntity);
void Delete(Guid aGuid);
TEntity Find(Expression<Func<TEntity, bool>> aExpression);
TEntity FindByOid(Guid aGuid);
IEnumerable<TEntity> FindAll(Expression<Func<TEntity, bool>> aExpression);
int Count();
}
UnitOfWork: pretty much the same as the way Microsoft implemented it.
public class UnitOfWork : IUnitOfWork {
private readonly DbContextOptions<PMDContext> mDbContextOptions = new DbContextOptions<PMDContext>();
public PMDContext mContext;
public UnitOfWork() => mContext = new PMDContext(mDbContextOptions);
public void Safe() => mContext.SaveChanges();
private bool mDisposed = false;
protected virtual void Dispose(bool aDisposed) {
if (!mDisposed)
if (aDisposed) mContext.Dispose();
mDisposed = true;
}
public void Dispose() {
Dispose(true);
GC.SuppressFinalize(this);
}
private GenericRepository<Person> mPersonRepository;
private GenericRepository<Project> mProjectRepository;
public GenericRepository<Person> PersonRepository => mPersonRepository ?? (mPersonRepository = new GenericRepository<Person>(mContext)); // cache the instance after first access
public GenericRepository<Project> ProjectRepository => mProjectRepository ?? (mProjectRepository = new GenericRepository<Project>(mContext));
}
GenericRepository: just as before, it is very similar.
public class GenericRepository<TEntity> : IGenericRepository<TEntity> where TEntity : class {
internal PMDContext mContext;
internal DbSet<TEntity> mDbSet;
public GenericRepository(PMDContext aContext) {
mContext = aContext;
mDbSet = aContext.Set<TEntity>();
}
public virtual IEnumerable<TEntity> Get(
Expression<Func<TEntity, bool>> aFilter = null,
Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> aOrderBy = null,
string aProperties = "") {
var lQuery = (IQueryable<TEntity>)mDbSet;
if (aFilter != null) lQuery = lQuery.Where(aFilter);
foreach (var lProperty in aProperties.Split
(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries)) {
lQuery = lQuery.Include(lProperty);
}
return aOrderBy != null ? aOrderBy(lQuery).ToList() : lQuery.ToList();
}
public virtual TEntity GetById(object aId) => mDbSet.Find(aId);
public virtual void Insert(TEntity aEntity) => mDbSet.Add(aEntity);
public virtual void Delete(object aId) {
var lEntity = mDbSet.Find(aId);
Delete(lEntity);
}
public virtual void Delete(TEntity aEntity) {
if (mContext.Entry(aEntity).State == EntityState.Detached) mDbSet.Attach(aEntity);
mDbSet.Remove(aEntity);
}
public virtual void Update(TEntity aEntity) {
mDbSet.Attach(aEntity);
mContext.Entry(aEntity).State = EntityState.Modified;
}
}
PMDContext: an implementation of DbContext.
public class PMDContext : DbContext {
public PMDContext(DbContextOptions<PMDContext> aOptions) : base(aOptions) { }
public DbSet<Person> Persons { get; set; }
public DbSet<Project> Projects { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder aOptions) {
if (!aOptions.IsConfigured) aOptions.UseSqlServer("<snip>");
}
}
Entities
public class Person {
public Person(<args>) {}
public Guid Oid { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
}
public class Project {
public Project(<args>) {}
public Guid Oid { get; set; }
public string Name { get; set; }
}
I use it all like the following:
var lPerson = Factory.CreatePerson(<args>);
var lPersonService = new PersonService();
lPersonService.Add(lPerson);
<..do some work..>
lPersonService.Update(lPerson)
Now I do not need to worry about calling Safe, or whatever. It works just fine, but now I ran into an issue: how do I deal with many-to-many relations in my Entities. For example my Person can have multiple Projects and my Project can have multiple Persons.
I updated my PMDContext to get a link table:
protected override void OnModelCreating(ModelBuilder aModelBuilder) {
aModelBuilder.Entity<PersonProject>().HasKey(x => new { x.PersonOid, x.ProjectOid });
}
Link table
public class PersonProject {
public Guid PersonOid { get; set; }
public Guid ProjectOid { get; set; }
}
And updated both my entities with the following property.
public ICollection<PersonProject> PersonProjects { get; } = new List<PersonProject>();
Now I am confused about how to use my link table. I thought I could follow a similar approach, like this:
var lPerson = PersonService.FindByOid(aPersonOid);
var lProject = ProjectService.FindByOid(aProjectOid);
var lPersonProject = new PersonProject() { PersonOid = aPersonOid,
ProjectOid = aProjectOid };
lPerson.PersonProjects.Add(lPersonProject);
lProject.PersonProjects.Add(lPersonProject);
PersonService.Update(lPerson);
ProjectService.Update(lProject);
But this ends up not doing anything to the PersonProject table in my DB. My guess is that I lack the code to actually write to that table, since I do not have a PersonProject service that handles this. I am confused.
How would I advance using my current approach, or what do I have to change? I am only a beginner with Entity Framework and am already happy I got this far.
Any input is appreciated, especially on how the services implement the pattern. I must be doing something wrong.
Thanks!
You're not really using a service layer pattern. Your "service" is just a repository, which then uses your unit of work to access another repository. In short, you've got multiple layers of meaningless abstraction here, which will absolutely kill you in an app you have to maintain for any amount of time.
In general, you should not use the unit of work / repository patterns with ORMs like Entity Framework. The reason why is simple: these ORMs already implement these patterns. In the case of EF, the DbContext is your unit of work and each DbSet is a repository.
If you're going to use something like Entity Framework, my best advice is to just use it. Reference it in your app, inject your context into your controllers and such, and actually use the EF API to do the things you need to do. Isn't this creating a tight coupling? Yes. Yes it is. However, the point so many miss (even myself for a long time) is that coupling is already there. Even if you abstract everything, you're still dealing with a particular domain that you can never fully abstract. If you change your database, that will bubble up to your application at some point, even if it's DTOs you're changing instead of entities. And, of course you'll still have to change those entities as well. The layers just add more maintenance and entropy to your application, which is actually the antithesis of the "clean code" architecture abstractions are supposed to be about.
But what if you need to switch out EF with something else? Won't you have to rewrite a bunch of code? Well, yeah. However, that pretty much never happens. Making a choice on something like an ORM has enough momentum that you're not likely to be able to reverse that course no matter what you do, regardless of how many layers of abstractions you use. It's simply going to require too much time and effort and will never be a business priority. And, importantly, a bunch of code will have to be rewritten regardless. It's only a matter of what layer it's going to be done in.
Now, all that said, there is value in certain patterns like CQRS (Command Query Responsibility Segregation), which is an abstraction (and not a meaningless one). However, it only makes sense in large projects or domains where you need a clear-cut separation between things like reads and writes, and/or event sourcing (which goes naturally with CQRS). It's overkill for the majority of applications.
What I would recommend beyond anything else if you want to abstract EF from your main application is to actually create microservices. These microservices are basically just little APIs (though they don't have to be) that deal with just a single unit of functionality for your application. Your application, then, makes requests or otherwise access the microservices to get the data it needs. The microservice would just use EF directly, while the application would have no dependency on EF at all (the holy grail developers think they want).
With a microservice architecture, you can actually check all the boxes you think this faux abstraction is getting you. Want to switch out EF with something else? No problem. Since each microservice only works with a limited subset of the domain, there's not a ton of code typically. Even using EF directly, it would be relatively trivial to rewrite those portions. Better yet, each microservice is completely independent, so you can switch EF out on one, but continue using EF on another. Everything keeps working and the application couldn't care less. This gives you the ability to handle migrations over time and at a pace that is manageable.
Long and short, don't over-engineer. That's the bane of even developers who've been in the business for a while, but especially of new developers, fresh out of the gates with visions of code patterns dancing in their heads. Remember that the patterns are there as recommended ways to solve specific problems. First, you need to ensure that you actually have the problem, then you need to focus on whether that pattern is actually the best way to solve that problem in your specific circumstances. This is a skill - one you'll learn over time. The best way to get there is to start small. Build the bare minimum functionality in the most straightforward way possible. Then, refactor. Test, profile, throw it to the wolves and drag back the blood-soaked remains. Then, refactor. Eventually, you might end up implementing all kinds of different layers and patterns, but you also might not. It's those "might not" times that matter, because in those cases, you've got simple, effortlessly maintainable code that just works and that didn't waste a ton of development time.

Mixed architectural approach between layered or shared entities

We are developing an application with the following layers:
UI
Business Layer (BL)
Data Layer (DL): Contains generic CRUD queries and custom queries
Physical Data Layer (PDL): e.g. Entity Framework
We are looking for a way to share the entities of the physical data layer to the DL and the BL.
These points are important in deciding the best architecure:
Reusability: the database fields should be migrated to the other layers as easily as possible
Fast implementation: adding a field to the database should not result in mapping entities between all layers
Extensibility: a BL entity can be extended with properties specific to the BL (likewise for a DL entity)
I've come across architectures that share entities for all layers (+ fast implementation, - extensibility) or architectures with an entity (DTO) per layer (+ extensibility, - fast implementation/reusability).
This blogpost describes these two architectures.
Is there an approach that combines these architectures and takes our requirements into account?
For now we've come up with the following classes and interfaces:
Interfaces:
// Contains properties shared for all entities
public interface I_DL
{
bool Active { get; set; }
}
// Contains properties specific for a customer
public interface I_DL_Customer : I_DL
{
string Name { get; set; }
}
PDL
// Generated by EF or mocking object
public partial class Customer
{
public bool Active { get; set; }
public string Name { get; set; }
}
DL
// Extend the generated entity with custom behaviour
public partial class Customer : I_DL_Customer
{
}
BL
// Store a reference to the DL entity and define the properties shared for all entities
public abstract class BL_Entity<T> where T : I_DL
{
private T _entity;
public BL_Entity(T entity)
{
_entity = entity;
}
protected T entity
{
get { return _entity; }
set { _entity = value; }
}
public bool Active
{
get
{
return entity.Active;
}
set
{
entity.Active = value;
}
}
}
// The BL customer maps directly to the DL customer
public class BL_Customer : BL_Entity<I_DL_Customer>
{
public BL_Customer (I_DL_Customer o) : base(o) { }
public string Name
{
get
{
return entity.Name;
}
set
{
entity.Name = value;
}
}
}
The DTO-per-layer design is the most flexible and modular. Hence, it is also the most reusable: don't confuse the convenience of reusing the same entities with the reusability of the different modules, which is the main concern at the architectural level. However, as you pointed out, this approach is neither the fastest to develop nor the most agile if your entities change often.
If you want to share entities among the layers, I wouldn't go through the hassle of specifying a hierarchy through the different layers; I'd either let all layers use the EF entities directly, or define those entities in a different assembly shared by all the layers -- the physical data layer included, which may persist those shared entities directly through EF code-first, or translate between them and the EF ones.
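A minimal sketch of the shared-assembly option (the names here are illustrative): the entities live in an assembly with no EF dependency, and only the physical data layer references Entity Framework and persists them code-first.
    // Shared.Entities assembly - referenced by BL, DL and PDL alike.
    public class Customer
    {
        public int Id { get; set; }
        public bool Active { get; set; }
        public string Name { get; set; }
    }

    // PDL assembly - the only project that references Entity Framework.
    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }
Adding a database field is then a single property on the shared entity, while BL-specific behaviour can still live in wrappers like your BL_Customer.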

What's the best practice for the repository pattern with LINQ in ASP.NET MVC?

I'm a junior web developer trying to learn more every day.
What is the best practice for implementing the MVC repository pattern with LINQ?
The one I use:
I create extra classes with the exact names of my .tt files, with CRUD methods like getAll(), getOne(), Update(), Delete(), either filling my own class from Entity Framework and returning it, or using the Entity Framework entities directly.
This is an example of what I'm actually doing.
This is the getAll method of one of my classes, for example for an employee:
public class CEmployee : CResult
{
public string name{get;set;}
public string lastname{get;set;}
public string address{get;set;}
//Extracode
public string Fullname // this code is not in the .tt or database
{
get
{
return name + lastname;
}
}
public List<CEmployee> getAll()
{
    try
    {
        var result = (from n in db.Employee // db is the Entity Framework context
                      select new CEmployee // this is my own class; I fill it using the entity
                      {
                          name = n.name,
                          lastname = n.lastname,
                          address = n.address
                      }).ToList();
        if (result.Count > 0)
        {
            return result;
        }
        else
        {
            // CEmployee inherits from CResult, so the error info travels in the same list type
            return new List<CEmployee>
            {
                new CEmployee
                {
                    has_Error = true,
                    msg_Error = "Element not found!!!!"
                }
            };
        }
    }
    catch (Exception ex)
    {
        return new List<CEmployee>
        {
            new CEmployee { has_Error = true, msg_Error = ex.Message }
        };
    }
}
}
That is the way I do everything: I return a filled object of my own type. On the web I see that people normally return the entity type, but I do this so I can manipulate my response, and if I want to return extra information I just have to nest a list, for example. What's the best way: return my own type, or return the entity type?
PS: I also use this class as my ViewModel, and I do this for all my classes.
One of the projects I am currently on uses Dependency Injection to set up the DAL (Data Access Layer). We also use an n-tier approach; this separates the concern of the repository from the business logic and the front end.
So we start with four or so base projects in the application that link to each other. One of them handles the data access; this would be your repository (read up on Ninject for more info on this). Our next tier is our Domain, which houses the entities built by the T4 templates (.tt files) and also our DTOs (data transfer objects, which are flat objects for moving data between layers). Then we have a service layer; the service layer or business logic layer holds service objects that handle CRUD operations and any data manipulation needed. Lastly we have our front end, which is the Model-View-ViewModel layer and handles the controllers and page building.
The MVVM layer calls the services, the service objects call the data access layer, and Entity Framework works with Ninject to access the data; it's stored in the DTOs as it is moved across layers.
Now this may seem overly complex depending on the application you are writing; this is built for a highly scalable and expandable web application.
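To make the Ninject part a bit more concrete, the wiring usually boils down to a handful of bindings in a module at the composition root (a sketch; the interface and class names here are illustrative, not from a real project):
    public class DataModule : NinjectModule
    {
        public override void Load()
        {
            Bind<IUnitOfWork>().To<UnitOfWork>().InRequestScope();
            Bind(typeof(IGenericRepository<>)).To(typeof(GenericRepository<>));
            Bind<IEmployeeService>().To<EmployeeService>();
        }
    }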
I would highly recommend going with a generic repository implementation. The layers between your repository and the controller vary depending on a number of factors (which is kind of a broader/bigger topic) but the generic repository gets you going on a good implementation that is lightweight. Check out this article for a good description of the approach:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
Ideally, in an MVC application you will want to put the repositories in a different layer, such as a separate project; let's call it the Data layer.
You will have an IRepository interface that contains generic method signatures like GetAll, GetById, Create or UpdateById. You will also have an abstract RepositoryBase class that contains shared implementations such as Add, Update, Delete, GetById, etc.
The reason you use an IRepository interface is that it defines the contracts for which your inherited repository class, such as EmployeeRepository in your case, needs to provide concrete implementations. The abstract class serves as a common place for your shared implementation (which you can override as you need to).
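A minimal sketch of such an interface could look like this (the member names follow the ones mentioned above; adjust them to what you actually need):
    public interface IRepository<T> where T : class
    {
        IEnumerable<T> GetAll();
        T GetById(object id);
        void Create(T entity);
        void Update(T entity);
        void Delete(object id);
    }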
So in your case, what you are doing with LINQ and your DbContext is basically correct, but implementations like your GetAll method should be part of the generic/shared implementation in your abstract RepositoryBase class:
public abstract class RepositoryBase<T> where T : class
{
private YourEntities dataContext;
private readonly IDbSet<T> dbset;
protected RepositoryBase(IDatabaseFactory databaseFactory)
{
DatabaseFactory = databaseFactory;
dbset = DataContext.Set<T>();
}
protected IDatabaseFactory DatabaseFactory
{
get;
private set;
}
protected YourEntities DataContext
{
get { return dataContext ?? (dataContext = DatabaseFactory.Get()); }
}
public virtual T GetById(long id)
{
return dbset.Find(id);
}
public virtual T GetById(string id)
{
return dbset.Find(id);
}
public virtual IEnumerable<T> GetAll()
{
return dbset.ToList();
}
}
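A concrete repository then only adds what is specific to that entity. As a sketch (the EmployeeRepository name and the Employee set on the context are assumed from your example):
    public class EmployeeRepository : RepositoryBase<Employee>
    {
        public EmployeeRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }

        // Entity-specific queries live here instead of in the controller or view model.
        public IEnumerable<Employee> GetByLastName(string lastName)
        {
            return DataContext.Employee.Where(e => e.lastname == lastName).ToList();
        }
    }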
I would suggest you think about whether or not to return an error result object like CResult, and about whether your CEmployee and CResult should exist in this parent-child relationship. Also think about what you want to do with your CResult class. It seems to me your CEmployee handles too many tasks in this case.

MySQL infrastructure best practices

So, I'm writing a fairly complex C# application right now that uses MySQL as the database system. I'm wondering, what would be the best way to use MySQL throughout the entire program? Creating static functions so you can use it everywhere? Referring to a SQLHandler class, which does all the communication?
Thanks!
I would abstract the data access functions behind an interface that acts as a data access layer. Then have an implementation working with MySQL, and always pass the interface to the other layers of your application that need to query the database. This way you get loose coupling between those layers and make unit testing them in isolation possible.
Let's have an example. Suppose that you have a Product model:
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
}
Now you could define a repository which will abstract the operations you need to perform with this model:
public interface IProductRepository
{
Product Get(int id);
}
and then you could have an implementation of this interface working with MySQL:
public class MySQLProductRepository: IProductRepository
{
private readonly string _connectionString;
public MySQLProductRepository(string connectionString)
{
_connectionString = connectionString;
}
public Product Get(int id)
{
using (var conn = new MySqlConnection(_connectionString))
using (var cmd = conn.CreateCommand())
{
conn.Open();
cmd.CommandText = "SELECT name FROM products WHERE id = @id";
cmd.Parameters.AddWithValue("@id", id);
using (var reader = cmd.ExecuteReader())
{
if (!reader.Read())
{
return null;
}
return new Product
{
Id = id,
Name = reader.GetString(reader.GetOrdinal("name"))
};
}
}
}
}
Now every layer of your application that needs to work with products can simply take the IProductRepository as a constructor parameter and call the various CRUD methods.
It is only inside the composition root of your application that you would wire up the dependencies and specify that you will be working with a MySQLProductRepository. Ideally the instance of this repository should be a singleton.
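For example, a hypothetical OrderService (the name and logic are made up for illustration) would only ever see the interface, and the concrete MySQL implementation is chosen once at the composition root:
    public class OrderService
    {
        private readonly IProductRepository _products;

        public OrderService(IProductRepository products)
        {
            _products = products;
        }

        public string DescribeProduct(int id)
        {
            // The service neither knows nor cares that the data comes from MySQL.
            var product = _products.Get(id);
            return product == null ? "unknown product" : product.Name;
        }
    }

    // Composition root (e.g. in Main): the only place that references MySQLProductRepository.
    // var service = new OrderService(new MySQLProductRepository(connectionString));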
You might also check out popular ORMs such as NHibernate, Entity Framework, Dapper, ... to simplify the implementation of the various CRUD operations inside your repositories and to perform the mapping to the domain models. But even if you decide to use an ORM framework, it is still good practice to separate the concerns into different layers in your application. This is very important when designing complex applications if you want them to remain maintainable.
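For instance, a Dapper-based implementation of the same IProductRepository only changes the plumbing, not the callers (a rough sketch; the table and column names are taken from the example above):
    public class DapperProductRepository : IProductRepository
    {
        private readonly string _connectionString;

        public DapperProductRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public Product Get(int id)
        {
            using (var conn = new MySqlConnection(_connectionString))
            {
                // Dapper opens the connection if needed and maps the selected columns onto Product.
                return conn.QueryFirstOrDefault<Product>(
                    "SELECT id AS Id, name AS Name FROM products WHERE id = @id",
                    new { id });
            }
        }
    }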
A good practice is to make a singleton MySQLHandler if you want to keep one connection alive all the time.
using System;
public class MySQLHandler
{
private static MySQLHandler instance;
private MySQLHandler() {}
public static MySQLHandler Instance
{
get
{
if (instance == null)
{
instance = new MySQLHandler();
}
return instance;
}
}
}
If you don't care about the number of connections, you can also make a static MySQLHelper class.
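A sketch of that static-helper variant, assuming the MySql.Data client and a connection string that would normally come from configuration; note that ADO.NET pools connections, so opening a short-lived connection per call is cheap:
    public static class MySQLHelper
    {
        // In a real application this would come from app.config / appsettings.
        private static readonly string ConnectionString = "server=...;database=...;uid=...;pwd=...";

        public static object ExecuteScalar(string sql)
        {
            using (var conn = new MySqlConnection(ConnectionString))
            using (var cmd = new MySqlCommand(sql, conn))
            {
                conn.Open();
                return cmd.ExecuteScalar();
            }
        }
    }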

How to implement 3 tier approach using Entity Framework?

I know this question has been asked many times, but I couldn't get a clear picture of what I need.
I have a WPF application which I need to redo using a 3-tier approach.
I have used Entity Framework to create the data model, and I use LINQ queries to query the data.
objCustomer = dbContext.Customers.Where(c => c.CustCode == oLoadDtl.CustNo).First();
I use LINQ queries wherever I need them in the program to get records from the database.
So, I would just like to know which pieces come under the DAL, business logic, and UI layers.
Also, how do I separate them?
Can the entity data model be considered the DAL?
Is it a better idea to put the entity model in a separate class library?
It's better to create a special class called DataAccess to encapsulate the Entity Framework calls. For the business logic you can create model classes; they will use the DAL when needed. Other details depend on what your application should do.
For example:
//DAL
public class DataAccess
{
    public static Customer GetCustomerByNumber(int number)
    {
        // dbContext is assumed to be the Entity Framework context available to this class.
        var objCustomer = dbContext.Customers.Where(c => c.CustCode == number).First();
        return objCustomer;
    }
}
//Models
public class Customer
{
public string Name { get; set; }
public int Number { get; set; }
public Customer GetCustomerByNumber(int number)
{
return DataAccess.GetCustomerByNumber(number);
}
public void ChangeProfile(ProfileInfo profile)
{
//...
}
}
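From the WPF side (the UI layer) you would then only talk to the model class, for example in a view model (a hypothetical call site; LoadCustomer and CustomerName are made up):
    // UI layer: no DbContext and no DataAccess reference here.
    public void LoadCustomer(int customerNumber)
    {
        Customer customer = new Customer().GetCustomerByNumber(customerNumber);
        CustomerName = customer?.Name; // a property the view binds to
    }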
The main things are the extensibility, reusability, and efficiency of your solutions.
