I created an application with this architecture:
MyProject.Model: Contains POCOs. Example:
public class Car
{
public int Id { get; set; }
public string Name { get; set; }
}
MyProject.Repositories: Contains repositories and UnitOfWork
public class UnitOfWork
{
// ...
public Repository<Car> Cars { get; set; }
// ...
}
public class Repository<T>
{
// ...
// Add / Update / Delete ...
// ...
}
MyProject.Web: ASP.Net MVC application
Now I want to find a way to interact with data through methods. For example, in MyProject.Model.Car I want to add a method named `GetSimilarCars()` that retrieves data not reachable through navigation properties. The problem is that a repository cannot interact with other repositories and thus cannot perform such operations on the database.
I don't really know how to do this in a simple manner and what is the best place in my architecture to put this.
Another example could be UserGroup.Deactivate(): this method would deactivate each user and send them a notification by email. Of course I could put this method in a controller of the Web application, but I don't think that is the place for code that could be called from many places in the application.
Note: I am using Entity Framework.
Any suggestion on how to implement such operations?
This type of stuff goes into your DAL (essentially your unit of work and repository, in this limited scenario). However, this is a pattern that bit me when I first started working with MVC. Entity Framework already implements these patterns: your DbContext is your unit of work and your DbSet is your repository. All creating another layer on top of this does is add complexity. I personally ended up going with a service pattern instead, which simply sits on top of EF and allows me to do things like someService.GetAllFoo(). That way, the use of Entity Framework is abstracted away (I can switch out the DAL at any time; I can even remove the database completely and go with an API instead, without having to change any code in the rest of my application), but I'm also not just reinventing the wheel.
In a service pattern you only provide endpoints for the things you actually need, so it's a perfect fit for something like GetSimilarCars: you just add another method to the service that encapsulates the logic for it.
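For example, a minimal sketch of such a service for the Car model from the question (the ICarService/CarService names, MyDbContext, and the "similarity" rule are placeholders I'm assuming, not part of your project):

using System.Collections.Generic;
using System.Linq;

public interface ICarService
{
    IEnumerable<Car> GetAllCars();
    IEnumerable<Car> GetSimilarCars(int carId);
}

// Sits directly on top of EF; the rest of the application only sees the interface.
public class CarService : ICarService
{
    private readonly MyDbContext _context; // the DbContext already acts as a unit of work

    public CarService(MyDbContext context)
    {
        _context = context;
    }

    public IEnumerable<Car> GetAllCars()
    {
        return _context.Cars.ToList();
    }

    // Encapsulates the "similar cars" query so callers never touch EF directly.
    public IEnumerable<Car> GetSimilarCars(int carId)
    {
        var car = _context.Cars.Find(carId);
        if (car == null)
            return Enumerable.Empty<Car>();

        var name = car.Name;
        return _context.Cars
            .Where(c => c.Id != carId && c.Name == name) // placeholder "similarity" rule
            .ToList();
    }
}

Swapping EF out later then only means re-implementing ICarService; the controllers stay unchanged.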
I would assume that your Business Layer (BL) communicates with your Data Access Layer (DAL). That way, from your BL you can reach out to different repositories in the DAL, which solves your problem of repositories not being able to share data (that data is shared through the BL).
See here: N-tier architecture w/ EF and MVC
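As a rough sketch of what reaching several repositories from the BL could look like for the UserGroup.Deactivate() example in the question (the UserGroupService and IEmailSender names, the UserGroups/Users repository properties, and the GetById/Update/SaveChanges members are all assumptions about how your UnitOfWork might be shaped):

// Business-layer service: it can coordinate several repositories through the
// unit of work, which a single repository cannot do on its own.
public class UserGroupService
{
    private readonly UnitOfWork _unitOfWork;
    private readonly IEmailSender _emailSender; // assumed abstraction for notifications

    public UserGroupService(UnitOfWork unitOfWork, IEmailSender emailSender)
    {
        _unitOfWork = unitOfWork;
        _emailSender = emailSender;
    }

    public void Deactivate(int userGroupId)
    {
        var group = _unitOfWork.UserGroups.GetById(userGroupId);

        foreach (var user in group.Users)
        {
            user.IsActive = false;
            _unitOfWork.Users.Update(user);
            _emailSender.Send(user.Email, "Your account has been deactivated.");
        }

        _unitOfWork.SaveChanges();
    }
}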
I did not quite get your question, but this is how you assign the values and add the object to a collection:
public class Repository<T>
{
    private readonly List<Car> _lstCar = new List<Car>();

    // Add
    public void Add()
    {
        var cobj = new Car();
        cobj.Id = 1234;          // Id is an int, so no quotes
        cobj.Name = "Mercedes";
        _lstCar.Add(cobj);
    }
}
Related
I'm struggling a little bit with the following problem. Let's say I want to manage dependencies in my project so that my domain won't depend on any external stuff - in this case, on the repository. In this example, let's say my domain is in project.Domain.
To do so, I declared an interface for my repository in project.Domain, which I implement in project.Infrastructure. Reading the DDD Red Book by Vernon, I noticed that he suggests the method for creating a new ID for an aggregate should be placed in the repository, like:
public class EntityRepository
{
public EntityId NextIdentity()
{
// create new instance of EntityId
}
}
Inside this EntityId object would be a GUID, but I want to explicitly model my ID, so that's why I'm not using plain GUIDs. I also know I could skip this problem completely and generate the GUID on the database side, but for the sake of this argument let's assume that I really want to generate it inside my application.
Right now I'm just wondering: are there any specific reasons for this method to be placed inside the repository, as Vernon suggests, or could I implement identity creation inside the entity itself, for example like this:
public class Entity
{
public static EntityId NextIdentity()
{
// create new instance of EntityId
}
}
You could place it in the repository as Vernon says, but another idea would be to pass a factory into the constructor of your base entity that creates the identifier. This way you have identifiers before you even interact with repositories, and you can define an implementation per ID-generation strategy. A repository could involve a connection to something like a web service or a database, which can be costly or unavailable.
There are good strategies (especially with GUID) that allow good handling of identifiers. This also makes your application fully independent of the outside world.
This also enables you to have different identifier types throughout your application if the need arises.
For example:
public abstract class Entity<TKey>
{
public TKey Id { get; }
protected Entity() { }
protected Entity(IIdentityFactory<TKey> identityFactory)
{
if (identityFactory == null)
throw new ArgumentNullException(nameof(identityFactory));
Id = identityFactory.CreateIdentity();
}
}
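The IIdentityFactory<TKey> abstraction is not part of the snippet above; a minimal sketch of what it might look like, with a GUID-based implementation (both names and the EntityId(Guid) constructor are assumptions on my part):

using System;

public interface IIdentityFactory<TKey>
{
    TKey CreateIdentity();
}

// One possible strategy: wrap a freshly generated GUID in the strongly typed ID.
public class GuidIdentityFactory : IIdentityFactory<EntityId>
{
    public EntityId CreateIdentity()
    {
        return new EntityId(Guid.NewGuid());
    }
}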
Yes, you could bypass the call to the repository and just generate the identity on the Entity. The problem, however, is that you've broken the core idea behind the repository: keeping everything related to entity storage isolated from the entity itself.
I would say keep the NextIdentity method in the repository, and still use it, even if you are only generating the GUIDs client-side. The benefit is that if at some point in the future you want to change how the identities are being seeded, you can support that through the repository. Whereas if you go with the approach directly on the Entity, you would have to refactor later to support such a change.
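As a sketch, NextIdentity can stay trivially simple today and still leave room to change the seeding strategy later (assuming EntityId wraps a Guid, as in the question):

using System;

public class EntityRepository
{
    // Today: plain client-side GUID generation.
    // Tomorrow: this could ask the database, a sequence or an ID service instead,
    // without changing any calling code.
    public EntityId NextIdentity()
    {
        return new EntityId(Guid.NewGuid());
    }
}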
Also, consider scenarios where you would use different repositories, such as testing. For example, you might want to generate two identities with the same ID and perform clash testing ("does this fail properly?"). Having the repository handle the generation gives you the opportunity to get creative in such ways, without writing completely artificial test cases that don't mimic the calls that actually occur in production.
TL;DR: Keep it in the repository, even if your identifier can be generated client-side.
Good day,
I want to write clean code in line with actual enterprise-level applications. I know how to implement repositories and services, but I'm not sure if I'm doing this right.
Suppose I have a custom class (mostly for a JSON result):
public class CustomClass{
public string Name { get; set; }
public string Age { get; set; }
}
I have a model class (connected to my dbcontext)
public class Employees{
public string Name { get; set; }
public string Age { get; set; }
}
Here's my 1st sample: a repository that supplies the custom class directly.
// in repository file only
public IEnumerable<CustomClass> SupplyCustomClass(){
return context.Employees.Select(obj=>new CustomClass{
Name = obj.Name,
Age = obj.Age
}).ToList();
}
Here's my 2nd sample: the repository supplies the data first, followed by the service.
// in repository file (EmployeeRepo)
public IEnumerable<Employees> SupplyEmployeeFirst(){
return context.Employees.ToList();
}
// in service file
// dependency injection from EmployeeRepo
public IEnumerable<CustomClass> SupplyCustomClassSecond(){
var customClass = new List<CustomClass>();
var employees = employeeRepo.SupplyEmployeeFirst();
foreach(var employee in employees){
customClass.Add(new CustomClass{
Name = employee.Name,
Age = employee.Age
});
}
return customClass;
}
Both implementations produce the same result, but I want to learn the best way to do this in order to follow enterprise-level development practices.
Of your two approaches, I think the 2nd one is better. The repository should work with models, not with view models. As another improvement, you can implement the Unit of Work pattern together with the Repository pattern.
Of the two approaches you have described, I would recommend going with a service layer that calls into the repositories (as per your second example). This allows for a clear separation of concerns, in that only the repository deals with managing data.
Something important to consider, however, is that as written the second example is wildly inefficient. Your first example pulls just two columns of data out of the database and constructs your class, whereas your second example materialises the entire collection to a list and then works with it in memory, which places strain on the database as well as on your host process.
This is where a clear separation between models and view models can become messy. You can either have the repository return the view models to maintain performance, or you can return IQueryable from the repository layer and work with that. I personally would recommend having the repository return view models when it is a limited data set you need back, but leave all other logic to the service layer. This keeps the repository as the layer that deals directly with your data. In a basic example such as yours it is not obvious what work you would then do within the service, but there will be plenty of things you want to do once the data has been pulled out, and that becomes clear as you flesh out your project. Having this separation gives you a clean line between what fetches your data and the service that then works with that data to present it, pull in data from external APIs, or do other work that has nothing directly to do with your database.
The short version of the Entity Framework performance consideration is that you want it to pull out as little data as possible and leave the query as an IQueryable for as long as possible. Once you call .ToList() or iterate over the collection, Entity Framework will materialise the result set.
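To make the difference concrete, here is a hedged sketch using the CustomClass and Employees types from the question; the method names are illustrative, and both methods are assumed to live in the repository like your samples:

// Projection before materialisation: EF translates this into roughly
// SELECT Name, Age FROM Employees, so only those two columns leave the database.
public IQueryable<CustomClass> QueryCustomClass()
{
    return context.Employees.Select(e => new CustomClass
    {
        Name = e.Name,
        Age = e.Age
    });
    // No ToList() here: the caller can still compose Where/OrderBy/Skip/Take,
    // and the query only runs when the result is enumerated.
}

// Materialising first loads every column of every employee into memory,
// and the mapping then happens in the application instead of the database.
public IEnumerable<CustomClass> MaterialiseThenMap()
{
    var employees = context.Employees.ToList(); // full entities loaded here
    return employees.Select(e => new CustomClass { Name = e.Name, Age = e.Age });
}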
I'm building a 3-tier application using ASP.NET MVC, and I want to do everything as recommended.
So I've created MvcSample.Bll for business logic, MvcSample.Data for data and MvcSample.Web for the website.
In Data I have my edmx file (I'm using the database-first approach) and my repositories. In Bll I have services which will be called in Web.
So my question is that:
Should I write separate models in Bll, or use the ones that are generated from the edmx file?
It heavily depends on the type of problem that your application is trying to solve.
From my experience, it is very rare that the business logic returns model objects directly from Entity Framework. Also, accepting these as arguments may not be the best idea.
Entity Framework model represents your relational database. Because of that, its definition contains many things that your business logic should not expose, for example navigation properties, computed properties etc. When accepting your model object as an argument, you may notice that many properties are not used by the particular business logic method. In many cases it confuses the developer and is the source of bugs.
All in all, if your application is a quick prototype, a proof of concept or simple CRUD software, then it might be sufficient to use the EF model classes. However, from a practical point of view, consider bespoke business-logic model/DTO classes.
From my point of view you need a separate model for your Bll.
That would encapsulate your Bll completely.
I think there is no right or wrong answer for your question.
In my experience, I used both.
Let's look at the example below:
I have a User table:
public class User
{
public int Id{get;set;}
public string First_Name{get;set;}
public string Last_Name{get;set;}
public int Age{get;set;}
public string Password{get;set;} //let's use this for demonstration
}
I have a method called DisplayAll() in Bll. This method should list all users in my database by full name (FirstName + LastName) and age.
I should not return the User class because it would expose the Password; instead, I create a new class, UserDto:
public class UserDto
{
public string FullName{get;set;}
public int Age{get;set;}
}
So here is my DisplayAll():
public List<UserDto> DisplayAll()
{
    List<UserDto> result = ctx.User //my DbContext
        .Select(x => new UserDto()
        {
            FullName = x.First_Name + " " + x.Last_Name,
            Age = x.Age
        })
        .ToList();
    return result;
}
So as you can see, my method DisplayAll() uses both User and UserDto.
My approach would be:
MvcSample.Data
-- Model Classes
-- EDMX attach to model
MvcSample.Bll
-- Model Inheriting MvcSample.Data.Model
-- Business Logic Class - Using MvcSample.Bll.Model
MvcSample.Web
-- Controller using MvcSample.Bll.Model
It depends on your view of software design and how you want to take advantage of it. By separating out a BLL model, you have the freedom to add story-specific validation and calculation. Using only the DAL model is sometimes tough, as changes to it take effect in the database.
You can use a 3-tier architecture in ASP.NET this way:
MvcSample.BLL - business logic layer
MvcSample.DAL - Data access layer
MvcSample.Domain - Domain layer
MvcSample.web - website
All your repository classes are included in the .BLL layer. That means your logic is stored there.
Usually .DAL is used for storing the .edmx classes. .Domain is used to recreate database objects that are useful on the server side. That means if you are passing a JSON object from client to server, then that object should be created on the server side, so those classes can be implemented in .Domain.
I have a general difference of opinion on an architectural design, and even though Stack Overflow should not be used to ask for opinions, I would like to ask for the pros and cons of the two approaches I describe below:
Details:
- C# application
- SQL Server database
- Using Entity Framework
- We need to decide what objects we are going to use to store our information and pass around throughout the application
Scenario 1:
We pass the Entity Framework entities all around through our application: the entity is used to store all the information, we pass it to the BL, and eventually our WebApi takes this entity and returns the value. No DTOs or POCOs.
If the database schema changes, we update the entity and modify every class where it is used.
Scenario 2:
We create an intermediate class - call it a DTO or a POCO - to hold all the information required by the application. There is an intermediate step of taking the information stored in the entity and populating the POCO, but we keep all EF code within the data access layer rather than spread across all layers.
What are the pros and cons of each one?
I would use intermediate classes, i.e. POCO instead of EF entities.
The only advantage I see to using EF entities directly is that there's less code to write...
Advantages to use POCO instead:
You only expose the data your application actually needs
Basically, say you have some GetUsers business method. If you just want the list of users to populate a grid (i.e. you only need their ID, name and first name, for example), you could just write something like this:
public IEnumerable<SimpleUser> GetUsers()
{
return this.DbContext
.Users
.Select(z => new SimpleUser
{
ID = z.ID,
Name = z.Name,
FirstName = z.FirstName
})
.ToList();
}
It is crystal clear what your method actually returns.
Now imagine it returned a full User entity instead, with all the navigation properties and internal stuff you do not want to expose (such as the Password field)...
It really simplifies the job of the person who consumes your services
It's even more obvious for Create-like business methods. You certainly don't want to use a User entity as a parameter; it would be awfully complicated for the consumers of your service to know which properties are actually required...
Imagine the following entity:
public class User
{
public long ID { get; set; }
public string Name { get; set; }
public string FirstName { get; set; }
public string Password { get; set; }
public bool IsDeleted { get; set; }
public bool IsActive { get; set; }
public virtual ICollection<Profile> Profiles { get; set; }
public virtual ICollection<UserEvent> Events { get; set; }
}
Which properties are required for you to consume the void Create(User entity); method?
ID: dunno, maybe it's generated, maybe it's not
Name/FirstName: well, those should be set
Password: is that a plain-text password, a hashed version? What is it?
IsDeleted/IsActive: should I activate the user myself? Is it done by the business method?
Profiles: hmm... how do I assign a profile to a user?
Events: what the hell is that??
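A hedged sketch of what a dedicated input model could look like instead (the CreateUserRequest name and its use in the Create signature are assumptions, not part of the original answer):

// Only what the caller actually has to provide; everything else
// (ID generation, activation flags, password hashing) is the service's job.
public class CreateUserRequest
{
    public string Name { get; set; }
    public string FirstName { get; set; }
    public string Password { get; set; } // plain text here, hashed inside the service
}

// The business method now documents itself:
// void Create(CreateUserRequest request);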
It forces you to not use lazy loading
Yes, I hate this feature for multiple reasons. Some of them are:
extremely hard to use efficiently. I've seen too many times code that produces thousands of SQL requests because the developers didn't know how to use lazy loading properly
extremely hard to manage exceptions. By allowing SQL requests to be executed at any time (i.e. when you lazy load), you delegate the role of managing database exceptions to the upper layer, i.e. the business layer or even the application. A bad habit.
Using POCO forces you to eager-load your entities, much better IMO.
About AutoMapper
AutoMapper is a tool that allows you to automagically convert entities to POCOs and vice versa. I do not like it either. See https://stackoverflow.com/a/32459232/870604
I have a counter-question: Why not both?
Consider any arbitrary MVC application. In the model and controller layer you'll generally want to use the EF objects. If you defined them using Code First, you've essentially defined how they are used in your application first and then designed your persistence layer to accurately save the changes you need in your application.
Now consider serving these objects to the View layer. The views may or may not reflect your objects, or an aggregation of your working objects. This often leads to POCOs/DTOs that capture whatever is needed in the view. Another scenario is when you want to publish objects in a web service. Many frameworks provide easy serialization of POCO classes, in which case you typically either need to 1) annotate your EF classes or 2) create DTOs.
Also be aware that any lazy loading you may have on your EF classes is lost when you use POCOs or when you close your context.
I have a question about the best design for my domain service. The use case is to create some entities based on user-selected conditions.
The workflow of the app that will use this service:
User selects some conditions (like date, and other data)
He gets a list of "propositions" of the entities. He can select all of them, or only some.
The entities are created
What would be the best design for the domain service? I have two in mind:
Solution 1
interface IMyDomainService
{
IEnumerable<EntityProposition> GetEntitiesPropositions(Conditions conditions);
void CreateEntities(Conditions conditions);
}
In this case I would probably have some private method on the service that is used by both of these. The EntityProposition class is basically a 1:1 representation of what will be displayed in the view. There is some data in that class that is not part of the entity itself.
Solution 2
interface IMyDomainService
{
IEnumerable<EntitiyData> GetDataForEntities(Conditions conditions);
void CreateEntities(IEnumerable<EntityData> entities);
}
What would be the private method in Solution 1 is now exposed in the interface. The EntityData class holds all the data for the entity that is relevant for creating the entity itself and displaying the data in the view.
To add some context:
This service is currently used directly by an ASP.NET MVC controller. It seems to me that if I go with Solution #2, I will have to create some additional application service to wrap the logic of getting the data and creating the entities.
EDIT 1
I will ask the question from a different perspective: should my controller look like this:
public ActionResult GetPropositions(Conditions conditions)
{
    var entityData = service.GetEntityData(conditions);
    return Json(entityData.ToViewModel());
}
public void CreateEntities(Conditions conditions)
{
    var entityData = service.GetEntityData(conditions);
    service.CreateEntities(entityData);
}
or:
public ActionResult GetPropositions(Conditions conditions)
{
    var propositions = service.GetPropositions(conditions);
    return Json(propositions.ToViewModel());
}
public void CreateEntities(Conditions conditions)
{
    service.CreateEntities(conditions);
}
Of course this is a simplified example, just to show my point.
Edit 2
Just as a follow-up: I initially went with Solution #2, but later my requirements changed and I had to go back to Solution #1. The reason was that after generating the propositions, the user could select only a few of them, but within the same scope (conditions).
What is the most common case? Creating many entities or creating one?
Also, I would not use "Entities" in the method names; it's pretty obvious that a service works with entities.
To me, the names sound like you are just wrapping a repository with the service. That's a big no-no. Services in DDD are an extension of domain entities, encapsulating logic where you have to work with two or more entities in the same business case.
If you just need to fetch an entity, modify it and save it, you should use the repository directly (no need to abstract away an abstraction).
interface IMyDomainRepository
{
IEnumerable<EntitiyData> GetData(Conditions conditions);
void Create(IEnumerable<EntityData> entities);
}