We can take either of these two approaches to send data to the Data Access Layer (or any other source):
Approach 1:
Repository way:
public class User
{
public string FirstName { get; set; }
public string LastName { get; set; }
public int Age { get; set; }
}
public class UserRepository
{
public static void Add(User user)
{
// Add user logic
}
public static void Delete(User user)
{
// Delete user logic
}
public static User Get(int userid)
{
// Get user logic
}
}
Usage:
var user = new User
{
FirstName = "FirstName",
LastName = "LastName",
Age = 20
};
UserRepository.Add(user);
Above, you can see that I have kept the User class simple. It does not have any behavior; the behavior is added in a separate class, UserRepository.
Approach 2:
Keeping Add/Delete/Get, etc., in User.cs itself:
public class User
{
public string FirstName { get; set; }
public string LastName { get; set; }
public int Age { get; set; }
public void Add()
{
// Add user logic
}
public void Delete()
{
// Delete user logic
}
public User Get()
{
// Get user logic
}
}
Usage:
var user = new User
{
FirstName = "FirstName",
LastName = "LastName",
Age = 20
};
user.Add();
Above, I have kept the behavior in User.cs itself. Both approaches serve the purpose of adding, deleting, etc. the user. Can you let me know:
Which approach is better?
When should we decide which of the two approaches above to opt for?
If I have to add other methods too, like FindAllUsers, FindUserByUserId, and DeleteUserByUserId, which approach should I go for?
The first approach is far better, as you are separating concerns, i.e. the domain entity (User) and persistence to the database.
One of the most important things that is often talked about in Domain Driven Design is "persistence ignorance". See What are the benefits of Persistence Ignorance?
By using the repository pattern, the way you save/get your entity is kept out of the entity code, i.e. your domain, keeping it cleaner and in essence achieving persistence ignorance (or going a long way towards it, anyway).
So answers:
The repository approach is much better.
Always go for option 1.
Add these methods to the repository class.
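To make that separation concrete, here is a minimal sketch of a repository hidden behind an interface; the IUserRepository and AppDbContext names and the EF-style calls are my own illustration, not something from the question:

public interface IUserRepository
{
    void Add(User user);
    void Delete(User user);
    User Get(int userId);
}

// One possible EF-backed implementation; the User entity never sees it.
public class EfUserRepository : IUserRepository
{
    private readonly AppDbContext _context;

    public EfUserRepository(AppDbContext context)
    {
        _context = context;
    }

    public void Add(User user)
    {
        _context.Users.Add(user);
        _context.SaveChanges();
    }

    public void Delete(User user)
    {
        _context.Users.Remove(user);
        _context.SaveChanges();
    }

    public User Get(int userId)
    {
        return _context.Users.Find(userId);
    }
}

Callers depend only on IUserRepository, so swapping EF for another store (or a test double) never touches User or its consumers.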
It strictly depends on the work you need to get done and on the size of your app. If you want something developed fast and less scalable, you don't need to use an n-tier architecture (I mean separating your data interactions into your data access layer).
However, if you are looking for something that needs to be highly scalable, editable, and modifiable, and you know that it will get future features, then clearly you must separate your concerns to make your work easier over a longer period of time.
In the end, as I said, each approach serves a purpose; just know what your purpose is before getting to work.
Cheers.
If you follow DDD guidelines, then entities should NOT have a dependency on infrastructure:
Nikola’s 1st law of IoC
Store in IoC container only services. Do not store any entities.
Reason: By including infrastructure logic in your entity, you are moving away from the Single Responsibility Principle. See Mark Seemann's explanation.
The problem with your second approach is that the User entity has a dependency on infrastructure.
Imagine you're using Entity Framework as your ORM, all wrapped up in a separate DAL class library.
You have the following POCO in another "common" class library, which is nicely shared between your DAL, SL, and presentation layer:
public class User
{
public int Id { get; set;}
public string FirstName { get; set; }
public string LastName { get; set; }
public string Email { get; set; }
public int Age { get; set; }
public Gender Gender { get; set; }
}
You then implement the following in the SL:
public interface IUserService
{
User GetById(int u);
List<User> GetByLastName(string s);
}
public class UserService : IUserService
{
private MyContext _myContext;
public UserService(MyContext myContext = null)
{
_myContext = myContext ?? new MyContext();
}
public User GetById(int userId)
{
return _myContext.Users.FirstOrDefault(u=>u.Id == userId);
}
public List<User> GetByLastName(string lastName)
{
return _myContext.Users.Where(u=>u.LastName == lastName).ToList();
}
}
And all works hunky-dory.
But then you need to add a new method to the service to handle a different query (for example, users who fall within an age-range).
And then another.
And another...
Before long, you start to think:
Wouldn't it be nice if you could provide any query you can think of through to the service layer, and it would get the relevant data and return it for you, without having to explicitly define each possible query as a distinct method, much in the same way the SL is already doing with the DAL?
So the question is:
Is this possible to achieve SAFELY within an SL, whilst still maintaining loose coupling?
I've read that using IQueryable can lead to disaster with things like:
q.Where(x=>{Console.WriteLine("fail");return true;});
But I'm also fairly new to using ORMs and service layers, so I'm naturally looking for the "best practices" and "known pitfalls", whilst also wanting to keep my code clean.
It sounds like you're leaking business layer logic into your presentation layer.
As mentioned in the comments, determining the exact dataset that should be displayed by the presentation layer is actually business logic.
You may have fields on your UI that give the user the ability to select a particular age range to display, which is perfectly valid, but the presentation layer should be responsible for just pushing those values up to the service layer and providing the data it returns to the actual UI in a friendly/expected fashion.
The actual searching/filtering of data based on those values should be done within the service layer / business layer.
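One hedged way to keep that boundary, reusing the question's UserService (the GetByAgeRange method and its parameter names are my own illustration), is to let the presentation layer pass plain values while the service owns the filtering:

public interface IUserService
{
    User GetById(int userId);
    List<User> GetByLastName(string lastName);

    // The UI supplies the values; the filtering logic stays here.
    List<User> GetByAgeRange(int minAge, int maxAge);
}

public class UserService : IUserService
{
    private readonly MyContext _myContext;

    public UserService(MyContext myContext = null)
    {
        _myContext = myContext ?? new MyContext();
    }

    public User GetById(int userId)
    {
        return _myContext.Users.FirstOrDefault(u => u.Id == userId);
    }

    public List<User> GetByLastName(string lastName)
    {
        return _myContext.Users.Where(u => u.LastName == lastName).ToList();
    }

    public List<User> GetByAgeRange(int minAge, int maxAge)
    {
        return _myContext.Users
            .Where(u => u.Age >= minAge && u.Age <= maxAge)
            .ToList();
    }
}

The controller never sees an IQueryable; it only hands over minAge/maxAge and gets back a materialized list.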
I am doing a code review of the following code written by my colleague, and in my experience, projection and formatting activities should not be done in the DB layer; they should be done in the business layer instead. But he is not convinced. This DB layer is for an MVC application.
Please suggest whether the following code is OK, or whether we should always avoid projection/formatting in the DB layer.
public class CustomerDetails
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Address1 { get; set; }
public string Address2 { get; set; }
public string City { get; set; }
public string State { get; set; }
public string Country { get; set; }
public DateTime PurchaseDate { get; set; }
public Decimal OrderAmount { get; set; }
//more properties...
}
public class CustomerRepository
{
public IEnumerable<CustomerDetails> GetCustomer(int customerID)
{
//get data using entity framework DBContext
IEnumerable<CustomerDetails> customer = get data from database using sqlquery;
//projection and formatting
return customer.Select
(p =>
new CustomerDetails
{
FirstName=p.FirstName.ToProper(), //extension method
LastName = p.LastName.ToProper(),
Address1 = p.Address1,
Address2=p.Address2,
City = p.City.ToProper(),
State=p.State.ToUpper(),
PurchaseDate=p.PurchaseDate.Tommddyyy(),
OrderAmount=p.OrderAmount.ToUSA()
}
);
}
}
UPDATE:
CustomerDetails is a DB entity which maps the fields returned by a stored procedure. We are using the repository to put an abstraction layer over the ORM (EF), so that if we need to change our ORM framework it does not impact the dependent layers.
What I thought was that the repository should return raw data, and different representations of the same data should be produced in the service layer.
I would say that where to put this kind of code is a design decision that depends on the actual case.
Furthermore, when using OR/M frameworks I doubt that there is a database layer at all, since any OR/M tries to be a data layer itself, and they tend to provide a Repository Pattern-like interface to query and write persistent objects. That is, since a repository is a collection-like interface which translates the underlying data format into domain objects (entities), it seems that whatever you call your layers, they will still be the domain/business layer, or maybe the service layer (who knows).
Also, your code seems to be a repository (!), and this means that we're not talking about a database layer. Instead, we're talking about the boundary between the domain and the data mapper (the OR/M, i.e. Entity Framework).
TL;DR
If the repositories as a whole require some kind of projection, meaning that these projections are the domain objects expected by the domain layer, I find that such projections are implemented in the right place.
After all, as I said in the long text above, I don't see a data/database layer in an OR/M-based solution at all...
Think of the Single Responsibility Principle and apply it to your layers.
The responsibility of a "business layer", if the name means anything, is to express the business: its rules, processes, logic, and concepts. If you have stuff in your business layer that your non-techy business people can't look at and understand, then it probably shouldn't be there.
The responsibility of the code you've shown us appears to be to map from your persistence technology's format into classes that your business layer can understand.
That responsibility is exactly what a Repository is supposed to encapsulate. So the code is in the right place.
If you put it in the business layer, you're moving towards a Ball of Mud architecture.
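To make that concrete with a rough sketch (CustomerRow and QueryDatabase are hypothetical stand-ins, not from the question), the repository's job is exactly this translation from the persistence shape into the class the business layer understands:

// What the database/stored procedure hands back (hypothetical shape).
public class CustomerRow
{
    public string FIRST_NAME { get; set; }
    public string LAST_NAME { get; set; }
    public DateTime PURCHASE_DT { get; set; }
}

public class CustomerRepository
{
    public IEnumerable<CustomerDetails> GetCustomer(int customerId)
    {
        IEnumerable<CustomerRow> rows = QueryDatabase(customerId); // placeholder data call

        // Mapping persistence format -> business-friendly object: a repository concern.
        return rows.Select(r => new CustomerDetails
        {
            FirstName = r.FIRST_NAME,
            LastName = r.LAST_NAME,
            PurchaseDate = r.PURCHASE_DT
        });
    }

    private IEnumerable<CustomerRow> QueryDatabase(int customerId)
    {
        // Placeholder for the actual EF / stored-procedure call.
        return Enumerable.Empty<CustomerRow>();
    }
}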
Assume the following simple POCOs, Country and State:
public partial class Country
{
public Country()
{
States = new List<State>();
}
public virtual int CountryId { get; set; }
public virtual string Name { get; set; }
public virtual string CountryCode { get; set; }
public virtual ICollection<State> States { get; set; }
}
public partial class State
{
public virtual int StateId { get; set; }
public virtual int CountryId { get; set; }
public virtual Country Country { get; set; }
public virtual string Name { get; set; }
public virtual string Abbreviation { get; set; }
}
Now assume I have a simple repository that looks something like this:
public partial class CountryRepository : IDisposable
{
protected internal IDatabase _db;
public CountryRepository()
{
_db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]);
}
public IEnumerable<Country> GetAll()
{
return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null);
}
public Country Get(object id)
{
return _db.SingleById(id);
}
public void Add(Country c)
{
_db.Insert(c);
}
/* ...And So On... */
}
Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this:
public partial class CountryListVM
{
[Key]
public int CountryId { get; set; }
public string Name { get; set; }
public string CountryCode { get; set; }
public int StateCount { get; set; }
}
When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc) directly in my UI layer, I can easily do something like this:
IList<CountryListVM> list = db.Countries
.OrderBy(c => c.Name)
.Select(c => new CountryListVM() {
CountryId = c.CountryId,
Name = c.Name,
CountryCode = c.CountryCode,
StateCount = c.States.Count
})
.ToList();
But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to:
Return the Country with a populated States collection, then map over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed.
-or-
Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain pocos. The downside to this approach is that I'm leaking UI specific stuff (MVC data validation annotations) into my previously clean POCOs.
-or-
Are there other options?
How are you handling these types of things?
For what we do, it really depends on the project's architecture. Usually, though, we have services above the repositories that handle this logic for you. The service decides which repositories to use to load what data. The flow is UI -> Controller -> Service -> Repositories -> DB. The UI and/or controllers have no knowledge of the repositories or their implementation.
Also, StateCount = c.States.Count would no doubt populate the States list anyway, wouldn't it? I'm pretty sure it will in NHibernate (with lazy loading causing an extra select to be sent to the DB).
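A minimal sketch of that flow, layered over the question's CountryRepository (the ICountryService name and its method are my own illustration):

public interface ICountryService
{
    IList<CountryListVM> GetCountryList();
}

public class CountryService : ICountryService
{
    private readonly CountryRepository _countries;

    public CountryService(CountryRepository countries)
    {
        _countries = countries;
    }

    public IList<CountryListVM> GetCountryList()
    {
        // The service decides what to load and shapes it for the UI;
        // the controller only ever sees ICountryService and CountryListVM.
        return _countries.GetAll()
            .Select(c => new CountryListVM
            {
                CountryId = c.CountryId,
                Name = c.Name,
                CountryCode = c.CountryCode,
                // Depending on the ORM, States may have to be loaded (or lazy-loaded)
                // for this to be correct -- the trade-off noted above.
                StateCount = c.States.Count
            })
            .ToList();
    }
}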
One option is to separate your queries from your existing infrastructure entirely. This would be an implementation of a CQRS design. In this case, you can issue a query directly to the database using a "Thin Read Layer", bypassing your domain objects. Your existing objects and ORM are actually getting in your way, and CQRS allows you to have a "command side" that is separate and possibly a totally different set of tech from your "query side", where each is designed to do its own job without being compromised by the requirements of the other.
Yes, I'm quite literally suggesting leaving your existing architecture alone, and perhaps using something like Dapper to do this (beware of untested code sample) directly from your MVC controllers, for example:
int count = connection.Query<int>(
"select count(*) from state where countryid = @countryid",
new { countryid = 123 }).Single();
Honestly, your question has given me food for thought for a couple of days. More and more, I tend to think that denormalization is the correct solution.
Look, the main point of domain-driven design is to let the problem domain drive your modeling decisions. Consider the country entity in the real world. A country has a list of states. However, when you want to know how many states a certain country has, you do not go through the list of states in an encyclopedia and count them. You are more likely to look at the country's statistics and check the number of states there.
IMHO, the same behavior should be reflected in your domain model. You can have this information in a property of the country, or introduce a kind of CountryStatistics object. Whatever approach you choose, it must be a part of the country aggregate. Being inside the consistency boundary of the aggregate will ensure that it holds consistent data when a state is added or removed.
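A hedged sketch of one way to keep that statistic inside the aggregate boundary (this reworks the question's Country class; the AddState/RemoveState methods and the protected setter are my own illustration):

public partial class Country
{
    private readonly IList<State> _states = new List<State>();

    public virtual int StateCount { get; protected set; }

    public virtual IEnumerable<State> States
    {
        get { return _states; }
    }

    // The collection changes only through the aggregate, so the count
    // stays consistent within the aggregate's consistency boundary.
    public virtual void AddState(State state)
    {
        _states.Add(state);
        state.Country = this;
        StateCount = _states.Count;
    }

    public virtual void RemoveState(State state)
    {
        if (_states.Remove(state))
        {
            StateCount = _states.Count;
        }
    }
}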
Some other approaches:
If the states collection is not expected to change a lot, you can allow a bit of denormalization: add a "NumberOfStates" property to the Country object. It will optimise the query, but you'll have to make sure the extra field holds the correct information.
If you are using NHibernate, you can use ExtraLazyLoading: it will issue another select, but won't populate the whole collection when Count is called. More info here: nHibernate Collection Count
Let's say we have a project that will handle lots of data (employees, schedules, calendars... and lots more). The client is a Windows app, the server side is WCF, and the database is MS SQL Server. I am confused about which approach to use. I read a few articles and blogs and they all seem nice, but I am confused. I don't want to start with one approach and then regret not choosing the other. The project will have around 30-35 different object types and a lot of data retrieval to populate different reports, etc.
Approach 1:
// classes that hold data
public class Employee
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
.....
}
public class Assignment
{
public int Id { get; set; }
public int UserId { get; set; }
public DateTime Date { get; set; }
.....
}
.....
Then helper classes to deal with saving and retrieving data:
public static class Employees
{
public static int Save(Employee emp)
{
// save the employee
}
public static Employee Get(int empId)
{
// return the ugly employee
}
.....
}
public static class Assignments
{
public static int Save(Assignment ass)
{
// save the Assignment
}
.....
}
FYI, the object classes like Employee and Assignment will be in a separate assembly to be shared between server and client.
Anyway, with this approach I will have cleaner objects. The helper classes will do most of the work.
Approach 2:
// classes that hold data and methods for saving and retrieving
public class Employee
{
// constructors
public Employee()
{
// Construct a new Employee
}
public Employee(int Id)
{
// Construct a new Employee and fills the data from db
}
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
.....
public int Save()
{
// save the Employee
}
.....
}
public class Assignment
{
// constructors
public Assignment()
{
// Construct a new assignment
}
public Assignment(int Id)
{
// Construct a new assignment and fills the data from db
}
public int Id { get; set; }
public int UserId { get; set; }
public DateTime Date { get; set; }
.....
public int Save()
{
// save the Assignment
}
.....
}
.....
With this approach, each object will do its own job. Data can still be transferred from WCF to the client easily, since WCF will only share the properties.
Approach 3:
Using Entity Framework. Aside from the fact that I have never worked with it (which is nice, since I get to learn something new), I will need to create POCOs to transfer data between the client and WCF.
Now, which is better? Are there more options?
Having persistence logic in the object itself is always a bad idea.
I would use the first approach. It looks like the Repository pattern. This way, you can easily debug the persisting of data, because it will be clearly separated from the rest of the object's logic.
I would suggest using Entity Framework + the Repository pattern. This way your entities are simple objects without any logic in them. All retrieve/save logic stays in the repository. I have some successful experience with using a generic repository, which is typed with the entity; something similar is described here (the generic repository part of the article). This way you write the repository code only once and you can reuse it for all the entities you have. E.g.:
interface IRepository<T>
{
T GetById(long id);
bool Save(T entity);
}
public class Repository<T> : IRepository<T> {...}
var repository = new Repository<MyEntity>();
var myEntity = repository.GetById(1);
var repository2 = new Repository<MySecondEntity>();
var mySecondEntity = repository2.GetById(1);
Whenever an entity needs some very specific operation, you can add this operation to a concrete typed implementation of IRepository:
interface IMySuperRepository : IRepository<MySuperEntity>
{
MySuperEntity GetBySuperProperty(SuperProperty superProperty);
}
public class MySuperEntityRepository : Repository<MySuperEntity>, IMySuperRepository
{...}
To create repositories, it is nice to use a factory, which is based, for example, on a configuration file. This way you can switch the implementation of the repositories, e.g. for unit testing, when you do not want to use a repository that really accesses the DB:
public class RepositoryFactory
{
public IRepository<T> GetRepository<T>()
{
if (config == production)
return new Repository<T>(); // this is implemented with DB access through EF
if (config == test)
return new TestRepository<T>(); // this is implemented with test values without DB access
throw new InvalidOperationException("Unknown configuration");
}
}
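Usage might then look something like this (assuming the factory reads its configuration as sketched above):

var factory = new RepositoryFactory();

// Production or test implementation is chosen by configuration,
// so the calling code never changes.
IRepository<MyEntity> repository = factory.GetRepository<MyEntity>();
MyEntity entity = repository.GetById(1);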
You can add validation rules for saving and further elaborate on this. EF also lets you add some simple methods or properties to the generated entities, because all of them are partial classes.
Furthermore, using POCOs or STEs (see later), it is possible to have the EDMX DB model in one project and all your entities in another project, and thus distribute this DLL to the client (which will contain ONLY your entities). As I understood it, that's what you also want to achieve.
Also, seriously consider using self-tracking entities (and not just POCOs). In my opinion they are great for use with WCF. When you get an entity from the DB and pass it to the client, the client changes it and gives it back; you need to know whether the entity was changed and what was changed. STEs handle all this work for you and are designed specifically for WCF. You get the entity from the client, call ApplyChanges and Save, and that's it.
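As a rough sketch of that WCF update path (the CompanyEntities context, its Employees set, and the service method are illustrative; ApplyChanges assumes the helper generated by the self-tracking entities T4 template):

public Employee UpdateEmployee(Employee employee)
{
    using (var context = new CompanyEntities())
    {
        // ApplyChanges replays the change-tracking state the client sent back
        // (added/modified/deleted) onto the context before saving.
        context.Employees.ApplyChanges(employee);
        context.SaveChanges();
    }
    return employee;
}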
What about implementing Save as an extension method? That way your classes stay clean as in the first option, but the methods can still be called on the object as in the second option.
public static class EmployeeExtensions // named differently from the Employee entity to avoid a type-name clash
{
public static int Save(this Employee emp)
{
// save the employee
}
public static Employee Get(int empId)
{
// return the ugly employee
}
}
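Usage then reads just like the instance-method version:

var emp = new Employee { Id = 1, FirstName = "FirstName", LastName = "LastName" };

// Resolves to EmployeeExtensions.Save(emp), but Employee itself stays free of persistence code.
emp.Save();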
You're overthinking this. Trying to apply technologies and patterns "just because", or because "that's what they say", only makes the solution complicated. The key is designing the application so that it can easily adapt to change. That's probably an ambiguous answer, but it's what it all comes down to: how much effort is required to maintain and/or modify the code base.
Currently it sounds like the patterns and practices are the end result, instead of a means to an end.
Entity Framework is a great tool, but it is not necessarily the best choice in all cases. It will depend on how much you expect to read/write from the database vs. how much you expect to read/write to your WCF services. Perhaps someone better-versed in the wonderful world of EF will be able to help you. To speak from experience, I have used LINQ to SQL in an application that features WCF service endpoints and had no issues (and in fact came to LOVE LINQ to SQL as an ORM).
Having said that, if you decide that EF is not the right choice for you, it looks like you're on the right track with Approach 1. However, I would recommend implementing a Data Access Layer. That is, implement a Persist method in your business classes that then calls methods in a separate DAO (Data Access Object, a class used to persist data from a business object) to actually save it to your database.
A sample implementation might look like this:
public class Employee
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public void Persist()
{
EmployeeDAO.Persist(this);
}
}
public class Assignment
{
public int Id { get; set; }
public int UserId { get; set; }
public DateTime Date { get; set; }
public void Persist()
{
AssignmentDAO.Persist(this);
}
}
public static class EmployeeDAO
{
public static int Persist(Employee emp)
{
// insert if new, else update
}
public static Employee Get(int empId)
{
// return the ugly employee
}
.....
}
public static class AssignmentDAO
{
public static int Persist(Assignment ass)
{
// insert if new, else update
}
.....
}
The benefit to a pattern like this is that you get to keep your business classes clean, your data-access logic separate, while still giving the objects the easy syntax of being able to write new Employee(...).Persist(); in your code.
If you really want to go nuts, you could even consider implementing interfaces on your Persistable classes, and have your DAO(s) accept those IPersistable instances as arguments.
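A hedged sketch of that last idea (the IPersistable interface and PersistenceHelper are my own names):

public interface IPersistable
{
    void Persist();
}

// Employee and Assignment above would simply add ": IPersistable" to their
// declarations; their existing Persist() methods already satisfy the contract.

// A DAO or coordinating class can then accept anything persistable.
public static class PersistenceHelper
{
    public static void PersistAll(IEnumerable<IPersistable> items)
    {
        foreach (var item in items)
        {
            item.Persist();
        }
    }
}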
I am looking into migrating a large project to Entity Framework 4.0, but am not sure if it can handle my inheritance scenario.
I have several projects with classes that inherit from a class in the “main” project. Here is a sample base class:
namespace People
{
public class Person
{
public int age { get; set; }
public String firstName { get; set; }
public String lastName { get; set; }
}
}
and one of the sub-classes:
namespace People.LawEnforcement
{
public class PoliceOfficer : People.Person
{
public string badgeNumber { get; set; }
public string precinct { get; set; }
}
}
And this is what the project layout looks like:
Project layout (image): People - People.Education - People.LawEnforcement (http://img51.imageshack.us/img51/7293/efdemo.png)
Some customers of the application will use classes from People.LawEnforcement, other users will use People.Education, and some will use both. I only ship the assemblies that the users will need. So the assemblies act somewhat like plug-ins, in that they add features to the core app.
Is there anyway in Entity Framework to support this scenario?
Based on this SO question, I'm thinking something like this might work:
ctx.MetadataWorkspace.LoadFromAssembly(typeof(PoliceOfficer).Assembly);
But even if that works, it seems as if my EDMX file will need to know about all the projects. I would rather have each project contain the metadata for the classes in that project, but I'm not sure if that is possible.
If this isn't possible with Entity Framework, is there another solution (NHibernate, Active Record, etc.) that would work?
Yes this is possible, using the LoadFromAssembly(..) method you've already found.
... but it will only work if you have a specialized model (i.e. EDMX) for each distinct type of client application.
This is because EF (and most other ORMs) requires a class for each entity in the model, so if some clients don't know about some classes, you will need a model without the corresponding entities -- i.e. a customized EDMX for each scenario.
To make it easier to create a new model for each client application, if I were you I'd use Code-Only, following the best practices laid out on my blog, to make it easy to grab only the fragments of the model you actually need.
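To make that a bit more concrete, here is a rough sketch of one model per client scenario, using the later Code First API (DbContext/DbSet) rather than the original Code-Only CTP mentioned above; the context names are mine, and a key property on Person (which EF requires) is assumed:

// Core model shipped to every client: only the shared entity.
public class CorePeopleContext : DbContext
{
    public DbSet<Person> People { get; set; }
}

// Model shipped only to law-enforcement clients: the extra DbSet pulls the
// subtype into this model, so each client's model matches the assemblies it has.
public class LawEnforcementContext : DbContext
{
    public DbSet<Person> People { get; set; }
    public DbSet<PoliceOfficer> PoliceOfficers { get; set; }
}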
Hope this helps
Alex
Alex is correct (+1), but I'd strongly urge you to reconsider your model. In the real world, a police officer is not a subtype of a person. Rather, it's an attribute of that person's employment. I think programmers frequently tend to over-emphasize inheritance at the expense of composition in object oriented design, but it's especially problematic in O/R mapping. Remember that an object instance can only ever have one type. When that object is stored in the database, the instance can only have that type for as long as it exists, across multiple application sessions. What if a person had two jobs, as a police officer and a teacher? Perhaps that scenario is unlikely, but the general problem is more common than you might expect.
More relevant to your question, I think you can solve your actual problem by making your mapped entity model more generic and making the application-specific types projections over those entities, rather than entities themselves. Consider entities like:
public class JobType
{
public Guid Id { get; set; }
// ...
}
public class Job
{
public JobType JobType { get; set; }
public string EmployeeNumber { get; set; }
}
public class Person
{
public EntityCollection<Job> Jobs { get; set; }
}
Now your law enforcement app can do:
var po = from p in Context.People
let poJob = p.Jobs.Where(j => j.JobType == JobType.PoliceOfficerId).FirstOrDefault()
where poJob != null
select new PoliceOfficer
{
Id = p.Id,
BadgeNumber = poJob.EmployeeNumber
};
Where PoliceOfficer is just a POCO, not a mapped entity of any kind.
And with that you've achieved your goal of having a common data model, but having the "job type specific" elements in separate projects.