Domain models and the List / Detail page - C#

I need some advice on a question I have been battling with in DDD.
I have a domain model which is an aggregate root:
public class Objective {
    public int ObjectiveId { get; private set; }
    public string ObjectiveDescription { get; private set; }
    public DateTime StartDate { get; private set; }
    public DateTime TargetDate { get; private set; }
    public DateTime? CompletedDate { get; private set; }
    public int EmploymentId { get; private set; }
    public List<Skill> RelatedSkills { get; private set; }
    public List<Goal> RelatedGoals { get; private set; }
    // few more properties.
}
I have two views: one is a List view and the other is a Details view.
The List view has an IEnumerable<ObjectiveListVM>, which needs just 3 fields:
class ObjectiveListVM {
    public int ObjectiveId { get; private set; }
    public string ObjectiveDescription { get; private set; }
    public DateTime StartDate { get; private set; }
}
The Details view has an ObjectiveDetailViewModel which has 90 percent of the fields from the Objective domain model, plus a few more.
I have a repository which gets me either a list or a single objective:
public interface IObjectiveRepo
{
    Objective GetById(int id);
    IEnumerable<Objective> GetList();
}
This is how I have implemented DDD and the Repository pattern.
My question is this: my GetList query is really expensive. It only needs data from 3 columns, but since my repositories should always return domain objects, I end up returning a list of the entire Objective domain object, which has child lists and lots of fields.
The solution I thought of is to have another ObjectiveSummary domain model which has just a few fields and is returned by the GetList repo method. But that then breaks some other principles of DDD, mainly that ObjectiveSummary is an anemic domain model. In fact it's not really a model; it's more of a DTO in my head.
This is such a common scenario that I feel I am missing something very basic in my implementation or interpretation of DDD / repository patterns.
Can some of the experts point out the mistake I have made in the implementation, or highlight a way to address this problem without expensive queries?
Note: I can think of a few ways of getting around this problem, but I am more interested in finding the correct way, one which does not break the principles of the architecture/patterns that I am using.

You should not query your domain model. An aggregate is always loaded in its entirety, so it does not lend itself well to querying.
As soon as you think about lazy-loading you are probably not using an optimal approach. Lazy-loading is evil. Don't do it :)
What you are after is a query layer of sorts. This is directly related to CQRS. The query side only returns data; it has no behaviour, and you return the most basic structure that you can. In the C# world that I am also in, I use a DataRow or IEnumerable<DataRow>. If it is really complex I may opt for a DTO:
public interface IObjectiveQuery
{
    DataRow ForList(int id);
    bool Contains(string someUniqueKey);
    IEnumerable<DataRow> Search(string descriptionLike, DateTime startDate);
    string Description(int id);
}
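For illustration, here is one way such a query side could be implemented with plain ADO.NET. This is a minimal sketch, assuming a SQL Server dbo.Objective table whose columns match the interface above plus a UniqueKey column backing Contains; the table and column names are assumptions, not prescribed by the answer.
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

public class ObjectiveQuery : IObjectiveQuery
{
    private readonly string _connectionString;

    public ObjectiveQuery(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataRow ForList(int id)
    {
        // Select only the columns the list view needs.
        return Load(
            "select ObjectiveId, ObjectiveDescription, StartDate from dbo.Objective where ObjectiveId = @id",
            new SqlParameter("@id", id)).Rows.Cast<DataRow>().FirstOrDefault();
    }

    public bool Contains(string someUniqueKey)
    {
        return Load(
            "select 1 from dbo.Objective where UniqueKey = @key",
            new SqlParameter("@key", someUniqueKey)).Rows.Count > 0;
    }

    public IEnumerable<DataRow> Search(string descriptionLike, DateTime startDate)
    {
        return Load(
            "select ObjectiveId, ObjectiveDescription, StartDate from dbo.Objective " +
            "where ObjectiveDescription like @desc and StartDate >= @start",
            new SqlParameter("@desc", "%" + descriptionLike + "%"),
            new SqlParameter("@start", startDate)).Rows.Cast<DataRow>();
    }

    public string Description(int id)
    {
        return Load(
            "select ObjectiveDescription from dbo.Objective where ObjectiveId = @id",
            new SqlParameter("@id", id))
            .Rows.Cast<DataRow>().Select(r => (string)r["ObjectiveDescription"]).FirstOrDefault();
    }

    private DataTable Load(string sql, params SqlParameter[] parameters)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddRange(parameters);
            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table); // Fill opens/closes the connection itself
            return table;
        }
    }
}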
Give it a go. I think you'll find it simplifies things immensely. Your domain should only be concerned about the command/write side of things.

One way to deal with it is to return IQueryable<Objective> from your repository instead of IEnumerable<Objective>.
public interface IObjectiveRepository
{
    Objective GetById(int id);
    IQueryable<Objective> GetList();
}
It will allow you to keep the repository simple and add more logic to the queries in the application/domain layer without a performance penalty. The following query will be executed on the database server, including the projection to ObjectiveListVM:
public IReadOnlyList<ObjectiveListVM> GetSummaryList()
{
    return _repository
        .GetList()
        .Select(o => new ObjectiveListVM
        {
            ObjectiveId = o.ObjectiveId,
            ObjectiveDescription = o.ObjectiveDescription,
            StartDate = o.StartDate
        })
        .ToList();
}
You can use AutoMapper's Queryable Extensions to make the projection to VMs easier:
return _repository
    .GetList()
    .ProjectTo<ObjectiveListVM>()
    .ToList();
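Depending on the AutoMapper version, ProjectTo may need the mapping configuration passed explicitly. A minimal sketch of the setup it relies on, assuming the AutoMapper.QueryableExtensions package; note the VM's properties would need public setters for the generated projection to work:
using AutoMapper;
using AutoMapper.QueryableExtensions;

var configuration = new MapperConfiguration(cfg =>
    cfg.CreateMap<Objective, ObjectiveListVM>());

// ProjectTo translates the map into a Select(), so the generated SQL only
// reads ObjectiveId, ObjectiveDescription and StartDate.
var list = _repository
    .GetList()
    .ProjectTo<ObjectiveListVM>(configuration)
    .ToList();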


Using Queryable Extension to get back Foreign Key Data Value

I have my Unit of Measure which users fill in and save; they can then save a list of Unit Sizes, which has its own table with a foreign key to the Unit of Measure. When I am fetching all the data back, the Unit Size value is coming back blank.
I have read a half dozen ways to do this and I am not comprehending them. The one that makes the most sense to me is using a Queryable extension, so I am trying to go that route, but my code still hasn't quite gotten there.
Here is where I am at - these are my entities:
namespace Mudman.Data.Entities
{
    [Table("UnitOfMeasure")]
    public class UnitOfMeasure : IEntityBase, IAuditBase
    {
        [Key]
        [Column("UnitOfMeasureId")]
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public string Id { get; set; }
        [Required]
        [ForeignKey("TenantId")]
        public string TenantId { get; set; }
        [JsonIgnore]
        public virtual Tenant Tenant { get; set; }
        public string Name { get; set; }
        public virtual IEnumerable<UnitOfMeasureSize> UnitSize { get; set; }
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public DateTime CreateDate { get; set; } = DateTime.UtcNow;
        [StringLength(255)]
        public string CreateUserId { get; set; }
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public DateTime UpdateDate { get; set; }
        [StringLength(255)]
        public string UpdateUserId { get; set; }
    }
}
Unit Of Measure Size entity:
namespace Mudman.Data.Entities
{
    [Table("UnitOfMeasureSize")]
    public class UnitOfMeasureSize : IEntityBase, IAuditBase
    {
        [Key]
        [Column("UnitOfMeasureSize")]
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public string Id { get; set; }
        [Required]
        [ForeignKey("TenantId")]
        public string TenantId { get; set; }
        [JsonIgnore]
        public virtual Tenant Tenant { get; set; }
        [Required]
        [ForeignKey("UnitOfMeasureId")]
        public string UnitOfMeasureId { get; set; }
        public virtual UnitOfMeasure UnitOfMeasure { get; set; }
        [Required]
        public int UnitSize { get; set; }
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public DateTime CreateDate { get; set; } = DateTime.UtcNow;
        [StringLength(255)]
        public string CreateUserId { get; set; }
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public DateTime UpdateDate { get; set; }
        [StringLength(255)]
        public string UpdateUserId { get; set; }
    }
}
Unit Of Measure Repository including Unit Size:
namespace Mudman.Repository
{
    public class UnitOfMeasureRepository : EntityBaseRepository<UnitOfMeasure>,
        IUnitOfMeasureRepository
    {
        MudmanDbContext context;
        public UnitOfMeasureRepository(MudmanDbContext context) : base(context)
        {
            this.context = context;
        }
        public IEnumerable<UnitOfMeasure> GetAllUnitsOfMeasure(string TenantId)
        {
            var result = context.UnitOfMeasure
                .Where(uom => uom.TenantId == TenantId)
                .Include(uom => uom.UnitSize);
            return result;
        }
    }
}
My GetAllAsync method in my service:
public Task<IEnumerable<UnitOfMeasureViewModel>> GetAllAsync()
{
    var result = _unitOfMeasureRepo.GetAllUnitsOfMeasure(TenantId);
    result.OrderBy(r => r.Name);
    return _mapper.Map<List<UnitOfMeasure>, List<UnitOfMeasureViewModel>>(result.ToList());
}
AutoMapper Code:
CreateMap<UnitOfMeasure, UnitOfMeasureViewModel>().ReverseMap()
    .ForMember(dest => dest.UnitSize, uos => uos.Ignore())
    .ForMember(uom => uom.UnitSize, src => src.MapFrom(uom => uom.UnitSize));
There are a few issues with your attempts so far.
Firstly, your GetAllAsync looks like it wants to be an async method, but you have it making entirely synchronous calls and it hasn't been marked as async. I would avoid diving into asynchronous methods until you have the fundamentals of retrieving your data down.
What we cannot see from your example is the mapping between your unit of measure entity and the view model. The entity has a one-to-many relationship between the unit of measure and UnitSizes, so what gets mapped depends on how the view model is laid out and configured for mapping. This is most likely the root of your problem: your view model mapping from the entity is probably relying on a convention that isn't pairing up with the data you expect.
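To make the mapping explicit rather than convention-based, you could shape the view models and configuration along these lines. This is just a sketch: the view model names and members are assumptions, since the real ones aren't shown in the question.
public class UnitOfMeasureViewModel
{
    public string Id { get; set; }
    public string Name { get; set; }
    public List<UnitOfMeasureSizeViewModel> UnitSize { get; set; }
}

public class UnitOfMeasureSizeViewModel
{
    public string Id { get; set; }
    public int UnitSize { get; set; }
}

// Inside your MapperConfiguration / Profile, mapping each member explicitly
// removes the reliance on conventions:
CreateMap<UnitOfMeasureSize, UnitOfMeasureSizeViewModel>();
CreateMap<UnitOfMeasure, UnitOfMeasureViewModel>()
    .ForMember(dest => dest.UnitSize, opt => opt.MapFrom(src => src.UnitSize));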
Performance-wise, this approach will run into problems as your data model grows in terms of entities and rows. The fundamental problem with using a repository like this is that a method like:
IEnumerable<UnitOfMeasure> GetAllUnitsOfMeasure(string TenantId)
will load all data into memory, and you explicitly need to Include related entities whether the consumer will want them or not, which adds to the amount of work the queries need to do and the memory required. If TenantId is for something like a multi-tenant database, such as in a SaaS application with multiple tenants using a single data source, that is a good reason to adopt a Repository pattern, but I would not pass tenant IDs around as parameters. Instead, have the repository accept a dependency that can validate and resolve the current TenantId from the session. This way the repository can always ensure that the current tenant rules are validated and applied for every query, without worrying about where the caller might have gotten a TenantId from (i.e. accepting a TenantId from a POST request would be bad, as that value could easily be tampered with).
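A sketch of what that tenant-resolving dependency could look like, assuming ASP.NET Core and a "tenant_id" claim on the authenticated user; the locator name matches the repository example below, everything else is an assumption:
using System;
using Microsoft.AspNetCore.Http;

public interface ICurrentUserLocator
{
    string CurrentUserTenantId { get; }
}

public class ClaimsCurrentUserLocator : ICurrentUserLocator
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ClaimsCurrentUserLocator(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string CurrentUserTenantId
    {
        get
        {
            // Resolve the tenant from the authenticated user, never from request data.
            var tenantId = _httpContextAccessor.HttpContext?.User?.FindFirst("tenant_id")?.Value;
            if (string.IsNullOrEmpty(tenantId))
                throw new InvalidOperationException("No authenticated user/tenant available.");
            return tenantId;
        }
    }
}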
To address performance, and probably touch on what you had read about IQueryable extensions: rather than returning IEnumerable<TEntity> from a repository, you can return IQueryable<TEntity>. The advantages here are that you can still have the repository add base filtering rules like the tenant ID, and allow the consumer to handle things like sorting and projection.
For example, the repository looks more like:
public class UnitOfMeasureRepository : IUnitOfMeasureRepository
{
    private readonly MudmanDbContext _context;
    private readonly ICurrentUserLocator _currentUserLocator;

    public UnitOfMeasureRepository(MudmanDbContext context, ICurrentUserLocator currentUserLocator)
    {
        _context = context ?? throw new ArgumentNullException("context");
        _currentUserLocator = currentUserLocator ?? throw new ArgumentNullException("currentUserLocator");
    }

    public IQueryable<UnitOfMeasure> GetUnitsOfMeasure()
    {
        // Checks the session for the current user and retrieves their tenant ID,
        // or throws an exception (no session, etc.)
        var tenantId = _currentUserLocator.CurrentUserTenantId;
        var query = _context.UnitOfMeasure
            .Where(uom => uom.TenantId == tenantId);
        return query;
    }
}
The changes to note here: we do away with the base generic repository class. This was confusing, as you were passing the context to a base class and then setting a local context instance as well. Generic repositories with EF are a bad code smell, as they lead to either very complex code, very poor-performing code, or both. There is a CurrentUserLocator which the container can inject, a simple class that can verify that a user is currently authenticated and return their tenant ID. From there we return an IQueryable<UnitOfMeasure> which has a base filter for the tenant ID, which allows our consumers to make up their own minds about how they want to consume it. Note that we do not need to use Include for related entities; again, the consumers can decide what they need.
Calling the new repository method and projecting your view models looks fairly similar to what you had. Since you are using AutoMapper, rather than using .Map() we can use .ProjectTo() with the IQueryable, and AutoMapper can essentially build a Select() expression to pull back only the data that the view model will need. To use the ProjectTo extension method we do need to provide it with the MapperConfiguration that was used to create your mapper, and that will tell it how to build the view model. (So rather than having a dependency of type Mapper, you will need one for the MapperConfiguration you set up for that mapper.)
public IEnumerable<UnitOfMeasureViewModel> GetAll()
{
    var models = _unitOfMeasureRepo.GetUnitsOfMeasure()
        .OrderBy(r => r.Name)
        .ProjectTo<UnitOfMeasureViewModel>(_mapperConfiguration)
        .ToList();
    return models;
}
What this does is call our repository method to get the IQueryable, to which we then append the ordering we desire, and call ProjectTo to allow AutoMapper to populate the view models before executing the query with ToList(). When using Select or ProjectTo we don't need to worry about using Include to eager-load related data that might be mapped; these methods take care of loading related entities if/when needed automatically.
Even in cases where we want to use a method like this to update entities with related entities, using IQueryable works there too:
public void IncrementUnitSize(string unitOfMeasureId)
{
    var unitOfMeasure = _unitOfMeasureRepo.GetUnitsOfMeasure()
        .Include(r => r.UnitSize)
        .Where(r => r.Id == unitOfMeasureId)
        .Single();

    foreach (var size in unitOfMeasure.UnitSize)
        size.UnitSize += 1;

    _context.SaveChanges();
}
Just as an example of fetching related entities as needed, versus having a method that returns IEnumerable and needs to eager load everything just in case some caller might need it.
These methods can very easily be translated into asynchronous methods without touching the repository:
public async Task<IEnumerable<UnitOfMeasureViewModel>> GetAllAsync()
{
    var models = await _unitOfMeasureRepo.GetUnitsOfMeasure()
        .OrderBy(r => r.Name)
        .ProjectTo<UnitOfMeasureViewModel>(_mapperConfiguration)
        .ToListAsync();
    return models;
}
... and that is all! Just remember that async doesn't make the call faster; if anything it makes it a touch slower. What it does is make the server more responsive by allowing it to move the request handling to a background thread and free the request thread to pick up a new server request. That is great for methods that are going to take a bit of time, or are going to get called very frequently, to avoid tying down all of the server request threads and causing timeouts for users waiting for a response from the server. For methods that are very fast and aren't expected to get hammered by a lot of users, async doesn't add a lot of value, and you need to ensure every async call is awaited or you can end up with wacky behaviour and exceptions.

"safe" queryable service layer design?

Imagine you're using Entity Framework as your ORM, all wrapped up in a separate DAL class library.
You have the following POCO object in another "common" class library, which is nicely shared between your DAL, SL and Presentation Layer:
public class User
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public int Age { get; set; }
    public Gender Gender { get; set; }
}
You then implement the following in the SL:
public interface IUserService
{
    User GetById(int userId);
    List<User> GetByLastName(string lastName);
}
public class UserService : IUserService
{
    private MyContext _myContext;
    public UserService(MyContext myContext = null)
    {
        _myContext = myContext ?? new MyContext();
    }
    public User GetById(int userId)
    {
        return _myContext.Users.FirstOrDefault(u => u.Id == userId);
    }
    public List<User> GetByLastName(string lastName)
    {
        return _myContext.Users.Where(u => u.LastName == lastName).ToList();
    }
}
And all works hunky-dory.
But then you need to add a new method to the service to handle a different query (for example, users who fall within an age range).
And then another.
And another...
Before long, you start to think:
Wouldn't it be nice if you could provide any query you can think of through to the service layer, and have it get the relevant data and return it for you, without having to explicitly define each possible query as a distinct method, much in the same way the SL is already doing with the DAL?
So the question is:
Is this possible to achieve SAFELY within a SL, whilst still maintaining loose coupling?
I've read that using IQueryable can lead to disaster with things like:
q.Where(x=>{Console.WriteLine("fail");return true;});
But I'm also fairly new to using ORMs and service layers, so naturally I am looking for the "best practices" and "known pitfalls", whilst also wanting to keep my code clean.
It sounds like you're leaking business layer logic into your presentation layer.
As mentioned in the comments, determining the exact dataset that should be displayed by the presentation layer is actually business logic.
You may have fields on your UI that give the user the ability to select a particular age range to display, which is perfectly valid, but the presentation layer should be responsible for just pushing those values up to the service layer and providing the data it returns to the actual UI in a friendly/expected fashion.
The actual searching / filtering of data based on those values should be done within the service layer / business layer.
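As a sketch of what that looks like in practice, the age-range example becomes one more explicit service method on IUserService/UserService; the method name is illustrative:
public List<User> GetByAgeRange(int minAge, int maxAge)
{
    // Added to UserService: the filtering logic lives here, not in the
    // presentation layer; the UI only supplies the two plain values.
    return _myContext.Users
        .Where(u => u.Age >= minAge && u.Age <= maxAge)
        .ToList();
}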

Restrict Operations On Database Through Repository

I have implemented the Unit of Work and Repository patterns with Entity Framework. A sample operation is shown below:
public T GetById(int id)
{
    return _context.Set<T>().Find(id);
}
So if I wanted to bring back the record with id 12 and only the StudentName column, I cannot do that using the above method, as all columns will be pulled. With LINQ I could have done it like below:
var get = context.Students
    .Where(s => s.Id == 12)
    .Select(s => new { StudentName = s.Name })
    .SingleOrDefault();
Currently, I am returning an IQueryable from the repository like below to make the aforementioned scenario work:
public IQueryable<T> Query()
{
    IQueryable<T> query = dbset;
    return query;
}
which defeats the point of having a Repository anyway, because I cannot restrict operations on the database now. I have to query like below:
var get = _uow.Students
    .Query()
    .Where(s => s.Id == 12)
    .Select(s => new { StudentName = s.Name })
    .SingleOrDefault();
Any suggestions on how to improve this situation, or other opinions, would be appreciated.
I would argue that you are worried about restricting operations at the wrong layer: the Repository. Assuming your Repositories are supposed to abstract away the details of your DB and Entity Framework, I think this is too low a level to do what you want.
If you have a Repo per DB table, and a class per query result type (direct SQL/EF or DB View), it doesn't make sense to introduce another layer of abstraction here. It would be better to do this at the next layer up, or whatever is handling your transactional boundaries.
To demonstrate, here is a more concrete example:
Given a Student DB table:
TABLE Student
PK Id int
COLUMN Name string
COLUMN SecretData string
Your StudentRepo should always return instance(s) of a Student class:
public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string SecretData { get; set; }
}
Then the layer that utilizes your repo(s) should handle your transactions (potentially across multiple repos/operations) and map its results to a domain entity. Your domain entity can include only the fields you want to surface. You can create specialized domain entities for each purpose you need.
public class DomainStudent
{
    public int Id { get; private set; } // prevent attempts to change Ids on domain entities
    public string Name { get; set; }
}
public class DomainStudentWithSecret
{
    public int Id { get; private set; }
    public string Name { get; set; }
    public string SecretData { get; set; }
}
And to expand on why you would want to handle this kind of mapping and transactional boundaries outside of your Repository code: these things are best left to code which can operate across many DB tables. Often you need to take the results of two separate SQL queries and map them to a single domain entity. Or sometimes you want to roll back a transaction (or not execute subsequent SQL) if an initial query fails. I find it best to keep the Repos/DAOs working on a single table/view/sproc (DB entity) to abstract away the details of the DB engine, and have domain-layer classes handle the heavy lifting of how to make sense of the data. If you need complex SQL queries with many JOINs, consider creating a view so you can work with the data like any other table.
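To make that concrete, here is a sketch of the mapping layer sitting above the repository; the service class, the repo member, and the DomainStudent constructor are assumptions for illustration:
public class StudentService
{
    private readonly StudentRepo _studentRepo;

    public StudentService(StudentRepo studentRepo)
    {
        _studentRepo = studentRepo;
    }

    public DomainStudent GetStudent(int id)
    {
        // The repo returns the full row, including SecretData...
        Student student = _studentRepo.GetById(id);
        // ...but only the surfaced fields cross into the domain entity.
        return new DomainStudent(student.Id, student.Name);
    }
}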

N-tier Repository POCOs - Aggregates?

Assume the following simple POCOs, Country and State:
public partial class Country
{
    public Country()
    {
        States = new List<State>();
    }
    public virtual int CountryId { get; set; }
    public virtual string Name { get; set; }
    public virtual string CountryCode { get; set; }
    public virtual ICollection<State> States { get; set; }
}
public partial class State
{
    public virtual int StateId { get; set; }
    public virtual int CountryId { get; set; }
    public virtual Country Country { get; set; }
    public virtual string Name { get; set; }
    public virtual string Abbreviation { get; set; }
}
Now assume I have a simple repository that looks something like this:
public partial class CountryRepository : IDisposable
{
    protected internal IDatabase _db;
    public CountryRepository()
    {
        _db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]);
    }
    public IEnumerable<Country> GetAll()
    {
        return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null);
    }
    public Country Get(object id)
    {
        return _db.SingleById(id);
    }
    public void Add(Country c)
    {
        _db.Insert(c);
    }
    /* ...And So On... */
}
Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this:
public partial class CountryListVM
{
    [Key]
    public int CountryId { get; set; }
    public string Name { get; set; }
    public string CountryCode { get; set; }
    public int StateCount { get; set; }
}
When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc) directly in my UI layer, I can easily do something like this:
IList<CountryListVM> list = db.Countries
    .OrderBy(c => c.Name)
    .Select(c => new CountryListVM() {
        CountryId = c.CountryId,
        Name = c.Name,
        CountryCode = c.CountryCode,
        StateCount = c.States.Count
    })
    .ToList();
But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to:
Return the Country with a populated States collection, then map it over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed.
-or-
Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain POCOs. The downside to this approach is that I'm leaking UI-specific stuff (MVC data validation annotations) into my previously clean POCOs.
-or-
Are there other options?
How are you handling these types of things?
It really depends on the project's architecture for what we do. Usually, though, we have services above the repositories that handle this logic. The service decides which repositories to use to load what data. The flow is UI -> Controller -> Service -> Repositories -> DB. The UI and/or controllers have no knowledge of the repositories or their implementation.
Also, StateCount = c.States.Count would no doubt populate the States list anyway... wouldn't it? I'm pretty sure it will in NHibernate (with lazy loading causing an extra select to be sent to the DB).
One option is to separate your queries from your existing infrastructure entirely. This would be an implementation of a CQRS design. In this case, you can issue a query directly to the database using a "thin read layer", bypassing your domain objects. Your existing objects and ORM are actually getting in your way, and CQRS allows you to have a "command side" that is separate, and possibly a totally different set of tech, from your "query side", where each is designed to do its own job without being compromised by the requirements of the other.
Yes, I'm quite literally suggesting leaving your existing architecture alone, and perhaps using something like Dapper to do this (beware of untested code sample) directly from your MVC controllers, for example:
int count =
    connection.Query<int>(
        "select count(*) from state where countryid = @countryid",
        new { countryid = 123 }).Single();
Honestly, your question has given me food for thought for a couple of days. More and more I tend to think that denormalization is the correct solution.
Look, the main point of domain-driven design is to let the problem domain drive your modeling decisions. Consider the country entity in the real world. A country has a list of states. However, when you want to know how many states a certain country has, you do not go over the list of states in an encyclopedia and count them; you are more likely to look at the country's statistics and check the number of states there.
IMHO, the same behavior should be reflected in your domain model. You can have this information in a property of the country, or introduce a kind of CountryStatistics object. Whichever approach you choose, it must be part of the Country aggregate. Being in the consistency boundary of the aggregate will ensure that it holds consistent data when states are added or removed.
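A sketch of what keeping the count inside the aggregate's consistency boundary might look like; the AddState/RemoveState methods and the StateCount property are illustrative additions to the Country POCO above, not part of the original post:
public partial class Country
{
    public virtual int StateCount { get; protected set; }

    public void AddState(State state)
    {
        States.Add(state);
        state.Country = this;
        StateCount = States.Count; // kept consistent with the collection
    }

    public void RemoveState(State state)
    {
        if (States.Remove(state))
            StateCount = States.Count;
    }
}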
Some other approaches:
- If the states collection is not expected to change a lot, you can allow a bit of denormalization: add a "NumberOfStates" property to the Country object. It will optimise the query, but you'll have to make sure the extra field holds the correct information.
- If you are using NHibernate, you can use extra-lazy loading: it will issue another select, but won't populate the whole collection when Count is called. More info here: nHibernate Collection Count

which design is better for a client/server project with lots of data sharing

Let's say we have a project that will handle lots of data (employees, schedules, calendars... and lots more). The client is a Windows app, the server side is WCF, and the database is MS SQL Server. I am confused about which approach to use. I have read a few articles and blogs; they all seem nice, but I don't want to start with one approach and then regret not choosing the other. The project will have around 30-35 different object types, with a lot of data retrieval to populate different reports, etc.
Approach 1:
// classes that hold data
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    .....
}
public class Assignment
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    .....
}
.....
Then Helper classes to deal with data saving and retrieving:
public static class Employees
{
    public static int Save(Employee emp)
    {
        // save the employee
    }
    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
    .....
}
public static class Assignments
{
    public static int Save(Assignment ass)
    {
        // save the Assignment
    }
    .....
}
FYI, the object classes like Employee and Assignment will be in a separate assembly, to be shared between the server and client.
Anyway, with this approach I will have cleaner objects; the helper classes will do most of the work.
Approach 2:
// classes that hold data and methods for saving and retrieving
public class Employee
{
    // constructors
    public Employee()
    {
        // Construct a new Employee
    }
    public Employee(int Id)
    {
        // Construct a new Employee and fills the data from db
    }
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    .....
    public int Save()
    {
        // save the Employee
    }
    .....
}
public class Assignment
{
    // constructors
    public Assignment()
    {
        // Construct a new assignment
    }
    public Assignment(int Id)
    {
        // Construct a new assignment and fills the data from db
    }
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    .....
    public int Save()
    {
        // save the Assignment
    }
    .....
}
.....
With this approach, each object will do its own job. Data can still be transferred from WCF to the client easily, since WCF will only share the properties.
Approach 3:
Using Entity Framework. Besides the fact that I have never worked with it (which is nice, since I have to learn something new), I will need to create POCOs to transfer data between the client and WCF.
Now, which is better? Are there more options?
Having persistence logic in the object itself is always a bad idea.
I would use the first approach. It looks like the Repository pattern. This way, you can easily debug the persisting of data, because it will be clearly separated from the rest of the logic of the object.
I would suggest using Entity Framework + the Repository pattern. This way your entities are simple objects without any logic in them; all retrieve/save logic stays in the repository. I have some successful experience with using a generic repository, which is typed with the entity; something similar is described here (the generic repository part of the article). This way you write the repository code only once and you can reuse it for all the entities you have. E.g.:
public interface IRepository<T>
{
    T GetById(long id);
    bool Save(T entity);
}
public class Repository<T> : IRepository<T> { ... }

var repository = new Repository<MyEntity>();
var myEntity = repository.GetById(1);

var repository2 = new Repository<MySecondEntity>();
var mySecondEntity = repository2.GetById(1);
Whenever an entity needs some very specific operation, you can add this operation to a concrete typed implementation of IRepository:
public interface IMySuperRepository : IRepository<MySuperEntity>
{
    MySuperEntity GetBySuperProperty(SuperProperty superProperty);
}
public class MySuperEntityRepository : Repository<MySuperEntity>, IMySuperRepository
{ ... }
To create repositories it is nice to use a factory, based for example on a configuration file. This way you can switch the implementation of the repositories, e.g. for unit testing, when you do not want to use a repository that really accesses the DB:
public class RepositoryFactory
{
    public IRepository<T> GetRepository<T>()
    {
        if (config == production)
            return new Repository<T>(); // this is implemented with DB access through EF
        if (config == test)
            return new TestRepository<T>(); // this is implemented with test values without DB access
    }
}
You can add validation rules for saving and further elaborate on this. EF also lets you add some simple methods or properties to generated entities, because all of them are partial classes.
Furthermore, using POCOs or STEs (see later), it is possible to have the EDMX DB model in one project and all your entities in another project, and thus distribute this DLL to the client (which will contain ONLY your entities). As I understood it, that's what you also want to achieve.
Also seriously consider using self-tracking entities (and not just POCOs). In my opinion they are great for use with WCF. When you get an entity from the DB and pass it to the client, the client changes it and gives it back, and you need to know if the entity was changed and what was changed. STEs handle all this work for you and are designed specifically for WCF: you get the entity from the client, call ApplyChanges and save, and that's it.
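For example, the WCF-side update with STEs could be as small as this sketch. It assumes the EF self-tracking entities template, which generates ApplyChanges as an extension method on the ObjectSet; the context and set names are illustrative:
public void UpdateEmployee(Employee employee) // entity returned from the client
{
    using (var context = new MyEntities())
    {
        // Replays the changes the client-side entity tracked on itself.
        context.Employees.ApplyChanges(employee);
        context.SaveChanges();
    }
}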
What about implementing Save as an extension method? That way your classes are clean as in the first option, but the methods can be called on the object as in the second option.
public static class EmployeeExtensions
{
    public static int Save(this Employee emp)
    {
        // save the employee
    }
    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
}
You're overthinking this. Trying to apply technologies and patterns "just because" or "that's what they say" only makes the solution complicated. The key is designing the application so that it can easily adapt to change. That's probably an ambiguous answer, but it's what it all comes down to: how much effort is required to maintain and/or modify the code base.
Currently it sounds like the patterns and practices are the end result, instead of a means to an end.
Entity Framework is a great tool, but it is not necessarily the best choice in all cases. It will depend on how much you expect to read/write from the database vs how much you expect to read/write to your WCF services. Perhaps someone better versed in the wonderful world of EF will be able to help you. To speak from experience, I have used LINQ to SQL in an application that features WCF service endpoints and had no issues (and in fact came to LOVE LINQ to SQL as an ORM).
Having said that, if you decide that EF is not the right choice for you, it looks like you're on the right track with Approach 1. However, I would recommend implementing a Data Access Layer. That is, implement a Persist method in your business classes that then calls methods in a separate DAO (Data Access Object, a class used to persist data from a business object) to actually save it to your database.
A sample implementation might look like this:
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public void Persist()
    {
        EmployeeDAO.Persist(this);
    }
}
public class Assignment
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    public void Persist()
    {
        AssignmentDAO.Persist(this);
    }
}
public static class EmployeeDAO
{
    public static int Persist(Employee emp)
    {
        // insert if new, else update
    }
    public static Employee Get(int empId)
    {
        // return the ugly employee
    }
    .....
}
public static class AssignmentDAO
{
    public static int Persist(Assignment ass)
    {
        // insert if new, else update
    }
    .....
}
The benefit of a pattern like this is that you get to keep your business classes clean and your data-access logic separate, while still giving the objects the easy syntax of being able to write new Employee(...).Persist(); in your code.
If you really want to go nuts, you could even consider implementing interfaces on your persistable classes, and have your DAO(s) accept those IPersistable instances as arguments.
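A sketch of that interface idea; the interface and helper names here are illustrative, not from the original post:
public interface IPersistable
{
    int Id { get; }
    void Persist();
}

public class Employee : IPersistable
{
    // ...same members as above; Persist() already satisfies the interface...
}

public static class PersistenceHelper
{
    // Any persistable entity can now be saved through one entry point.
    public static void PersistAll(IEnumerable<IPersistable> entities)
    {
        foreach (var entity in entities)
            entity.Persist();
    }
}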
