Enterprise Design Pattern Question - C#

Something on my mind about structuring a system at a high level.
Let's say you have a system with the following layers:
UI
Service Layer
Domain Model
Data Access
The service layer is used to populate a graph of objects in the domain model. In an attempt to avoid coupling, the domain model will not be persistence-aware and will not have any dependencies on the data access layer.
However, with this approach, how would one object in the domain model call other objects without being able to load them from persistence, which would couple everything together - exactly what I'm trying to avoid?
e.g. an Order object would need to check an Inventory object, and would obviously need to tell the Inventory object to load itself in some way, or have it populated somehow.
Any thoughts?

You could inject any dependencies from the service layer, including populated object graphs.
I would also add that a repository can be a dependency - if you have declared an interface for the repository, you can code to it without adding any coupling.

One way of doing this is to have a mapping layer between the Data Layer and the domain model.
Have a look at the mapping, repository and facade patterns.
The basic idea is that on one side you have data access objects and on the other you have domain objects.

To decouple you have to: "Program to an 'interface', not an 'implementation'." (Gang of Four 1995:18)
Here are some links on the subject:
Gamma interview on patterns
Random blog article
Googling for "Program to an interface, not an implementation" will yield many useful resources.

Have the domain model layer define interfaces for the methods you'll need to call, and POCOs for the objects that need to be returned by those methods. The data layer can then implement those interfaces by pulling data out of your data store and mapping it into the domain model POCOs.
Any domain-level class that requires a particular data-access service can just depend on the interface via constructor arguments. Then you can leverage a dependency-injection framework to build the dependency graph and provide the correct implementations of your interfaces wherever they are required.
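A minimal sketch of that arrangement (all of the names here are invented for illustration):
// Domain layer: a POCO plus the data-access contract the domain needs.
public class InventoryItem
{
    public int ProductId { get; set; }
    public int QuantityOnHand { get; set; }
}

public interface IInventoryReader
{
    InventoryItem GetByProductId(int productId);
}

// Domain service depends only on the interface, injected via the constructor.
public class OrderService
{
    private readonly IInventoryReader _inventory;

    public OrderService(IInventoryReader inventory)
    {
        _inventory = inventory;
    }

    public bool CanFulfil(int productId, int quantity)
    {
        var item = _inventory.GetByProductId(productId);
        return item != null && item.QuantityOnHand >= quantity;
    }
}

// The data layer implements IInventoryReader against the actual store, and the
// DI container maps IInventoryReader to that implementation at composition time.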

Before writing tons of code in order to separate everything you might want to ask yourself a few questions:
Is the Domain Model truly separate from the DAL? And yes, I'm serious and you should think about this because it is exceedingly rare for an RDBMS to actually be swapped out in favor of a different one for an existing project. Quite frankly it is much more common for the language the app was written in to be replaced than the database itself.
What exactly is this separation buying you? And, just as important, what are you losing? Separation of Concerns (SoC) is a nice term that is thrown about quite a bit. However, most people rarely understand why they are Concerned with the Separation to begin with.
I bring these up because more often than not applications can benefit from a tighter coupling to the underlying data model. Never mind that most ORMs almost enforce a tight coupling due to the nature of code generation. I've seen lots of supposedly SoC projects come to a crash during testing because someone added a field to a table and the DAL wasn't regenerated... This kind of defeats the purpose, IMHO...
Another factor is where the business logic should live. No doubt there are strong arguments in favor of putting large swaths of BL in the actual database itself. At the same time there are cases where the BL needs to live in or very near your domain classes. With BL spread in such a way, can you truly separate these two items anyway? Even those who hate the idea of putting BL in a database will fall back on using identity keys and letting the DB enforce referential integrity, which is also business logic.
Without knowing more, I would suggest you consider flattening the Data Access and Domain Model layers. You could move to a "provider" or "factory" type architecture in which the service layer itself doesn't care about the underlying access, but the factory handles it all. Just some radical food for thought.

You should take a look at Martin Fowler's Repository and UnitOfWork patterns to use interfaces in your system

So far I have seen that an application can be well layered into three layers: Presentation --> Logic --> Data, plus Entities (or Business Objects). For the Logic layer you can use a pattern such as Transaction Script or Domain Model; I'm assuming you're using the latter. The domain model can use a Data Mapper to interact with the data layer and create business objects, but you could also use a Table Module pattern.
All these patterns are described in Martin Fowler's Patterns of Enterprise Application Architecture book. Personally I use Transaction Script because it is simpler than Domain Model.

One solution is to make your Data Access layer subclass your domain entities (using Castle DynamicProxy, for example) and inject itself into the derived instances that it returns.
That way, your domain entity classes remain persistence-ignorant while the instances your applications use can still hit databases to lazy-load secondary data.
Having said that, this approach typically requires you to make a few concessions to your ORM's architecture, like marking certain methods virtual, adding otherwise unnecessary default constructors, etc.
Moreover, it's often unnecessary - especially for line-of-business applications that don't have onerous performance requirements, you can simply eager-load all the relevant data: just bring the inventory items up with the order.
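To illustrate the shape of the first idea without any proxy library: the derived type below does by hand what something like DynamicProxy would generate at runtime. All of the type and member names here are made up for the example.
public class Inventory
{
    public int ProductId { get; set; }
    public int QuantityOnHand { get; set; }
}

// Domain entity stays persistence-ignorant; the navigation property is
// virtual so a derived type can intercept it.
public class Order
{
    public int Id { get; set; }
    public virtual Inventory Inventory { get; set; }
}

// Lives in the data access layer.
public interface IInventoryLoader
{
    Inventory LoadForOrder(int orderId);
}

// The "proxy" subclass the DAL hands back instead of a plain Order.
internal class OrderProxy : Order
{
    private readonly IInventoryLoader _loader;
    private Inventory _inventory;

    public OrderProxy(IInventoryLoader loader)
    {
        _loader = loader;
    }

    public override Inventory Inventory
    {
        // Lazy-load on first access, using the DAL service injected into the instance.
        get { return _inventory ?? (_inventory = _loader.LoadForOrder(Id)); }
        set { _inventory = value; }
    }
}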

I felt this was different enough from my previous answer, so here's a new one.
Another approach is to leverage the concept of Inversion of Control (IoC). Build an interface that your Data Access layer implements. Each of the DAL methods should take a list of parameters and return a DataTable.
The service layer would instantiate the DAL through the interface and pass that reference to your Domain Model. The domain model would then make its own calls into the DAL, using the interface methods, and decide when it needs to load child objects or whatever.
Something like:
using System;
using System.Data;

interface IDBModel {
    DataTable LoadUser(Int32 userId);
}

class MyDbModel : IDBModel {
    public DataTable LoadUser(Int32 userId) {
        // make the appropriate DB calls here, return a data table
        throw new NotImplementedException();
    }
}

class User {
    public User(IDBModel dbModel, Int32 userId) {
        DataTable data = dbModel.LoadUser(userId);
        // assign properties.. load any additional data as necessary
    }

    // You can do cool things like call User.Save()
    // and have the object validate and save itself to the passed-in
    // data model. Makes for simpler coding.
}

class MyServiceLayer {
    public User GetUser(Int32 userId) {
        IDBModel model = new MyDbModel();
        return new User(model, userId);
    }
}
With this mechanism, you can actually swap out your db models on demand. For example, if you decide to support multiple databases, then you can have code that is specific to a particular database vendor's way of doing things and just have the service layer pick which one to use.
The domain objects themselves are responsible for loading their own data and you can keep any necessary business logic within the domain model. Another point is that the Domain Model doesn't have a direct dependency on the data layer, which preserves your mocking ability for independent testing of business logic.
Further, the DAL has no knowledge of the domain objects, so you can swap those out as necessary or even just test the DAL independently.

Related

How to persist aggregates with repositories?

I am trying to learn some concepts about DDD and the part of persisting Aggregates is confusing me a bit. I have read various answers on the topic on SO but none of them seem to answer my question.
Let's say I have an Aggregate root of Product. Now I do not want to inject the ProductRepository that will persist this aggregate root into the constructor of the Product class itself. Imagine me writing code like
var prod = new Product(Factory.CreateProductRepository(), name, costprice);
in the UI layer. If I do not want to inject my repository via dependency injection in the Aggregate Root, then the question is where should this code go? Should I create a class only for persisting this AR? Can anyone suggest what is the correct & recommended approach to solve this issue?
My concern is not which ORM to use or how to make this AR ORM friendly or easy to persist, my question is around the right use of repositories or any persistence class.
Application Services
You are right, the domain layer should know nothing about persistence. So injecting the repository into Product is indeed a bad idea.
The DDD concept you are looking for is called Application Service. An application service is not part of the domain layer, but lives in the service layer (sometimes called application layer). Application services represent a use case (as opposed to a domain concept) and have the following responsibilities:
Perform input validation
Enforce access control
Perform transaction control
The last point means that an application service will query a repository for an aggregate of a specific type (e.g. by ID), modify it by using one of its methods, and then pass it back to the repository for updating the DB.
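A sketch of such an application service for the Product example from the question (the repository shape and method names are assumptions, not anything prescribed):
using System;

// Domain layer: the aggregate root and its repository contract (simplified).
public class Product
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public decimal CostPrice { get; private set; }

    public Product(Guid id, string name, decimal costPrice)
    {
        Id = id;
        Name = name;
        CostPrice = costPrice;
    }

    public void ChangeCostPrice(decimal newCostPrice)
    {
        if (newCostPrice < 0)
            throw new ArgumentOutOfRangeException("newCostPrice");
        CostPrice = newCostPrice;
    }
}

public interface IProductRepository
{
    Product GetById(Guid id);
    void Update(Product product);
}

// Service layer: one use case orchestrating load -> modify -> save.
public class ChangeProductCostPriceService
{
    private readonly IProductRepository _products;

    public ChangeProductCostPriceService(IProductRepository products)
    {
        _products = products;
    }

    public void Execute(Guid productId, decimal newCostPrice)
    {
        // input validation, access control and transaction control belong here
        var product = _products.GetById(productId);
        product.ChangeCostPrice(newCostPrice);
        _products.Update(product);
    }
}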
Repository Granularity
Concerning your second question
Should I create a class only for persisting this AR?
Yes, creating one repository per aggregate is a common approach. Often, standard repository operations like getById(), update(), delete(), etc. are extracted into a reusable class (either a base class or by aggregation).
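For instance, the shared operations might be extracted into a small generic contract along these lines (a sketch, names illustrative):
using System.Collections.Generic;

// Reusable base contract for the operations every aggregate repository shares.
public interface IRepository<TAggregate, TId> where TAggregate : class
{
    TAggregate GetById(TId id);
    IReadOnlyList<TAggregate> GetAll();
    void Add(TAggregate aggregate);
    void Update(TAggregate aggregate);
    void Delete(TId id);
}

// The IProductRepository from the sketch above could then derive from it:
// public interface IProductRepository : IRepository<Product, Guid> { }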
You can also create additional repositories for non-domain information, e.g. statistical data. In these cases, make sure that you don't accidentally miss a domain concept, however.

Is it OK to have Entity Framework inside the Domain layer?

I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled, or changing the name of an object (a Category, for example) needs to recursively update all of its children's properties (avoiding / ignoring these rules would result in creating invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
In order to do this, I need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to over-complicate the project by adding IRepositories interfaces, which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held this opinion too, but honestly, if you don't program to an abstraction it will become a real pain once the project grows larger.
IRepositories also help you spread the work between different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example:
IRepository<File>.Upload()
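For instance, such a helper extension might look roughly like this (a sketch; the entity is named FileRecord here only to avoid clashing with System.IO.File, and IRepository<T> is assumed to expose an Add method):
using System.IO;

public interface IRepository<T> where T : class
{
    void Add(T entity);
}

public class FileRecord
{
    public string Name { get; set; }
    public byte[] Content { get; set; }
}

public static class FileRepositoryExtensions
{
    // Encapsulates the "upload a file" job on top of the generic repository contract.
    public static void Upload(this IRepository<FileRecord> repository, string path)
    {
        var record = new FileRecord
        {
            Name = Path.GetFileName(path),
            Content = File.ReadAllBytes(path)
        };
        repository.Add(record);
    }
}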
You must be able to test each layer independently; tying them together will only let you do integration tests, with a lot of bugs hiding in the lower layers. :)
First, I think this question is really opinion based.
According to the Big Book, the domain model must be separated from the data access. Your domain has nothing to do with how the data is stored; it could be a simple text file or a cluster of MSSQL servers.
This choice must be decided based on the actual project. What is the size of the application?
The other huge question is: how many concurrent users will use the DB, and how complex will your business logic be?
So if it's a complex project, one that will presumably be modified frequently, or one with educational purposes, then you should keep the domain and data access separated, and you should define the repository interfaces in the domain model. Use a DI component (personally I like Ninject), and do not reference the data access component in the services.
And of course you should also create test projects, using mocking tools to test the layers separately.
Yes this is wrong, if you are following Domain Driven Design, you should not compromise your architecture for the sake of doing less work. Your data-access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern as it would allow you more flexibility in the long run.
There is of course no single right answer to what the right design is. I would however argue that EF is your data layer abstraction; there is no way you're going to make anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, with the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
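For example, one of those IQueryable<> extension methods could be as small as this (a sketch):
using System.Linq;

public static class QueryableExtensions
{
    // A reusable "common task": paging that composes with any EF query,
    // e.g. db.Companies.OrderBy(c => c.Name).Page(2, 25)
    public static IQueryable<T> Page<T>(this IQueryable<T> source, int pageNumber, int pageSize)
    {
        return source.Skip((pageNumber - 1) * pageSize).Take(pageSize);
    }
}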
Blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer

What am I missing with the Repository Pattern? How do you use it in the real world?

Every example of the Repository Pattern I have seen deals with a very simple use case - one object type and the most basic CRUD operations. The Repository is then very often plugged straight into an MVC controller.
Real-world data access just isn't like this. Real-world data access scenarios can involve complex graphs of objects and some form of transactional wrapper. For example, suppose I want to save a new Order. This involves writing to the Order, OrderDetails, Invoice, User, History and ItemStock tables. All of this must be transacted, committed or rolled back. Normally I'd pass around something like an IDbTransaction and an IDbConnection and bundle the whole operation in a service layer.
Where does the Repository Pattern fit with this? Am I missing something (Unit Of Work perhaps)? Are there any more realistic examples of Repositories in use than the usual canned blog snippets?
Appreciate any light.
This is a very controversial subject but here is my catch from my own experience.
Repository works at the aggregate root. For example if an OrderItem is always retrieved as part of Order and does not have a life of its own outside order, then it will be loaded by the OrderRepository, otherwise it will have its own repository.
UnitOfWork is an important concept. Let's say OrderItem is an aggregate root and has its own repository. So at the time of creating an order, OrderManager will create a UnitOfWork in a using block, initialise OrderItemRepository and OrderRepository, and then commit.
UPDATE
Yes, exactly. Imagine - in our case - an order being inserted. This needs to be in control of the transaction and insert the order and the order items separately inside the same transaction. This cannot be managed at the repository level. That is the sole reason for the existence of the UnitOfWork concept, which is passed to the repository so that the repository does not own or initialise it. The UnitOfWork is usually created at the business layer.
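In code, that flow might look roughly like this (the IUnitOfWork and repository shapes here are assumptions, not a specific library's API):
using System;

public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class Order { public int Id { get; set; } }
public class OrderItem { public int OrderId { get; set; } }

// Repositories take part in a unit of work they are given; they never create one.
public interface IOrderRepository
{
    void Add(Order order, IUnitOfWork uow);
}

public interface IOrderItemRepository
{
    void Add(OrderItem item, IUnitOfWork uow);
}

public class OrderManager
{
    private readonly Func<IUnitOfWork> _uowFactory;
    private readonly IOrderRepository _orders;
    private readonly IOrderItemRepository _orderItems;

    public OrderManager(Func<IUnitOfWork> uowFactory,
                        IOrderRepository orders,
                        IOrderItemRepository orderItems)
    {
        _uowFactory = uowFactory;
        _orders = orders;
        _orderItems = orderItems;
    }

    public void CreateOrder(Order order, OrderItem item)
    {
        // The business layer owns the transaction boundary.
        using (var uow = _uowFactory())
        {
            _orders.Add(order, uow);
            _orderItems.Add(item, uow);
            uow.Commit();   // both inserts succeed or fail together
        }
    }
}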
O/R-Mappers like Hibernate basically implement the Repository pattern for object graphs while fully supporting transactions. It's often a leaky abstraction, but it certainly can be made to work in complex real-world scenarios.
If you need a good, full-blown, widely used example of the repository pattern, look at Cocoa's Core Data. I realize it is not in the realm of the programming languages you mention, but note that it is NOT an O/R mapper: it is a complete abstraction of an object store. There are no SQL statements to execute, and while you may pick the format of the external storage that is used, you never interact with it directly.
I like to think of a repository as another layer of abstraction. You should only add layers of abstraction when the cost of their implementation is less than the cost of NOT doing the implementation (code maintenance, support, enhancements, etc).
Remember that the purpose of the repository pattern is to separate the logic that retrieves the data (CRUD) and maps it to the entity model from the business logic that acts on the model. What this often ends up doing/looking like in the real world is some form of business entities, that abstract the underlying physical data model.
As far as your transaction question, yes, this relates more to the Unit of Work pattern. Since you mentioned services, I would encourage you to NOT pass around your connection to your various data access classes/methods, but to instead allow WCF to manage the transaction for you using auto enlistment. Here is an extract of Juval Lowy's WCF book (highly recommended) that explains the how and why of this method of transaction management.
So to answer your question, the repository pattern fits in as a way to abstract the physical data model and to separate the CRUD/mapping from the business logic.

MVC, ORM, and data access patterns

I think I've hit that "paralysis by analysis" state.
I have an MVC app, using EF as an ORM.
So I'm trying to decide on the best data access pattern, and so far I'm thinking putting all data access logic into controllers is the way to go.. but it kinda doesn't sound right.
Another option is creating an external repository, handling data interactions.
Here's my pros/cons:
If embedding data access to controllers, I will end up with code like this:
using (DbContext db = new DbContext())
{
    User user = db.Users.Where(x => x.Name == "Bob").Single();
    user.Address.Street = "some st";
    db.SaveChanges();
}
So with this, I get full benefits of lazy loading, I close connection right after I'm done, I'm flexible on where clause - all the niceties.
The con - I'm mixing a bunch of stuff in a single method - data checking, data access, UI interactions.
With Repository, I'm externalizing data access, and in theory can just replace repos if I decide to use ado.net or go with different database.
But I don't see a good, clean way to realize lazy loading, or how to control the DbContext/connection lifetime.
Say I have an IRepository interface with CRUD methods; how would I load a list of addresses that belong to a given user? Creating methods like GetAddressListByUserId looks ugly and wrong,
and will force me to create a bunch of methods that are just as ugly and make little sense when using an ORM.
I'm sure this problem has been solved a million times, and I hope there's a solution somewhere.
And one more question on the repository pattern - how do you deal with objects that are properties? E.g. a User has a list of addresses - how would you retrieve that list? Create a repository for the Address? With an ORM the address object doesn't need a reference back to the user, nor an Id field; with a repo it will have to have all that. More code, more exposed properties.
The approach you choose depends a lot on the type of project you are going to be working with. For small projects where a Rapid Application Development (RAD) approach is required, it might almost be OK to use your EF model directly in the MVC project and have data access in the controllers, but the more the project grows, the more messy it will become and you will start running into more and more problems. In case you want good design and maintainability, there are several different approaches, but in general you can stick to the following:
Keep your controllers and Views clean. Controllers should only control the application flow and not contain data access or even business logic. Views should only be used for presentation - give it a ViewModel and it will present it as Html (no business logic or calculations). A ViewModel per view is a pretty clean way of doing it.
A typical controller action would look like:
public ActionResult UpdateCompany(CompanyViewModel model)
{
    if (ModelState.IsValid)
    {
        Company company = SomeCompanyViewModelHelper
            .MapCompanyViewModelToDomainObject(model);
        companyService.UpdateCompany(company);
        return RedirectToRoute(/* Wherever you go after company is updated */);
    }
    // Return the same view with highlighted errors
    return View(model);
}
Due to the aforementioned reasons, it is good to abstract your data access (testability, ease of switching the data provider or ORM or whatever, etc.). The Repository pattern is a good choice, but here you also get a few implementation options. There's always been a lot of discussion about generic/non-generic repositories, whether or not one should return IQueryables, etc. But eventually it's for you to choose.
Btw, why do you want lazy loading? As a rule, you know exactly what data you require for a specific view, so why would you choose to fetch it in a deferred way, thus making extra database calls, instead of eager loading everything you need in one call? Personally, I think it's okay to have multiple Get methods for fetching objects with or without children. E.g.
public class CompanyRepository
{
    public Company Get(int id);
    public Company Get(string name);
    public Company GetWithEmployees(int id);
    ...
}
It might seem a bit overkill and you may choose a different approach, but as long as you have a pattern you follow, maintaining the code is much easier.
Personally I do it this way:
I have an abstract Domain layer, which has not just CRUD methods but specialized methods, for example UsersManager.Authenticate(), etc. Internally it uses data access logic, or a data-access layer abstraction (depending on the level of abstraction I need to have).
It is always better to have an abstract dependency at least. Here are some pros of it:
you can replace one implementation with another at a later time.
you can unit test your controller when needed.
As for the controller itself, let it have two constructors: one that takes an abstract domain access class (e.g. a facade of the domain), and another (empty) constructor which chooses the default implementation. This way your controller lives well during web application run-time (calling the empty constructor) and during unit testing (with a mock domain layer injected).
Also, to be able to easily switch to another domain implementation at a later time, be sure to inject the domain creator instead of the domain itself. This way, by localizing the domain layer construction to the domain creator, you can switch to another implementation at any time, just by reconstructing the domain creator (by creator I mean some kind of factory).
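A rough outline of that setup (ASP.NET MVC assumed; the facade and factory names are illustrative):
using System.Web.Mvc;

public interface IDomainFacade
{
    bool Authenticate(string userName, string password);
}

// The "domain creator": tests inject a mock factory, the web app uses the default.
public interface IDomainFacadeFactory
{
    IDomainFacade Create();
}

public class DomainFacade : IDomainFacade
{
    public bool Authenticate(string userName, string password)
    {
        // real domain logic lives here
        return false;
    }
}

public class DomainFacadeFactory : IDomainFacadeFactory
{
    public IDomainFacade Create()
    {
        return new DomainFacade();
    }
}

public class AccountController : Controller
{
    private readonly IDomainFacadeFactory _domainFactory;

    // Empty constructor used at run-time; picks the default implementation.
    public AccountController() : this(new DomainFacadeFactory()) { }

    // Constructor used by unit tests, with a mocked domain factory injected.
    public AccountController(IDomainFacadeFactory domainFactory)
    {
        _domainFactory = domainFactory;
    }

    public ActionResult Login(string userName, string password)
    {
        var domain = _domainFactory.Create();
        if (domain.Authenticate(userName, password))
            return RedirectToAction("Index", "Home");
        return View();
    }
}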
I hope this helps.
Addition:
I would not recommend having CRUD methods in the domain layer, because this will become a nightmare when you reach the unit-testing phase, or even more so when you need to change the implementation at a later time.
It really comes down to where you want your code. Whether you put data access for an object behind an IRepository object or in the controller doesn't matter: you will still wind up with either a series of GetByXXX calls or the equivalent code. Either way you can lazy load and control the lifetime of the connection. So now you need to ask yourself: where do I want my code to live?
Personally, I would argue to get it out of the controller. By that I mean moving it to another layer, probably using an IRepository type of pattern where you have a series of GetByXXX calls. Sure, they are ugly. Wrong? I would argue otherwise. At least they are all contained within the same logical layer rather than being scattered throughout the controllers, where they are mixed in with validation code, etc.

Generic business layer design with repository pattern with c# (how)

I am using a repository design with web applications: the repository (data layer) exposes model objects to the business layer, which are then consumed by the presentation layer (UI). Objects or lists of objects are passed between the layers with this type of implementation.
I am finding my business layer is becoming a series of manager-type classes which all have common GetAll, GetById, Save, Delete type methods. This is very common with a number of very small, simple objects. This is the area of concern, or opportunity for improvement (the series of smaller business manager classes). I am looking for options to avoid the whole series of smaller business manager classes that map to the smaller objects and only get/save/delete an object.
The bigger objects, which are closer to the application's functionality, have a number of methods in addition to the get/save/delete type methods (these manager classes are fine).
I am thinking there is a design pattern or implementation which would allow me to have one manager class in the business layer that accepts an object of a particular type as a parameter, where the get/save/delete methods know which type of repository object to spin up and pass the object to for the operation.
The benefit here would be that I could have one generic manager class that passes gets/saves/deletes for the smaller objects to the appropriate repository class, thereby eliminating the many smaller manager classes.
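Roughly the shape I have in mind, as a sketch (assuming the data layer already exposes a generic IRepository<T> contract):
using System.Collections.Generic;

// Assumed generic contract exposed by the repository/data layer.
public interface IRepository<T> where T : class
{
    T GetById(int id);
    IList<T> GetAll();
    void Save(T entity);
    void Delete(T entity);
}

// One generic manager in the business layer for all the small CRUD-only objects,
// e.g. var countryManager = new EntityManager<Country>(countryRepository);
public class EntityManager<T> where T : class
{
    private readonly IRepository<T> _repository;

    public EntityManager(IRepository<T> repository)
    {
        _repository = repository;
    }

    public T GetById(int id) { return _repository.GetById(id); }
    public IList<T> GetAll() { return _repository.GetAll(); }
    public void Save(T entity) { _repository.Save(entity); }
    public void Delete(T entity) { _repository.Delete(entity); }
}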
Ideas on how to accomplish this?
thx
I would not go that way. The business layer classes can be as simple as code that forwards to the data layer, and it is true that they can be annoying to write, but they exist for a couple of reasons: validation, security, taking some actions based on business rules.
If you try to make a generic business layer, it will be hard to include all the various things that a business class could do. The generic business layer will become much more complex than the one you currently have. Testing will be much harder. Adding a new business rule will be hard, too.
Sorry, this is not what you wanted to read, but I have already gone the route of generic systems and have always had lots of regrets.
The idea behind a repository (or a DAO) is to further abstract data access concerns away from the business layer in order to simplify that layer's focus on the "business" of a given domain.
That said, there are many common plumbing-type concerns that are reusable across different applications, some of which do lend themselves to a supertype in the business layer. Consider the cross-cutting concern of being able to retrieve a given business entity by some Id from a database, and you might come to the conclusion that it is in fact useful to have an Id property in a business-layer supertype. It might even be useful if entities considered that Id when determining equality. Etc.
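For example, a minimal business-layer supertype along those lines might look like this (a sketch, not tied to any particular framework):
using System;

// Layer supertype: every business entity gets an Id and Id-based equality.
public abstract class EntityBase
{
    public virtual Guid Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as EntityBase;
        if (other == null || other.GetType() != GetType())
            return false;
        // Transient (unsaved) entities are only equal by reference.
        if (Id == Guid.Empty || other.Id == Guid.Empty)
            return ReferenceEquals(this, other);
        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}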
Now I do believe that Timores is right in principle, and that trying to write one application that fits all domains is both incredibly painful and totally fruitless, but I also believe that the art of the profession is knowing how to use a variety of tools and when to apply which one, and that some core infrastructure code should be in your tool belt.
For a good idea of a framework concept for a web app that has been road tested, take a look at SharpArch.
HTH,
Berryl
