MVC, ORM, and data access patterns - C#

I think I've hit that "paralysis by analysis" state.
I have an MVC app, using EF as an ORM.
So I'm trying to decide on the best data access pattern, and so far I'm thinking that putting all data access logic into controllers is the way to go... but it kinda doesn't sound right.
Another option is creating an external repository, handling data interactions.
Here's my pros/cons:
If I embed data access in controllers, I will end up with code like this:
using (DbContext db = new DbContext())
{
    User user = db.Users.Where(x => x.Name == "Bob").Single();
    user.Address.Street = "some st";
    db.SaveChanges();
}
So with this, I get the full benefits of lazy loading, I close the connection right after I'm done, and I'm flexible on the Where clause - all the niceties.
The con: I'm mixing a bunch of concerns in a single method - data checking, data access, UI interactions.
With a Repository, I'm externalizing data access, and in theory can just swap repositories if I decide to use ADO.NET or move to a different database.
But I don't see a good, clean way to get lazy loading, or to control the DbContext/connection lifetime.
Say I have an IRepository interface with CRUD methods - how would I load a list of addresses that belong to a given user? Methods like GetAddressListByUserId look ugly and wrong,
and force me to create a bunch of methods that are just as ugly and make little sense when using an ORM.
I'm sure this problem has been solved a million times, and I hope there's a solution somewhere...
And one more question on the repository pattern: how do you deal with objects that are properties? E.g. User has a list of addresses - how would you retrieve that list? Create a repository for Address? With an ORM, the Address object doesn't need a reference back to User, nor an Id field; with a repository it will have to have all of that. More code, more exposed properties.

The approach you choose depends a lot on the type of project you are going to be working with. For small projects where a Rapid Application Development (RAD) approach is required, it might be OK to use your EF model directly in the MVC project and have data access in the controllers, but the more the project grows, the messier it will become, and you will start running into more and more problems. If you want good design and maintainability, there are several different approaches, but in general you can stick to the following:
Keep your controllers and views clean. Controllers should only control the application flow and not contain data access or even business logic. Views should only be used for presentation - give them a ViewModel and they will present it as HTML (no business logic or calculations). A ViewModel per view is a pretty clean way of doing it.
A typical controller action would look like:
public ActionResult UpdateCompany(CompanyViewModel model)
{
    if (ModelState.IsValid)
    {
        Company company = SomeCompanyViewModelHelper
            .MapCompanyViewModelToDomainObject(model);
        companyService.UpdateCompany(company);
        return RedirectToRoute(/* Wherever you go after the company is updated */);
    }
    // Return the same view with highlighted errors
    return View(model);
}
For the aforementioned reasons, it is good to abstract your data access (testability, ease of switching the data provider or ORM, etc.). The Repository pattern is a good choice, but here too you have a few implementation options. There has always been a lot of discussion about generic vs. non-generic repositories, whether or not one should return IQueryables, and so on. But eventually it's for you to choose.
Btw, why do you want lazy loading? As a rule, you know exactly what data you require for a specific view, so why would you choose to fetch it in a deferred way, thus making extra database calls, instead of eager loading everything you need in one call? Personally, I think it's okay to have multiple Get methods for fetching objects with or without children. E.g.
public class CompanyRepository
{
    public Company Get(int id) { /* ... */ }
    public Company Get(string name) { /* ... */ }
    public Company GetWithEmployees(int id) { /* ... */ }
    // ...
}
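For instance, GetWithEmployees could be implemented with EF's eager loading. A minimal sketch, assuming a hypothetical MyDbContext with a Companies set, an Employees navigation property on Company, and System.Data.Entity imported for the lambda-based Include:
using System.Data.Entity; // lambda-based Include
using System.Linq;

public class CompanyRepository
{
    public Company GetWithEmployees(int id)
    {
        using (var db = new MyDbContext())
        {
            // Fetch the company and its employees in one query, so nothing
            // relies on lazy loading after the context is disposed.
            return db.Companies
                     .Include(c => c.Employees)
                     .SingleOrDefault(c => c.Id == id);
        }
    }
}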
It might seem a bit overkill and you may choose a different approach, but as long as you have a pattern you follow, maintaining the code is much easier.

Personally I do it this way:
I have an abstract Domain layer, which has not only CRUD methods but specialized ones, for example UsersManager.Authenticate(), etc. Internally it uses data access logic, or a data-access layer abstraction (depending on the level of abstraction I need to have).
It is always better to depend on an abstraction, at the very least. Some pros of it:
you can replace one implementation with another at a later time.
you can unit test your controller when needed.
As for the controller itself, give it two constructors: one taking an abstract domain access class (e.g. a facade of the domain), and another (empty) constructor which chooses the default implementation. This way your controller lives well both at web application run-time (calling the empty constructor) and during unit testing (with a mock domain layer injected).
Also, to be able to easily switch to another domain implementation later, be sure to inject the domain creator instead of the domain itself. By localizing the domain layer construction in the creator, you can switch to another implementation at any time by just reworking the creator (by creator I mean some kind of factory); see the sketch below.
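A minimal sketch of the two-constructor approach, assuming hypothetical IDomainFacade and DomainFactory types:
public class AccountController : Controller
{
    private readonly IDomainFacade domain;

    // Used at web application run-time: picks the default implementation
    // via the factory, keeping construction localized in one place.
    public AccountController()
        : this(DomainFactory.CreateDefault())
    {
    }

    // Used by unit tests: inject a mock/fake domain layer.
    public AccountController(IDomainFacade domain)
    {
        this.domain = domain;
    }

    public ActionResult Login(string user, string password)
    {
        // The controller only controls flow; the domain does the work.
        return domain.Authenticate(user, password)
            ? RedirectToAction("Index", "Home")
            : View();
    }
}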
I hope this helps.
Addition:
I would not recommend having CRUD methods in the domain layer, because this will become a nightmare once you reach the unit-testing phase, or even more so when you need to swap in a new implementation later.

It really comes down to where you want your code. Whether you put data access behind an IRepository object or in the controller doesn't matter much: you will still wind up with either a series of GetByXXX calls or the equivalent code. Either way you can lazy load and control the lifetime of the connection. So now you need to ask yourself: where do I want my code to live?
Personally, I would argue for getting it out of the controller - that is, moving it to another layer, probably using an IRepository type of pattern with a series of GetByXXX calls. Sure, they are ugly. Wrong? I would argue otherwise. At least they are all contained together within the same logical layer, rather than scattered throughout the controllers, mixed in with validation code and the like.


Right way to work with dbContext

Summary
This question asks for a methodology. The answer should be a link to the holy grail of working with contexts for the described scenario.
We have been experiencing different problems in our MVC web application project, related to the use of dbContext.
After reading many Q&A threads, blogs, and articles - including proposals involving repositories and injection patterns, OWIN, Entity Framework, and Ninject - we are still not clear about the right way to work with DbContexts.
Is there any article or demo showing "The Way" to do it in an application more complex than plain CRUD operations, with separation between MVC presentation / domain entities / logic / data access layers, including Identity security handling users and role permissions?
Description
Previously, our approach was to create dbContext objects when needed in each repository.
Soon we discovered errors like "dbContext is disposed", since the connection dies together with the repository function. This makes the retrieved objects only "partially available" to the upper layers of the app (via the .ToList() trick - limited, because we can access collections and attributes but cannot navigate further into child tables, and so on). Also, using two contexts from different repositories, we got an exception telling us that two contexts were trying to register changes to the same object.
Due to time pressure to deliver prototypes, we created a single static dbContext shared across the whole application, called from everywhere it is needed (controllers, models, logic, data access, database initializers). We are aware this is a very dirty workaround, but it has been working better than the previous approach.
Still we have problems: a dbContext can handle only one async method call at a time, and we can have many (e.g. userManager.FindByNameAsync - there are only async methods). Exception: "A second operation started on this context before a previous asynchronous operation completed".
We were thinking about creating the context as the very first step when an action is called in the controller, then carrying this object like a relay baton through every other layer or function called. That way the connection would live from the click in the browser until the response is loaded back into it. But we don't like the idea that every single function must take an extra "context" parameter just to share the connection across the layers for the entire operation route.
We are sure that we are not the first ones wondering about what is the right way to use contexts.
Application layers
We have these (logical) layers - different workspaces, but the same MVC web app project - from top to bottom:
Views: HTML + Razor + jQuery + CSS. Code here is restricted to layout, though some HTML might depend on the role. Method calls go to controllers only, plus utils (like formatting).
ViewModels: The data containers exchanged between controllers and views. Classes only define attributes, plus functions to convert to and from domain entities (translators).
Controllers: Actions called from the browser result in calls to functions in the Logic layer. Authentication here restricts access to actions or limits behavior inside an action. Controllers avoid using domain entities and work with ViewModels, so translation functions are called when communicating with the Logic layer.
Domain Entities: Used by the logic layer, and used by Entity Framework to create the database tables.
Logic Classes: Each domain entity has an EntityLogic class with all its operations. These are the core, holding all the rules that are common and abstracted from specific consumer clients (ViewModels are unknown here).
Repositories: For accessing the database. We are not sure we need this, since domain entities are already mapped to database objects by Entity Framework.
Typical scenario
The browser calls an action (POST) in the Products controller to edit a product. The ProductViewModel is used as container of the data.
The controller action is restricted to a set of roles. Inside the action, depending on the role, a different Logic function is called, and the ProductViewModel is translated to a ProductDomainEntity and passed as a parameter.
The EditProduct logic function calls other functions in different logic classes, and also uses localization and security to restrict or filter data. The logic may or may not call a repository to access the data (or use a global context for everything), and delivers the resulting domain entity collections back to the Logic layer.
Based on the results, the logic may or may not navigate the results' child collections. The results are given back to the controller action as a domain entity (or a collection of them), and depending on those results the controller may call more Logic, redirect to another action, or respond with a View, translating the results to the right ViewModel.
Where, when, and how should the dbContext be created to best support the whole operation?
UPDATE: All classes within the Logic layer are static. The methods are called from controllers simply like this:
UserLogic.GetCompanyUserRoles(user)
or
user.GetCompanyRoles()
where GetCompanyRoles() is an extension method for User implemented in UserLogic. Thus there are no instances of the Logic classes, which means no constructors to receive a dbContext for use inside their methods.
I want a static method inside a static class to know where to get the instance of the dbContext that is active for the current HttpRequest.
Could Ninject and OnePerRequestHttpModule help with this? Has anyone tried it?
I don't believe there is a "Holy Grail" or magic-bullet answer to this or any other problem with EF / DbContexts. Because of that, I also don't believe that there is one definitive answer to your question, and that any answers will be primarily opinion-based. However I have found personally that using a CQRS pattern rather than a repository pattern allows for more control and fewer problems when dealing with EF semantics and quirks. Here are a few links that you may (or may not) find helpful:
https://stackoverflow.com/a/21352268/304832
https://stackoverflow.com/a/21584605/304832
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92
http://github.com/danludwig/tripod
Some more direct answers:
...This makes the retrieved objects only "partially available" to the upper layers of the app (via the .ToList() trick - limited, because we can access collections and attributes but cannot navigate further into child tables, and so on). Also, using two contexts from different repositories, we got an exception telling us that two contexts were trying to register changes to the same object.
The solutions to these problems are to 1) eagerly load all of the child and navigation properties you will need when initially executing the query, instead of lazy loading them, and 2) only work with one DbContext instance per HTTP request (an inversion of control container can help with this).
Due to time pressure to deliver prototypes, we created a single static dbContext shared across the whole application, called from everywhere it is needed (controllers, models, logic, data access, database initializers). We are aware this is a very dirty workaround, but it has been working better than the previous approach.
This is actually much worse than a "dirty workaround", as you will start to see very strange and hard to debug errors when you have a static DbContext instance. I am very surprised to hear that this is working better than your previous approach, but it only points out that there are more problems with your previous approach if this one works better.
We were thinking about creating the context as the very first step when an action is called in the controller, then carrying this object like a relay baton through every other layer or function called. That way the connection would live from the click in the browser until the response is loaded back into it. But we don't like the idea that every single function must take an extra "context" parameter just to share the connection across the layers for the entire operation route.
This is what an Inversion of Control container can do for you, so that you don't have to keep passing around instances. If you register your DbContext as one instance per HTTP request, you can use the container (and constructor injection) to get at that instance without having to pass it around in method arguments (or worse).
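For example, with Ninject the per-request registration might look like this (a sketch, assuming the Ninject.Web.Common package, which provides InRequestScope, and a hypothetical AppDbContext):
using Ninject;
using Ninject.Web.Common;

public static class NinjectConfig
{
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();

        // One DbContext instance per HTTP request; Ninject disposes it
        // when the request ends.
        kernel.Bind<AppDbContext>().ToSelf().InRequestScope();

        return kernel;
    }
}
Repositories, query handlers, and the like then just take AppDbContext as a constructor argument, and everything within the same request shares one instance.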
ViewModels: The data containers exchanged between controllers and views. Classes only define attributes, plus functions to convert to and from domain entities (translators).
A little piece of advice: don't declare functions like this on your ViewModels. ViewModels should be dumb data containers, void of behavior - even translation behavior. Do the translation in your controllers, or in another layer (like a query layer). ViewModels can expose derived data properties computed from other data properties, but not behavior.
Logic Classes: Each domain entity has an EntityLogic class with all its operations. These are the core, holding all the rules that are common and abstracted from specific consumer clients (ViewModels are unknown here).
This could be a fault in your current design. Boiling all of your business rules and logic down into entity-specific classes can get messy, especially when dealing with repositories. What about business rules and logic that span entities or even aggregates? Which entity logic class would they belong to?
A CQRS approach pushes you out of this mode of thinking about rules and logic, and into a paradigm of thinking about use cases. Each "browser click" probably boils down to some use case that the user wants to invoke or consume. You can work out what the parameters of that use case are (for example, which child / navigation data to eager load) and then write one query handler or command handler to wrap the entire use case. When you find common subroutines that are part of more than one query or command, you can factor them out into extension methods, internal methods, or even other command and query handlers.
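A minimal CQRS-style sketch of one such use case (all names are hypothetical; assumes EF's System.Data.Entity for the Include extension):
using System.Data.Entity;
using System.Linq;

public class GetProductForEditQuery
{
    public int ProductId { get; set; }
    public bool IncludePriceHistory { get; set; }
}

public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

// One handler wraps the whole use case, including what to eager load.
public class GetProductForEditHandler
    : IQueryHandler<GetProductForEditQuery, ProductEditModel>
{
    private readonly AppDbContext db;

    public GetProductForEditHandler(AppDbContext db)
    {
        this.db = db;
    }

    public ProductEditModel Handle(GetProductForEditQuery query)
    {
        IQueryable<Product> products = db.Products;
        if (query.IncludePriceHistory)
            products = products.Include(p => p.PriceHistory);

        Product product = products.Single(p => p.Id == query.ProductId);
        return new ProductEditModel { /* map the fields the view needs */ };
    }
}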
If you are looking for a good place to start, I think that you will get the most bang for your buck by first learning how to properly use a good Inversion of Control container (like Ninject or SimpleInjector) to register your EF DbContext so that only 1 instance gets created for each HTTP request. This should help you avoid your disposal and multi-context exceptions at the very least.
I always use a BaseController that contains a dbContext and passes it to the logic functions (I call them extensions). That way you only use one context per call, and if something fails it will roll back.
Example:
Controller1 inherits BaseController
Controller1 now has access to the property db, which is a context
Controller1 contains an action "Action1"
Action1 calls the function "LogicFunctionX(db, value1, Membership.CurrentUserId, true)"
In Action1 you can call other logic functions, or even call them inside "LogicFunctionX", always passing the db property through the functions.
To save the context, I do it inside the controller (mostly), after calling all the logic functions.
Note: the true argument I pass to LogicFunctionX tells it whether to save the context inside the function or not, like:
if (save)
    db.SaveChanges();
I had several problems before doing this.
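A sketch of this setup (the AppDbContext, SomeLogic, and GetCurrentUserId names are hypothetical):
public abstract class BaseController : Controller
{
    // One context shared by everything in this request.
    protected readonly AppDbContext db = new AppDbContext();

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            db.Dispose(); // dies with the controller, i.e. with the request
        base.Dispose(disposing);
    }
}

public class Controller1 : BaseController
{
    public ActionResult Action1(int value1)
    {
        // Logic functions receive the shared context; the last argument
        // tells them whether to call SaveChanges themselves (see note above).
        SomeLogic.LogicFunctionX(db, value1, GetCurrentUserId(), false);
        db.SaveChanges(); // save once, after all the logic has run
        return View();
    }

    private int GetCurrentUserId()
    {
        // however you resolve the current user, e.g. via Membership
        return 0; // placeholder
    }
}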

Is it OK to have Entity Framework inside the Domain layer?

I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled, or changing the name of an object (Category, for example) needs to recursively update all of its children's properties (avoiding / ignoring these rules would result in creating invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
In order to do this, I need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to overcomplicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held this opinion too, but honestly, if you don't program to an abstraction it will become a pain when the project grows larger (a real pain).
IRepository interfaces also help you spread the work between different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example:
IRepository<File>.Upload()
You must be able to test each layer independently; tying them together leaves you with only integration tests, and a lot of bugs in the lower layers :))
First, I think this question is really opinion-based.
According to the Big Book, domain models must be separated from data access. Your domain has nothing to do with the manner in which the data is stored; it could be a simple text file or a cluster of MSSQL servers.
This must be decided based on the actual project. What is the size of the application?
The other huge question is: how many concurrent users hit the database, and how complex will your business logic be?
So if it's a complex project, or one that will presumably be modified frequently, or it has educational purposes, then you should keep the domain and data access separated, and define the repository interfaces in the domain model. Use a DI component (personally I like Ninject), and do not reference the data access component in the services.
And of course you should also create test projects, using mocking tools to test the layers separately.
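A small sketch of what such an isolated test could look like with Moq and xUnit (the repository, service, and entity names are hypothetical):
using Moq;
using Xunit;

public class UserServiceTests
{
    [Fact]
    public void Authenticate_returns_false_for_unknown_user()
    {
        // Fake the repository so no database is involved.
        var repo = new Mock<IUserRepository>();
        repo.Setup(r => r.GetByName("bob")).Returns((User)null);

        var service = new UserService(repo.Object);

        Assert.False(service.Authenticate("bob", "secret"));
    }
}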
Yes, this is wrong. If you are following Domain-Driven Design, you should not compromise your architecture for the sake of doing less work. Your data access and domain should be kept apart. I would strongly suggest implementing the Repository pattern, as it will allow you more flexibility in the long run.
There is of course no single right answer to what the right design is. I would, however, argue that EF is your data-layer abstraction; there is no way you're going to build anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with an in-process DB (SQLite / SQL Server Compact). IMHO, given the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
Here is a blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer

Best practice for database methods in MVC 4

I am pretty new to MVC 4, having worked mostly with Web Forms in C# up to this moment. I understand the MVC pattern, the routing, calling actions, and so on.
But what about the actions responsible for fetching data from the database, for example by firing stored procedures? I have seen some tutorials that put the logic for connecting to the database directly in the actions.
However, I am thinking of a more centralized way to do it. For example, I could put all the functions that fire stored procedures in a separate class named DatabaseCoordinator.cs, in a folder named Helpers for example. Then I could call them from the actions in the controllers.
That way I will know that I can find all of my database methods in one class, which I think is a very clean solution (or at least it was in Web Forms). However, I want to follow the MVC pattern and use only models, views, and controllers, as the name of the pattern itself implies.
So what is the best practice for that? Should I make a separate class for this, or implement the logic directly in the controllers, or perhaps somewhere else?
You should certainly make a separate repository class to contain all of your data access operations.
There is a good worked example here:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I recommend that you put your data access code somewhere other than in your controller. The controller's primary purpose is to gather together the information for display on a page or the reverse - to take the data from the page that is posted back and feed it to the code responsible for business rules and data access.
For most MVC projects (heck, for most projects really!) I build separate class library projects - at minimum one for business rules and data access, though typically I'll make those two separate projects. The purpose of separating the logic is really for simpler future maintenance and reusability. If you keep your various logical parts separate, you can easily swap them out if your logic or database needs to change, or you can easily consume the business rules and data from a new type of user interface; for example, if you decided to implement your project as a Windows forms application in addition to your web system, you could (theoretically) just reuse your business logic and data access logic libraries and only rebuild the user layer. However, if you build your logic into your controller, you really can't reuse that logic without extracting it and converting it to the new application model you're using.
So, simply put, definitely keep 99% of your logic and data access out of your controller. Put only what you must into your controller, and the rest in a separate class or, where appropriate, in separate class libraries.
Good luck!
The controllers and views tend to stay within the same project, but it's common to split the data access classes and models into their own separate class library, as this allows other projects to utilise them.
This will allow you, in the future, to perhaps add a Windows Forms/WPF interface or a mobile device interface, leveraging the work you already have in the standalone class library.
Another thing to consider is looking into how to use ViewModels in your MVC application. It's a common technique for when views require more than one domain object: Using View Models in MVC.
Check out the Unit of Work (UoW) pattern combined with the Repository pattern. It doesn't matter whether you ultimately call a stored procedure or an inline LINQ query to return results; your caller shouldn't know or care how GetPersons is ultimately implemented. The UoW pattern combined with the Repository pattern is a very popular way to expose an Entity Framework database in the ASP.NET community. You will find different ways to do it - some are overkill and some just create dependencies with no actual benefit - but you will find a way that feels right to you with those patterns.
After more experience, I would like to change my answer and state that the Repository pattern, and thus the Unit of Work pattern, are pointless layers of abstraction that keep you from working directly with Entity Framework, which is already your data-layer abstraction.
Other than being able to swap out databases from, say, Microsoft SQL Server to PostgreSQL (when would this ever happen in the real world?), and controlling the structure of complex queries that you don't want repeated in your code, I see no real value in the repository pattern. To stamp CreatedBy/ModifiedBy values on insert/update, you need only override SaveChanges on your Entity Framework context. To encapsulate queries that include business rules, such as where active = 1 and isdeleted = 0, just extend LINQ queries with extension methods.
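For instance, such a rule can be captured once as an extension method (a sketch, assuming a hypothetical Order entity with Active and IsDeleted flags):
using System.Linq;

public static class OrderQueryExtensions
{
    // "where active = 1 and isdeleted = 0" captured in one place.
    public static IQueryable<Order> WhereLive(this IQueryable<Order> orders)
    {
        return orders.Where(o => o.Active && !o.IsDeleted);
    }
}

// Usage: db.Orders.WhereLive().Where(o => o.CustomerId == id).ToList();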

Resolving a call-chain anti-pattern

I've begun to notice something of an anti-pattern in my ASP.NET development. It bothers me because it feels like the right thing to do to maintain good design, but at the same time it smells wrong.
The problem is this: we have a multi-layered application whose bottom layer is a class handling calls to a service that provides us with data. Above that is a layer of classes that possibly transform, manipulate, and check the data. Above that are the ASP.NET pages.
In many cases, the methods from the service layer don't need any changes before going to the view, so the model is just a straight pass-through, like:
public List<IData> GetData(int id, string filter, bool check)
{
    return DataService.GetData(id, filter, check);
}
It's not wrong, nor necessarily awful to work with, but it creates an odd kind of copy/paste dependency. I'm also working on the underlying service, which replicates this pattern a lot, and there are interfaces throughout. So what happens is: "I need to add int someotherID to GetData." So I add it to the model, the service caller, the service itself, and the interfaces. It doesn't help that GetData actually stands for several methods that all use the same signature but return different information. The interfaces help a bit with that repetition, but it still crops up here and there.
Is there a name for this anti-pattern? Is there a fix, or is a major change to the architecture the only real way out? It sounds like I need to flatten my object model, but sometimes the data layer is doing transformations, so it has value. I also like keeping my code separated between "calls an outside service" and "supplies page data."
I would suggest you use the query object pattern to resolve this. Basically, your service could have a signature like:
IEnumerable<IData> GetData(IQuery<IData> query);
Inside the IQuery interface, you could have a method that takes a unit of work as input - for example a transaction context, or something like ISession if you are using an ORM such as NHibernate - and returns a list of IData objects.
public interface IQuery<T>
{
    IEnumerable<T> DoQuery(IUnitOfWork unitOfWork);
}
This way, you can create strongly typed query objects that match your requirements, and have a clean interface for your services. This article from Ayende makes good reading about the subject.
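A sketch of one such strongly typed query object (the Query<T>() member on IUnitOfWork and the Name property on IData are assumptions for illustration):
using System.Collections.Generic;
using System.Linq;

public class DataByNameFilterQuery : IQuery<IData>
{
    private readonly string filter;

    public DataByNameFilterQuery(string filter)
    {
        this.filter = filter;
    }

    public IEnumerable<IData> DoQuery(IUnitOfWork unitOfWork)
    {
        // The unit of work exposes the underlying session/context;
        // the query object owns the filtering logic.
        return unitOfWork.Query<IData>()
                         .Where(d => d.Name.Contains(filter))
                         .ToList();
    }
}

// Usage: var results = service.GetData(new DataByNameFilterQuery("foo"));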
Sounds to me like you need another interface, so that the method becomes something like:
public List<IData> GetData(IDataRequest request)
You're delegating to another layer, and it's not necessarily a bad thing at all.
You could add some other logic here, or in another method down the line, that belongs only in this layer; or you could swap the delegated-to layer out for another implementation. So it certainly can be a perfectly good use of the layers in question.
You may have too many layers, but I wouldn't say so just from seeing this, more from not seeing anything else.
From what you've described, it simply sounds like you have encountered one of the trade-offs of abstraction in your application.
Consider the case where those call-chains no longer pass the data through but require some transformation. It might not be needed now, and certainly a case can be made for YAGNI.
However, in this case it doesn't seem like too much tech debt to carry, and it has the positive side effect of letting you easily introduce changes to the data between layers.
I use this pattern as well; however, I use it for the purpose of decoupling my domain model objects from my data objects.
In my case, instead of "passing through" the object coming from the data layer as you do in your example, I "map" it to another object that lives in my domain layer. I use AutoMapper to take the pain out of doing this manually.
In most cases my domain object looks exactly the same as the data object it originated from. However, there are times when I need to flatten information coming from my data object, or I may not be interested in everything in it, etc. So I map the data object to a customized domain object that only holds the fields my domain layer is interested in.
This also has the side effect that when I decide to refactor or replace my data layer with something else, it does not affect my domain objects, since they are decoupled by the mapping.
Here is a description of AutoMapper, which captures what this approach tries to achieve, I think:
AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer
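A minimal mapping sketch (the UserData/User classes are hypothetical; uses AutoMapper's MapperConfiguration API, available since 4.2):
using AutoMapper;

public static class UserMapping
{
    // In real code, build the configuration once at application startup.
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        // Same-named properties map automatically; flattening is configured.
        cfg.CreateMap<UserData, User>()
           .ForMember(d => d.City, o => o.MapFrom(s => s.Address.City));
    }).CreateMapper();

    public static User ToDomain(UserData data)
    {
        return Mapper.Map<User>(data);
    }
}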
Actually, the way you have chosen to go is the reason you have what you have (I am not saying it is bad).
First, let me say your approach is quite normal.
Now, let me go through your layers:
Your service provides a somewhat strongly-typed access model: it takes specific types of arguments, uses them in specific types of methods, and returns specific types of results.
Your service-access layer provides the same kind of model: special kinds of arguments for special kinds of methods, returning special kinds of results.
etc...
To avoid confusion, here is what I call a special kind:
public UserEntity GetUserByID(int userEntityID);
In this example you must pass exactly an int, calling exactly GetUserByID, and it will return exactly a UserEntity object.
Now another kind of approach:
Remember how SqlDataReader works? Not very strongly typed, right?
What you are calling for here, in my opinion, is a not-strongly-typed layer that is currently missing.
For that to happen, you need to switch from strongly typed to non-strongly typed somewhere in your layers.
Example:
public Entity SelectByID(IEntityID id);
public Entity SelectAll();
So, if you had something like this instead of the service access layer, you could call it with whatever arguments you wanted.
But that is almost creating an ORM of your own, so I would not say it is the best way to go.
It's essential to define what kind of responsibility goes to which layer, and place such logic only in the layer it belongs to.
It's absolutely normal to just pass through if you don't have to add any logic in a particular method. At some point you might need to, and the abstraction layer will pay off at that moment.
It's even better to have parallel hierarchies - not just passing the underlying layer's objects up - so each layer uses its own class hierarchy, and you can employ something like AutoMapper if you feel there's not much difference between the hierarchies. This gives you flexibility, and you can always replace automapping with custom mapping code in particular methods/classes if the hierarchies no longer match.
If you have many methods with almost the same signature, you should look at the Query Specification pattern.
IData GetData(IQuery<IData> query)
Then, in the presentation layer, you can implement a data binder for your custom query specification objects, where a single ASP.NET handler could create the specific query objects and pass them to a single service method, which passes them to a single repository method, where they can be dispatched according to the specific query class, possibly with a Visitor pattern.
IQuery<IData> BindRequest(IHttpRequest request)
With this, together with automapping and the Query Specification pattern, you can reduce duplication to a minimum.

Enterprise Design Pattern Question

Something on my mind about structuring a system at a high level.
Let's say you have a system with the following layers:
UI
Service Layer
Domain Model
Data Access
The service layer is used to populate a graph of objects in the domain model. In an attempt to avoid coupling, the domain model will not be persistence-aware and will not have any dependencies on the data access layer.
However, using this approach, how would one object in the domain model be able to call other objects without loading them through persistence - thus coupling everything together, which is exactly what I'd be trying to avoid?
E.g. an Order object would need to check an Inventory object, and would obviously need to tell the Inventory object to load in some way, or populate it somehow.
Any thoughts?
You could inject any dependencies from the service layer, including populated object graphs.
I would also add that a repository can be a dependency - if you have declared an interface for the repository, you can code to it without adding any coupling.
One way of doing this is to have a mapping layer between the Data Layer and the domain model.
Have a look at the mapping, repository and facade patterns.
The basic idea is that on one side you have data access objects and on the other you have domain objects.
To decouple you have to: "Program to an 'interface', not an 'implementation'." (Gang of Four 1995:18)
Here are some links on the subject:
Gamma interview on patterns
Random blog article
Googling for "Program to an interface, not an implementation" will yield many useful resources.
Have the domain model layer define interfaces for the methods you'll need to call, and POCOs for the objects that need to be returned by those methods. The data layer can then implement those interfaces by pulling data out of your data store and mapping it into the domain model POCOs.
Any domain-level class that requires a particular data access service can then just depend on the interface via constructor arguments, and you can leverage a dependency injection framework to build the dependency graph and provide the correct implementations of your interfaces wherever they are required.
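A sketch using the question's Order/Inventory example (all names here are illustrative assumptions):
// Defined in the domain model project:
public interface IInventoryProvider
{
    Inventory GetForProduct(int productId);
}

public class Inventory
{
    public int QuantityOnHand { get; set; }
}

public class Order
{
    private readonly IInventoryProvider inventory;

    public Order(IInventoryProvider inventory)
    {
        this.inventory = inventory;
    }

    public bool CanFulfill(int productId, int quantity)
    {
        // The domain object talks to an interface; it never knows whether
        // the implementation uses EF, ADO.NET, or an in-memory test fake.
        return inventory.GetForProduct(productId).QuantityOnHand >= quantity;
    }
}

// The data access layer implements IInventoryProvider, and a DI container
// supplies it to Order wherever one is constructed.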
Before writing tons of code in order to separate everything you might want to ask yourself a few questions:
Is the domain model truly separate from the DAL? And yes, I'm serious, and you should think about this, because it is exceedingly rare for an RDBMS to actually be swapped out in favor of a different one on an existing project. Quite frankly, it is much more common for the language the app was written in to be replaced than the database itself.
What exactly is this separation buying you? And, just as important, what are you losing? Separation of Concerns (SoC) is a nice term that is thrown about quite a bit. However, most people rarely articulate why they are concerned with the separation to begin with.
I bring these up because, more often than not, applications can benefit from tighter coupling to the underlying data model. Never mind that most ORMs almost enforce tight coupling by the nature of code generation. I've seen lots of supposedly SoC projects come to a crash during testing because someone added a field to a table and the DAL wasn't regenerated... This kind of defeats the purpose, IMHO.
Another factor is where the business logic should live. No doubt there are strong arguments in favor of putting large swaths of BL in the database itself. At the same time, there are cases where the BL needs to live in or very near your domain classes. With BL spread in such a way, can you truly separate the two anyway? Even those who hate the idea of putting BL in a database fall back on using identity keys and letting the DB enforce referential integrity, which is also business logic.
Without knowing more, I would suggest you consider flattening the Data Access and Domain Model layers. You could move to a "provider" or "factory" type architecture in which the service layer itself doesn't care about the underlying access, but the factory handles it all. Just some radical food for thought.
You should take a look at Martin Fowler's Repository and Unit of Work patterns for how to use interfaces in your system.
So far I have seen that applications can be layered well into three layers - Presentation --> Logic --> Data - plus Entities (or Business Objects). In the Logic layer you can use a pattern such as Transaction Script or Domain Model; I'm supposing you're using the latter. The domain model can use a Data Mapper to interact with the data layer and create business objects, but you can also use the Table Module pattern.
All these patterns are described in Martin Fowler's Patterns of Enterprise Application Architecture book. Personally I use Transaction Script because it is simpler than Domain Model.
One solution is to make your Data Access layer subclass your domain entities (using Castle DynamicProxy, for example) and inject itself into the derived instances that it returns.
That way, your domain entity classes remain persistence-ignorant while the instances your applications use can still hit databases to lazy-load secondary data.
Having said that, this approach typically requires you to make a few concessions to your ORM's architecture, like marking certain methods virtual, adding otherwise unnecessary default constructors, and so on.
Moreover, it's often unnecessary: especially for line-of-business applications that don't have onerous performance requirements, you can consider eagerly loading all the relevant data - just bring the inventory items up with the order.
I felt this was different enough from my previous answer, so here's a new one.
Another approach is to leverage the concept of Inversion of Control (IoC). Build an interface that your data access layer implements. Each of the DAL methods should take a list of parameters and return a DataTable.
The service layer would instantiate the DAL through the interface and pass that reference to your domain model. The domain model would then make its own calls into the DAL, using the interface methods, and decide when it needs to load child objects or whatever.
Something like:
interface IDBModel {
    DataTable LoadUser(Int32 userId);
}

class MyDbModel : IDBModel {
    public DataTable LoadUser(Int32 userId) {
        // make the appropriate DB calls here and return a data table
        return new DataTable(); // placeholder for the real DB access
    }
}

class User {
    public User(IDBModel dbModel, Int32 userId) {
        DataTable data = dbModel.LoadUser(userId);
        // assign properties... load any additional data as necessary
    }

    // You can do cool things like call User.Save()
    // and have the object validate and save itself to the passed-in
    // data model. Makes for simpler coding.
}

class MyServiceLayer {
    public User GetUser(Int32 userId) {
        IDBModel model = new MyDbModel();
        return new User(model, userId);
    }
}
With this mechanism, you can actually swap out your DB models on demand. For example, if you decide to support multiple databases, you can have code specific to a particular database vendor's way of doing things and just have the service layer pick which one to use.
The domain objects themselves are responsible for loading their own data, and you can keep any necessary business logic within the domain model. Another point is that the domain model doesn't have a direct dependency on the data layer, which preserves your ability to mock it for independent testing of business logic.
Further, the DAL has no knowledge of the domain objects, so you can swap those out as necessary or even just test the DAL independently.
