.NET Entity Framework project layout (architecture) - C#

I'm trying to determine how best to architect a .NET Entity Framework project to achieve a nice layered approach. So far I've tried it out in a browser-based game where the players own and operate planets. Here's how I've got it:
Web Site
This contains all the front end.
C# Project - MLS.Game.Data
This contains the EDMX file with all my data mappings. Not much else here.
C# Project - MLS.Game.Business
This contains various classes that I call 'Managers', such as PlanetManager.cs. The planet manager has various static methods that are used to interact with the planet, such as getPlanet(int planetID), which returns a generated-code object from MLS.Game.Data.
From the website, I'll do something like this:
var planet = PlanetManager.getPlanet(1);
It returns a Planet object from MLS.Game.Data (generated from the EDMX). It works, but it bothers me to a degree because it means that my front end has to reference MLS.Game.Data. I've always felt that the GUI should only need to reference the Business project, though.
In addition, I've found that my Manager classes tend to get very heavy. I'll end up with dozens of static methods in them.
So... my question is - how does everyone else lay out their ASP EF projects?
EDIT
After some more thought, there are additional items which bother me. For example, let's say I have my Planet object, which again is generated code from the wizard. What if a time came that my Planet needed a specialized property, say "Population", which is a calculation of some sort based on other properties of the Planet object. Would I want to create a new class that inherits from Planet and then return that instead? (Hmm, I wonder if those classes are sealed by EF?)
Thanks

You could try the following to improve things:
Use EF to fetch DTOs in your Data layer, then use these DTOs to populate richer business objects in your Business layer. Your UI would only then need to reference the Business layer.
Once you have created the rich business objects, you can begin to internalise some of the logic from the manager classes, effectively cleaning up the business layer.
I personally prefer the richer model over the manager model because, as you say, you end up with a load of static methods, which you inevitably end up chaining together inside other static methods. I find this messy and, more importantly, it makes it harder to understand and guarantee the consistency of your objects at any given point in time.
If you encapsulate the logic within the class itself you can be more certain of the state of your object regardless of the nature of the external caller.
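As a rough sketch of the idea (the DTO shape and the CanColonize rule below are made up for illustration, not taken from your project), the Business layer might wrap what the Data layer returns like this:
// Data layer DTO - a dumb carrier of values (hypothetical shape).
public class PlanetDto
{
    public int PlanetID { get; set; }
    public string Name { get; set; }
    public int Resources { get; set; }
}
// Business layer - the only type the UI ever sees.
public class Planet
{
    private readonly PlanetDto _dto;
    public Planet(PlanetDto dto)
    {
        _dto = dto;
    }
    public int PlanetID { get { return _dto.PlanetID; } }
    public string Name { get { return _dto.Name; } }
    // Logic that used to live in a static manager method now lives on the object.
    public bool CanColonize(int requiredResources)
    {
        return _dto.Resources >= requiredResources;
    }
}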
A good question by the way.

IMHO, your current layout is fine. It's perfectly normal for your UI to reference the 'Data' layer as you are calling it. I think that perhaps your concern is arising due to the terminology. The 'Data' layer you have described is more often referred to as a 'business objects' (BOL) layer. A common layout would then be to have a business logic layer (BLL), which is your 'Managers' layer, and a data access layer (DAL). In your scenario, LINQ to Entities (presuming you will use that) is your DAL. A normal reference pattern would then be:-
UI references BLL and BOL.
BLL references BOL and DAL (LINQ to Entities).
Have a look at this series of articles for more detail.

As for your second question (after the EDIT): if you need or want to add features to your EF objects, you can use partial classes. Right-click the EDMX file and select View Code.
Or if this isn't enough, you can ditch the designer and write your own EF-enabled classes.
There is a (brief) discussion of both options here -
http://msdn.microsoft.com/en-us/library/bb738612.aspx

As for your second question in the "EDIT" section:
If I'm not mistaken, the classes generated by EF are not sealed, and they are PARTIAL classes, so you could easily extend those without touching the generated files themselves.
The generated class will be:
public partial class Planet : global::System.Data.Objects.DataClasses.EntityObject
{
...
}
so you can easily create your own "PlanetAddons.cs" (or whatever you want to call it) to extend this class:
public partial class Planet
{
    public int Population { get; set; }
    ...
}
Pretty neat, eh? No need to derive and create artificial object hierarchies....
Marc

I'm no expert, but that sounds pretty good to me. That's similar to what I have in my solutions, except I just merge the EF project with the business project. My solutions aren't that big, and my objects don't require a lot of intelligence, so it's fine for me. I too have a ton of different methods for each of my static helper classes.
If you don't want the presentation layer knowing about the data access layer, then you have to create some intermediary classes, which would probably be a lot of work. So what's the problem with your current setup?

Your layout looks ok.
I would have added a Utility/Common layer
Web UI
Business Layer
Dataobjects
Utilities layer

I would add DTOs to your Business layer that are "dumb object" representations (i.e. only properties) of the Entities in your data layer. Then your "Manager" classes can return them, such as:
class PlanetManager
{
public static PlanetDTO GetPlanet(int id) { /* ... */ }
}
and your UI then deals only with the BLL layer via POCOs; the Manager (what I would call a "Mapper" class) handles all the translation between the objects and the data layer. Also, if you need to extend the class, you can have a "virtual" property on the DTO object and have the Manager translate that back to its components.
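A loose sketch of that shape (the entity set, property names and the Population calculation below are placeholders, not your actual generated model):
using System.Linq;
// Dumb DTO handed to the UI - properties only.
public class PlanetDTO
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Population { get; set; } // "virtual" property, calculated by the mapper
}
public static class PlanetManager
{
    public static PlanetDTO GetPlanet(int id)
    {
        using (var context = new GameEntities()) // hypothetical EF context name
        {
            var planet = context.Planets.First(p => p.PlanetID == id);
            return new PlanetDTO
            {
                Id = planet.PlanetID,
                Name = planet.Name,
                Population = planet.Cities * planet.CitizensPerCity // illustrative calculation
            };
        }
    }
}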

Related

DAL, Model Layer, EF code-first and business logic, how do they fit together?

Bit by bit, I am understanding and becoming more proficient with ASP.NET MVC4. However, I'm still failing to be clear in my mind concerning this very important point. Let us say that I have a model:
public class WorkPaper
{
public int WorkPaperID { get; set; }
public string name { get; set; }
public string description { get; set; }
public bool verified { get; set; }
public DateTime dateAdded { get; set; }
}
I use this model with the Entity Framework code-first approach and I have the following DbContext:
public class NicodemeContext : DbContext
{
public DbSet<WorkPaper> Workpapers { get; set; }
}
What I don't understand is: what is the Model Layer and what is the Data Access Layer? To me the WorkPaper class tends to be part of the DAL, since I designed it and chose my property names and types (navigation properties etc...) to fit the EF pattern. But the problem is that, if I'm right, I really don't see where I should put the business logic.
In that specific case, if I want to add a rule saying that a WorkPaper cannot be sent if it hasn't been verified, and another rule saying that if it was added more than 2 weeks ago it can't be submitted (strange rules, but that's just an example) - where should I add them? Do I have to create another class? But then where should I put it and what should it contain? Isn't it going to be very redundant with the class I already have? On the other hand, since that class is "database oriented", wouldn't it be messy to add business rules there?
My point is really in understanding where I have to put my business logic and what the difference is between the DAL and the Model. I have difficulty identifying the DAL and the Model Layer since they look very similar to me. I would like to do things the correct way.
(Basically, I don't know where to code "my program" with MVC! I'm not comfortable as soon as I want to code the functionality I'm actually interested in. I feel like I'm not doing things right.)
There is no single "right" way to do this. There are probably a number of "wrong" ways, but as with most things much of it is "it depends".
You will find a lot of opinions on this. And much of that depends on how you are approaching it. For instance, @Tarzan suggests using your model for both business logic and data layer entities. Others suggest creating separate entity objects, business objects, and view model objects.
If you're making a relatively simple, CRUD-type application then you can probably do as @Tarzan suggests and have few problems. However, in more complex applications you start to run into problems doing it this way. For instance, what happens when your business logic is different from your data logic? If you combine them into a single set of entities, then you are forced to make the logic the same everywhere, or spend a lot of time retrofitting.
The most flexible approach is to keep your Presentation, Business, and Data layers completely separate. In this way, the requirements of (for example) the UI don't need to match the other layers. (Here's a simple example, imagine that for some kinds of objects, a particular field is allowed to be null, but not for others. Your data layer would only know that the field is nullable, while your business layer would know that certain objects can't be null).
However, that approach can have a lot of overhead, and require what seems like duplicate code in each layer... creating a lot of extra effort, which is often times not worth it for small or simple applications.
One key thing to remember is that MVC is a presentation pattern. That means it ONLY concerns itself with the user interface. The "model" is typically considered to be a "view model", which is the model the view uses. This model is customized for the view's requirements - for instance, by using data attributes that you would not put on a data model.
If you consider MVC to be strictly presentational, then MVC doesn't care what kind of data access you're using, nor does it care what kind of business layer you're using. It could be a service layer, or a repository, or a business objects library (such as CSLA).
A common pattern in MVC applications is to use a service layer of some type, or a repository if you're simply doing CRUD operations. There is typically some kind of mapping system between each layer, using technologies like AutoMapper or custom-built mapping classes.
The point here is simply this. MVC doesn't concern itself with business and data, so don't get yourself all worked up over how these fit into an MVC application. They don't, or at least they don't other than with very basic interfacing.
Your application is a collection of objects, in what can be considered an architecture. This architecture is made up of different layers: MVC, Service, Data, etc. MVC may be the primary user interface, but that doesn't mean everything else is also MVC.
You can put the NicodemeContext class in the DAL and the WorkPaper class in the Model Layer.
The Model Layer is where you define your business domain. This is where your business logic goes.
The Data Access Layer is where you read and write to the database and populate the objects from your Model Layer.
A useful technique is to use partial classes in the Model. In one partial class keep everything that might be written by a code generator and everything handwritten in the other partial class. For example, if you use a tool to generate a partial WorkPaper class it will usually contain all of the properties and associations. In the handwritten partial class you can place your business logic. If something changes with your domain, this allows you to run the class generator again and again without fear of losing your business logic. I know that you said you are using code-first but still, partial classes can be useful.
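A bare-bones sketch of that split, using the two rules from the question (the existing WorkPaper declaration would also need the partial keyword, and CanBeSubmitted is just an illustrative name):
using System;
// Handwritten half of the class - business rules only, no persistence concerns.
public partial class WorkPaper
{
    public bool CanBeSubmitted()
    {
        // Rule 1: a WorkPaper cannot be sent if it hasn't been verified.
        if (!verified)
            return false;
        // Rule 2: it can't be submitted if it was added more than 2 weeks ago.
        if (dateAdded < DateTime.Now.AddDays(-14))
            return false;
        return true;
    }
}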
Models are where you put business logic; they communicate with the Data Access Layer, which saves and retrieves data. Your views are for presenting the user interface. The controllers are for passing information between views and models. It takes a while to get the feel of MVC, but you are asking the right questions and are on the right track.
Hope this helps. Cheers.

Is it reasonable to wrap a domain's BLL into its own Class Library, and how would I set it up?

I have heard of putting your DAL into a Class Library loads of times. It makes sense, and I want to do it so that I can reduce code duplication across applications. I have decided to use Entity Framework to build that DAL.
From what I understand about N-tiered applications, however, the DAL is really just going to expose POCO classes, which I am going to treat like DTOs. To make that easy, my DAL.dll is going to expose classes like EmployeeDto and methods like GetEmployeeDtoByID(int ID), just to make it clear that this layer does not produce final domain models.
But what if I want a reusable DLL that produces finalized domain models? It sure would be nice to be able to create a new project, add the CompanyBLL.dll reference, and start calling GetAllEmployees, knowing that the classes exposed here are true representations of the domain models for those objects. Essentially each project I make just becomes a new presentation layer for different tools that are needed.
I know that these are personal choices that I should make as I start deploying N-tiered applications, but is that a reasonable goal? If so, do I make the DAL.dll and only ever reference it as I build my BLL Class Library? Or would it just make more sense to combine them into one single Class Library?
I'm just not sure if I'm crazy for not wanting to rebuild my Business Logic for each application that I make, and if this is the correct way to do it.
I would recommend this architecture.
Presentation -> BLL -> Repository -> EF -> Db
Your repository returns collections of your entities that EF is responsible for.
The responsibility of your repository is to perform CRUD operations. Internally it uses EF but your BLL or presentation need not know about this.
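A minimal sketch of that arrangement (Employee, CompanyContext and the member names are placeholders); the repository is the only piece that knows EF exists:
using System.Collections.Generic;
using System.Linq;
// Repository - wraps EF, exposes simple CRUD.
public interface IEmployeeRepository
{
    Employee GetById(int id);
    List<Employee> GetAll();
    void Add(Employee employee);
    void Save();
}
public class EmployeeRepository : IEmployeeRepository
{
    private readonly CompanyContext _context = new CompanyContext(); // hypothetical DbContext
    public Employee GetById(int id) { return _context.Employees.Find(id); }
    public List<Employee> GetAll() { return _context.Employees.ToList(); }
    public void Add(Employee employee) { _context.Employees.Add(employee); }
    public void Save() { _context.SaveChanges(); }
}
// BLL - what the presentation layer references; it never sees EF.
public class EmployeeService
{
    private readonly IEmployeeRepository _repository;
    public EmployeeService(IEmployeeRepository repository)
    {
        _repository = repository;
    }
    public List<Employee> GetAllEmployees()
    {
        return _repository.GetAll();
    }
}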

ASP.NET 3-Tier / 3-Layer architecture - how to separate UI and BLL

I am currently studying towards my final year of a Computer Science degree, and working on my final project and dissertation. I will be using ASP.NET Web Forms and C# to create a 3-Layer project - I can't really call it 3-Tier as it will most likely never be hosted on anything other than my local PC for testing as it is for uni purposes only.
My main question is this:
From my understanding, the idea of 3-Layer is that the BLL references the DAL, and the UI references the BLL, to create complete separation of concerns. However, I have made a small mock-up project following a few tutorials to get the hang of 3-Layer, and most basic tutorials still require a reference between the UI and BLL.
For example, in the project I have created, which is a very basic Products and Categories type e-commerce system, I have created the Product and ProductDAL classes in the DAL, then the ProductBLL class in the BLL. With this setup, only using one database table (forget categories for now), the BLL seems to only serve as a sort of interface between the UI and DAL; the methods are exactly the same as those in the DAL and only call the DAL version.
The problem is that to access the DAL via the BLL, I have to pass in a Product object to the BLL method arguments, which means creating a Product object in the UI first, which means referencing the DAL from the UI. Is this the correct way of doing things?
I can get around simple cases like this by creating a method in the BLL that takes the appropriate fields, e.g. strings and ints, to create the Product object and pass it to the AddProduct method. However, when it comes to binding different product attributes to labels in the UI, I still need access to the Product object.
So essentially, do I need to make a load of methods in the BLL to access properties of the Product Object? If not, what kind of methods would actually go there, can you give me any examples of methods that may go in the BLL in this kind of Product scenario?
Thanks in advance, and apologies if this has been asked before - I did read through a lot of posts about 3-Layer architecture but most are very basic and only access one table.
the BLL seems to only serve as a sort of interface between the UI and DAL
This is only because this application is very simple - just a CRUD interface at the moment. More complex applications have business rules that would be encapsulated in the BLL (and not be in the UI or DAL).
I have to pass in a Product object to the BLL method arguments, which means creating a Product object in the UI first, which means referencing the DAL from the UI. Is this the correct way of doing things?
Well, there are several different options here:
You can have a Product object that is shared between the different layers. This object is not a DAL object, but the DAL uses it - it is called a DTO (Data Transfer Object). A rough sketch of this option follows after this list.
You can have several different Product objects - one to be consumed by the UI, one by the BLL and one by the DAL - and have mapping layers to translate between the different objects.
Some combination of the above.
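Here is the sketch of the first option, mapped onto the Product example from the question (the project name and the Insert method are illustrative, not prescribed):
// Shared entities project (e.g. MyShop.Entities) - referenced by UI, BLL and DAL alike.
public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}
// BLL - the only layer the UI references.
public class ProductBLL
{
    private readonly ProductDAL _dal = new ProductDAL();
    public void AddProduct(Product product)
    {
        // business rules / validation go here before persisting
        _dal.Insert(product);
    }
}
// DAL - persistence only.
public class ProductDAL
{
    public void Insert(Product product)
    {
        // database access code here
    }
}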
A common way of separating concerns is to start by having a project called YourProject.Entities or something of the sort. This contains the main class definitions, and you reference it when you need to work with a large entity like a customer or a product. Alongside, you have another project which acts as a repository. Depending on the technology that you are using, this can either implement something like EF to get your objects from your DB or contain methods which query your DB directly using straight SQL or stored procedures.
What you have to keep in mind is that these projects are primarily going to function based on user input. Your users will act and your program will respond. The idea though is that the actual business logic is separated from your UI and your data access. You can mix and match these ideas as you wish, but what I have tended to see in my professional experience is basic data constraint enforcement done on the DB access side of things, and data validation either done directly when creating your objects in the Entities project or in a separate EntitiesValidation project which takes entities as a parameter.
If you don't want to have a separate validation project, something to keep in mind is that you can implement business logic directly in objects using constructors and properties. Constructors can enforce logic on inputs before creating objects, and using full properties--that is to say this...
private string myProp;

public string MyProp
{
    get
    {
        // Some code (e.g. formatting) before returning the backing field
        return myProp;
    }
    set
    {
        // Some code (e.g. validation) before assigning the backing field
        myProp = value;
    }
}
instead of this...
public string MyProp { get; set; }
allows you to implement rules when accessing the data associated with those properties.
In the end, these questions can be answered many different ways and I am sure that every response to this question will give you different ideas and opinions on the best way to do things. For me, the two rules I always follow are DRY (do not repeat yourself) and code maintainability. By separating logic from data access from object design from UI, you will have a much easier time maintaining and updating your program when that time comes... even if it is just a school project ;).

In a layered architecture using Entity Framework, should I return POCO classes from the BLL? (Architecture guidance needed)

I've been reading too much probably and am suffering from some information overload. So I would appreciate some explicit guidance.
From what I've gathered, I can use VS2010's T4 template thingy to generate POCO classes that aren't tied directly to the EF. I would place these in their own project while my DAL would have an ObjectContext-derived class, right?
Once I have these classes, is it acceptable practice to use them in the UI layer? That is, say one of the generated classes is BookInfo that holds stuff about books for a public library (Title, edition, pages, summary etc.).
My BLL would contain a class BooksBLL for example like so:
public class BooksBLL
{
ObjectContext _context;
public void AddBook(BookInfo book) { ... }
public void DeleteBook(int bookID) { ... }
public void UpdateBook(int bookID, BookInfo newBook) { ... }
//Advanced search taking possibly all fields into consideration
public List<BookInfo> ResolveSearch(Func<BookInfo, bool> filter) { ... }
//etc...
}
So, my ViewModels in my MVVM UI app will be communicating with the above BLL class and exchanging BookInfo instances. Is that okay?
Furthermore, MVVM posts on the Web suggest implementing IDataErrorInfo for validation purposes. Is it okay if I implement said interface on the generated POCO class? I see from samples that those generated POCO classes contain all virtual properties and such, and I hope adding my own logic would be okay?
If it makes any difference, at present, my app does not use WCF (or any networking stuff).
Also, if you see something terribly wrong with the way I'm trying to build my BLL, please feel free to offer help in that area too.
Update (Additional info as requested):
I'm trying to create a library automation application. It is not network based at present.
I am thinking about having layers as follows:
A project consisting of generated POCO classes (BookInfo, Author, Member, Publisher, Contact etc.)
A project with the ObjectContext-derived class (DAL?)
A Business Logic Layer with classes like the one I mentioned above (BooksBLL, AuthorsBLL etc)
A WPF UI layer using the MVVM pattern. (Hence my sub-question about IDataErrorInfo implementation).
So I'm wondering about stuff like using an instance of BooksBLL in a ViewModel class, calling ResolveSearch() on it to obtain a List<BookInfo> and presenting it... that is, using the POCO classes everywhere.
Or should I have additional classes that mirror the POCO classes exposed from my BLL?
If any more detail is needed, please ask.
What you're doing is basically the Repository pattern, for which Entity Framework and POCO are a great fit.
So, my ViewModels in my MVVM UI app will be communicating with the above BLL class and exchanging BookInfo instances. Is that okay?
That's exactly what POCO objects are for; there's no difference between the classes that are generated and how you would write them by hand. It's your ObjectContext that encapsulates all the logic around persisting any changes back to the database, and that's not directly exposed to your UI.
I'm not personally familiar with IDataErrorInfo but if right now your entities will only be used in this single app, I don't see any reason not to put it directly in the generated classes. Adding it to the T4 template would be ideal if that's possible, it would save you having to code it by hand for every class if the error messages follow any logical pattern.
Also, if you see something terribly wrong with the way I'm trying to build my BLL, please feel free to offer help in that area too.
This isn't terribly wrong by any means, but if you plan to write unit tests against your BLL (which I would recommend), you will want to change your ObjectContext member to IObjectContext. That way you can substitute any class implementing the IObjectContext interface at runtime (such as your actual ObjectContext), which will allow you to do testing against an in-memory (i.e. mocked) context and not have to hit the database.
Similarly, think about replacing your List<BookInfo> with an interface of some kind such as IList<BookInfo> or IBindingList<BookInfo> or the lowest common denominator IEnumerable<BookInfo>. That way you're not tied directly to the specific class List<T> and if your needs change over time, which tends to happen, it will reduce the refactoring necessary to replace your List<BookInfo> with something else, assuming whatever you're replacing it with implements the interface you've chosen.
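A rough sketch of that idea (EF does not ship such an interface, so the IObjectContext abstraction and its members below are ones you would define yourself for the slice of the context the BLL actually uses):
using System;
using System.Collections.Generic;
using System.Linq;
// Hand-rolled abstraction over the parts of the context the BLL needs.
public interface IObjectContext
{
    IQueryable<BookInfo> Books { get; }
    void SaveChanges();
}
public class BooksBLL
{
    private readonly IObjectContext _context;
    // Production code passes the real ObjectContext-backed implementation;
    // unit tests pass a fake that serves an in-memory list of BookInfo.
    public BooksBLL(IObjectContext context)
    {
        _context = context;
    }
    public IEnumerable<BookInfo> ResolveSearch(Func<BookInfo, bool> filter)
    {
        return _context.Books.Where(filter).ToList();
    }
}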
You don't need to do anything in particular... as Mark said, there is no "right" answer. However, if your application is simple enough that you would simply be duplicating your classes (e.g. BookInfoUI & BookInfoBLL), then I'd recommend just using the business classes. The extra layer wouldn't serve a purpose, and so it shouldn't exist. Eric Evans in DDD even recommends putting all your logic in the UI layer if your app is simple and has very little business logic.
To make the distinction, the application layer should have classes that model what happens within the application, and the domain layer should have classes that model what happens in the domain. For example, if you have a search page, your UI layer might retrieve a list of BookSearchResult objects from a BookSearchService in the application layer, which would use the domain to pull a list of BookInfo.
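For example (BookSearchResult and BookSearchService are the illustrative names from the paragraph above, and BookInfo's Title and Pages properties are assumed from the question):
using System.Collections.Generic;
using System.Linq;
// Application layer - models what the application does, not the library domain itself.
public class BookSearchResult
{
    public string Title { get; set; }
    public int Pages { get; set; }
}
public class BookSearchService
{
    private readonly BooksBLL _books;
    public BookSearchService(BooksBLL books)
    {
        _books = books;
    }
    public List<BookSearchResult> Search(string titleFragment)
    {
        // Pull domain objects from the BLL, then project them into UI-friendly results.
        return _books
            .ResolveSearch(b => b.Title.Contains(titleFragment))
            .Select(b => new BookSearchResult { Title = b.Title, Pages = b.Pages })
            .ToList();
    }
}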
Answers to your questions may depend on the size and complexity of your application, so I am afraid there will be valid arguments for answering them with both Yes and No.
Personally I will answer your two main questions both with Yes:
Is it acceptable practice to use POCO (Domain) classes in the UI layer?
I guess with "UI layer" you don't actually mean the View part of the MVVM pattern but the ViewModels. (Most MVVM specialists would argue against letting a View directly reference the Model at all, I believe.)
It is not unusual to wrap a POCO from your Domain project as a property into a ViewModel and to bind this wrapped POCO directly to the View. The big Pro is: It's easy. You don't need additional ViewModel classes or replicated properties in a ViewModel and then copy those properties between the objects.
However, if you are using WPF you must take into account that the binding engine will directly write into your POCO properties if you bind them to a View. This might not always be what you want, especially if you are working with attached and change-tracked entities in a WPF form. You have to think about cancellation scenarios or how you restore properties after a cancellation which have been changed by the binding engine.
In my current project I am working with detached entities: I load the POCO from the data layer, detach it from the context, dispose the context and then work with that copy in the ViewModel and bind it to the View. Updating in the data layer happens by creating a new context, loading the original entity from the DB by ID and then updating its properties from the changed POCO which was bound to the View. So the problem of unwanted changes to an attached entity disappears with this approach. But there are also downsides to working with detached entities (updating is more complex, for instance).
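Roughly, the "update by ID" half of that workflow looks like this (the context type, the BookInfoId key and the copied properties are placeholders):
using System.Linq;
// Persist edits made to a detached BookInfo that was bound to the View.
public void UpdateBook(BookInfo changedBook)
{
    using (var context = new LibraryContext()) // hypothetical context type
    {
        // Reload the original entity by ID in a fresh context...
        var original = context.Books.Single(b => b.BookInfoId == changedBook.BookInfoId);
        // ...copy over the values edited through the binding engine, then save.
        original.Title = changedBook.Title;
        original.Summary = changedBook.Summary;
        context.SaveChanges();
    }
}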
Is it okay if I implement the IDataErrorInfo interface on the generated POCO class?
If you bind your POCO entities to a View (through a wrapping ViewModel) it is not only OK but you even must implement IDataErrorInfo on the POCO class if you want to leverage the built-in property validation of the WPF binding engine. Although this interface is mainly used together with UI technologies it is part of System.ComponentModel namespace and therefore not directly tied to any UI namespaces. Basically IDataErrorInfo is only a simple contract which supports reporting of the object's state which also might be useful outside of a UI context.
The same is true for the INotifyPropertyChanged interface which you also would need to implement on your POCO classes if you bind them directly to a View.
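A bare-bones sketch of IDataErrorInfo on a generated POCO (assuming the T4 output declares BookInfo as partial; the Title rule is just an example). INotifyPropertyChanged is trickier to add in a separate partial because the generated setters have to raise the event, which usually means editing the T4 template itself.
using System.ComponentModel;
// Handwritten partial - validation support for WPF binding.
public partial class BookInfo : IDataErrorInfo
{
    public string Error
    {
        get { return null; } // object-level error message, not used here
    }
    public string this[string columnName]
    {
        get
        {
            if (columnName == "Title" && string.IsNullOrEmpty(Title))
                return "Title is required.";
            return null;
        }
    }
}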
I often see opinions which would disagree with me for several architectural reasons, but none of those opinions argue that another approach is easier. If you strictly want to avoid having POCO model classes in your ViewModel layer, you need to add another mapping layer, with additional complexity and programming and maintenance effort. So I would vote: keep it simple as long as you do not have a convincing reason and clear benefit to make your architecture more complex.

Enterprise Design Pattern Question

Something on my mind about structuring a system at a high level.
Let's say you have a system with the following layers:
UI
Service Layer
Domain Model
Data Access
The service layer is used to populate a graph of objects in the domain model. In an attempt to avoid coupling, the domain model will not be persistence aware and will not have any dependencies on any data access layer.
However, using this approach, how would one object in the domain model be able to call other objects without being able to load them via persistence, thus coupling everything together - which is what I'd be trying to avoid?
e.g. an Order Object would need to check an Inventory object and would obviously need to tell the Inventory object to load in some way, or populate it somehow.
Any thoughts?
You could inject any dependencies from the service layer, including populated object graphs.
I would also add that a repository can be a dependency - if you have declared an interface for the repository, you can code to it without adding any coupling.
One way of doing this is to have a mapping layer between the Data Layer and the domain model.
Have a look at the mapping, repository and facade patterns.
The basic idea is that on one side you have data access objects and on the other you have domain objects.
To decouple you have to: "Program to an 'interface', not an 'implementation'." (Gang of Four 1995:18)
Here are some links on the subject:
Gamma interview on patterns
Random blog article
Googling for "Program to an interface, not an implementation" will yield many useful resources.
Have the domain model layer define interfaces for the methods you'll need to call, and POCOs for the objects that need to be returned by those methods. The data layer can then implement those interfaces by pulling data out of your data store and mapping it into the domain model POCOs.
Any domain-level class that requires a particular data-access service can just depend on the interface via constructor arguments. Then you can leverage a dependency-injection framework to build the dependency graph and provide the correct implementations of your interfaces wherever they are required.
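Sketched out with the Order/Inventory example from the question (every type and member name below is illustrative):
using System.Linq;
// --- Domain model project: no reference to the data access layer ---
public class Inventory
{
    public int ProductId { get; set; }
    public int QuantityOnHand { get; set; }
}
public interface IInventoryProvider
{
    Inventory GetInventoryForProduct(int productId);
}
public class Order
{
    private readonly IInventoryProvider _inventoryProvider;
    public int ProductId { get; set; }
    public int Quantity { get; set; }
    public Order(IInventoryProvider inventoryProvider)
    {
        _inventoryProvider = inventoryProvider;
    }
    public bool CanBeFulfilled()
    {
        var inventory = _inventoryProvider.GetInventoryForProduct(ProductId);
        return inventory.QuantityOnHand >= Quantity;
    }
}
// --- Data access project: implements the domain interface using EF ---
public class EfInventoryProvider : IInventoryProvider
{
    public Inventory GetInventoryForProduct(int productId)
    {
        using (var context = new StoreEntities()) // hypothetical EF context
        {
            var row = context.InventoryItems.First(i => i.ProductId == productId);
            return new Inventory { ProductId = row.ProductId, QuantityOnHand = row.Quantity };
        }
    }
}
A dependency-injection container (or the service layer itself) then wires IInventoryProvider to EfInventoryProvider when it constructs the Order.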
Before writing tons of code in order to separate everything you might want to ask yourself a few questions:
Is the Domain Model truly separate from the DAL? And yes, I'm serious and you should think about this because it is exceedingly rare for an RDBMS to actually be swapped out in favor of a different one for an existing project. Quite frankly it is much more common for the language the app was written in to be replaced than the database itself.
What exactly is this separation buying you? And, just as important, what are you losing? Separation of Concerns (SoC) is a nice term that is thrown about quite a bit. However, most people rarely understand why they are Concerned with the Separation to begin with.
I bring these up because more often than not applications can benefit from a tighter coupling to the underlying data model. Never mind that most ORMs almost enforce a tight coupling due to the nature of code generation. I've seen lots of supposedly SoC projects come to a crash during testing because someone added a field to a table and the DAL wasn't regenerated... This kind of defeats the purpose, IMHO...
Another factor is where the business logic should live. No doubt there are strong arguments in favor of putting large swaths of BL in the actual database itself. At the same time there are cases where the BL needs to live in or very near your domain classes. With BL spread in such a way, can you truly separate these two items anyway? Even those who hate the idea of putting BL in a database will fall back on using identity keys and letting the DB enforce referential integrity, which is also business logic.
Without knowing more, I would suggest you consider flattening the Data Access and Domain Model layers. You could move to a "provider" or "factory" type architecture in which the service layer itself doesn't care about the underlying access, but the factory handles it all. Just some radical food for thought.
You should take a look at Martin Fowler's Repository and UnitOfWork patterns to use interfaces in your system
Until now I have seen that an application can be well layered into three layers: Presentation --> Logic --> Data, plus Entities (or Business Objects). In the Logic Layer you can use a pattern such as Transaction Script or Domain Model; I'm supposing you're using the latter. The Domain Model can use a Data Mapper for interacting with the data layer and creating business objects, but you can also use a Table Module pattern.
All these patterns are described in Martin Fowler's Patterns of Enterprise Application Architecture book. Personally I use Transaction Script because it is simpler than Domain Model.
One solution is to make your Data Access layer subclass your domain entities (using Castle DynamicProxy, for example) and inject itself into the derived instances that it returns.
That way, your domain entity classes remain persistence-ignorant while the instances your applications use can still hit databases to lazy-load secondary data.
Having said that, this approach typically requires you to make a few concessions to your ORM's architecture, like marking certain methods virtual, adding otherwise unnecessary default constructors, etc.
Moreover, it's often unnecessary - especially for line-of-business applications that don't have onerous performance requirements, you can consider eagerly loading all the relevant data: just bring the inventory items up with the order.
I felt this was different enough from my previous answer, so here's a new one.
Another approach is to leverage the concept of Inversion of Control (IoC). Build an Interface that your Data Access layer implements. Each of the DAL methods should take a list of parameters and return a Data Table.
The service layer would instantiate the DAL through the interface and pass that reference to your Domain Model. The domain model would then make its own calls into the DAL, using the interface methods, and decide when it needs to load child objects or whatever.
Something like:
interface IDBModel {
DataTable LoadUser(Int32 userId);
}
class MyDbModel : IDBModel {
public DataTable LoadUser(Int32 userId) {
// make the appropriate DB calls here, return a data table
return new DataTable();
}
}
class User {
public User(IDBModel dbModel, Int32 userId) {
DataTable data = dbModel.LoadUser(userId);
// assign properties.. load any additional data as necessary
}
// You can do cool things like call User.Save()
// and have the object validate and save itself to the passed in
// datamodel. Makes for simpler coding.
}
class MyServiceLayer {
public User GetUser(Int32 userId) {
IDBModel model = new MyDbModel();
return new User(model, userId);
}
}
With this mechanism, you can actually swap out your db models on demand. For example, if you decide to support multiple databases then you can have code that is specific to a particular database vendor's way of doing things and just have the service layer pick which one to use.
The domain objects themselves are responsible for loading their own data and you can keep any necessary business logic within the domain model. Another point is that the Domain Model doesn't have a direct dependency on the data layer, which preserves your mocking ability for independent testing of business logic.
Further, the DAL has no knowledge of the domain objects, so you can swap those out as necessary or even just test the DAL independently.
