This site has provided me with many useful answers; however, after hours of searching I haven't found anything that specifically answers my needs. So here goes...
The company I'm working for is in the process of designing a new Business Objects Layer and a Data Access Layer - these will reside in separate assemblies.
The problem is I'm having a hard time getting my head around the interaction between these two layers - specifically, should the DAL know about the BOL? I've read numerous articles saying the dependency order should go something like this:
GUI / Presentation --> BOL ---> DAL
But as far as I can see, the DAL needs a reference to the BOL in order to be able to 'return' objects to the BOL layer.
I'm going for an intermediate assembly between the BOL and DAL which will basically be a thin layer of interfaces to decouple those two DLLs, so the framework can use different DALs if the need arises.
This led me to the idea of introducing another thin layer with a bunch of interfaces that the BOs implement; when the BOL calls the DAL interface, it passes in an object that implements one of these BO interfaces and the DAL proceeds to populate it. This removes all dependencies between the BOL and the DAL - however, I'm finding it hard to justify, to be honest.
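For what it's worth, a minimal sketch of that idea might look like this (all names are hypothetical, just to show the shape of it):

// Shared interfaces assembly, referenced by both the BOL and the DAL
public interface IProductData
{
    int Id { get; set; }
    string Name { get; set; }
}

// BOL: the business object implements the shared interface
public class Product : IProductData
{
    public int Id { get; set; }
    public string Name { get; set; }
    // business rules live here; the DAL never sees this class
}

// DAL: works only against the interface and fills in whatever object it is handed
public interface IProductDal
{
    void Populate(IProductData target, int id);
}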
Ideally we would like to use an ORM, as it removes the need to write CRUD code, but our clients have a habit of fiddling with column lengths in their database, and this has been the cause of most of our errors to date with strongly typed DataTables. I've heard LINQ to SQL also stores column lengths at compile time; I'm not sure if NHibernate does (and I'm not sure our database schema is designed cleanly enough for NHibernate - one of the pitfalls of dealing with legacy systems).
So yeah, any insight on the relationship between a BOL and a DAL would be very welcome. Apologies if the above is poorly written; if anyone needs clarification I'll be happy to provide more detail.
Marlon
The way I do this is that the BO expects a DataReader or a DataContext or whatever back from the DAL, not a fully formed object. It is then the job of the BO layer to fill itself from whatever came back; the DAL isn't returning a completed BO. The main thing to remember is that changing something in the BO layer shouldn't cause issues for the DAL, but changing something in the DAL could cause issues for the BO layer.
A short example of what I typically do
In the BO layer
public void FillData()
{
    using (IDataReader dr = DataLayer.GetData("SomePropertyForAStoredProcedure"))
    {
        if (dr.Read())
        {
            Property1 = dr["Property1"].ToString();   // convert to the property's actual type as needed
            // and so on for the rest of the properties
        }
    }
}
In the DAL
public IDataReader GetData(string storedProcName)
{
    // open a connection, execute the stored procedure, and return the reader - see the fuller sketch below
}
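A fleshed-out version of that DAL method might look roughly like this with plain ADO.NET (a sketch, not the poster's actual code; the connection string handling is simplified):

using System.Data;
using System.Data.SqlClient;

public static class DataLayer
{
    private static readonly string connectionString = "...";   // would come from configuration in practice

    public static IDataReader GetData(string storedProcName)
    {
        var connection = new SqlConnection(connectionString);
        var command = new SqlCommand(storedProcName, connection)
        {
            CommandType = CommandType.StoredProcedure
        };
        connection.Open();
        // CloseConnection ties the connection's lifetime to the reader's
        return command.ExecuteReader(CommandBehavior.CloseConnection);
    }
}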
Take a look at SubSonic (http://subsonicproject.com/) - it does most of the tedious data access work for you, and it's easier than most ORMs out there.
The DAL needs a reference to the BOL so that it can populate the objects. What you do not want is any reference or coupling from the BOL back to the DAL - that would couple your BOL to a specific database implementation. When you think about it, this makes sense. Your DAL knows details about the business objects down to the level of their properties and how to retrieve their data from the database - of course the DAL is inherently tightly coupled to the BOL, so a reference in that direction is fine. And what is on the other side of the DAL? The database. "Tight coupling" between your object data and your database? Yeah, it is pretty darn tight - the concept isn't even very meaningful there.
It is the other direction where you need to decouple. So yes, as long as there is no direct coupling from the BOL into the DAL, you can change your data platform any way you want.
There's not much point in creating interfaces for the BOs and passing them to the DAL in this scenario. You might sometimes need to go the other way, however. As a rule, business objects should not have to know anything about how they are created or persisted.
In practice, even with most ORMs, creating a business layer completely free of any persistence artifacts can be very difficult, sometimes effectively impossible. Occasionally something is just too difficult to work around, and you may find that strictly avoiding any data knowledge in the BOs leads to over-complexity that degrades rather than adds value.
If you feel like there is no better way and you need to have something persisted from within the BOL, create a simple interface so that the DAL functionality can be passed into the BOL. That way you can at least keep the BOL decoupled from the specific database implementation.
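A minimal sketch of that "pass the DAL in through an interface" idea (hypothetical names):

// Defined in the BOL (or a shared interfaces assembly), implemented in the DAL
public interface IOrderPersistence
{
    void Save(Order order);
}

public enum OrderStatus { Pending, Approved }

public class Order
{
    public OrderStatus Status { get; set; }
}

// The BOL only ever sees the interface; the concrete DAL implementation is injected
public class OrderProcessor
{
    private readonly IOrderPersistence _persistence;

    public OrderProcessor(IOrderPersistence persistence)
    {
        _persistence = persistence;
    }

    public void Approve(Order order)
    {
        order.Status = OrderStatus.Approved;   // business rule
        _persistence.Save(order);              // persisted without knowing how or where
    }
}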
Also, although it is a lot of additional work, unless this is a very simple throwaway app, I strongly recommend that you also add another layer between the UI and the BOL. The MVP (Model-View-Presenter) pattern is a general purpose design pattern for reducing coupling between the core app and the UI. There are a lot of variants on presenters, don't get too caught up in the specific details, just start off with the simple MVP if you have never used it.
The pattern is not that hard; it is just that UI itself is so messy that it may take at least a couple of major iterations / applications before you feel like the code you are writing is systematically and methodically working to decouple the UI. Just keep working at it, start to acquire an arsenal of techniques, and don't get hung up on the fact that you have not achieved a sharp, clean separation yet. Anything you learn and do that contributes even a little to creating a well-defined boundary at the UI is a big step in the right direction.
The 'correct' approach is going to vary depending on business needs. To be honest, there are many projects where I feel the old-style ADO recordsets incurred less development time and were easier to maintain than many of the ORMs out now. Take some time to identify what your needs are, and remember that development time and maintainability are design goals that should be properly weighed as well.
It also depends on whether (and which) library/ORM (Object-Relational Mapper) you use. When using a (good) ORM, the DAL should be a very remote concern, because it is almost completely hidden by the ORM; however, best practices dictate that even then, for medium to large applications, you should introduce another layer between the BOL and the ORM, usually DTOs (Data Transfer Objects). DTOs can also be used without an ORM, as they are just dumb objects defined in a separate library, and the DAL can be responsible for persisting them (transforming them from/to database structures), while the BOL can query the DAL and receive those objects.
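As a rough illustration of the DTO idea (hypothetical names, not tied to any particular ORM):

// DTOs: dumb objects in a shared assembly - no behaviour, no persistence attributes
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// The DAL exposes DTOs rather than database structures; the BOL queries through this contract
public interface ICustomerDal
{
    CustomerDto GetById(int id);
    void Save(CustomerDto customer);
}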
Decoupling the layers can be achieved in a variety of ways, most commonly through interfaces and/or MEF or another DI/IOC framework. Any such technique achieves more than sufficient decoupling if used effectively.
Also, depending on the technology used, as Sisyphus said, one of the layered architectural patterns will help separate concerns nicely: MVC, MVP, MVVM etc. I personally recommend MVVM with WPF (desktop) or Silverlight (web) but I'm highly biased - i.e. I love both of them to death :)
These are my findings:
1. Use interfaces
2. Use DTOs [Data Transfer Objects] between the DAL and BLL
3. Split the BLL into two:
a. BLL
b. Service Layer
4. Use an Inversion of Control (IoC) container to keep coupling as low as possible (see the sketch below)
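To make point 4 a bit more concrete, here is a deliberately container-agnostic sketch (all names invented); a real IoC container would do the same wiring through its registration API:

public class ProductDto { public int Id { get; set; } public string Name { get; set; } }

public interface IProductRepository      // implemented in the DAL
{
    ProductDto Get(int id);
}

public class SqlProductRepository : IProductRepository
{
    public ProductDto Get(int id)
    {
        // ADO.NET / ORM call would go here
        return new ProductDto { Id = id };
    }
}

public class ProductBll                  // business rules only
{
    private readonly IProductRepository _repo;
    public ProductBll(IProductRepository repo) { _repo = repo; }
    public ProductDto GetProduct(int id) { return _repo.Get(id); }
}

public class ProductService              // thin service layer over the BLL
{
    private readonly ProductBll _bll;
    public ProductService(ProductBll bll) { _bll = bll; }
    public ProductDto GetProduct(int id) { return _bll.GetProduct(id); }
}

// Composition root: with an IoC container this becomes a set of registrations,
// shown here as plain constructor injection.
// var service = new ProductService(new ProductBll(new SqlProductRepository()));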
Related
When I first learnt about Domain Driven Design, I was also introduced to the repository and unit of work patterns, which once seemed top notch for the cool kids who used to throw SQL queries like cavemen against databases. The deeper I got into the topic, the more I learnt that they don't seem to be necessary anymore, because ORMs like EF and NHibernate implement both unit of work and repository in one API, called the session or context.
Now I'm unsure what to do. To repository or not to repository. I really understand the argument that such leaky abstractions only over-complicate things while adding absolutely nothing that may simplify data access; however, it doesn't feel right to couple every possible aspect of my application to, e.g., Entity Framework. Usually, I follow a few simple guidelines:
The domain layer is the heart of the system, containing entities, services, repositories...
The infrastructure layer provides implementations of domain interfaces of an infrastructural concern, e.g. files, databases, protocols...
The application layer hosts a composition root that wires things up and orchestrates everything.
My solutions usually look like this:
Domain.Module1
Domain.Module2
IModule2Repo
IModule2Service
Module2
Infrastructure.Persistence
Repositories
EntityFrameworkRepositoryBase
MyApp
Bootstrapper
-> inject EntityFrameworkRepositoryBase into IRepository etc.
I keep my domain layer clean by using an IRepository<T>, which is also a domain concern, not depending on anything else, that tells me how to access data. But if I now made a concrete implementation of IModule2Service that requires data access, I would have to inject a DbContext, thereby coupling it directly to the infrastructure layer.
(In terms of Visual Studio projects, this can get really tricky because of circular dependencies!)
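One way to keep a service implementation out of the infrastructure layer is to have it depend only on the domain-owned abstraction, with the EF-specific class living in Infrastructure.Persistence. A rough sketch (assuming Entity Framework; names follow the solution layout above):

using System.Data.Entity;   // EF 5/6

// Domain.Module2 - abstractions and entities only, no EF reference
public class Module2 { public int Id { get; set; } }

public interface IModule2Repo
{
    Module2 GetById(int id);
    void Add(Module2 item);
}

public class Module2Service              // concrete service, still persistence-ignorant
{
    private readonly IModule2Repo _repo;
    public Module2Service(IModule2Repo repo) { _repo = repo; }   // no DbContext in sight

    public void DoSomething(int id)
    {
        Module2 item = _repo.GetById(id);
        // domain logic here
    }
}

// Infrastructure.Persistence - the only place that knows about EF
public class EfModule2Repo : IModule2Repo
{
    private readonly DbContext _context;
    public EfModule2Repo(DbContext context) { _context = context; }

    public Module2 GetById(int id) { return _context.Set<Module2>().Find(id); }
    public void Add(Module2 item) { _context.Set<Module2>().Add(item); }
}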
Additionally, what can be an alternative to depositories and fucktons of work? CQRS? How does one abstract a pure infrastructural framework?
"Depository" lol....
Anyway, you got the bit right that you don't want the domain coupled to EF; that's why the T in your repository interface should be a domain aggregate root and NOT an EF entity (or a 'domain' object designed to work properly with EF).
Your Domain layer is never coupled to persistence because it only knows about abstractions. When Module2Service needs data access, it either uses a DAO (or a repository - not necessarily the DDD version - if it makes sense) or the service itself is implemented in the DAL (if it doesn't contain business logic).
In your case the best approach is probably the DAO/repository, which of course will 'hide' the EF part. If it seems like you're writing too much code, you really aren't, and I think keeping proper separation of concerns matters more than saving 50 LoC (a whole 5-10 minutes).
CQRS is always a good idea with a rich domain, but like any solution it comes with the drawback that it requires more code (and I understand that every coder is lazy by definition, but we are required to do a 'fuckton' of work - an app doesn't build itself, and a maintainable app is even more work at the beginning).
If by abstracting a pure infrastructural framework you mean hiding EF, the repository is your best bet, and you don't even have to name the class 'repository'; it's the principle that matters, and that's what the repo does: abstract persistence away from the Domain.
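A sketch of what 'hiding EF behind the repository' can look like, mapping from a persistence-shaped EF entity to a domain aggregate (every name here is made up):

using System.Data.Entity;

// EF entity - shaped for the database, lives in the DAL
public class OrderRecord
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<OrderRecord> Orders { get; set; }
}

// Domain aggregate root - no EF attributes, no persistence knowledge
public class Order
{
    public int Id { get; private set; }
    public decimal Total { get; private set; }
    public Order(int id, decimal total) { Id = id; Total = total; }
}

// Repository in the DAL: EF records in, domain aggregates out
public class OrderRepository
{
    private readonly ShopContext _context;
    public OrderRepository(ShopContext context) { _context = context; }

    public Order GetById(int id)
    {
        OrderRecord record = _context.Orders.Find(id);
        return record == null ? null : new Order(record.Id, record.Total);
    }
}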
I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled; or, changing the name of an object (Category, for example) needs to recursively update all of its children's properties (avoiding / ignoring these rules would result in creating invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
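For illustration, what I mean by hiding the setters is something like this (a simplified, hypothetical Category):

using System;
using System.Collections.Generic;

public class Category
{
    private readonly List<Category> _children = new List<Category>();

    public string Name { get; private set; }       // no public setter - invariants enforced through methods
    public string FullPath { get; private set; }   // derived from the parent chain
    public bool Disabled { get; internal set; }
    public Category Parent { get; private set; }

    public void Rename(string newName)
    {
        if (Parent != null && Parent.Disabled)
            throw new InvalidOperationException("Cannot edit a category whose parent is disabled.");

        Name = newName;
        RefreshPath();                              // recursively update the children
    }

    private void RefreshPath()
    {
        FullPath = Parent == null ? Name : Parent.FullPath + "/" + Name;
        foreach (Category child in _children)
            child.RefreshPath();
    }
}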
In order to hide the setters like that, I would need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to over-complicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held this opinion too, but honestly, if you don't program to abstractions it will become a pain when the project gets larger (a real pain).
IRepositories also help you spread the job between different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example:
IRepository<File>.Upload()
You must be able to test each layer independently; tying them together will only let you do integration tests, with a lot of bugs in the lower layers :))
First, I think this question is really opinion based.
According to the Big Book, the domain models must be separated from data access. Your domain has nothing to do with how the data is stored; it could be a simple text file or a clustered MSSQL server.
This choice must be made based on the actual project. What is the size of the application?
The other huge question is: how many concurrent users will use the DB, and how complex will your business logic be?
So if it's a complex project, or one that will presumably be modified frequently, or it has educational purposes, then you should keep the domain and data access separated, and define the repository interfaces in the domain model. Use some DI component (personally I like Ninject), and do not reference the data access component in the services.
And of course you should also create test projects, using some mocking tool (e.g. Moq) to test the layers separately.
Yes, this is wrong. If you are following Domain Driven Design, you should not compromise your architecture for the sake of doing less work; your data access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern, as it would allow you more flexibility in the long run.
There is of course no right answer to what's the right design... I would however argue that EF is your data layer abstraction; there is no way you're going to make anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, with the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
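As an example of the extension-method approach mentioned above (the entity and filter are invented for the sketch):

using System.Linq;

public class Customer
{
    public bool IsActive { get; set; }
    public string Country { get; set; }
}

public static class CustomerQueries
{
    // A reusable filter instead of a dedicated repository method
    public static IQueryable<Customer> ActiveInCountry(this IQueryable<Customer> customers, string country)
    {
        return customers.Where(c => c.IsActive && c.Country == country);
    }
}

// Usage against an EF DbSet, or any other IQueryable source (including an in-memory test double):
// var active = context.Customers.ActiveInCountry("SE").ToList();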
Blog post with a more detailed reply; http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer
We are writing some support applications (rather small) to our ERP system.
Until now I feel I have been using the Data Access Layer in two roles: the business layer AND the data access one.
I am having trouble deciding what I have to move to a separate layer and if I need to. I have read somewhere that knowing when to make layer separation is wisdom and knowing the patterns is just knowledge. I have neither in adequate amounts.
So I need some help to determine what is what.
My current DAL deals with fetching the data and applying basic logic to it. For example, there are methods like:
GetProductAvailabilitybyItem
GetProductAvailabilitybyLot
etc.
If I needed to separate them, what would I have to do?
Another thing on my mind is that in order to normalize my DAL and make it return different entities every time (through one general get method), I would have to use DataTable as the return type. Currently I am using things like List<PalletRecord> as return types.
I feel that my apps are so small that it's hard (and maybe pointless) to separate these two layers.
My basic need is to build something that can be consumed by multiple front-ends (web pages, WinForms, WPF, and so on).
Additional Example:
Let's talk barcodes. I need to check whether a fetched lot record is valid or not. So I fetch the record in the DAL and provide a method returning bool in the business layer?
Then I can call the bool method from whatever presentation layer in order to check if a textbox contains a valid lot?
Is this the logic, extremely simplified?
Based on your description, you should definitely separate both layers right now, while the application is still small. You might feel a BL is useless when you're just accessing and displaying data, but with time you'll find the need to modify, transform, or manipulate the data, like coordinating object creation from different tables, or updating different tables in a single user action.
The example you provided helps to support this idea, although it is quite simplified.
Pablo's answer does offer some good design ideas too: you should definitely use an ORM to simplify your DAL and keep it very thin. I've found NHibernate and Fluent NHibernate do a very good job here. You can use the BL to coordinate access using Data Access Objects.
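Applied to the lot/barcode example from the question, the split could look roughly like this (type and method names invented for the sketch):

// DAL: fetches data, makes no decisions
public interface ILotDal
{
    LotRecord GetLot(string lotNumber);
}

public class LotRecord
{
    public string LotNumber { get; set; }
    public bool Expired { get; set; }
}

// BL: the rule lives here and is reused by every front-end (web, WinForms, WPF, ...)
public class LotService
{
    private readonly ILotDal _dal;
    public LotService(ILotDal dal) { _dal = dal; }

    public bool IsValidLot(string lotNumber)
    {
        LotRecord record = _dal.GetLot(lotNumber);
        return record != null && !record.Expired;   // whatever "valid" means for the business
    }
}

// Any presentation layer just calls lotService.IsValidLot(textBox.Text)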
Given that you are dealing with very small applications, why not just have an ORM provide all data-access for you and just worry about the business layer?
That way you don't have to worry about dealing with DataTables, mapping data to objects and all that. Much faster development, and you would reduce the size of the codebase.
For example, NHibernate or Microsoft's Entity Framework
Now, if you will be providing data to external consumers (i.e. you are implementing a service), you may want to create a separate set of DTOs that go over the wire, instead of trying to send your actual model entities.
I am not a big fan of n-tier architecture and have some good reasons for it.
The main objectives of such an architecture are: the ability to work with different underlying databases; separation of context, i.e. application design and business logic; and uniformity and conformance to best patterns and practices.
However, in doing so you also make some sacrifices, such as giving up provider-specific optimizations.
My advice is to go with a two-layer architecture, i.e. a data access and business logic layer plus a GUI or presentation layer. It will still allow you to have common code for different platforms, and at the same time it will save you from spaghetti code.
I want to create an n-tier architecture with the repository pattern. I'm wondering: does it make sense to just duplicate all my calls up through the BLL and then access data-only calls via the BLL? Or can I access some things directly through the DAL and some through the BLL?
IMO it doesn't make sense to duplicate just for the sake of it.
(though really every approach has its pros and cons, nothing is always wrong or good per se)
Usually, though, the data layer deals with (to simplify) more 'granular' data that matches the tables exactly.
Your business layer, meanwhile, can combine that and is more centred around the 'logic' and your logical model (rather than the data model and the data).
If you find yourself with an exact replica of the DAL in your biz layer, then you're most likely missing the point. Some things may need to be reorganized, thrown away, or just simplified.
Or ask yourself the following: if you wanted to replace the DAL to work with a different type of storage (a different organization of things, or anything that requires you to change how your data/DAL operates), what would your BLL look like? The same? Your business layer should not 'follow the data'; it should have its own rules, centred again on the logic of your domain and what you're doing, while the data layer should be about data.
So, in short, the question is mainly how you design your system: if you make good use of a business layer (and normally you should, unless the system is relatively simple or you decide on an entirely different architecture), then use it; if not, there's no need to duplicate.
hope this helps.
I have recently joined a company that uses typed DataSets as their 'DTO'. I think they are really rubbish and want to change them to something a little more modern and user-friendly. So I am trying to update the code so that the data layer is more generic, i.e. using interfaces etc. The other guy does not know what a DTO is, and we are having a slight disagreement about how it should be done.
Without trying to sway people to my way of thinking, I would like to get impartial answers as to which layers the DTO can be present in: all layers (DAL, BL and presentation), or only a small subset of these layers.
Also, whether IList objects should or should not be present in the DAL.
Thanks.
It really depends on your architecture.
For the most part you should try to code to interfaces; then it doesn't really matter what your implementation is. If you return ISomething, it could be your SomethingEntity or your SomethingDTO, but your consuming code couldn't care less as long as it implements the interface.
You should return an IList/ICollection/IEnumerable rather than a concrete collection or array:
Properties should not return arrays
Do not expose generic lists
What you should try to do first is separate your code and make it loosely coupled by inserting some interfaces between your layers such as a repository for your DataAccess layer. Your repository then returns your entities encapsulated by an interface. This will make your code more testable and allow you to mock more easily. Once you have your tests in place you can then start to change the implementations with less risk.
If you do start to use interfaces I would suggest integrating an IoC such as Windsor sooner rather than later. If you do it from the get go it will make things easier later on.
One thing is that DataSets are poor for achieving interoperability. Even typed DataSets are not very compatible when it comes to being consumed from a non-.NET client (refer to this link). If you have to achieve interoperability, then fight hard for DTOs; otherwise, try to bring your team around to DTOs over a period of time, because DataSets are not so bad after all.
On the interfaces front, yes, you should be exposing interfaces. For example, if you are returning List<T> from the DAL, you should return IList<T> instead. Some people go to the extent of returning only IEnumerable<T>, because all you need is the capability to enumerate. But while doing so, don't become an astronaut architect.
In my applications I have found that returning IList<T> instead of List<T> pollutes my code base with code like this:
// consider personCollection as IList<Person>
(personCollection as List<Person>).ForEach(p => { /* do something */ });
So I personally try to maintain a balance between returning an interface and a concrete object. If you ask me what I am doing right now, I will tell you I am returning List<T>; I am influenced not to become an astronaut architect.
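That said, the cast is usually avoidable; a plain foreach works against IList<T> directly (only the List<T>.ForEach shortcut itself is lost):

// personCollection is IList<Person>; no cast to the concrete type needed
foreach (Person person in personCollection)
{
    // do something with person
}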
I always use DTOs, never DataTables, but I only use them to transfer data between the BL and the DAL. My presentation layers are often only aware of the business layer, or of the service layer in the service-oriented case.
The benefits I can see of using DTOs rather than DataTables:
easy refactoring
easy diagram production
cleaner more readable code, especially in the DAL's unit tests
By definition a DTO is a data transfer object, used to (wait for it) transfer data from one layer to another.
DTOs can be used across all layers and I have used them well with web services.