Pulling out business logic from the data access layer - C#

We are writing some support applications (rather small) for our ERP system.
Up to now, I feel I have been using the Data Access Layer in two roles: as the business layer AND as the data access layer.
I am having trouble deciding what I should move to a separate layer, and whether I need to at all. I have read somewhere that knowing when to separate layers is wisdom, while knowing the patterns is just knowledge. I have neither in adequate amounts.
So I need some help to determine what is what.
My current DAL deals with fetching the data and applying basic logic to it. For example, there are methods like
GetProductAvailabilitybyItem
GetProductAvailabilitybyLot
etc.
If I needed to separate them, what would I have to do?
Another thing on my mind: in order to generalize my DAL and make it return different entities every time (through one general get method), I would have to use DataTable as the return type. Currently I am using things like List<PalletRecord> as return types.
I feel that my apps are so small that it's hard (and maybe pointless) to separate these two layers.
My basic need is to build something that can be consumed by multiple front-ends (web pages, WinForms, WPF, and so on).
Additional Example:
Let's talk barcodes. I need to check whether a fetched lot record is valid or not. Do I fetch the record in the DAL and then expose a method returning bool in the business layer?
Then I can call that bool method from whatever presentation layer I have, in order to check whether a textbox contains a valid lot?
Is this the logic, extremely simplified?
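Something like this, I imagine (ILotRepository, LotService, and LotRecord are just made-up names, not from any framework):

using System;

// Hypothetical DAL entity representing one lot row.
public class LotRecord
{
    public string LotNumber { get; set; }
    public DateTime? ExpiryDate { get; set; }
    public bool IsBlocked { get; set; }
}

// Data access layer: fetches data only, applies no business rules.
public interface ILotRepository
{
    LotRecord GetLotByBarcode(string barcode); // returns null when no lot matches
}

// Business layer: owns the rule that decides whether a lot is valid.
public class LotService
{
    private readonly ILotRepository _lots;

    public LotService(ILotRepository lots)
    {
        _lots = lots;
    }

    public bool IsValidLot(string barcode)
    {
        LotRecord lot = _lots.GetLotByBarcode(barcode);
        return lot != null
            && !lot.IsBlocked
            && (lot.ExpiryDate == null || lot.ExpiryDate.Value >= DateTime.Today);
    }
}

Then any front end (web page, WinForms, WPF) would call LotService.IsValidLot(textBox.Text) and never touch the DAL directly.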

Based on your description, you should definitely separate both layers right now, while the application is still small. You might feel a BL is useless when you're just accessing and displaying data, but with time you'll find the need to modify, transform, or manipulate the data, such as coordinating object creation from different tables or updating different tables in a single user action.
The example you provided helps to support this idea, although it is quite simplified.
Pablo's answer does offer some good design ideas too: you should definitely use an ORM to simplify your DAL and keep it very thin. I've found NHibernate and Fluent NHibernate do a very good job here. You can use the BL to coordinate access using Data Access Objects.

Given that you are dealing with very small applications, why not just have an ORM provide all data-access for you and just worry about the business layer?
That way you don't have to worry about dealing with DataTables, mapping data to objects, and all that. Development is much faster, and you reduce the size of the codebase.
For example, NHibernate or Microsoft's Entity Framework.
Now, if you will be providing data to external consumers (i.e. you are implementing a service), you may want to create a separate set of DTOs that go over the wire, instead of trying to send your actual model entities.
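For illustration, a rough sketch of that combination (the Product entity, ErpContext, and the DTO are placeholders I'm assuming here; this uses the EF6-style DbContext, and EF Core differs mainly in namespaces):

using System.Collections.Generic;
using System.Data.Entity; // EF6; for EF Core this would be Microsoft.EntityFrameworkCore
using System.Linq;

// Hypothetical entity mapped by the ORM.
public class Product
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public decimal QuantityOnHand { get; set; }
}

public class ErpContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

// DTO: the flattened shape that goes over the wire to external consumers.
public class ProductAvailabilityDto
{
    public string Code { get; set; }
    public decimal QuantityOnHand { get; set; }
}

public class ProductQueries
{
    public IList<ProductAvailabilityDto> GetAvailabilityByItem(string code)
    {
        using (var db = new ErpContext())
        {
            // The ORM handles the SQL and the mapping; we only project into the DTO.
            return db.Products
                     .Where(p => p.Code == code)
                     .Select(p => new ProductAvailabilityDto
                     {
                         Code = p.Code,
                         QuantityOnHand = p.QuantityOnHand
                     })
                     .ToList();
        }
    }
}

The service boundary only ever sees ProductAvailabilityDto; the mapped Product entity stays behind the ORM.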

I am not a big fan of n-tier architecture and have some good reasons for it.
The main objectives of such an architecture are:
- the ability to work with different underlying databases;
- separation of context, i.e. application design versus business logic;
- uniformity and conformance to best patterns and practices.
However, in doing so you also make some sacrifices, such as giving up provider-specific optimizations.
My advice is that you can go with a two-layer architecture, i.e. a combined data access and business logic layer plus a GUI or presentation layer. It will still allow you to have common code for different platforms, and at the same time it will save you from spaghetti code.


How to manage domain model or data access model or data transfer model

In DDD, there are many models:
Domain model, which is used in the business or application layer.
Data access model, which is used in the data access or repository layer.
Data transfer model (DTO), which is used in the presentation layer.
Because of them, these disadvantages arise:
They violate the DRY principle, because many duplicate fields exist between them and cannot be avoided.
They require a lot of irritating mapping code to convert between the different layers.
How can we reduce this irritating duplication of models?
Ultimately, it is about choosing your battles. One of the main considerations in a Domain-Driven solution is that it is focused on isolating and encapsulating the domain. That means that, by that very premise, you are not going to want to use it as an approach in every single project. You need to have an inherent complexity that you are trying to reduce, and you do so by modeling the needs of the business and the needs of the application independently. This means that you are not creating a single system, but one or more systems (your application or applications) that are utilizing a consummate subsystem (your domain).
Personally, I do not have the three types of data-bearing classes that you describe in any of my projects. Instead, I have only my domain model and a view model representation based on each distinct UI requirement. I have found the need for a separate set of persistence classes to be superfluous. There are several ways to eliminate the need for persistence classes, such as POCOs with code-first EF, object databases/NoSQL, and separation of your read model and write model. I opt for a combination of the last two, personally, in most cases.
The translation layer between the domain and application/UI is something that I find to be required, because the "truth" that the domain represents is infallible. But, in order to attain a usable interface into the domain, the application will sometimes have additional or differing requirements to the domain. That means that this translation layer is not just about field mapping, it is about encapsulation of differing concepts and isolation of usage scenarios.
Since the domain should have absolutely no knowledge of how it is used, that means that there is a requirement for translation and it is not something that should be attempted to be worked out of the system. I have seen people who have used their domain classes directly in the UI, as the actual model, and that is more heinous (in my opinion) than having to create a translation layer. You are looking at it from the perspective of just being for translation, but remember that it is also about isolation. Introduction of new concepts, changing of existing concepts, and other changes within the domain can effectively be intercepted in translation to prevent impact in the UI, and vice versa.
That sounds trivial, but in an enterprise setting, it is not uncommon for there to not be just a single application that utilizes the domain, but multiple applications. Consider the possibility that your domain is serving a desktop application, a web application, and a REST API. Each of those will have their own specific application requirements that are separate from your business requirements. They also will likely want to represent the domain differently from each other. This means that each of them will likely have things such as different validation requirements, different views into your domain, and unique application functionality. Regardless of all of this, the domain remains happily oblivious to it all. You will have to translate differently for each, somewhere above the domain.
If you are finding that your models are looking exactly like your domain, with a majority of it being one-to-one mappings, you might want to take a step back and look for some other issues that might be occurring. I find a large number of two-way mappings of singular fields to be a code smell. Not always, but often, this is an early sign of an anemic domain model. I personally go as far as to enforce architectural requirements on my domain model, such as requiring that properties be read-only (forcing them to be set through a constructor for existing data and changed through methods on the classes), etc. It is fairly trivial to write architectural unit tests which will reflect your domain classes to inspect them for possible violations of such rules. But, the point is if you find translation to be your pain point, make sure that your domain model is not just acting as a data container. Anemic domain models are an extraordinarily common pitfall in DDD implementations, and with an anemic domain model, there is actually very little benefit to a DDD approach at all, as you are taking the power and responsibility away from the place it should be enforced.
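As a rough illustration of that kind of architectural test (assuming NUnit, a hypothetical Order entity, and a namespace convention ending in ".Domain"; adjust the check to your own conventions):

using System.Linq;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class DomainConventionTests
{
    [Test]
    public void Domain_entities_must_not_expose_public_setters()
    {
        // Assumption: all domain classes live in the same assembly as Order
        // and sit in a namespace ending in ".Domain".
        Assembly domainAssembly = typeof(Order).Assembly;

        var violations = domainAssembly.GetTypes()
            .Where(t => t.IsClass && t.Namespace != null && t.Namespace.EndsWith(".Domain"))
            .SelectMany(t => t.GetProperties(BindingFlags.Public | BindingFlags.Instance))
            .Where(p => p.GetSetMethod() != null) // GetSetMethod() returns only *public* setters
            .Select(p => p.DeclaringType.Name + "." + p.Name)
            .ToList();

        Assert.That(violations, Is.Empty, "Public setters found: " + string.Join(", ", violations));
    }
}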
Business requirements should be reflective of the needs of the domain, which is not the same thing as the usage patterns of the domain. Ideally, you should be telling the domain how you want to change its state, and the domain should be applying the domain logic to allow or disallow those changes. If you find yourself pulling down a full domain model representation for a part of the domain, translating the entire thing to a model or DTO, modifying that model or DTO, translating it back to the domain representation, and sending the whole thing back in, you likely have a larger problem.
You may want to review your understanding of DRY. A good starting point is DRY is about Knowledge -- Matthias Verraes (2014).
You may also want to look at Gary Bernhardt's 2012 talk Boundaries.
The key idea is this - the DTO model is an API contract between the domain model and the presentation components. It gives us the freedom to aggressively update/adapt/improve the domain model without breaking the existing presentation component.
Similarly, the data access model is an API contract between the domain model and the persistence component. Not only does this mean that the interface between the domain and persistence is stable, but it also provides a stable contract between the current implementation of the domain model (which writes data today) and future implementations of the domain model (which read back that same data tomorrow).
1. This is not about Domain-Driven Design
Separating models comes with your ancestral n-layer architectural style. It dates back long before DDD was around.
2. You don't have to have that many models
If your application is simple enough, you can ditch some of them.
"Data access model" comes to mind first. It's only a recent addition to the DDD paraphernalia. But you can do without just as well, with a minor impact on your domain model.
And guess what, you may even find out that you don't need a rich domain model at all... Some applications are just better off as CRUD.
If you find something irritating, get rid of it and see for yourself what the benefits/drawbacks are. The code police won't come knocking at your workplace and lock you up for favoring simplicity over so-called orthodoxy.
No, there are not many models. There are multiple representations of the same model. This distinction is very important.
In the domain model, the entity is responsible for making sure that its state is intact. For instance, the ApprovedAt property may not be set unless ApprovedBy is set at the same time. This is usually done by adding behavior to the entity. I do it by always making all property setters private and then adding methods whenever the entity needs to change state.
The DTO is responsible for transferring state over application boundaries. But when you do DDD correctly you do not want to transfer the domain entity to the client; you want it to remain safe within the server. Thus the role of the DTO is much more important. You might even have multiple DTOs for the same entity. For instance, I might have a UserListDTO which contains just the subset needed for listing entities, and maybe a UserDetailsDTO which is designed for the user details page. For writes you just transfer the mutated state or send command-influenced DTOs (ApproveUserDTO).
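A stripped-down sketch of the above (the class and property names are purely illustrative):

using System;

// Domain entity: state only changes through behavior, never by setting properties directly.
public class User
{
    public string Name { get; private set; }
    public DateTime? ApprovedAt { get; private set; }
    public string ApprovedBy { get; private set; }

    public User(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required.", nameof(name));
        Name = name;
    }

    // The invariant "ApprovedAt is never set without ApprovedBy" lives in one place.
    public void Approve(string approvedBy)
    {
        if (string.IsNullOrWhiteSpace(approvedBy))
            throw new ArgumentException("Approver is required.", nameof(approvedBy));

        ApprovedAt = DateTime.UtcNow;
        ApprovedBy = approvedBy;
    }
}

// DTOs: dumb shapes tailored to specific screens, safe to send across the boundary.
public class UserListDTO
{
    public string Name { get; set; }
    public bool IsApproved { get; set; }
}

public class UserDetailsDTO
{
    public string Name { get; set; }
    public DateTime? ApprovedAt { get; set; }
    public string ApprovedBy { get; set; }
}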
Finally, it's important that the domain entity isn't forced to be designed in a certain way just to be able to persist it. That's why we have the data entity. It will be designed so that everything can be persisted in an efficient way. The User domain entity might, for example, map to one data entity called User and another called UserField, depending on the domain entity design.
The conclusion is that if your models look exactly the same, you are doing something wrong (from the perspective of DDD).

Calling a method in a different solution

Currently I have a solution whose hierarchy looks like this:
-GUI
    -ENTITIES
    -DATA
    -BLL
        -ENTITIES
        -DATA
            -ENTITIES
1) Is that the right way to lay it out? I'm removing the DATA reference from the GUI currently (just legacy code that I'm moving to the BLL).
2) Is there any way for ENTITIES to call a method from the BLL or DATA? For example, could Entities.Order have Order.GetNextOrderID() call a method from DATA?
1) Is that the right way to lay it out? I'm removing the DATA reference from the GUI currently (just legacy code that I'm moving to the BLL).
This is an extended subject and it is scenario dependent.
Picture a system with componentization, integration with other systems and protocols, native code, multiple client protocols, mobile clients, tests, etc. There would be a lot of layers, and multiple solutions would be needed. The same principle applies to different platforms.
There are a lot of criteria you would have to consider, so the answer is: it depends.
But I guess for what you are doing it fits well.
2) Is there any way for ENTITIES to call a method from the BLL or DATA? For example, could Entities.Order have Order.GetNextOrderID() call a method from DATA?
No, you will get a cyclic dependency error. A single module would allow it, though I wouldn't recommend it.
Also, if you are going to define validation in the entities, make sure your design will not allow for duplication in the services (BLL) or data layers. This can get out of control if you do not have a good team, pair reviews, etc.
The main purpose of the layers is to give each one specific responsibilities. If your layers have well-defined responsibilities, you should be fine.
I will re-iterate my comment for question 1.
Is that the right way to lay it out?
The "right way" is project dependant.
Is there any way for ENTITIES to call a method from the BLL or DATA? For example, could Entities.Order have Order.GetNextOrderID() call a method from DATA?
Not with your current setup; you'll get a circular dependency. You've mixed up a more DDD-style approach (which is what you're going for, nice!) and an Anaemic Domain Model, where the logic sits outside of the entities.
You should choose where the bulk of your logic will sit and work from there. For the DDD approach you're asking about, the Entities project will contain 90% of your logic, and it will have a dependency on the "BLL" project for any other "services" the entities may require.
The flipside for the Anaemic Domain Model is that you have a service in the BLL that loads everything it needs and does all of the operations in the actual service. Your entities then become nothing more than POCOs.
Well, a good design would keep the data layer separate from both the GUI and the BLL, so that each layer performs a single task: the GUI should only be concerned with the user interface, controls, and views; the business logic layer should only implement the business rules; and the data layer should interact with your database. For your second question, all you need to do is add a reference to your Data project in the Entities project. Hope it helps.

Is it OK to have Entity Framework inside the Domain layer?

I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled; or, changing the name of an object, a Category for example, needs to recursively update all of its children's properties (avoiding/ignoring these rules will result in invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
To do this, I would need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: I don't want to over-complicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and the Domain separate.
PS: I don't want to over-complicate the project by adding IRepository interfaces, which would probably allow me to keep EntityFramework and the Domain separate.
It's really a bad idea. I once held this opinion, but honestly, if you don't program to abstractions it will become a pain when the project gets larger (a real pain).
IRepositories also help you spread the work between different team members. In addition, you can write many helper extension methods for IRepository to encapsulate different jobs, for example:
IRepository<File>.Upload()
You must be able to test each layer independently; tying them together will only let you do integration tests, with a lot of bugs hiding in the lower layers :))
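As a rough sketch of such an extension (IRepository<T> and its members here are invented for the example, not taken from a specific library):

// Hypothetical repository abstraction and entity.
public interface IRepository<T> where T : class
{
    void Add(T entity);
    void SaveChanges();
}

public class File
{
    public string Name { get; set; }
    public byte[] Content { get; set; }
}

// Extension methods hang task-specific helpers off the generic abstraction.
public static class FileRepositoryExtensions
{
    public static void Upload(this IRepository<File> repository, string path)
    {
        var file = new File
        {
            Name = System.IO.Path.GetFileName(path),
            Content = System.IO.File.ReadAllBytes(path)
        };

        repository.Add(file);
        repository.SaveChanges();
    }
}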
First, I think this question is really opinion based.
According to the Big Book, the domain models must be separated from the data access. Your domain has nothing to do with how the data is stored; it could be a simple text file or a cluster of MSSQL servers.
This choice must be decided based on the actual project. What is the size of the application?
The other huge questions are: how many concurrent users will use the DB, and how complex will your business logic be?
So if it's a complex project, or one that will presumably be modified frequently, or one with educational purposes, then you should keep the domain and data access separated and define the repository interfaces in the domain model. Use some DI component (personally I like Ninject), and do not reference the data access component from the services.
And of course you should also create test projects, using some mocking tool (such as Moq) to test the layers separately.
Yes, this is wrong. If you are following Domain-Driven Design, you should not compromise your architecture for the sake of doing less work. Your data access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern, as it will allow you more flexibility in the long run.
There is of course no single right answer to what the right design is... I would, however, argue that EF is your data layer abstraction; there is no way you're going to make anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, with the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
Blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer
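For illustration, the IQueryable<> extension idea mentioned above could look roughly like this (the Product type and the availability rule are invented for the example):

using System.Linq;

public class Product
{
    public int Id { get; set; }
    public bool IsDiscontinued { get; set; }
    public decimal QuantityOnHand { get; set; }
}

public static class ProductQueryExtensions
{
    // Reusable filter that composes with further LINQ operators and still
    // executes in the database when applied to an EF/NHibernate query.
    public static IQueryable<Product> Available(this IQueryable<Product> products)
    {
        return products.Where(p => !p.IsDiscontinued && p.QuantityOnHand > 0);
    }
}

// Usage (inside a method with an EF context in scope):
// var available = context.Products.Available().OrderBy(p => p.Id).ToList();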

If I use a BLL, should I still access the DAL?

I want to create an n-tier architecture with a repository pattern. I'm wondering: does it make sense to just duplicate all my calls up through the BLL and then access data-only calls via the BLL? Or can I access some things directly through the DAL and some through the BLL?
IMO it doesn't make sense to duplicate just for the sake of it.
(though really every approach has its pros and cons, nothing is always wrong or good per se)
Usually, though, the data layer deals with (to simplify) rather 'granular' data that matches tables exactly, etc.
Your business layer, on the other hand, can combine that data and is centered more around the 'logic' and your logical model (rather than the data model and the data).
If you find yourself having an exact replica of the DAL in your business layer, then you're most likely missing the point somewhere. Some things may need to be reorganized, thrown away, or just simplified.
Or, for example, ask yourself the following: if you wanted to replace the DAL to work with a different type of storage (a different organization of things, or anything that requires you to change how your data/DAL operates), what would your BLL look like? The same? Your business layer should not 'follow the data'; it should have its own rules, again more about the logic of your domain and what you're doing, while the data layer should be about data.
So, in short, the question is mainly how you design your system: if you make good use of a business layer (and normally you should, unless the system is relatively simple or you decide on an entirely different architecture), then use it; if not, there's no need to duplicate.
hope this helps.
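To make the distinction concrete, here is a hedged sketch (all names are invented): the DAL stays table-shaped, while the BLL combines those calls into a business-level answer rather than mirroring the DAL one-to-one.

using System.Collections.Generic;
using System.Linq;

// DAL: thin, table-shaped calls.
public interface IStockDal
{
    IList<decimal> GetWarehouseQuantities(string itemCode);
    IList<decimal> GetReservedQuantities(string itemCode);
}

// BLL: combines the granular data into a logical answer instead of duplicating the DAL.
public class StockService
{
    private readonly IStockDal _dal;

    public StockService(IStockDal dal)
    {
        _dal = dal;
    }

    public decimal GetAvailableToPromise(string itemCode)
    {
        decimal onHand = _dal.GetWarehouseQuantities(itemCode).Sum();
        decimal reserved = _dal.GetReservedQuantities(itemCode).Sum();
        return onHand - reserved;
    }
}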

Business Objects and Data Layer

This site has provided me with many useful answers; however, after hours of searching I haven't found anything that specifically answers my needs. So here goes...
The company I'm working for is in the process of designing a new Business Objects Layer and a Data Access Layer - these will reside in separate assemblies.
The problem is I'm having a hard time getting my head around the interaction between these two layers; specifically, should the DAL know about the BOL? I've read numerous articles saying the dependency order should go something like this:
GUI / Presentation --> BOL --> DAL
But as far as I can see, the DAL needs a reference to the BOL in order to be able to 'return' objects to the BOL layer.
I'm going for an intermediate assembly between the BOL and DAL which will basically be a thin layer filled with interfaces to decouple those two DLLs, so the framework can use different DALs if the need arises.
This led me to the idea of introducing another thin layer with a bunch of interfaces that the BOs implement; then, when the BOL calls the DAL interface, it passes in an object which implements one of these BO interfaces, and the DAL proceeds to populate that object. This removes all dependencies between the BOL and the DAL; however, I'm finding it hard to justify, to be honest.
Ideally we would like to use an ORM as it removes the need to write CRUD stuff, but our clients have a habit of fiddling with column lengths on their databases, and this is the cause of most of our errors to date using the strongly typed DataTables. I've heard Linq2SQL also stores column lengths at compile time; I'm not sure whether NHibernate does or not (and I'm not sure our database schema is designed cleanly enough for NHibernate; the pitfalls of dealing with legacy systems).
So yeah, any insight on the relationship between a BOL and a DAL would be very much welcome. Apologies if the above is poorly written; if anyone needs clarification I'll be happy to provide more detail.
Marlon
The way I do this is that the BO expects a DataReader or a DataContext or whatever back from the DAL, not an already-formed object. It is then the job of the BO layer to fill itself from whatever came back; the DAL isn't returning a completed BO. The main thing to remember is that changing something in the BO layer shouldn't cause issues for the DAL layer, but changing something in the DAL layer could cause issues for the BO layer.
A short example of what I typically do
In the BO layer
public void FillData()
{
    using (IDataReader dr = DataLayer.GetData("SomePropertyForAStoredProcedure"))
    {
        if (dr.Read())
        {
            Property1 = dr["Property1"].ToString();
            // ...and so on for the other properties
        }
    }
}
In the DAL
public static IDataReader GetData(string spProperty)
{
    // Open the connection, build the stored procedure command, and return command.ExecuteReader().
    throw new NotImplementedException();
}
Take a look at SubSonic (http://subsonicproject.com/); it does most of the tedious data access work for you, and it's easier than most ORMs out there.
The DAL needs a reference to the BOL so that it can populate the objects. What you do not want to have is any reference or coupling from the BOL back to the DAL - doing so causes your BOL to be coupled to a specific database implementation. When you think about it this makes sense. Your DAL knows details about the business objects down to the level of properties and how to retrieve their data from the database - of course the DAL is inherently tightly coupled to the BOL. So the reference that way is fine. And if you think about it what is on the other side? The database. "Tightly coupling" going from your object data to your database? Yeah, it is pretty darn tight. The concept is not very meaningful even.
It is all the other direction where you need to decouple. So yes as long as there is no direct coupling from the DAL into the BOL you can change your data platform anyway you want.
Not much point in creating interfaces for BOs and passing them to DAL in this scenario. You might sometimes need to go the other way however. As a rule business objects should not have to know anything about how they are either created or persisted.
In practice, even with most ORMs, creating a business layer completely free of any sort of persistence artifacts can become very difficult, sometimes effectively impossible. Occasionally you'll have something that is just too difficult to work around, and you might find that strictly avoiding any data knowledge in the BOs leads you to over-complexity that degrades rather than adds value.
If you feel like there is no better way and you need to have something persisted from within the BOL, create a simple interface so that the DAL functionality can be passed into the BOL. That way you can still keep the BOL decoupled from the specific database implementation at least.
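A rough sketch of what that could look like (the interface and type names are illustrative only):

// Defined alongside the business objects, so the BOL depends only on this abstraction.
public interface IOrderPersistence
{
    int GetNextOrderId();
    void Save(Order order);
}

public class Order
{
    private readonly IOrderPersistence _persistence;

    public int Id { get; private set; }
    public string Customer { get; private set; }

    public Order(IOrderPersistence persistence, string customer)
    {
        _persistence = persistence;
        Customer = customer;
    }

    public void Submit()
    {
        Id = _persistence.GetNextOrderId();
        _persistence.Save(this);
    }
}

// The DAL assembly implements IOrderPersistence; the BOL never references the DAL directly.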
Also, although it is a lot of additional work, unless this is a very simple throwaway app, I strongly recommend that you also add another layer between the UI and the BOL. The MVP (Model-View-Presenter) pattern is a general purpose design pattern for reducing coupling between the core app and the UI. There are a lot of variants on presenters, don't get too caught up in the specific details, just start off with the simple MVP if you have never used it.
The pattern is not that hard; it is just that the UI itself is so messy that it may take you at least a couple of major iterations or applications before you feel like the code you are writing is systematically and methodically working to decouple the UI. Just keep working at it, start to acquire an arsenal of techniques, and don't get hung up on the fact that you have not achieved a sharp, clean separation yet. Anything you learn and can do that contributes even a little to creating a well-defined boundary at the UI is a big step in the right direction.
The 'correct' approach is going to vary depending on business needs. To be honest, there are many projects where I feel the old-style ADO recordsets required less development time and were easier to maintain than many of the ORMs out there now. Take some time to identify what your needs are, and remember that development time and maintainability are design goals that should be properly weighed as well.
It also depends on if/what library/ORM (Object-Relational Mapper) you use. When using a (good) ORM, the DAL should be a very remote concern, because it is almost completely hidden by the ORM; however, best practices dictate that even then, for medium to large size applications, you should introduce another layer between the BOL and ORM, usually DTO (Data Transfer Objects). DTOs can also be used without an ORM, as they are just dumb objects defined in a separate library, and the DAL can be responsible for persisting them (transforming them from/to database structures), while the BOL can query the DAL and receive those objects.
Decoupling the layers can be achieved in a variety of ways, most commonly through interfaces and/or MEF or another DI/IOC framework. Any such technique achieves more than sufficient decoupling if used effectively.
Also, depending on the technology used, as Sisyphus said, one of the layered architectural patterns will help separate concerns nicely: MVC, MVP, MVVM etc. I personally recommend MVVM with WPF (desktop) or Silverlight (web) but I'm highly biased - i.e. I love both of them to death :)
These are my findings:
1. Use interfaces.
2. Use DTOs (Data Transfer Objects) between the DAL and BLL.
3. Split the BLL into two:
a. BLL
b. Service Layer
4. Use an Inversion of Control (IoC) container to keep coupling as low as possible (a minimal sketch follows below).
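As a minimal illustration of point 4, assuming Microsoft.Extensions.DependencyInjection as the container (Ninject, Autofac, etc. work the same way) and placeholder interfaces standing in for the layers from points 1-3:

using Microsoft.Extensions.DependencyInjection;

// Minimal stand-ins for the layer contracts.
public interface IProductRepository { }                 // DAL contract
public class SqlProductRepository : IProductRepository { }

public interface IProductService { }                    // BLL / service layer contract
public class ProductService : IProductService
{
    private readonly IProductRepository _repository;    // dependency injected by the container

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }
}

public static class CompositionRoot
{
    public static ServiceProvider Build()
    {
        var services = new ServiceCollection();

        // Register everything by interface so each layer only knows abstractions.
        services.AddScoped<IProductRepository, SqlProductRepository>();
        services.AddScoped<IProductService, ProductService>();

        return services.BuildServiceProvider();
    }
}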
