I'm writing a C# application and I want to follow a 3-tier programming architecture. I've been building my application based on this article.
I have some questions that I hope someone can help me with:
Where do I put the domain objects (for instance a Person class, where I put the getters and setters, the constructor, and all its properties such as age, name, ..)? Do I put these in the BLL folder or somewhere else?
Should I put all my BLL functions that call functions from my DAL layer in one controller, or separated among specific business classes (for instance Person, Order, ..)?
Do I need to create a DAL object in every BLL function before calling a DAL function, or do I use a singleton pattern where only one DAL class object exists at a time?
A screenshot of my classes (Program.cs is the main class):
[Screenshot: class structure]
I would say the domain objects go inside the DAL folder, as these objects store the data inside an object instance.
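For example, a minimal sketch of such a domain object, using the Person from the question (property names taken from there):

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }

    public Person(string name, int age)
    {
        Name = name;
        Age = age;
    }
}

Wherever the class ends up living, keeping it a plain data holder like this makes it cheap to move later.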
I wouldn't suggest placing all BLL functions under one controller. One of the reasons for 3-tier architecture, even for a "single machine, single project" setup, is code segregation, so that the code is easy to understand and maintain.
A singleton pattern means the same object is shared by all BLL functions. If the main aim of your DAL is to have a single point of storage interaction (e.g. a database), then having multiple DAL objects could mean multiple database connections, which is a resource utilization concern. Even in multi-threaded situations you can cap the database connection pool at a constant size and make sure the threads share the pool. The important thing is that you do not request unnecessary resources from the database.
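To illustrate the alternative to a singleton, here is a minimal sketch (class, table, and method names are assumptions): each DAL call opens and disposes its own SqlConnection, and ADO.NET connection pooling reuses the physical connections underneath, so no shared DAL object is required.

using System.Data.SqlClient;

public class PersonDal
{
    private readonly string connectionString; // e.g. loaded from app.config

    public PersonDal(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public int CountPersons()
    {
        // Open and dispose per call; the ADO.NET pool recycles the
        // underlying physical connection, so this stays cheap.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Person", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}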
Various solutions are possible, but the .NET Core platform, for example, shows that "more abstractions to the God of abstraction" is the trend. I guess that is because they find it easier to manage their development process that way (cross-platform, open source).
So build it with the maximum abstraction you find practical and see how comfortable it is for you.
I keep entities and service interfaces in one assembly. "Business" code I store in the entity (maybe in an instance method, maybe in a static method, maybe in a static extension; there is no big difference between them). POCO doesn't mean "can't contain methods".
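For instance (a sketch; the Order entity and its discount rule are invented for illustration), a POCO can carry its own business behavior:

public class Order
{
    public decimal Subtotal { get; set; }
    public decimal DiscountRate { get; set; } // 0.0 to 1.0

    // Business logic on the entity itself - it is still a POCO.
    public decimal CalculateTotal()
    {
        return Subtotal * (1m - DiscountRate);
    }
}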
Related
I have a project with the following structure:
Project.Domain
Contains all the domain objects
Project.EntityFramework, ref Project.Domain
Contains Entity Framework UnitOfWork
Project.Services, ref Project.Domain and Project.EntityFramework
Contains a list of Service classes that perform some operations on the Domain objects
Project.Web.Mvc, ref to all the projects above
I am trying to enforce some Business rules on top of the Domain objects:
For example, you cannot edit a domain object if its parent is disabled, or changing the name of an object (a Category, for example) needs to recursively update all its children's properties (avoiding / ignoring these rules will result in creating invalid objects).
In order to enforce these rules, I need to hide all the public property setters, making them internal or private.
In order to do this, I need to move Project.Services and Project.EntityFramework inside the Project.Domain project.
Is this wrong?
PS: i don't want to over complicate the project by adding IRepositories interfaces which would probably allow me to keep EntityFramework and Domain separate.
It's really a bad idea. I once held this opinion too, but honestly, if you don't program to abstractions it will become a pain when the project gets larger. (A real pain.)
IRepositories also help you spread the job between different team members. In addition, you can write many helper extensions for IRepository to encapsulate different jobs, for example (sketched below):
IRepository<File>.Upload()
You must be able to test each layer independently; tying them together will leave you able to do only integration tests, with a lot of bugs in the lower layers. :)
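As a rough sketch of what that abstraction might look like (the interface shape and the Upload helper are assumptions, not a standard API):

public interface IRepository<T> where T : class
{
    T GetById(int id);
    void Add(T entity);
    void Remove(T entity);
}

public class File { } // placeholder entity for the example

// A helper extension encapsulating one specific job, in the spirit
// of the IRepository<File>.Upload() example above.
public static class FileRepositoryExtensions
{
    public static void Upload(this IRepository<File> repository, File file)
    {
        // validate / transfer the bytes here, then persist the record
        repository.Add(file);
    }
}

Because each layer depends only on IRepository<T>, a test can hand the BLL an in-memory fake instead of a real database.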
First, I think this question is really opinion-based.
According to the Big Book, domain models must be separated from data access. Your domain has nothing to do with how the data is stored; it could be a simple text file or a cluster of MSSQL servers.
This choice must be decided based on the actual project. What is the size of the application?
The other huge questions are: how many concurrent users will use the db, and how complex will your business logic be?
So if it's a complex project, or one that will presumably be modified frequently, or it has educational purposes, then you should keep the domain and data access separated, and you should define the repository interfaces in the domain model. Use some DI component (personally I like Ninject; see the sketch below) and do not reference the data access component in the services.
And of course you should also create test projects, using some mocking tool (such as Moq) to test the layers separately.
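A minimal sketch of that arrangement with Ninject as the DI component (all type names are hypothetical):

using Ninject;

public class Person { } // stand-in domain entity

// Defined in the domain model assembly: the abstraction only.
public interface IPersonRepository
{
    Person GetById(int id);
}

// Defined in the data access assembly.
public class EfPersonRepository : IPersonRepository
{
    public Person GetById(int id)
    {
        // The Entity Framework query would go here.
        return null;
    }
}

// The composition root is the only place that knows both sides.
public static class CompositionRoot
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IPersonRepository>().To<EfPersonRepository>();
        return kernel;
    }
}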
Yes, this is wrong. If you are following Domain-Driven Design, you should not compromise your architecture for the sake of doing less work. Your data access and domain should be kept apart. I would strongly suggest that you implement the Repository pattern, as it would allow you more flexibility in the long run.
There is of course no right answer to what the right design is... I would, however, argue that EF is your data layer abstraction; there is no way you're going to make anything more powerful and flexible with repositories. To avoid code repetition you can easily write extension methods (for IQueryable<>) for common tasks. Unit testing of the domain layer is easily handled by substituting your big DB with some in-proc DB (SQLite / SQL Server Compact). IMHO, with the maturity of current ORMs like NHibernate and EF, it is a huge waste of money and time to implement repositories for something as simple as DB access.
Blog post with a more detailed reply: http://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer
I am pretty new to MVC 4; up to this point I have worked mostly with Web Forms in C#. I understand the MVC pattern, the routing, calling actions, and so on.
But what about the actions which are responsible for fetching data from the database, for example by firing stored procedures? I have seen some tutorials where they put the logic for connecting to the database directly in the actions.
However I am thinking of a more centralized way to do it. For example, I can put all the functions which are firing stored procedures in a separate class named DatabaseCoordinator.cs in a folder named Helpers for example. Then I can call them from the actions in the controllers.
In that way I will know that I can find all of my methods for the database in one class, which is a very clean solution, I think (or at least in web forms). However I want to follow the pattern of MVC, and use only models, views and controllers as the name of the pattern itself implies.
So what is the best practice for that? Should I make a separate class for this, or implement the logic directly in the controllers, or perhaps somewhere else?
You should certainly make a separate repository class to contain all of your data access operations.
There is a good worked example here:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I recommend that you put your data access code somewhere other than in your controller. The controller's primary purpose is to gather together the information for display on a page or the reverse - to take the data from the page that is posted back and feed it to the code responsible for business rules and data access.
For most MVC projects (heck, for most projects really!) I build separate class library projects - at minimum one for business rules and data access, though typically I'll make those two separate projects. The purpose of separating the logic is really for simpler future maintenance and reusability. If you keep your various logical parts separate, you can easily swap them out if your logic or database needs to change, or you can easily consume the business rules and data from a new type of user interface; for example, if you decided to implement your project as a Windows forms application in addition to your web system, you could (theoretically) just reuse your business logic and data access logic libraries and only rebuild the user layer. However, if you build your logic into your controller, you really can't reuse that logic without extracting it and converting it to the new application model you're using.
So, simply put, definitely keep 99% of your logic and data access out of your controller. Only put what you must into your controller; the rest goes in a separate class or, where appropriate, in separate class libraries.
Good luck!
The Controllers and Views tend to stay within the same project, but it's common to split the data access classes and models into their own separate class library, as this allows other projects to utilise them.
This will allow you, in the future, to maybe add a windows forms/wpf interface or maybe a mobile device interface, leveraging the work you already have in the standalone class library.
Another thing to consider, is looking into how to use ViewModels in your MVC application. It's a common technique when Views require more than one domain object. Using View Models in MVC.
Check out the Unit of Work pattern (UoW) combined with the Repository pattern. It doesn't matter whether you ultimately call a stored procedure or an inline LINQ query to return results; your caller shouldn't know or care how GetPersons is ultimately implemented. The UoW pattern combined with the Repository pattern is a very popular way to expose an Entity Framework database in the ASP.NET community. You will find different ways to do it, some overkill and some that just create dependencies with no actual benefit, but with those patterns you will find a way that feels right to you.
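A sketch of the shape those two patterns usually take together (GetPersons follows the example above; everything else is assumed):

using System;
using System.Collections.Generic;

public class Person { } // stand-in entity

// The caller neither knows nor cares whether this runs a stored
// procedure or an inline LINQ query underneath.
public interface IPersonRepository
{
    IEnumerable<Person> GetPersons();
}

// The Unit of Work exposes the repositories and commits their
// changes as a single transaction.
public interface IUnitOfWork : IDisposable
{
    IPersonRepository Persons { get; }
    void Save();
}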
After more experience, I would like to change my answer and state that the Repository pattern, and thus the Unit of Work pattern, are pointless layers of abstraction that prevent you from working directly with Entity Framework, which is your data layer abstraction!
Other than being able to swap out databases from, say, Microsoft SQL Server to PostgreSQL (when would this ever happen in the real world?) and controlling the structure of complex queries that you don't want repeated in your code, I see no real value in the repository pattern. To include CreatedBy / ModifiedBy values on insert/update you need only override Entity Framework (e.g. its SaveChanges). To encapsulate queries that include business rules, such as where active = 1 and isdeleted = 0, just extend LINQ queries with extension methods.
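For example, a sketch of such an extension method (the Product entity and its IsActive / IsDeleted properties are stand-ins for active = 1 and isdeleted = 0):

using System.Linq;

public class Product // stand-in entity
{
    public bool IsActive { get; set; }
    public bool IsDeleted { get; set; }
}

public static class ProductQueries
{
    // Encapsulates the "where active = 1 and isdeleted = 0" rule
    // so it is not repeated across queries.
    public static IQueryable<Product> WhereLive(this IQueryable<Product> query)
    {
        return query.Where(p => p.IsActive && !p.IsDeleted);
    }
}

Something like context.Products.WhereLive() then composes with any further Where or OrderBy clauses, and the rule lives in exactly one place.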
I am currently studying towards my final year of a Computer Science degree, and working on my final project and dissertation. I will be using ASP.NET Web Forms and C# to create a 3-Layer project - I can't really call it 3-Tier as it will most likely never be hosted on anything other than my local PC for testing as it is for uni purposes only.
My main question is this:
From my understanding, the idea of 3-layer is that the BLL references the DAL, and the UI references the BLL, to create complete separation of concerns. However, I have made a small mock-up project following a few tutorials to get the hang of 3-layer, and most basic tutorials still require a reference between the UI and DAL.
For example, in the project I have created, which is a very basic Products and Categories type e-commerce system, I have created the Product and ProductDAL classes in the DAL, then the ProductBLL class in the BLL. With this setup, using only one database table (forget categories for now), the BLL seems to serve only as a sort of interface between the UI and DAL; the methods are exactly the same as those in the DAL and only call the DAL version.
The problem is that to access the DAL via the BLL, I have to pass in a Product object to the BLL method arguments, which means creating a Product object in the UI first, which means referencing the DAL from the UI. Is this the correct way of doing things?
I can get around simple cases like this by creating a method in the BLL that takes the appropriate fields, e.g. strings and ints, creates the Product object, and passes it to the AddProduct method. However, when it comes to binding different product attributes to labels in the UI, I still need access to the Product object.
So essentially, do I need to make a load of methods in the BLL to access properties of the Product Object? If not, what kind of methods would actually go there, can you give me any examples of methods that may go in the BLL in this kind of Product scenario?
Thanks in advance, and apologies if this has been asked before - I did read through a lot of posts about 3-Layer architecture but most are very basic and only access one table.
the BLL seems to only serve as a sort of interface between the UI and DAL
This is only because this application is very simple - just a CRUD interface at the moment. More complex applications have business rules that would be encapsulated in the BLL (and not in the UI or DAL).
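For example (the pricing rule here is invented for illustration), the BLL stops being a pass-through as soon as a rule like this needs a home:

using System;

public class Product // matching the question's Product entity
{
    public decimal Price { get; set; }
    public decimal Cost { get; set; }
}

public class ProductDal // stand-in for the question's ProductDAL
{
    public void AddProduct(Product product)
    {
        // the actual database insert goes here
    }
}

public class ProductBll
{
    private readonly ProductDal dal = new ProductDal();

    public void AddProduct(Product product)
    {
        // A business rule that belongs in the BLL,
        // not in the UI and not in the DAL.
        if (product.Price < product.Cost)
            throw new InvalidOperationException("Price cannot be below cost.");

        dal.AddProduct(product);
    }
}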
I have to pass in a Product object to the BLL method arguments, which means creating a Product object in the UI first, which means referencing the DAL from the UI. Is this the correct way of doing things?
Well, there are several different options here:
You can have a Product data object that is shared between the different layers. This object is not a DAL object, but the DAL uses it. It is called a DTO - a Data Transfer Object.
You can have several different Product objects: one to be consumed by the UI, one by the BLL, and one by the DAL, with mapping layers to translate between the different objects (both options are sketched after this list).
Some combination of the above.
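A condensed sketch of the first two options (all type names hypothetical): a shared DTO, and a UI-specific model mapped from it.

// Option 1: a DTO shared across the layers - a plain data carrier.
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Option 2: a UI-specific model with an explicit mapping step.
public class ProductViewModel
{
    public string DisplayName { get; set; }

    public static ProductViewModel FromDto(ProductDto dto)
    {
        return new ProductViewModel { DisplayName = dto.Name };
    }
}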
A common way of separating concerns is to start by having a project called YourProject.Entities or something similar. This contains the main class definitions, and you reference it when you need a large entity like a customer or a product. Alongside it, you have another project which acts as a repository. Depending on the technology you are using, this can either implement something like EF to get your objects from your DB, or contain methods which query your DB directly using straight SQL or stored procedures.
What you have to keep in mind is that these projects are primarily going to function based on user input. Your users will act and your program will respond. The idea, though, is that the actual business logic is separated from your UI and your data access. You can mix and match these ideas as you wish, but what I have tended to see in my professional experience is basic data constraint enforcement done on the DB access side of things, and data validation done either directly when creating your objects in the Entities project or in a separate EntitiesValidation project which takes entities as a parameter.
If you don't want a separate validation project, keep in mind that you can implement business logic directly in objects using constructors and properties. Constructors can enforce logic on inputs before creating objects, and using full properties--that is to say this...
private string myProp;

public string MyProp
{
    get
    {
        // Some code, e.g. log or transform, then return the backing field
        return myProp;
    }
    set
    {
        // Some code, e.g. enforce rules on value before assigning
        myProp = value;
    }
}
instead of this...
public string MyProp { get; set; }
allows you to implement rules when accessing the data associated with those properties.
In the end, these questions can be answered many different ways and I am sure that every response to this question will give you different ideas and opinions on the best way to do things. For me, the two rules I always follow are DRY (do not repeat yourself) and code maintainability. By separating logic from data access from object design from UI, you will have a much easier time maintaining and updating your program when that time comes... even if it is just a school project ;).
I have a quick question that I hope is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database that contains information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is what is the best way to create this library so it can be shared among applications.
Do I create a self-contained library that stores the database connection? (I can see concurrency issues here, and it doesn't feel right.)
Client -> Server, and then deploy the "client library" for use in any application?
Or would a Web/WCF service be more ideally suited to this situation?
There are many options because the question can be interpreted broadly. I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and I have a hard time breaking away from those notions to something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate the connection string from code, and that's easily supported by a .NET .config file or application settings.
I often prefer a small core library holding the business logic, concepts, and flows, although each of those can be broken out. Within that concept you can still break business logic and data access into different assemblies, so you can swap in a new kind of data access while sticking with the core module (a kind of "business kernel" or "engine", if you will).
You can express your "business kernel" through many presentation types, for example:
textual/console I/O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc
as cmdlets for Powershell interactions
web service expressions
kinds of mobile apps
etc.
You can accelerate development by using patterns to bend software to your will, along with related implementations like the Microsoft Enterprise Library; loosen the coupling with dependency injection, e.g. Ninject (one of many), or other inversion of control techniques; etc.
I usually prefer to have a middle tier layer (so some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so that you can control the number of connections, or you can change the schema of the database in a way that will be transparent for the clients.
Depending on your situation, you can either make the clients connect to the WCF service (preferred in most cases), or create a dll that will wrap the connection to the service and perform some additional processing on the client side.
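A sketch of what that WCF boundary could look like (contract and member names are assumptions):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class EmployeeInfo
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string OfficeLocation { get; set; }
}

[ServiceContract]
public interface IEmployeeService
{
    // Clients call this over the wire; the database stays behind the service.
    [OperationContract]
    EmployeeInfo GetEmployee(int employeeId);
}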
It depends how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, and the database must also include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you can mark up your entities with mapping attributes (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx and the sketch after this list).
Provide the domain library only, and implement persistence outside it, if your data layer supports POCO mapping (EF 4 does).
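For the first option, a sketch of the LINQ to SQL mapping attributes mentioned above (table and column names assumed):

using System.Data.Linq.Mapping;

[Table(Name = "Person")]
public class Person
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }
}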
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services, like data access, security, logging, web services, etc. If your application has an ideal design and the layers are fully decoupled from each other, there is no problem adding new entities; but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, etc. Such applications must be refactored: services must be extracted into interfaces, and those interfaces must be passed to components in the separate assembly.
Entity references. If you use a rich domain model, you probably want to reference entities declared in another assembly. This problem can be partially solved by generics, but you need a special design of your data access layer that allows you to get lists of generic entities, get an entity by id, etc.
Database integration. It may be hard to maintain database changes if some entities are developed separately from others, especially by another team.
Just be sure to keep your connection method separate from your data access layer, and then you can change the connection method later if requirements change. If you have a simple DLL that holds your real logic, then adding a communication layer on top should be simple. This will also allow you to use all three methods you mentioned and have all your actual logic in a single DLL used amongst all three.
I'm new to the DDD thing. I have a PROFILE class and a PROFILE REPOSITORY class.
The PROFILE class contains the following fields -> Id, Description, ImageFilePath
So when I add a new profile, I upload the image to the server and store the path to it in my db.
When I delete the profile, the image should be removed from my file system as well.
My Question:
Where do I add the logic for this? My profile repository has a Delete method. Should I add this logic there? Or should I add a service to encapsulate both actions?
Any comment would be appreciated...
Thanks
You have two different "actions" related to the images. You have a "physical" process and a "logical" process. The logical process is persisting the information about the image into the domain repository, since it is part of the domain. The physical process of adding (and deleting) is a prerequisite to the logical process.
Taking a step back, the physical process is completely independent of the logical process, but the opposite is not true. You obviously do not want to persist meta-information about the image (in the domain) if the image was not saved. Also, you don't want to remove the information from the domain if you cannot remove the physical file.
The domain should contain the information required to remove the logical instance of the image from the datasource. Think of the domain as a physically separate application. In this case, the domain has no actual knowledge that the data it is persisting has anything to do with a physical file. Make sure to keep it this way.
Generally, I have my entities in one assembly, and my repositories and domain services in another. The application services live outside of the domain model but leverage it to do their work. So application services use one or more domain services or other application services, and domain services can use one or more repositories.
Keeping this in mind, you have two places for the actual deletion logic, and a third place to coordinate them. Here is how it would work if I were doing it. The domain service will leverage the repository for the logical delete from the underlying datasource (as well as the retrieval, which you will also need). It is not aware of anything else other than working with the domain object instance. I would also have an application service (outside of the domain) which specifically deals with removing the physical instance. For argument's sake, I will assume you have an "ImageRepository" class and an "ImageServices" class, which contain your domain repository and your domain services, respectively. Your ImageServices needs a Delete() method, as well as whatever Find() methods you are using. I usually explicitly name the find methods FindBy...() (i.e. FindByKey(), FindByName(), etc.).
You don't want to remove the logical instance if you haven't been able to remove the physical instance, so make sure you have a means of measuring success of the removal operation for the physical image. I would probably go with some sort of a custom exception in this case (since I would consider deleting a file to be a standard operation that should not commonly fail). This usually falls in the realm of "management". So usually I have an application service named something like "ImageManagementService". For simplicity sake, this service (since it is part of the application and not the domain) can have a private method to do the physical delete. Let's call it "DeleteImageFile()".
The third place is a coordination of these two operations, also as an application service. I would just make this the public method in the "ImageManagementService". We can call this one "RemoveImage". This application service will do the following:
Retrieve the instance information from the domain services (a passthrough call to your repository).
Use the instance information to locate the physical file and remove it (the first application service mentioned, again).
If the physical removal is successful, delete the instance (back to the domain service, facading the repository again).
So, what happens is the application itself calls the "RemoveImage()" method on the "ImageManagementService" instance. Internally, "RemoveImage()" first calls "FindBy...()" on the domain's "ImageServices" to get an instance from the domain. The file path is used from there to call the private "DeleteImageFile()" method in the "ImageManagementService" instance. Upon success, it will then call the "Delete()" method in the domain's "ImageServices", which is acting as a facade to your repository.
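A condensed sketch of that coordination (type names follow the hypothetical ones above):

public class ProfileImage // the domain entity
{
    public int Id { get; set; }
    public string FilePath { get; set; }
}

public class ImageServices // domain service, facading the repository
{
    public ProfileImage FindByKey(int id) { /* repository lookup */ return null; }
    public void Delete(ProfileImage image) { /* repository delete */ }
}

public class ImageManagementService // application service
{
    private readonly ImageServices imageServices;

    public ImageManagementService(ImageServices imageServices)
    {
        this.imageServices = imageServices;
    }

    public void RemoveImage(int imageId)
    {
        // 1. Retrieve the instance through the domain service.
        var image = imageServices.FindByKey(imageId);

        // 2. Remove the physical file first; an exception here stops
        //    us from ever reaching the logical delete.
        DeleteImageFile(image.FilePath);

        // 3. Logical delete through the domain service (repository facade).
        imageServices.Delete(image);
    }

    private void DeleteImageFile(string path)
    {
        System.IO.File.Delete(path);
    }
}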
I think it is very important to focus on the separation of concerns in this case, because if you have an explicit separation (which you can do with different assemblies) you will become comfortable with knowing which kind of logic can go in which place. I highly recommend the Evans book. Also, for a quick hit on the SoC concept as it relates to DDD, I recommend taking a look at Jeffrey Palermo's three-part series on the "Onion Architecture".
Just a couple of notes as to why you would use a domain service instead of calling the repository directly from the application service. Primarily, the repository has more complicated instancing than the domain service. Remember, it is mostly a facade, but it might have additional logic that does not fit anywhere else in the domain. A good example of this might be if you wanted to enforce a unique filename. The domain object itself has no direct knowledge of other domain objects in other aggregates, so the domain service might check for an existing instance with the same name prior to a save operation. Very handy, indeed! Also, a domain service is not limited to a single repository. You can have a domain service coordinate efforts between multiple repositories. If you have overlapping aggregates, you might need to work with two related aggregate roots at the same time. You can do this in the domain service, keeping that sort of logic in the domain and not bleeding it into the application.
Hope this helps. I am sure that there are other ways to do this, but this is the way that I have found success in my own applications with similar scenarios.
#joseph.ferris: "Generally, I have my entities in an assembly, then my repositories and domain services in another. "
Personally, I prefer to see assemblies as a unit of deployment, not a separation of concerns design tool. For that, I'd rather use namespaces.
Ensuring no cyclic-dependencies (between those namespaces) that way is harder, but tools like NDepend can help out.
As a first approach, I think I would opt for the simplest solution and delete the physical image from disk inside the ImageRepository.
It is maybe not the most "correct" or "pure" solution, but it is the simplest one, and this conforms to the "choose the simplest solution that works" adage.
When, in a later phase of the project, you feel that this solution is not good enough and you need a more complex (and maybe purer) solution like the one proposed by joseph.ferris, you can always refactor it.
It is easier to refactor a simple solution than to refactor a complex one. :)