I'm new to DDD. I have a Profile class and a ProfileRepository class.
The Profile class contains the following fields: Id, Description, ImageFilePath.
So when I add a new Profile, I upload the image to the server and store the path to it in my database.
When I delete the profile, the image should be removed from my file system as well.
My Question:
Where do I add the logic for this? My ProfileRepository has a Delete method. Should I add this logic there, or should I add a service to encapsulate both actions?
Any comment would be appreciated...
Thanks
You have two different "actions" related to the images. You have a "physical" process and a "logical" process. The logical process is persisting the information about the image into the domain repository, since it is part of the domain. The physical processes of adding (and deleting) the file are a prerequisite to the logical process.
Taking a step back, the physical process is completely independent of the logical process, but the opposite is not true. You obviously do not want to persist meta-information about the image (in the domain) if the image was not saved. Also, you don't want to remove the information from the domain if you cannot remove the physical file.
The domain should contain the information required to remove the logical instance of the image from the datasource. Think of the domain as a physically separate application. In this case, the domain has no actual knowledge that the data it is persisting has anything to do with a physical file. Make sure to keep it this way.
Generally, I have my entities in one assembly, then my repositories and domain services in another. The application services live outside of the domain model, but leverage it to do their work. So application services use one or more domain services or other application services, and domain services can use one or more repositories.
Keeping this in mind, you have two places for the actual deletion logic, and a third place to coordinate them. Here is how it would work if I were doing it. The domain service will leverage the repository for the logical delete from the underlying datasource (as well as the retrieval you will need). It is not aware of anything other than working with the domain object instance. I would also have an application service (outside of the domain) which specifically deals with removing the physical instance. For argument's sake, I will assume you have an "ImageRepository" class and an "ImageServices" class, which contain your domain repository and your domain services, respectively. Your ImageServices needs a Delete() method, as well as whatever Find() methods you are using. I usually explicitly name the find methods FindBy...() (e.g., FindByKey(), FindByName(), etc.).
You don't want to remove the logical instance if you haven't been able to remove the physical instance, so make sure you have a means of measuring the success of the removal operation for the physical image. I would probably go with some sort of custom exception in this case (since I would consider deleting a file to be a standard operation that should not commonly fail). This usually falls in the realm of "management". So usually I have an application service named something like "ImageManagementService". For simplicity's sake, this service (since it is part of the application and not the domain) can have a private method to do the physical delete. Let's call it "DeleteImageFile()".
The third place is a coordination of these two operations, also as an application service. I would just make this the public method in the "ImageManagementService". We can call this one "RemoveImage". This application service will do the following:
Retrieve the instance information from the domain services (a passthrough call to your repository).
Use the instance information to locate the physical file and remove it (the first application service mentioned, again).
If the physical removal is successful, delete the instance (back to the domain service, facading the repository again).
So, what happens is the application itself calls the "RemoveImage()" method on the "ImageManagementService" instance. Internally, "RemoveImage()" first calls "FindBy...()" from the domain's "ImageServices" to get an instance from the domain. The file path from that instance is used to call the private "DeleteImageFile()" method in the "ImageManagementService" instance. Upon success, it will then call the "Delete()" method in the domain's "ImageServices", which is acting as a facade to your repository.
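To make that flow concrete, here is a minimal sketch of how the coordination might look. The Image, ImageServices and ImageManagementService shapes below are assumptions following the names in this answer, not code from the question:

```csharp
using System;
using System.IO;

// Minimal stubs for the domain-side types named above, just so the sketch
// compiles; the real ImageServices would facade your ImageRepository.
public class Image
{
    public int Id { get; set; }
    public string FilePath { get; set; }
}

public class ImageServices
{
    public Image FindByKey(int id) { /* pass-through to ImageRepository */ throw new NotImplementedException(); }
    public void Delete(Image image) { /* logical delete via ImageRepository */ throw new NotImplementedException(); }
}

// Application service: coordinates the physical delete with the logical delete.
public class ImageManagementService
{
    private readonly ImageServices imageServices;

    public ImageManagementService(ImageServices imageServices)
    {
        this.imageServices = imageServices;
    }

    public void RemoveImage(int imageId)
    {
        // 1. Retrieve the instance information from the domain.
        Image image = imageServices.FindByKey(imageId);

        // 2. Remove the physical file first; an exception here aborts the logical delete.
        DeleteImageFile(image.FilePath);

        // 3. Only on success, remove the logical instance from the domain.
        imageServices.Delete(image);
    }

    private void DeleteImageFile(string path)
    {
        if (!File.Exists(path))
            throw new InvalidOperationException("Image file not found: " + path);

        File.Delete(path);
    }
}
```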
I think it is very important to focus on the separation of concerns in this case, because if you have an explicit separation (which you can do with different assemblies) you will become comfortable with knowing which kind of logic can go in which place. I highly recommend the Evans book. Also, for a quick hit on the SoC concept as it relates to DDD, I recommend taking a look at Jeffrey Palermo's three-part series on the "Onion Architecture".
Just a couple of notes as to why you would use a domain service instead of calling the repository directly from the application service. Primarily, the repository has more complicated instancing than the domain service. Remember, the domain service is mostly a facade, but it might have additional logic that does not fit anywhere else in the domain. A good example of this might be if you wanted to enforce a unique file name. The domain object itself has no direct knowledge of other domain objects in other aggregates, so the domain service might check for an existing instance with the same name prior to a save operation. Very handy, indeed! Also, a domain service is not limited to a single repository. You can have a domain service coordinate efforts between multiple repositories. If you have overlapping aggregates, you might need to work with two related aggregate roots at the same time. You can do this in the domain service, keeping that sort of logic in the domain and not letting it bleed into the application.
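As an illustration of that kind of domain-service logic, here is a small sketch of a unique-file-name check. The Image and IImageRepository shapes are hypothetical:

```csharp
using System;

public class Image
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Assumed repository shape; FindByName/Save are illustrative names.
public interface IImageRepository
{
    Image FindByName(string name);
    void Save(Image image);
}

// Domain service: can look across the aggregate where the entity itself cannot.
public class ImageServices
{
    private readonly IImageRepository repository;

    public ImageServices(IImageRepository repository)
    {
        this.repository = repository;
    }

    public void Save(Image image)
    {
        // The entity has no view of its siblings, but the domain service can
        // consult the repository to enforce a unique file name before saving.
        if (repository.FindByName(image.Name) != null)
            throw new InvalidOperationException(
                "An image named '" + image.Name + "' already exists.");

        repository.Save(image);
    }
}
```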
Hope this helps. I am sure that there are other ways to do this, but this is the way that I have found success in my own applications with similar scenarios.
#joseph.ferris: "Generally, I have my entities in an assembly, then my repositories and domain services in another. "
Personally, I prefer to see assemblies as a unit of deployment, not a separation of concerns design tool. For that, I'd rather use namespaces.
Ensuring there are no cyclic dependencies (between those namespaces) is harder that way, but tools like NDepend can help out.
On a first approach, I think I would opt for the most simple approach, and delete the physical image from disk inside the ImageRepository.
It is maybe not the most 'correct' or 'pure' solution, but it is the simplest one, and it conforms to the 'choose the simplest solution that works' adage.
When, in a later phase of the project, you feel that this solution is not good, and you feel you need a more complex (and maybe more pure) solution like the one proposed by joseph.ferris, then you can always refactor it.
It is easier to refactor a simple solution than to refactor a complex solution. :)
Related
I'm writing a C# application and I want to follow a 3-tier architecture. I've been building my application based on this article.
I have some questions that I hope someone can help me with:
Where do I put the domain objects (for instance a Person class, with the getters, setters, constructor, and all its properties (age, name, ...))? Do I put these in the BLL folder or somewhere else?
Should I put all my BLL functions that call functions from my DAL layer in one controller, or separate them among the specific business classes (for instance Person, Order, ...)?
Do I need to create a DAL object in every BLL function before calling a DAL function, or do I use a singleton pattern where I only ever create one DAL object?
A screenshot of my classes (Program.cs is the main class): [screenshot: class structure]
I would say the domain objects would go inside the DAL folder, as these objects will be storing the data inside an object instance.
I wouldn't suggest placing all BLL functions under one controller. One of the reasons for a 3-tier architecture, even for a "single machine, single project", is code segregation, so that the code is easy to understand and maintain.
The singleton pattern means the same object would be shared by all BLL functions. If the main aim of your DAL is to have a single point of storage interaction (e.g. the database), then having multiple DAL objects would mean multiple database connections, which is a resource utilization concern. Even in multi-threaded situations you can cap the database connection pool at a constant size and make sure the threads share the pool. The important thing is that you do not request unnecessary resources from the database.
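If you do go with a single shared DAL object, a minimal thread-safe singleton sketch (PersonDal is a hypothetical name) could look like this:

```csharp
using System;

// Minimal thread-safe singleton for a shared DAL object.
// Lazy<T> defers creation until first use and is thread-safe by default.
public sealed class PersonDal
{
    private static readonly Lazy<PersonDal> instance =
        new Lazy<PersonDal>(() => new PersonDal());

    public static PersonDal Instance
    {
        get { return instance.Value; }
    }

    private PersonDal() { }

    public void Save(/* Person person */)
    {
        // Open a connection from the shared pool, execute, dispose.
    }
}

// Usage from a BLL class: PersonDal.Instance.Save(...);
```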
Various solutions are possible, but the .NET Core platform, for example, shows that "more abstractions to the god of abstraction" is a trend. I guess that is because they find it easier to manage things this way in their development process (cross-platform, open source).
So go for the maximum possible abstraction and see how comfortable it is for you.
I keep entities and service interfaces in one assembly. "Business" code I store in the entity (maybe in an instance method, maybe in a static method, maybe in a static extension; there is no big difference between them). POCO doesn't mean "can't contain methods".
Application Services in DDD are supposed to orchestrate full business use cases, using Repositories to fetch Aggregates, calling methods on the Aggregates and managing infrastructure concerns like database transactions.
When reading books by Eric Evans, Vaughn Vernon and Scott Millett, you can find great examples of how to separate your projects. But I never found clear answers for this situation.
Suppose you have a Domain, and three "entry points" to communicate with this domain:
Rest API for synchronous actions
Messenger "daemon" / "service" running on the OS for asynchronous actions
Powershell cmdlets for administrative users for maintenance actions
Where do you place those Application Services if you have one DLL per entry point for deployment purposes?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
Option B: Application Services located in each entry point's DLL.
In the first option, you can benefit from code reuse when multiple entry points share the same use cases. The same goes for unit tests. However, you theoretically have to deploy an Application Service DLL that has too many features for some entry points.
In the second option, you have to duplicate code (and tests) in each entry point's DLL when they share the same use cases, but you theoretically have control over infrastructure concerns like database transactions, which could differ depending on whether execution happens in a PowerShell cmdlet or in an API.
In my opinion, the real answer is a question of personal preference.
Does anyone with experience of both approaches (success or failure) have some tips or recommendations?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
This is roughly what I would expect to see. You have three composition roots here, that should always share the same model (to ensure that all paths enforce the current business invariant) and the same book of record (if they don't share the same book of record, they really don't need to share anything at all).
In fact, I strongly suspect that you could separate these completely -- run "the model" in a "microservice", and deploy your three interfaces above that each uses a common service client DLL to talk to that core service.
You might, for instance, review the onion architecture. It aligns fairly closely with the image of a single DLL for the application services, with each of your composition roots using a different interface to adapt its own API to that of the model.
you theoretically have to deploy an Application Service DLL that has too many features for some entry points.
That's so; there's a trade-off there. My guess is that in most deployments, shipping a single fat DLL is going to be more cost effective than trying to deploy multiple DLLs with different subsets of the same model.
Personally, I'd start with a fat microservice, a well designed API, and fat clients in each of the composition roots above, and then if necessarily replace the fat clients with thinner, more specialized ones if the trade offs support that choice.
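To make Option A a bit more concrete, here is a rough sketch of a use case living in the shared application-service DLL, with one entry point adapting to it. All names are illustrative, not taken from the question:

```csharp
using System;

// Option A: one application-service DLL shared by all entry points.
public interface IDocumentApplicationService
{
    void ArchiveDocument(Guid documentId);
}

public class DocumentApplicationService : IDocumentApplicationService
{
    public void ArchiveDocument(Guid documentId)
    {
        // Fetch the aggregate via a repository, call behaviour on it,
        // and commit the unit of work / transaction here.
    }
}

// Each composition root (REST controller, messenger handler, PowerShell cmdlet)
// only adapts its own transport to the same use case, e.g. a message handler:
public class ArchiveDocumentMessageHandler
{
    private readonly IDocumentApplicationService service;

    public ArchiveDocumentMessageHandler(IDocumentApplicationService service)
    {
        this.service = service;
    }

    public void Handle(Guid documentId)
    {
        service.ArchiveDocument(documentId);
    }
}
```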
Just to be sure I understand one of your points. Are you suggesting that my domain (what you called "the model") should expose an API, and my different entry points (what you called "composition roots") should call this API?
Yes, that's a fair description of the proposal, except I want to be more clear on the "should expose an API" part. The API should be explicit. That is to say, looking at the code, you should be able to point to a seam in your code where the separation of concerns happens:
This part is where the model lives
That part is where the specialization lives
Your option B (provided you make the seam explicit) is this idea within a single library. Your option A is this idea with the seam as the interface between two libraries (still running in the same process). Microservices are this idea with the two libraries running in different processes.
You get different tradeoffs - for instance, if the model runs in a dedicated microservice, then (a) changing the model is "easy", because there's exactly one authority to swap out, and (b) you now have the freedom to implement your specialized interfaces in any technology that can exchange messages with your domain service, (c) you can also scale out the model independently of how you scale out the specializations.
But you also get additional complexity, in that you need to think more about the stability of the API when the client and server have independent deployment cycles.
My question is: how do I implement caching in my domain project, which is working like a normal stack with the repository pattern.
I have a setup that looks like the following:
ASP.NET MVC website
Web API
Domain project (using IoC, with Windsor)
My domain project for instance have:
IOrderRepository.cs
OrderRepository.cs
Order.cs
My ASP.NET MVC website calls the Web API and gets back some DTO classes. My Web API then maps these objects to business objects in my domain project, and makes the application work.
Nowhere in my application have I implemented caching.
Where should be caching be implemented?
I thought about doing it inside the methods in the OrderRepository, so my Get, GetBySpecification and Update methods have to call some generic cache handler injected into the OrderRepository.
This obviously gives some very ugly code, and isn't very generic.
How to maintain the cache?
Let's say we have a cache key like "OrderRepository_123". When I call the Update method, should I call cacheHandler.Delete("OrderRepository_123")? Because that seems very ugly as well.
My own thoughts...
I can't really see a decent way to do it besides some of the messy methods I have described. Maybe I could make some cache layer, but I guess that would mean my WebAPI wouldn't call my OrderRepository anymore, but my CacheOrderRepository-something?
Personally, I am not a fan of including caching directly in repository classes. A class should have a single reason to change, and adding caching often adds a second reason. Given your starting point you have at least two likely reasonable options:
Create a new class that adds caching to the repository and exposes the same interface
Create a new service interface that uses one or more repositories and adds caching
In my experience #2 is often more valuable, since the objects you'd like to cache as a single unit may cross repositories. Of course, this depends on how you have scoped your repositories. A lot may depend on whether your repositories are based on aggregate roots (ala DDD), tables, or something else.
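For option 1, here is a rough sketch of a caching decorator. Order and IOrderRepository follow the question, while the MemoryCache usage, the key format and the 10-minute expiry are just assumptions:

```csharp
using System;
using System.Runtime.Caching;

// Order and IOrderRepository come from the question; the decorator is hypothetical.
public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    Order Get(int id);
    void Update(Order order);
}

// Option 1: a decorator that adds caching and exposes the same interface,
// so the rest of the application keeps depending on IOrderRepository.
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository inner;
    private readonly MemoryCache cache = MemoryCache.Default;

    public CachedOrderRepository(IOrderRepository inner)
    {
        this.inner = inner;
    }

    public Order Get(int id)
    {
        string key = "Order_" + id;
        var cached = cache.Get(key) as Order;
        if (cached != null)
            return cached;

        var order = inner.Get(id);
        if (order != null)
            cache.Set(key, order, DateTimeOffset.UtcNow.AddMinutes(10));
        return order;
    }

    public void Update(Order order)
    {
        inner.Update(order);
        cache.Remove("Order_" + order.Id);   // invalidate on write
    }
}
```

The decorator is wired up in the composition root, e.g. by registering CachedOrderRepository as the IOrderRepository implementation in your Windsor container and passing the real repository into it.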
There are probably a million different ways to do this, but it seems to me (given that the intent of caching is to improve performance) that you could implement the cache much like a repository pattern: the domain objects interact with the cache instead of the database, a background thread keeps the database and cache in sync, and the initial startup of the app pool fills the cache (assuming eager loading is desired). A whole raft of technical issues then starts to crop up, such as what to do if the cache is modified in a way that violates a database constraint. Code maintenance becomes a concern, where any data-structure-related changes may need to be implemented in multiple places. Concurrency issues start to enter the fray. Just some thoughts...
SqlCacheDependency with System.Web.Caching.Cache: http://weblogs.asp.net/andrewrea/archive/2008/07/13/sqlcachedependency-i-think-it-is-absolutely-brilliant.aspx . This will get you caching that is invalidated when other systems apply updates as well.
There are multiple levels of caching depending on the situation; however, if you are looking for generic, centralized caching with a low number of changes, I think you will be looking at EF second-level caching. For more details, check the following: http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
You can also use caching at the Web API level.
Also consider the network traffic between MVC and Web API if they are hosted in two different data centers.
And for a portal with huge read access you might consider Redis: http://Redis.io
It sounds like you want to use a .NET caching mechanism rather than a distributed cache like Redis or Memcache. I would recommend using the System.Runtime.Caching.MemoryCache class instead of the traditional System.Web.Caching.Cache class. Doing this allows you to create your caching layer independent of your MVC/API layer because the MemoryCache has no dependencies on System.Web.
Caching your DTO objects would speed up your application greatly. It saves you from having to assemble data from a cache that merely mirrors your data layer. For example, requesting Order123 would only require a single cache read rather than several reads for any FK data. Your caching layer would of course need to contain the logic to invalidate the cache on UPDATEs you perform. A recommended way would be to retrieve the cached order object, modify its properties directly, and then persist it to the DB asynchronously.
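As a sketch of that idea, here is a small helper around System.Runtime.Caching.MemoryCache (no System.Web dependency). The key format and the LoadOrderDto call in the usage comment are hypothetical:

```csharp
using System;
using System.Runtime.Caching;

// Minimal caching helper independent of the MVC/API layer.
public static class DtoCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan ttl) where T : class
    {
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        var value = factory();
        Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }

    public static void Invalidate(string key)
    {
        Cache.Remove(key);
    }
}

// Usage: var dto = DtoCache.GetOrAdd("Order_123", () => LoadOrderDto(123), TimeSpan.FromMinutes(10));
// After an update: DtoCache.Invalidate("Order_123");
```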
Assuming that I'm using no ORM, and following DDD, consider the following case:
A Project has a set of Files.
I've created Project and ProjectRepository classes, and File and FileRepository classes.
My original idea was having all the File entities for a given Project being passed to it in its constructor. This Project instance would, of course, be created through the ProjectRepository.
The problem is that if I have a million files (and although I won't have a million files, I'll have enough ones to make this take a while), I'll have to load them all, even when I don't really need them.
What's the standard approach to this? I can't think of anything better than to pass a FileRepository to each Project instance.
Since you mention DDD: if there are two repositories it indicates that there are two Aggregate Roots. The whole point of the Aggregate Root concept is that each root is responsible for its entire object graph.
If you try to mix Files into a Project object graph, then the ownership of the Files is ambiguous. In other words, don't do this:
project
- file
- file
- file
Either treat them as two (associated) object graphs, or remodel the API so that there's only a single Aggregate Root (repository).
There is no standard way. This is domain driven design, so it depends on the domain, if you ask me.
Maybe you could add some more domain to your design.
You only have two concepts: a Project and a File. But you say you don't want to load the file (assuming that File will always load the content of the file).
So maybe you should think about a FileReference, which is a lightweight representation of a file (Name, Path, Size?).
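A minimal sketch of such a FileReference; the property set follows the suggestion above, everything else is an assumption:

```csharp
// A lightweight representation of a file: metadata only, no content.
public class FileReference
{
    public string Name { get; private set; }
    public string Path { get; private set; }
    public long Size { get; private set; }

    public FileReference(string name, string path, long size)
    {
        Name = name;
        Path = path;
        Size = size;
    }
}

// A Project can then hold many FileReferences cheaply and only load the
// full File (with its content) on demand, e.g. through the FileRepository.
```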
For me it sounds like your problem is the handling of a large set of files and not OOP.
You could implement a service layer which your clients interact with which co-ordinates the repositories and returns the domain entities. This would provide a better separation of concerns; I personally don't think that your client should have access to your repositories.
I have a quick question that I am hoping is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database that contains information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is what is the best way to create this library so it can be shared among applications.
Do I create a self-contained library that stores the database connection (I can see concurrency issues here, and it doesn't feel right)?
Client -> Server and then deploy the "client library" for use among any application.
Or would a Web/WCF service be better suited to this situation?
There are many options because the question can be interpreted broadly. I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and had a hard time breaking away from those notions to something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate the connection string from the code, and that's easily supported by a .NET .config file or application settings.
I often prefer a small core library holding the business logic, concepts and flows, although each of those can be broken out. Within that concept you can still split business from data access into different assemblies so you can swap in a new kind of data access, while sticking with the core module (a kind of "business kernel" or "engine", if you will).
You can express your "business kernel" through many presentation types, for example:
textual/Console I-O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc
as cmdlets for Powershell interactions
web service expressions
kinds of mobile apps
etc.
You can accelerate development by using patterns to bend software to your will, and related implementations like the Microsoft Enterprise Library; loosen the coupling with dependency injection, e.g. Ninject (one of many), or other inversion of control techniques, etc.
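For example, a minimal Ninject wiring that keeps any presentation front end ignorant of the concrete data access; the IEmployeeDirectory and SqlEmployeeDirectory names are hypothetical:

```csharp
using Ninject;

// Hypothetical abstraction and implementation from a "business kernel" assembly.
public interface IEmployeeDirectory { /* lookups, reporting hierarchy, etc. */ }
public class SqlEmployeeDirectory : IEmployeeDirectory { /* data access details */ }

public static class CompositionRoot
{
    public static IEmployeeDirectory Resolve()
    {
        // Bind the abstraction to a concrete implementation; any presentation
        // type (console, WinForms, web service, cmdlet) resolves it the same way.
        var kernel = new StandardKernel();
        kernel.Bind<IEmployeeDirectory>().To<SqlEmployeeDirectory>();
        return kernel.Get<IEmployeeDirectory>();
    }
}
```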
I usually prefer to have a middle tier layer (so some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so that you can control the number of connections, or you can change the schema of the database in a way that will be transparent for the clients.
Depending on your situation, you can either make the clients connect to the WCF service (preferred in most cases), or create a dll that will wrap the connection to the service and perform some additional processing on the client side.
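A bare-bones sketch of what such a WCF middle-tier contract might look like; the EmployeeDto and IEmployeeService names are hypothetical:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Clients talk to this service instead of the database, so connection
// management and schema changes stay on the server side.
[DataContract]
public class EmployeeDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string OfficeLocation { get; set; }
}

[ServiceContract]
public interface IEmployeeService
{
    [OperationContract]
    EmployeeDto GetEmployee(int id);
}
```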
It depends on how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, and the database must also include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you may mark up your entities with mapping attributes (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx and the sketch after this list).
Provide the domain library only, and implement persistence outside it, if your data layer supports POCO mapping (EF 4 does).
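As a sketch of the first option, here is a hypothetical entity marked up with LINQ to SQL mapping attributes (the table and property names are invented):

```csharp
using System.Data.Linq.Mapping;

// The library carries its own persistence metadata, so the host database
// must contain a matching LibraryWidgets table.
[Table(Name = "LibraryWidgets")]
public class Widget
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }
}
```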
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services, like data access, security, logging, web services, etc. If your application has an ideal design and its layers are fully decoupled from each other, there is no problem adding new entities, but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, etc. Such applications must be refactored: services must be extracted into interfaces, and those interfaces must be passed to the components in the separate assembly.
Entity references. If you use a rich domain model, you probably want to reference entities declared in another assembly. This problem can partially be solved with generics, but you need a special design of your data access layer that allows you to get lists of generic entities, get an entity by id, etc. (see the interface sketch after this list).
Database integration. It may be hard to maintain database changes if some entities are developed separately from the others, especially by another team.
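For the entity-references point, here is one possible shape of such a generic data access contract (all names are illustrative):

```csharp
using System.Collections.Generic;

// Minimal contract the host application could expose so that library entities
// can be persisted without referencing the host's concrete types.
public interface IEntity
{
    int Id { get; }
}

public interface IRepository<T> where T : IEntity
{
    T GetById(int id);
    IList<T> GetAll();
    void Save(T entity);
}

// The library's entities implement IEntity and are stored through whatever
// IRepository<T> implementation the host application provides.
```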
Just be sure to keep your connection method separate from your data access layer, and then you can change the connection method later if requirements change. If you have a simple DLL that holds your real logic, then adding a communication layer on top should be simple. This will also allow you to use all three methods you mentioned and have all your actual logic in a single DLL used amongst all three.