Application Services in DDD are supposed to orchestrate full business use cases, using Repositories to fetch Aggregates, calling methods on the Aggregates and managing infrastructure concerns like database transactions.
When reading the books by Eric Evans, Vaughn Vernon and Scott Millett, you can find great examples of how to separate your projects. But I never found a clear answer for this situation.
Suppose you have a Domain, and three "entry points" to communicate with this domain:
Rest API for synchronous actions
Messenger "daemon" / "service" running on the OS for asynchronous actions
PowerShell cmdlets for administrative users for maintenance actions
Where do you place those Application Services if you have one DLL per entry point for deployment purposes?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
Option B: Application Services located in each entry point's DLL.
In the first option, you benefit from code reuse when multiple entry points share the same use cases, and the same goes for unit tests. However, you theoretically have to deploy an Application Service DLL carrying more features than some entry points need.
In the second option, you have to duplicate code (and tests) in each entry point's DLL when they share the same use cases, but you theoretically keep control over infrastructure concerns such as database transactions, which may differ depending on whether execution happens in a PowerShell cmdlet or in an API.
In my opinion, the real answer is a question of personal preference.
Does anyone with experience of both approaches (success or failure) have tips or recommendations?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
This is roughly what I would expect to see. You have three composition roots here, which should always share the same model (to ensure that all paths enforce the current business invariants) and the same book of record (if they don't share the same book of record, they really don't need to share anything at all).
In fact, I strongly suspect that you could separate these completely -- run "the model" in a "microservice", and deploy your three interfaces above that each uses a common service client DLL to talk to that core service.
You might, for instance, review the onion architecture. It aligns fairly closely with the image of a single DLL for the application services, with each of your composition roots using a different interface to adapt its own API to that of the model.
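Just to make that concrete, here is a minimal sketch of option A with names I made up for illustration (Order, ShipOrderService and IUnitOfWork are not from the question): the application service lives in its own project, depends only on abstractions, and every entry point DLL references it while wiring up its own infrastructure.

```csharp
using System;

// ApplicationServices.dll -- referenced by the REST API, the messenger daemon
// and the PowerShell cmdlet projects. All names are illustrative.
public class Order
{
    public Guid Id { get; set; }
    public bool Shipped { get; private set; }

    public void Ship() => Shipped = true;   // domain behaviour stays on the aggregate
}

public interface IOrderRepository
{
    Order GetById(Guid id);
    void Save(Order order);
}

public interface IUnitOfWork
{
    void Commit();
}

public class ShipOrderService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;

    public ShipOrderService(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    // One use case: load the aggregate, call domain behaviour, commit.
    public void Ship(Guid orderId)
    {
        var order = _orders.GetById(orderId);
        order.Ship();
        _orders.Save(order);
        _unitOfWork.Commit();   // the transaction concern stays in the application service
    }
}
```

Each composition root can then register its own IUnitOfWork (say, a transaction-scoped one for the REST API and a lighter one for a maintenance cmdlet), which also addresses the concern about transaction handling differing per entry point.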
you theoretically have to deploy an Application Service DLL carrying more features than some entry points need.
That's so; there's a trade-off there. My guess is that in most deployments, shipping a single fat DLL is going to be more cost-effective than trying to deploy multiple DLLs with different subsets of the same model.
Personally, I'd start with a fat microservice, a well designed API, and fat clients in each of the composition roots above, and then, if necessary, replace the fat clients with thinner, more specialized ones if the trade-offs support that choice.
Just to be sure I understand one of your points: are you suggesting that my domain (what you called "the model") should expose an API, and my different entry points (what you called "composition roots") should call this API?
Yes, that's a fair description of the proposal, except I want to be clearer on the "should expose an API" part. The API should be explicit. That is to say, looking at the code, you should be able to point to a seam where the separation of concerns happens:
This part is where the model lives
That part is where the specialization lives
Your option B (provided you make the seam explicit) is this idea within a single library. Your option A is this idea with the seam as the interface between two libraries (still running in the same process). Microservices are this idea with the two libraries running in different processes.
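A minimal sketch of that seam, with names I made up for illustration: the model library exposes its use cases as an explicit interface, and the specializations only ever talk to that interface, whether it is implemented in-process (options A/B) or by a remote client (the microservice variant).

```csharp
using System;

// The seam: the only surface the specializations (REST API, daemon, cmdlets) ever see.
public interface IOrderingUseCases
{
    void PlaceOrder(Guid customerId, Guid productId, int quantity);
}

// Options A/B: the model library implements the seam in-process.
public class InProcessOrderingUseCases : IOrderingUseCases
{
    public void PlaceOrder(Guid customerId, Guid productId, int quantity)
    {
        // load aggregates via repositories, invoke domain behaviour, commit
    }
}

// Microservice variant: the same seam, implemented as a remote client.
public class RemoteOrderingUseCases : IOrderingUseCases
{
    public void PlaceOrder(Guid customerId, Guid productId, int quantity)
    {
        // serialize the request and send it to the domain service over the wire
    }
}
```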
You get different trade-offs. For instance, if the model runs in a dedicated microservice, then (a) changing the model is "easy", because there's exactly one authority to swap out, (b) you now have the freedom to implement your specialized interfaces in any technology that can exchange messages with your domain service, and (c) you can also scale out the model independently of how you scale out the specializations.
But you also get additional complexity, in that you need to think more about the stability of the API when the client and server have independent deployment cycles.
Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (here are the interfaces and implementations of the microservice calls)
We want to keep the controllers thin, so we are using query handlers and command handlers to handle the respective operations.
However, we use external microservices to get the data, and we call them from the query handlers.
All the HTTP client construction and calls are abstracted in them. The response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it is not composed from multiple services.
What should we call this part of our solution? A proxy, or something else? Are there any good examples of thin controllers with a microservice architecture?
We named this part API Gateways, but it is not composed from multiple services. What should we call this part of our solution? A proxy, or something else? Any good example for thin controllers and a microservice architecture?
Assumption:
From the image you attached, I see the Command Handler and Query Handler are calling "external/micro-services". I guess by "external/micro-services" you mean that you are calling another micro-service from your current micro-service's handlers (Command and Query). These "external/micro-services" are part of your architecture and deployed on the same cluster, not some external system that just exposes a public API?
If this is correct I will try to answer based on this assumption.
API Gateway would probably be a misleading name in this case, as the concept of an API Gateway is something different than what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the
system. It is similar to the Facade pattern from object-oriented
design. The API Gateway encapsulates the internal system architecture
and provides an API that is tailored to each client. It might have
other responsibilities such as authentication, monitoring, load
balancing, caching, request shaping and management, and static
response handling.
What you are actually trying to do is call another micro-service B from the Command or Query Handler of your micro-service A. This is internal micro-service communication that should not go through an API Gateway, as that is the approach for outside calls. By "outside calls" I mean, for example, frontend application or public API calls that are trying to reach your micro-services. In that case, you would use API Gateways.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway"; if you want to do it the full CQRS way, you could model it as a direct call to another Command or Query and use a name like "QueryGate" or "CommandGate" or similar.
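As a rough illustration of that idea (the gate, view model and handler below are hypothetical names, not from the question), the query handler depends only on the gate abstraction instead of building HTTP calls itself:

```csharp
using System;
using System.Threading.Tasks;

// A "QueryGate"-style abstraction for calling micro-service B from micro-service A.
public interface IUserQueryGate
{
    Task<UserViewModel> GetUserAsync(Guid userId);
}

public class UserViewModel
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The query handler stays thin: it only talks to the gate, never to HttpClient directly.
public class GetUserQueryHandler
{
    private readonly IUserQueryGate _gate;

    public GetUserQueryHandler(IUserQueryGate gate) => _gate = gate;

    public Task<UserViewModel> Handle(Guid userId) => _gate.GetUserAsync(userId);
}
```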
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementation of microservice calls)
This sounds reasonable except for the point about the API Gateway, which I described above. Of course, it is hard for me to tell based on the limited information that I have about your project. To give you a more precise suggestion here I would need to know whether you use DDD or not, how you use CQRS, and other details.
However, we use external microservices to get the data; we are calling them from the query handlers. All the HTTP client construction and calls will be abstracted in them. The response will be converted to a view model and passed back to the query handler.
You could extract all this code/logic that handles the cross micro-service communication over HTTP or other protocols, general response handling and the like into a core library, and include it in each of your micro-services as a package. In this way, you reuse the solution across all your micro-services. You can extend that and add all core domain-agnostic things (like data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) to that or other shared libraries. This way your micro-services only focus on the part of the domain they are supposed to handle.
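A minimal sketch of what such a shared package might contain (the class name and the use of System.Text.Json are assumptions; use whatever serializer you already have): a small HTTP wrapper every micro-service reuses to call its peers and map the response.

```csharp
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Lives in a shared communication package referenced by every micro-service.
public class JsonServiceClient
{
    private readonly HttpClient _http;

    public JsonServiceClient(HttpClient http) => _http = http;

    // GET a resource from another micro-service and map it to a view model.
    public async Task<T> GetAsync<T>(string relativeUrl)
    {
        using var response = await _http.GetAsync(relativeUrl);
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<T>(
            json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}
```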
I think CQRS is the right choice to keep the reading and writing operations decoupled.
The integration with third-party systems (if that's the case) needs some attention.
Do not call these services directly from your handlers; this could lead to various performance and/or maintainability issues.
You have to keep these integrations very well separated, because they are outside your domain. They may be subject to inefficiencies, changes or any number of problems out of your control.
One solution that I could recommend is a "Middleware" service.
In your application context this can be another service (again REST, for example) whose task it is, and its alone, to "talk" with the external systems, acting as a single point of integration between your domain and the external environment. This can be built from scratch or with a commercial/open-source solution like (just as an example) this.
This leads to many benefits, some of them being:
The middleware is a single mockable point during integration tests of your application (see the sketch after this list).
You can change the middleware implementation in the future without touching your handlers.
Of course, changing 3pty providers won't affect your domain services.
The middleware is the single point dedicated to managing 3pty service interruptions.
Your services remain agnostic of the outside world.
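To illustrate the mockability point, here is a minimal sketch assuming a hypothetical exchange-rate 3pty integration (all names are mine): the handlers depend only on the middleware client interface, so tests can swap in a fake.

```csharp
using System.Threading.Tasks;

// The only doorway to the outside world; handlers depend on this, never on the 3pty API.
public interface IIntegrationMiddleware
{
    Task<decimal> GetExchangeRateAsync(string fromCurrency, string toCurrency);
}

// Production implementation: forwards the call to the middleware REST service.
public class HttpIntegrationMiddleware : IIntegrationMiddleware
{
    public Task<decimal> GetExchangeRateAsync(string fromCurrency, string toCurrency)
    {
        // call the middleware service over HTTP and parse the response (omitted in this sketch)
        throw new System.NotImplementedException();
    }
}

// In integration tests the middleware is trivially replaceable by a fake.
public class FakeIntegrationMiddleware : IIntegrationMiddleware
{
    public Task<decimal> GetExchangeRateAsync(string fromCurrency, string toCurrency)
        => Task.FromResult(1.23m);
}
```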
Focusing on these questions can be useful when designing your integration middleware service:
Which types of 3pty data do they provide, and how timely do they need to be? This might help you figure out whether to introduce a cache into your integration service.
Can the 3pty services be subject to frequent interruptions? Then you must ensure that your system tolerates any disruption of external services. In other words, you must ensure a certain resilience of your services. There are many techniques to do that (a small sketch follows this list).
Do you really need to interrogate these 3pty services all the time? Maybe a more or less sophisticated cache system could speed up your services a lot.
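A small hand-rolled sketch of those last two points (in practice a resilience library such as Polly is a common choice; the decorator below, which wraps the interface from the previous sketch, only illustrates the idea):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Decorates the integration client with a short-lived cache and a simple retry.
public class ResilientIntegrationMiddleware : IIntegrationMiddleware
{
    private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(5);
    private readonly ConcurrentDictionary<string, (decimal Rate, DateTime CachedAt)> _cache =
        new ConcurrentDictionary<string, (decimal Rate, DateTime CachedAt)>();
    private readonly IIntegrationMiddleware _inner;

    public ResilientIntegrationMiddleware(IIntegrationMiddleware inner) => _inner = inner;

    public async Task<decimal> GetExchangeRateAsync(string fromCurrency, string toCurrency)
    {
        var key = fromCurrency + "->" + toCurrency;
        if (_cache.TryGetValue(key, out var hit) && DateTime.UtcNow - hit.CachedAt < Ttl)
            return hit.Rate;                          // avoid interrogating the 3pty every time

        for (var attempt = 1; ; attempt++)
        {
            try
            {
                var rate = await _inner.GetExchangeRateAsync(fromCurrency, toCurrency);
                _cache[key] = (rate, DateTime.UtcNow);
                return rate;
            }
            catch (Exception) when (attempt < 3)      // tolerate short interruptions
            {
                await Task.Delay(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}
```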
Finally, it is also very important to understand whether a microservices-oriented system is a real and immediate need.
Because these architectures are more expensive and complex than classic ones, it might be reasonable to start by building a monolith and then move towards a more segmented solution later.
Thinking of (organizing) your system as many "bounded contexts" does not prevent you from creating a good monolith, and at the same time it prepares you for a possible switch to a microservices-oriented one.
As summary advice, start by keeping things as separate as possible and define a language to speak about your business model. This lets you change a lot, without too much effort, as new needs arrive during the inevitable evolution of your software. "Hexagonal" architecture is a good starting point for doing that with either choice (microservices vs monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.
I will give my answer from the DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api (ideally a very thin layer of controllers). The controller will create queries and commands and push them onto a common channel (see MediatR).
Application This will be your orchestration layer. It will contain the definitions of queries and commands and their handlers. For queries, you will interact directly with your infrastructure layer. For commands, you will interact with the domain and then save the results through repositories in the infrastructure layer.
Domain Depending on your business logic and complexity, this layer will contain all your business models.
Infrastructure It will contain mostly two types of objects: Providers and Repositories. Providers should be used with queries and will return DAOs. Repositories should be used wherever the domain is involved, ideally for commands in CQRS. Repositories should always receive and return only domain objects.
So after setting the base context about the different layers in clean architecture, the answer to your original question is: I would put third-party interactions in the provider layer. For example, if you need to connect with a user microservice, I would create a UserProvider in the provider folder in the infrastructure layer and consume it through an interface.
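A minimal sketch of that split, with illustrative type names (only UserProvider is mentioned above; everything else is an assumption): the provider returns plain data objects for queries, while the repository speaks only in domain objects for commands.

```csharp
using System;
using System.Threading.Tasks;

// Infrastructure, "Providers" folder: used by query handlers, returns plain data objects.
public interface IUserProvider
{
    Task<UserDao> GetUserAsync(Guid id);
}

public class UserDao
{
    public Guid Id { get; set; }
    public string DisplayName { get; set; }
}

// Infrastructure, "Repositories" folder: used by command handlers, speaks only in domain objects.
public interface IUserRepository
{
    Task<User> LoadAsync(Guid id);
    Task SaveAsync(User user);
}

// Domain object (greatly simplified).
public class User
{
    public Guid Id { get; }
    public string DisplayName { get; private set; }

    public User(Guid id, string displayName)
    {
        Id = id;
        DisplayName = displayName;
    }

    public void Rename(string newName) => DisplayName = newName;
}
```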
I have recently done some analysis of ASP.Net Boilerplate (https://aspnetboilerplate.com). I have noticed that the domain layer (MyProject.Core) has folders for the following (these are created by default):
Authorization
Configuration
Editions
Features
Identity
Localization
MultiTenancy
etc
Why would you put all of this in the Domain Layer of an application? From what I can see, I believe most of this code should be in the Application Layer (which could also be the service layer).
Good question, if you just look at the folder names. But I suppose you haven't investigated the source code in the folders much.
First of all, I don't claim it's the best solution architecture. We are constantly improving it and we may have faults. Notice that our approach is a mix of best practices and pragmatism. I will try to explain it briefly.
You are talking about this project: https://github.com/aspnetboilerplate/module-zero-core-template/tree/master/aspnet-core/src/AbpCompanyName.AbpProjectName.Core So, let's investigate the folders:
Localization
It does not include any localization logic (that is done at the framework level, in ABP; thus it's in the infrastructure layer). It just defines localization texts.
While normally it could easily be moved to the web layer (there is no direct dependency in the Core project), we put it in the Core layer since we think it may be needed in another application too. Imagine you have a Windows Service that only references the .Core project and wants to use the localization texts, say to send an email to a user in his own language. Notice that a Windows Service should normally not have a reference to the web layer. So, we take a pragmatic approach here. We could add localization to another DLL project, but that would make the solution more complicated.
Authorization
Mainly includes the User and Role entities and the UserManager and RoleManager domain classes. Similar to localization, it does not include the actual authorization logic. It also includes some other classes, but they do not do much. We thought putting these here would help us if we had more application layers. As you know, every application can have its own application layer as a best practice.
Configuration
AppConfigurations is here to share 'configuration reading' code between different apps (the Migrator and the Web app). Again, this could live inside another "Shared Utils" library, but we wanted to keep the solution structure balanced, so it reflects the major layers and structures yet is not too complicated for intermediate-level developers.
Editions
Just includes the EditionManager class, which is a domain service for Edition management.
Features
Just includes FeatureValueStore, which is a repository-like adapter class. See its code; it's already empty.
MultiTenancy
Includes the Tenant entity and the TenantManager class, which are already parts of the domain layer. Again, nothing here includes infrastructure-related multi-tenancy features (like data filtering or determining the current tenant).
... and so on...
So, do not judge just from the names; please look into the project more deeply. Some code could be moved to upper layers or to a utils library, but I think the general structure is a good starting point for a DDD-architected application.
What you see is called Module Zero; it aims to implement all the fundamental concepts of the ASP.NET Boilerplate framework, such as tenant management (multi-tenancy), role management, user management, session, authorization (permission management), setting management, language management, audit logging and so on.
Module-Zero defines entities and implements domain logic (domain layer) because it is part of the configuration context of your system.
We are developing multiple web services in C# using WCF, but we're new to this.
So, from what we have read and learned, this is our approach:
We have a class library that we called CommonLibrary, containing a few classes that are used by all our services (language stuff, the type of connected user, and a common object that all the services are meant to return).
We have another class library called SecurityLibrary which validates the user that is consuming the method.
At the moment we have two services that are about 90% finished; both of them use CommonLibrary and SecurityLibrary.
Now the questions:
Is this a bad approach?
Are we violating the SOA principles of encapsulation and autonomy by using common/shared library with each of our services?
A third person told us to copy all the code of those libraries into each of our services so that each service is 100% autonomous. Is this the right way? I think it is hard to maintain and introduces a lot of duplication; any update made to one has to be replicated or merged into the other services...
No, it is not a bad approach.
If using libraries in your services were a problem, you would also have to keep away from the .NET libraries themselves. I am wondering why you think that a service process is only allowed to consist of a single assembly.
Furthermore, copy-pasting code is a very, very bad habit. It is known as an anti-pattern. It duplicates the maintenance effort and also all the bugs inside it.
Sharing libraries does not make your services less "autonomous". I think it could make them more compatible if they share types.
A good service is just a process, consisting of one or more (shared) assemblies, with a well-defined service contract. This service contract is never allowed to be broken.
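For example (a sketch with made-up types and operations), the shared CommonLibrary can hold the common result object as a data contract, while each service keeps its own explicit, stable contract:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// A shared data contract, e.g. the common result object kept in CommonLibrary.
[DataContract]
public class ServiceResult
{
    [DataMember] public bool Success { get; set; }
    [DataMember] public string Message { get; set; }
}

// The well-defined contract of one service; consumers depend on this, never on the implementation.
[ServiceContract]
public interface IEmployeeService
{
    [OperationContract]
    ServiceResult UpdateOffice(int employeeId, string newOfficeLocation);
}
```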
BTW: in my answer I did not include the problems that come with shared assemblies in the GAC. That is a feature (or problem) shared by all processes, not only services.
I would like to split my C# Web API project, such that a given feature (or set of features) is maintained in a separate project. Ideally, I would also like to maintain layered separation within each feature package.
Example: I would like to ensure there is a separate API project for each main feature (e.g. a business suite would be separated into a sales API, inventory API, payroll API, etc.). Each feature would be divided into API (top layer), Models (DTOs/ViewModels sent to and received from the API), Service (business logic) and Tests. There could be more layers, e.g. separate layers for entity classes.
There is a certain amount of shared code that must be reused within these projects, both on the top layer (such as error handling, logging etc.) and other layers as well (database connections, repositories...).
Does anyone have a good example of how to do this separation, such that everything is DRY, while maintaining a clear separation of features?
Best regards,
Daniel
What you're trying to achieve sounds very akin to a microservice architecture. Here are some good links that describe what this means:
http://blog.cleancoder.com/uncle-bob/2014/09/19/MicroServicesAndJars.html
http://martinfowler.com/articles/microservices.html#CharacteristicsOfAMicroserviceArchitecture
The idea is to build your system in a modularised manner, where each component can talk to the others, usually over HTTP. This seems to be what you want to achieve by having "features" that each expose an API. There is a whole heap of material on this, so I'd read around it.
As for sharing code between them, this can be tricky. If you're thinking of this in terms of a modularised system, perhaps the shared stuff should be its own "feature"/"component"/"service"/"module" (whatever you want to call it). Or perhaps there is some stuff you just want to pull out into its own project; if so, consider building a NuGet package to share common code across the components.
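As a small illustration (all names are hypothetical), such a package might contain nothing more than cross-cutting pieces, for instance a common error payload reused by the sales, inventory and payroll APIs:

```csharp
using System;

// Lives in a hypothetical "Company.Common" NuGet package referenced by the sales,
// inventory and payroll API projects; purely cross-cutting, no feature logic.
public class ApiError
{
    public string Code { get; set; }
    public string Message { get; set; }
    public DateTime OccurredAtUtc { get; set; } = DateTime.UtcNow;
}

public static class ApiErrorFactory
{
    // Wrap any exception in the common error shape returned by every feature API.
    public static ApiError FromException(Exception ex, string code = "internal_error")
        => new ApiError { Code = code, Message = ex.Message };
}
```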
I have a quick question that I am hoping is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database that contains information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is what is the best way to create this library so it can be shared among applications.
Do I create a self-contained library that stores the database connection (I can see concurrency issues here and it doesn't feel right)?
Client -> Server, and then deploy the "client library" for use in any application?
Or would a Web/WCF service be more ideally suited to this situation?
There are many options because the question can be interpreted broadly. I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and had a hard time breaking away from those notions to something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate the connection string from the code, and that's easily supported by a .NET .config file or application settings.
I often prefer a small core library holding the business logic, concepts and flows, although each of those can be broken out. Within that concept you can still separate business from data access into different assemblies, so you can swap in a new kind of data access while sticking with the core module (a kind of "business kernel" or "engine" if you will).
You can express your "business kernel" through many presentation types, for example (a small sketch follows the list below):
textual/console I/O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc
as cmdlets for Powershell interactions
web service expressions
kinds of mobile apps
etc.
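Here is a minimal sketch of the same "business kernel" driven by two of those presentation types; the class names are mine and the actual lookup is stubbed out:

```csharp
using System;
using System.Management.Automation;   // only the cmdlet presentation needs this reference

// The "business kernel": no UI and no I/O decisions, just the concept.
public class EmployeeDirectory
{
    public string DescribeEmployee(int employeeId)
        => $"Employee {employeeId}: office and reporting line would be looked up here.";
}

// Presentation 1: textual/console I/O.
public static class ConsoleFrontEnd
{
    public static void Main(string[] args)
    {
        var directory = new EmployeeDirectory();
        Console.WriteLine(directory.DescribeEmployee(int.Parse(args[0])));
    }
}

// Presentation 2: a PowerShell cmdlet over the same kernel.
[Cmdlet(VerbsCommon.Get, "Employee")]
public class GetEmployeeCommand : Cmdlet
{
    [Parameter(Mandatory = true)]
    public int EmployeeId { get; set; }

    protected override void ProcessRecord()
        => WriteObject(new EmployeeDirectory().DescribeEmployee(EmployeeId));
}
```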
You can accelerate development by using patterns to bend software to your will, along with related implementations like the Microsoft Enterprise Library, by loosening the coupling with dependency injection (e.g. Ninject, one of many) or other inversion of control techniques, etc.
I usually prefer to have a middle tier layer (so some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so that you can control the number of connections, or you can change the schema of the database in a way that will be transparent for the clients.
Depending on your situation, you can either make the clients connect to the WCF service (preferred in most cases), or create a dll that will wrap the connection to the service and perform some additional processing on the client side.
It depends how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, but the database must also include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you may mark up your entities with mapping attributes (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx and the sketch after this list).
Provide the domain library only, and implement persistence outside it, if your data layer supports POCO mapping (EF 4 does).
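For the first option, the LINQ to SQL mapping attributes mentioned above look roughly like this (the table and column names are just an example):

```csharp
using System.Data.Linq.Mapping;

// Entity shipped inside the library, mapped to a table the host database must contain.
[Table(Name = "Employees")]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column(Name = "FullName")]
    public string Name { get; set; }

    [Column]
    public string OfficeLocation { get; set; }
}
```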
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services, like data access, security, logging, web services, etc. If your application has an ideal design and the layers are fully decoupled from each other, there is no problem adding new entities, but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, etc. Such applications must be refactored: services must be extracted into interfaces, and those interfaces must be passed to the components in the separate assembly.
Entity references. If you use a rich domain model, you probably want to reference entities declared in another assembly. This problem can partially be solved with generics, but you need a special design of your data access layer that allows you to get lists of generic entities, get an entity by id, etc.
Database integration. It may be hard to maintain database changes if some entities are developed separately from the others, especially by another team.
Just be sure to keep your connection method separate from your data access layer, and then you can change the connection method later if requirements change. If you have a simple DLL that holds your real logic, then adding a communication layer on top should be simple. This will also allow you to use all three methods you mentioned and have all your actual logic in a single DLL used amongst all three.