Microservices design part in WebApi - C#

Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (here are the interfaces and implementations of the microservice calls)
We want to keep the controllers thin, so we are using query handlers and command handlers to handle the respective operations.
However, we use external microservices to get the data, and we call them from the query handlers. All the HTTP client construction and calls are abstracted in them; the response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it is not composing results from multiple services.
What should we call this part of our solution? Proxy or something? Any good examples of thin controllers and microservice architecture?

We name it API Gateways. But it is not composed from multiple services. What should we call this part in our solution? Proxy or something? Any good examples of thin controllers and microservice architecture?
Assumption:
From the image you attached, I see that the Command Handler and Query Handler are calling "external/micro-services". I take it that by "external/micro-services" you mean you are calling another micro-service from your current micro-service's handlers (Command and Query), and that these "external/micro-services" are part of your architecture, deployed on the same cluster, and not some external system that just exposes a public API?
If this is correct, I will try to answer based on that assumption.
API Gateway would probably be a misleading name in this case, as the concept of an API Gateway is something different than what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object-oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling.
What you are actually trying to do is call another micro-service B from a Command or Query Handler in your micro-service A. This is internal micro-service communication, and it should not go through an API Gateway; that would be the approach for outside calls. By "outside calls" I mean frontend applications or public API clients that are trying to call your micro-services. In that case, you would use API Gateways.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway". If you want to go the full CQRS way, you could model it as a direct call to another Command or Query and use a name like "QueryGate" or "CommandGate" or similar.
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementations of the microservice calls)
This sounds reasonable except for the point about the API Gateway, which I described above. Of course, it is hard for me to tell based on the limited information I have about your project. To give you a more precise suggestion I would need to know whether you use DDD or not, how you use CQRS, and other details.
However, we use external microservices to get the data, and we call them from the query handlers. All the HTTP client construction and calls are abstracted in them; the response is converted to a view model and passed back to the query handler.
You could extract all of the code/logic that handles cross micro-service communication over HTTP or other protocols, along with general response handling and the like, into a core library and include it in each of your micro-services as a package. This way you reuse the solution across all your micro-services. You can extend this and move all domain-agnostic things (data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) to that or other shared libraries. Your micro-services can then focus purely on the part of the Domain they are supposed to handle.
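
As a rough illustration of such a shared package, here is a minimal sketch; the type names, the user-service URL, and the use of Json.NET are all assumptions for the example:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical base class that would live in the shared "core" package and be
// reused by every micro-service; it hides HttpClient construction and
// response handling from the query handlers.
public abstract class HttpServiceGateway
{
    private static readonly HttpClient Client = new HttpClient();

    protected async Task<T> GetAsync<T>(string url)
    {
        var response = await Client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<T>(json);
    }
}

// Invented view model for the example.
public class UserViewModel
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// A concrete gateway in micro-service A calling micro-service B.
public class UserServiceGateway : HttpServiceGateway
{
    public Task<UserViewModel> GetUserAsync(Guid id) =>
        GetAsync<UserViewModel>($"http://user-service/api/users/{id}");
}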

I think CQRS is the right choice to keep the reading and writing operations decoupled.
The integration with third-party systems (if that's the case) needs some attention.
Do not call these services directly from your handlers; this could lead to various performance and/or maintainability issues.
You have to keep these integrations very well separated, because they are outside of your domain. They may be subject to inefficiencies, changes, or any number of problems out of your control.
One solution that I can recommend is a "Middleware" service.
In your application context this can be another service (again REST, for example) whose task is to talk (and only it talks) with the external systems, acting as a single point of integration between your domain and the external environment. This can be built from scratch or with a commercial/open-source solution like (just as an example) this.
This leads to many benefits, some of them:
The middleware is a single mockable point during integration tests of your application.
You can change the middleware implementation in the future without touching your handlers.
Of course, changing 3rd-party providers won't affect your domain services.
The middleware is the single point dedicated to managing 3rd-party service interruptions.
Your services remain agnostic to the outside world.
Focusing on these questions can be useful when designing your integration middleware service:
Which types of data do the third parties provide, and how fresh does it need to be? This might help you figure out whether to introduce a cache into your integration service.
Can the third parties be subject to frequent interruptions? Then you must ensure that your system tolerates any disruption of external services; in other words, you must ensure a certain resilience of your services. There are many techniques to do that (see the sketch after this list).
Do you really need to interrogate these third-party services every time? A more or less sophisticated cache could speed up your services a lot.
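
To sketch those last two points in code, here is a minimal example assuming the Polly library for retries and System.Runtime.Caching for a simple in-process cache; the endpoint, cache key, and timings are invented:

using System;
using System.Net.Http;
using System.Runtime.Caching;
using System.Threading.Tasks;
using Polly;

public class ThirdPartyRatesClient
{
    private static readonly HttpClient Client = new HttpClient();
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Retry transient HTTP failures with exponential backoff (resilience).
    private static readonly IAsyncPolicy<string> RetryPolicy =
        Policy<string>.Handle<HttpRequestException>()
                      .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public async Task<string> GetRatesAsync()
    {
        // Serve from cache when possible, to avoid interrogating the
        // third party on every call.
        if (Cache.Get("rates") is string cached)
            return cached;

        var rates = await RetryPolicy.ExecuteAsync(
            () => Client.GetStringAsync("https://thirdparty.example.com/rates"));

        Cache.Set("rates", rates, DateTimeOffset.Now.AddMinutes(5));
        return rates;
    }
}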
Finally, it is also very important to understand whether the need for a microservices-oriented system is a real and immediate one.
Because these architectures are more expensive and complex than classic ones, it may be reasonable to start by building a monolithic system and then move towards a more segmented solution later.
Thinking of (organizing) your system as many "bounded contexts" does not prevent you from creating a good monolithic system, and at the same time it prepares you for a possible switch to a microservices-oriented one.
As a summary of my advice: start by keeping things as separate as possible, and define a language to speak about your business model. This lets you change a lot, without too much effort, as needs arise during the inevitable evolution of your software. "Hexagonal" architecture is a good starting point for both choices (microservices vs. monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.

I will give my answer from a DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api (ideally a very thin layer of controllers). The controllers create queries and commands and push them onto a common channel (refer to MediatR). A sketch of such a controller follows this list.
Application This is your orchestration layer. It contains the definitions of the queries and commands and their handlers. For queries, you interact directly with the infrastructure layer. For commands, you interact with the domain and then persist through repositories in the infrastructure layer.
Domain Depending on your business logic and complexity, this layer contains all your business models.
Infrastructure This mostly contains two types of objects, Providers and Repositories. Providers should be used with queries and return DAOs. Repositories should be used wherever the domain is involved, ideally with commands in CQRS. Repositories should always receive and return only domain objects.
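
As promised above, a minimal sketch of such a thin controller, assuming Web API 2 and MediatR; the query and DTO types are invented for the example:

using System;
using System.Threading.Tasks;
using System.Web.Http;
using MediatR;

// Invented query and result types; these would live in the Application layer.
public class UserDto
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class GetUserQuery : IRequest<UserDto>
{
    public Guid Id { get; }
    public GetUserQuery(Guid id) { Id = id; }
}

// API layer: the controller only builds the query and dispatches it.
public class UsersController : ApiController
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public async Task<IHttpActionResult> Get(Guid id)
    {
        var user = await _mediator.Send(new GetUserQuery(id));
        return Ok(user);
    }
}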
Having set this base context about the different layers of clean architecture, the answer to your original question is: I would put the third-party interactions in providers. For example, if you need to connect to a user microservice, I would create a UserProvider in the Providers folder of the infrastructure layer and consume it through an interface.
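
A minimal sketch of that provider layout, with invented names and the HTTP details elided:

using System;
using System.Threading.Tasks;

// Hypothetical DAO returned by the provider.
public class UserDao
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Application layer: query handlers depend only on this interface.
public interface IUserProvider
{
    Task<UserDao> GetUserAsync(Guid id);
}

// Infrastructure layer, Providers folder: the implementation hides how the
// user microservice is actually reached (HTTP client, URLs, serialization).
public class UserProvider : IUserProvider
{
    public async Task<UserDao> GetUserAsync(Guid id)
    {
        // ... the HTTP call to the user microservice would go here ...
        await Task.Yield();
        return new UserDao { Id = id, Name = "stub" };
    }
}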

Related

DDD - Application Service location with multiple "entry points"

Application Services in DDD are supposed to orchestrate full business use cases, using Repositories to fetch Aggregates, calling methods on the Aggregates and managing infrastructure concerns like database transactions.
When reading the books by Eric Evans, Vaughn Vernon and Scott Millett, you can find great examples of how to separate your projects. But I never found a clear answer for this situation.
Suppose you have a Domain, and three "entry points" to communicate with this domain:
Rest API for synchronous actions
Messenger "daemon" / "service" running on the OS for asynchronous actions
PowerShell cmdlets for administrative users for maintenance actions
Where do you place those Application Services if you have one DLL per entry point for deployment purposes?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
Option B: Application Services located in each entry point's DLL.
With the first option, you benefit from code reuse when multiple entry points share the same use cases; the same goes for unit tests. However, you theoretically have to deploy an Application Service DLL that has too many features for some entry points.
With the second option, you have to duplicate code (and tests) in each entry point's DLL when they share the same use cases, but you theoretically keep control over infrastructure concerns, like database transactions, that might differ depending on whether execution happens in a PowerShell cmdlet or in an API.
In my opinion, the real answer is a question of personal preference.
Does anyone with experience of both approaches (success or failure) have tips or recommendations?
Option A: dedicated Application Service project (DLL) referenced by all entry point DLLs.
This is roughly what I would expect to see. You have three composition roots here, which should always share the same model (to ensure that all paths enforce the current business invariants) and the same book of record (if they don't share the same book of record, they really don't need to share anything at all).
In fact, I strongly suspect that you could separate these completely: run "the model" in a "microservice", and deploy your three interfaces above it, each using a common service-client DLL to talk to that core service.
You might, for instance, review the onion architecture. It aligns fairly closely with the image of a single DLL for the application services, with each of your composition roots using a different interface to adapt its own API to that of the model.
you theoretically have to deploy an Application Service DLL having too much features for some entry points.
That's so; there's a trade-off there. My guess is that in most deployments, shipping a single fat DLL is going to be more cost-effective than trying to deploy multiple DLLs with different subsets of the same model.
Personally, I'd start with a fat microservice, a well-designed API, and fat clients in each of the composition roots above, and then, if necessary, replace the fat clients with thinner, more specialized ones if the trade-offs support that choice.
Just to be sure I understand one of your points: are you suggesting that my domain (what you called "the model") should expose an API, and my different entry points (what you called "composition roots") should call this API?
Yes, that's a fair description of the proposal, except I want to be clearer on the "should expose an API" part. The API should be explicit. That is to say, looking at the code, you should be able to point to a seam where the separation of concerns happens:
This part is where the model lives
That part is where the specialization lives
Your option B (provided you make the seam explicit) is this idea within a single library. Your option A is this idea with the seam as the interface between two libraries (still running in the same process). Microservices are this idea with the two libraries running in different processes.
You get different trade-offs. For instance, if the model runs in a dedicated microservice, then (a) changing the model is "easy", because there's exactly one authority to swap out; (b) you now have the freedom to implement your specialized interfaces in any technology that can exchange messages with your domain service; and (c) you can scale out the model independently of how you scale out the specializations.
But you also get additional complexity, in that you need to think more about the stability of the API when the client and server have independent deployment cycles.
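
To make the "explicit seam" concrete, here is a minimal sketch of option A with invented names: the Application Services DLL exposes an explicit contract, and each composition root adapts its own API to it.

using System;

// ApplicationServices.dll, the explicit seam: this is where the model lives.
public interface IOrderApplicationService
{
    void PlaceOrder(Guid customerId, Guid productId, int quantity);
}

// Each entry point (REST API, messenger daemon, PowerShell cmdlets) references
// only the contract above: this is where the specialization lives.
public class PlaceOrderEndpoint
{
    private readonly IOrderApplicationService _orders;

    public PlaceOrderEndpoint(IOrderApplicationService orders)
    {
        _orders = orders;
    }

    public void Handle(Guid customerId, Guid productId, int quantity) =>
        _orders.PlaceOrder(customerId, productId, quantity);
}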

RESTful service layer with MVC

I need advice on creating an architecture where I want an API layer between the UI layer and the business layer. The UI layer should only consume REST services for displaying data.
The reason for doing this is that we need to expose the same services to other clients like iPad, Android, etc.
Now my questions are:
1) Do we need dependency injection in this case? (I don't think so, because we are not going to use any references at the UI layer. The only thing we do is manipulate the JSON returned by the service.)
2) Will it hurt performance?
3) Is this the right approach?
Any help will be appreciated. Thanks
We're doing roughly the same thing now.
1) No; you can't inject dependencies across a REST boundary anyway, and the UI only manipulates the JSON returned by the service.
2) No. Twitter is API-first and they seem to be doing OK. Technically the extra hop adds some overhead, but it also means you can scale horizontally, so the overhead can easily be counteracted.
3) You have multiple UI clients, so it seems like a decent, viable solution.
Security
Basic Authentication
It's the easiest to set up, but be aware that the token is reversible (it is just Base64-encoded), so use HTTPS to encrypt the communication.
The HTTP Authorization header containing the username and password is sent with every request to the API level.
You could use sessions instead, but that requires a bit more setup.
There are plenty of how-tos on setting up Basic Authentication in C# and Web API.
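
For example, this is roughly how a client attaches the Basic Authentication header to every request (the credentials and URL are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class BasicAuthExample
{
    public static async Task<string> GetOrdersAsync()
    {
        var client = new HttpClient();
        // Basic auth is just "user:password" Base64-encoded; it is reversible,
        // which is why HTTPS is a must.
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", token);
        return await client.GetStringAsync("https://api.example.com/orders");
    }
}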
The way I created an API was:
Project 1: Web API serving as a portal to fetch data
Project 2: class library providing services to the Web API layer
Project 3: class library providing data to my services layer using EF
Now, different controllers in the Web API project require different service objects (from Project 2) to work with. I had to provide constructors for those controllers using DI; for this I used Autofac.
For you, your business layer would be Project 2.
Data flowing through one more project layer might take some time, and you will need to set up exception handling and logging again in the API layer. I don't think performance should be a big problem here.
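
A rough sketch of that wiring, assuming the Autofac.Integration.WebApi package; the service names are examples:

using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

// Example service from "Project 2".
public interface ICustomerService { }
public class CustomerService : ICustomerService { }

public static class IocConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();

        // Project 2 service registrations.
        builder.RegisterType<CustomerService>().As<ICustomerService>();

        // Let Autofac construct the Web API controllers with their dependencies.
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

        config.DependencyResolver = new AutofacWebApiDependencyResolver(builder.Build());
    }
}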
In my experience I've seen such a platform-oriented approach: providing mSOA to N clients. The architectural solution was a Facade that hid all the complex business-layer requests while providing UI-agnostic processing.
Will it hurt performance?
Not necessarily, since the Facade has the knowledge of how to handle all the required sub-system requests. All the clients just know that they need a single JSON contract to get the job done, not which services to call or how many. By doing so we get much better and simpler communication. Take a look at the Mediation (intra-communication) pattern.

Dependency Injection vs Layered Architecture

I've been reading a lot about dependency injection and the service locator (anti-?) pattern - a lot of it on StackOverflow (thanks guys :). I have a question about how this pattern works when it's within a n-layer architecture.
I've seen a lot of blog posts where they describe injecting an IDataAccess component into the business objects. E.g.
public class Address
{
    // The data access dependency is injected through the constructor,
    // so Address never chooses a concrete implementation itself.
    private readonly IDataAccess _dataAccess;

    public Address(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }
}
However, I was under the impression that in an n-layer architecture, the UI layer should not need any knowledge of the data access layer... or even know that there /is/ a data access layer! If DI requires exposing the IDataAccess interface in the constructors of the business objects, this exposes to the UI the fact that the business layer uses a data access layer under the hood - something the UI doesn't need to know or care about, surely?
So, my fundamental question is: does DI require that I expose all my lower-layer interfaces to all upper layers, and is this a good or a bad thing?
Thanks
Edit: To clarify (after a few comments), I know my business object should be ignorant of which specific implementation of IDataAccess it uses (hence the dependency being injected in the constructor), but I thought that the layers above the BO should not know that the business object even requires a dependency on a DAL.
This is really a fairly complex topic, and there are many ways of doing an n-tier architecture. No one way is "the right way", and how you do it depends on your needs as much as it does your personal preferences.
Dependency injection is about managing dependencies. If your object should be unaware of any dependency, then you would not write your object in the way you mentioned. You would instead have some other service or method populate the data in an agnostic way. "Data" doesn't mean "database" either: IDataAccess could mean the data comes from a database, from a network socket, or from a file on disk. The whole point is that Address does not choose which dependencies it creates; that is done through configuration at the composition root.
Things need data, otherwise your app is probably useless, but making your Address object load itself may not be the best way to go about things. A better approach may be a factory class or a service method.
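
For instance, a service-method variant of that idea; note that Load<T> is an assumed member here, since the question doesn't show the shape of IDataAccess:

public interface IDataAccess
{
    // Assumed member for this sketch; the original question leaves
    // IDataAccess's shape unspecified.
    T Load<T>(int id) where T : new();
}

// Address is now a plain object with no injected dependencies.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

// The dependency lives in a service instead. IDataAccess could be backed by a
// database, a network socket, or a file on disk; Address never knows.
public class AddressService
{
    private readonly IDataAccess _dataAccess;

    public AddressService(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public Address GetAddress(int id) => _dataAccess.Load<Address>(id);
}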
I think the answer is rather simple. Your bottom layers (interface, BLL, DAL, entities) are just a bunch of libraries. It is up to the client to decide which libraries to use, and that increases the client's flexibility. Moreover, since they are libraries, any application-related configuration (connection strings, data caching, etc.) lies with the client. That configuration itself sometimes also needs to be injected and included in the Composition Root.
However, if you want uniform logic rather than client flexibility, you can introduce web/app services as an additional layer:
1st Layer Entities
2nd Layer Interface
3rd Layer BLL & DAL
4th Layer Web/App Services
5th Layer UI
This way, your Composition Root exists in one layer (the 4th), and your UI just needs to add a service reference to the 4th layer (or the 1st if needed). However, this runs into Mark Seemann's point again: layering is worth the mapping. I assume that you can treat the app/web service as the Composition Root.
Moreover, this (app/web service) design has pros and cons. Pros:
Your app is encapsulated
It is bridged by the app/web services, so it is guaranteed that your UI doesn't know about the data access layer, which fulfills your requirement.
Your app is secured
Simply put, requiring the UI to go through the app service is a big gain in terms of security.
Access portability
Your app can now be accessed from anywhere. It can be consumed by 3rd-party apps (other web apps) without relying on your DLLs.
Cons:
Overhead cost during service calls
Authentication, network connections, etc. add overhead to every web service call. I can't speak from experience about the exact performance impact, but it is a real consideration for a high-traffic app.
Inflexibility of the client
Clients now need to access the BLL/services through service calls instead of normal objects.
More services for different types of clients
You may need to provide more services than otherwise necessary, such as a WebRequestRetriever and a MobileRequestRetriever, instead of exposing a mere IRequestRetriever and letting the Composition Root wire up the rest.
Apologies if this answer broadens the topic (I only realized it after finishing).
IMHO:
It depends on who does the injection!
It seems you expect an MVC or MVP architecture to be in place, where a controller or presenter does the job of translating the UI calls to business objects and back:
creating concrete implementations of IDataAccess and passing them to the Address class.
That way the UI is totally unaware of who provides the data it needs, and you get the expected scalability.
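
Concretely, and reusing the AddressService sketch from the earlier answer, a hand-rolled composition root might look like this (a container would normally do this wiring; SqlDataAccess and AddressController are invented names):

// Stubbed concrete types for the sketch.
public class SqlDataAccess : IDataAccess
{
    public SqlDataAccess(string connectionString) { }
    public T Load<T>(int id) where T : new() => new T(); // stubbed
}

public class AddressController
{
    public AddressController(AddressService service) { }
}

// Composition root (e.g. Global.asax or container registrations): the only
// place that knows the concrete types. The UI never references IDataAccess.
public static class CompositionRoot
{
    public static AddressController CreateAddressController()
    {
        IDataAccess dataAccess = new SqlDataAccess("connection string here");
        return new AddressController(new AddressService(dataAccess));
    }
}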
Thanks
Tarriq

Linq To SQL, WebServices, Websites - Planning it all

Several "parts" (a WinForms app for exmaple) of my project use a DAL that I coded based on L2SQL.
I'd like to throw in several WebApps into the mix, but the issue is that the DAL "offers" much more data than the WebApps need. Way more.
Would it be OK if I wrapped the data that the websites need within a web-service, and instead of the website connecting directly to the DAL it would go through the web-service which in turn would access the DAL?
I feel like that would add a lot of overhead, but on the other hand, I definitely don't like the feeling of knowing that the WebApps have the "capabilities" of accessing much more data than they actually need.
Any input would be greatly appreciated.
Thank you very much for the help.
You can either create web services, or add a repository layer that presents only the data that your applications require. A repository has the additional benefit of being a decoupling layer, making it easier to unit test your application (by providing a mock repository).
If you plan on eventually creating different frontends (say, a web UI and a WPF or Silverlight UI), then web services make a lot of sense, since they provide a common data foundation to build on, and can be accessed across tiers.
If your data access layer were pulling all data as IQueryable, then you would be able to query your DAL and drill down your db calls with more precision.
See the very brief blog entry I wrote on Repository and Service layers using LINQ to SQL. My article is built around MVC, but the concept of Repository and Service layers would work just fine with WebForms, WinForms, web services, etc.
Again, the key here is to have your Repository or your DAL return objects AsQueryable, whereby you wait until the last possible moment to actually commit to requesting data.
Your structure would look something like this:
Domain Layer
  Repository Layer (IQueryable)
    Service layer for Web App
      Website
    Service layer for Desktop App
      Desktop App
    Service layer for Web Services
      Web Service
Inside your service layer is where you customize the specific calls for the application you're developing. This allows greater security and configuration on a per-app basis while maintaining a complete repository that doesn't need to be modified until you swap out your ORM (if you ever decide you need to).
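
Sketched in LINQ to SQL terms (entity and property names are invented, and the L2SQL mapping attributes are omitted for brevity):

using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

// Invented entity; real L2SQL entities carry [Table]/[Column] attributes.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// Repository layer: returns IQueryable, so nothing hits the database yet.
public class CustomerRepository
{
    private readonly DataContext _context;

    public CustomerRepository(DataContext context)
    {
        _context = context;
    }

    public IQueryable<Customer> GetCustomers() => _context.GetTable<Customer>();
}

// Service layer for the web app: narrows the query for this application;
// the SQL is only generated and executed at the ToList() call.
public class WebCustomerService
{
    private readonly CustomerRepository _repository;

    public WebCustomerService(CustomerRepository repository)
    {
        _repository = repository;
    }

    public List<Customer> GetActiveCustomers() =>
        _repository.GetCustomers().Where(c => c.IsActive).ToList();
}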
There is nothing inherently wrong with having more than you need in this case. The entire .NET 4 Client Profile contains over 50MB of assemblies, classes, etc. I might use 5% of it in my entire career. That doesn't mean I don't appreciate having all of it available in case I need it.
If you plan to provide the DAL to developers who should not have access to portions of the data, write a wrapper or derive a new DAL. I would avoid the services route unless you're confident you can accommodate the overhead.
Sounds like you are on the right track. If many applications are going to use this data, you gain a few advantages by having services with DTOs:
If the domain model changes, only the mapping to the DTO needs to change, so you can isolate the consuming applications from these changes.
Less data goes over the wire.
You can isolate your applications from the implementation of the DAL.
You can expose different services (maybe different DTOs) for different applications if it is necessary to restrict which parts of the object model are exposed.
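
A small sketch of the DTO idea; all names are invented:

// The DTO exposes only what the consuming application needs.
public class CustomerDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

// Invented domain model for the sketch.
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public decimal CreditLimit { get; set; } // never leaves the service
}

public static class CustomerMapper
{
    // If the domain model changes, only this mapping changes;
    // consumers keep seeing the same DTO.
    public static CustomerDto ToDto(Customer customer) => new CustomerDto
    {
        Id = customer.Id,
        DisplayName = customer.FirstName + " " + customer.LastName
    };
}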

SOA with WCF responsibilities and dependencies

I am moving onto a new team that has implemented a solution using SOA with WCF. The services are all very vertical, for example: a CustomerService, an AddressService, an AccountService, etc. To return fully populated objects, the services may call other services over WCF endpoints.
There are a few very high-level vertical areas, but underneath they can reuse a lot of the core service logic.
How valid is the following new architecture:
The web services are thin layers that handle remote calls; they are strictly for communication. The real functionality would be implemented in something we might call "business or domain services".
Domain Service responsibilities:
Reference data access / repository interfaces for working with the infrastructure
Call multiple repository methods to create fully populated objects
Process data against the complex business rules
Call other domain services (not having to call WCF)
This would give us domain services that can be tested outside of specific WCF and SQL Server implementations.
The web services reusing the different business services seems to be the biggest gain and yet the biggest potential pitfall.
On one hand the logic can be reused for multiple services, eliminating web service calling web service calling web service.
On the other hand, if someone changes one of the assemblies, multiple services need to be updated, potentially breaking multiple applications.
Have people tried this and had success? Are there better approaches?
At first blush, it sounds like the design you've walked into might be an SOA antipattern identified in this article: a group of 'chatty services,' a term the authors use to describe a situation in which ...
developers realize a service by implementing a number of Web services where each communicates a tiny piece of data. Another flavor of the same antipattern is when the implementation of a service ends up in a chatty dialog communicating tiny pieces of information rather than composing the data in a comprehensive document-like form.
The authors continue:
Degradation in performance and costly development are the major consequences of this antipattern. Additionally, consumers have to expend extra effort to aggregate these too finely grained services to realize any benefit, as well as have the knowledge of how to use these services together.
That can be a valid approach. The pitfall about updating multiple services depends on how closely related the services are. Do you have a use case where, if the CustomerService is updated and the AddressService is not, the clients can still work? Or is it more common that all services are used by the same client and hence should be updated together? Remember, the service only changes if the WSDL changes, not the implementation. If you manage not to change the DataContracts and OperationContracts of the front-end services, there are no worries.
One approach you may investigate is using in-proc WCF services for your domain services. Alternatively, the front-end web services can use domain managers/engines in separately layered assemblies, which in turn use repositories. You can have coarse-grained web service class implementations and fine-grained managers for domain entities that are mockable and unit-testable.
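
Sketched out, that layering might look like this; the contract and names are illustrative, and the DataContract members are elided:

using System.ServiceModel;

[ServiceContract]
public interface ICustomerContract
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}

public class CustomerDto { /* data contract members elided */ }

// WCF layer: strictly communication, no business logic.
public class CustomerWcfService : ICustomerContract
{
    private readonly CustomerDomainService _domain; // plain in-proc call, no WCF hop

    public CustomerWcfService(CustomerDomainService domain)
    {
        _domain = domain;
    }

    public CustomerDto GetCustomer(int id) => _domain.GetFullyPopulatedCustomer(id);
}

// Domain service: calls multiple repositories, applies business rules, and can
// call other domain services directly; all testable without WCF or SQL Server.
public class CustomerDomainService
{
    public CustomerDto GetFullyPopulatedCustomer(int id)
    {
        // ... repository calls and business rules would go here ...
        return new CustomerDto();
    }
}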
