I'm working on a project that is our company's first foray into Domain-Driven Design (DDD).
Our Web API originally simply provided CRUD operations and the project exposed OData controllers, but I'm not sure if that is still a good idea.
Is OData a good way to expose non-CRUD APIs?
More info:
Initially our Web API basically exposed CRUD functions. To create a new User you would simply build one and POST it to the service. To change, for example, an address, you would get a copy of the user entity, make the changes, then perform an update operation. Basic OData stuff.
Beyond providing query support, OData also exposed the service in a readily consumable way, so it could be added to other projects as a service reference and accessed with a proxy.
Since we moved over to a DDD approach, things have changed significantly. Our Web API is now simply a gateway to a number of independent sub-domain services. We no longer provide CRUD operations or direct access to entities; instead we make service calls to manipulate them. Instead of creating a User entity and sending it to the User service via a PUT request, a consumer must build a CreateUserBindingModel, send it to the User/Create service, and let the service generate the entity. Changing an address is done through a ChangeAddress(ChangeAddressBindingModel model) method, rather than by updating the whole object. Queries are much more targeted and rarely, if ever, return entire domain objects.
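Roughly, the new command-style endpoints look something like the sketch below (simplified; the exact binding-model properties, the IUserService interface, and the routing are illustrative, not our actual code):

```csharp
using System;
using System.Web.Http;

// Illustrative binding model; the real one carries whatever fields the command needs.
public class ChangeAddressBindingModel
{
    public Guid UserId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
}

public interface IUserService
{
    void ChangeAddress(Guid userId, string street, string city);
}

public class UserController : ApiController
{
    private readonly IUserService _userService; // sub-domain service behind the gateway

    public UserController(IUserService userService)
    {
        _userService = userService;
    }

    // POST api/User/ChangeAddress
    [HttpPost]
    public IHttpActionResult ChangeAddress(ChangeAddressBindingModel model)
    {
        if (!ModelState.IsValid)
            return BadRequest(ModelState);

        // The service owns the entity; the API never exposes it directly.
        _userService.ChangeAddress(model.UserId, model.Street, model.City);
        return Ok();
    }
}
```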
Is it a bad idea to keep using OData as the basis for our Web API when we no longer provide CRUD operations? Is there another way to expose the details of our service the way you can with OData? I know WCF services provide similar functionality, but I was under the impression they are even more tied to CRUD than OData.
OData is a data-oriented API spec, and in that sense it is anti-DDD. Although it can satisfy all your requirements for implementing REST APIs, it is best suited to data-processing APIs. I guess you already know that using OData feels like operating on the database via HTTP. If you are doing DDD you should forget OData entirely.
In OData, actions and functions are a way to add server-side behaviors that are not easily defined as CRUD operations on entities:
https://learn.microsoft.com/en-us/aspnet/web-api/overview/odata-support-in-aspnet-web-api/odata-v4/odata-actions-and-functions
https://blogs.msdn.microsoft.com/alexj/2012/02/03/cqrs-with-odata-and-actions/
https://github.com/OData/ODataSamples/blob/master/WebApiCore/ODataActionSample/ODataActionSample/
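As a rough illustration of that approach, here is a minimal sketch of binding a ChangeAddress action to a User entity in ASP.NET Web API with OData v4. The parameter names, the User/IUserService stubs, and the delegation to a domain service are assumptions for the example, not taken from the linked samples; the OData namespaces also vary by package version (System.Web.OData.* in the 5.x/6.x packages, Microsoft.AspNet.OData.* in 7.x).

```csharp
using System.Net;
using System.Web.Http;
using System.Web.OData;
using System.Web.OData.Builder;
using System.Web.OData.Extensions;

public class User
{
    public int Id { get; set; }
}

public interface IUserService
{
    void ChangeAddress(int userId, string street, string city);
}

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Declare the ChangeAddress action on the User entity type in the EDM model.
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<User>("Users");

        var changeAddress = builder.EntityType<User>().Action("ChangeAddress");
        changeAddress.Parameter<string>("Street");
        changeAddress.Parameter<string>("City");

        config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
    }
}

// Invoked via POST to ~/odata/Users({key})/<Namespace>.ChangeAddress
// (actions are namespace-qualified by default in OData v4).
public class UsersController : ODataController
{
    private readonly IUserService _userService; // hypothetical domain service

    public UsersController(IUserService userService)
    {
        _userService = userService;
    }

    [HttpPost]
    public IHttpActionResult ChangeAddress([FromODataUri] int key, ODataActionParameters parameters)
    {
        if (!ModelState.IsValid)
            return BadRequest();

        var street = (string)parameters["Street"];
        var city = (string)parameters["City"];

        // Delegate to the domain service instead of mutating the entity through CRUD.
        _userService.ChangeAddress(key, street, city);
        return StatusCode(HttpStatusCode.NoContent);
    }
}
```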
Related
Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (this is where the interfaces and implementations of the microservice calls live)
We want to keep the controllers thin, so we are using query handlers and command handlers to handle the respective operations.
However, we use external microservices to get the data, and we call them from the query handlers.
All the HTTP client construction and calls are abstracted in them. The response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it does not actually compose results from multiple services.
What should we call this part of our solution? A proxy, or something else? Are there any good examples of thin controllers and microservice architecture?
We name it as API Gateways. But it is not composed from multiple services. How do we call this part in our solution? Proxy or something? Any good example for thin controllers and microservice architecture?
Assumption:
From the image you attached, I see that the Command Handler and Query Handler are calling "external/micro-services". I guess that by "external/micro-services" you mean you are calling another micro-service from the handlers (Command and Query) of your current micro-service, and that these "external/micro-services" are part of your architecture and deployed on the same cluster, not some external system that just exposes a public API?
If this is correct I will try to answer based on this assumption.
API Gateway would probably be a misleading name in this case, as the concept of an API Gateway is something different than what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object-oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling.
What you are actually trying to do is call another micro-service B from the Command or Query Handler of your micro-service A. This is internal micro-service communication, which should not go through an API Gateway, as that would be the approach for outside calls. By "outside calls" I mean frontend application or public API calls that are trying to reach your micro-services. For those, you would use API Gateways.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway". If you want to follow the full CQRS way you could model it as a direct call to the other service's Command or Query, and then you could use a name like "QueryGate" or "CommandGate" or similar.
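To illustrate, here is a minimal sketch of such a gateway consumed through an interface; all the names here (IUserQueryGate, UserViewModel, the endpoint path) are invented for the example:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical gateway abstraction for calls from micro-service A to micro-service B.
public interface IUserQueryGate
{
    Task<UserViewModel> GetUserAsync(int userId);
}

public class UserQueryGate : IUserQueryGate
{
    private readonly HttpClient _httpClient;

    public UserQueryGate(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<UserViewModel> GetUserAsync(int userId)
    {
        // The URL and payload shape are assumptions; in practice they come from configuration.
        var response = await _httpClient.GetAsync($"api/users/{userId}");
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<UserViewModel>(json);
    }
}

public class UserViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

The query handler then depends only on IUserQueryGate, so the HTTP plumbing never leaks into the handler itself.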
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementations of the microservice calls)
This sounds reasonable except for the point about the API Gateway, which I described above. Of course, it is hard for me to tell based on the limited information I have about your project. To give you a more precise suggestion I would need to know whether you use DDD or not, how you use CQRS, and other details.
However, we use external microservices to get the data; we call them from the Query handlers. All the HTTP client construction and calls will be abstracted in them. The response will be converted to a view model and passed back to the Query handler.
You could extract all the code/logic that handles cross micro-service communication over HTTP or other protocols, general response handling, and similar concerns into a core library and include it in each of your micro-services as a package. In this way, you reuse the solution across all your micro-services. You can extend that and add all the core domain-agnostic things (data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) to that or other shared libraries. This way each micro-service focuses only on the part of the domain it is supposed to handle.
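As a rough sketch, such a shared core package could offer something like a base gateway class that each micro-service reuses; the names and the JSON handling below are assumptions:

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical reusable base class a shared "core" package could provide,
// so each micro-service gateway only declares its endpoints and response types.
public abstract class HttpServiceGateway
{
    private readonly HttpClient _httpClient;

    protected HttpServiceGateway(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    protected async Task<TResponse> GetAsync<TResponse>(string relativeUrl)
    {
        var response = await _httpClient.GetAsync(relativeUrl);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<TResponse>(json);
    }

    protected async Task<TResponse> PostAsync<TRequest, TResponse>(string relativeUrl, TRequest body)
    {
        var content = new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json");
        var response = await _httpClient.PostAsync(relativeUrl, content);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<TResponse>(json);
    }
}
```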
I think CQRS is the right choice to keep the reading and writing operations decoupled.
The integration with third-party systems (if that is the case) needs some attention.
Do not call these services directly from your handlers; this could lead to various performance and/or maintainability issues.
You have to keep these integrations very well separated, because they are outside your domain. They may be subject to inefficiencies, changes, or any number of problems outside your control.
One solution that I could recommend is a "Middleware" service.
In your application context this can be another service (again REST, for example) whose sole task is to talk (and only it talks) with the external systems, acting as a single point of integration between your domain and the external environment. This can be built from scratch or by using a commercial/open-source solution like (just as an example) this.
This leads to many benefits, some of which are:
The middleware is a single mockable point during integration tests of your application.
You can change the middleware implementation in the future without touching your handlers.
Of course, changing 3pty providers won't affect your domain services.
The middleware is the single point dedicated to managing 3pty service interruptions.
Your services remain agnostic of the outside world.
Focusing on these questions can be useful when designing your integration middleware service:
Which types of 3pty data do they provide? How timely is that data? This might help you figure out whether to introduce a cache into your integration service.
Can the 3pty services be subject to frequent interruptions? Then you must ensure that your system tolerates any disruption of the external services. In other words, you must ensure a certain resilience in your services. There are many techniques to do that (a small sketch follows this list).
Do you really need to query these 3pty services every time? Maybe a more or less sophisticated cache could speed up your services a lot.
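To illustrate the resilience and caching points, here is a minimal sketch of how the middleware could wrap a 3pty call with a naive retry and a short-lived cache. All names, the retry count, and the cache duration are arbitrary choices for the example; in practice you might reach for a library such as Polly instead of hand-rolling the retry.

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical wrapper the middleware service could use around a 3pty HTTP call.
public class ThirdPartyClient
{
    private readonly HttpClient _httpClient;
    private readonly ConcurrentDictionary<string, (DateTime CachedAt, string Payload)> _cache =
        new ConcurrentDictionary<string, (DateTime, string)>();

    public ThirdPartyClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string> GetAsync(string relativeUrl)
    {
        // Serve from cache if the entry is less than one minute old (arbitrary choice).
        if (_cache.TryGetValue(relativeUrl, out var entry) &&
            DateTime.UtcNow - entry.CachedAt < TimeSpan.FromMinutes(1))
        {
            return entry.Payload;
        }

        // Naive retry with exponential backoff; real code would also handle timeouts,
        // circuit breaking, and so on.
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                var response = await _httpClient.GetAsync(relativeUrl);
                response.EnsureSuccessStatusCode();

                var payload = await response.Content.ReadAsStringAsync();
                _cache[relativeUrl] = (DateTime.UtcNow, payload);
                return payload;
            }
            catch (HttpRequestException) when (attempt < 3)
            {
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```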
Finally, it is also very important to understand whether a microservices-oriented system is a real and immediate need.
Because these architectures are more expensive and complex than classic ones, it might be reasonable to start by building a monolith and then move towards a more segmented solution later.
Thinking of (organizing) your system as many "bounded contexts" does not prevent you from creating a good monolith, and at the same time it prepares you for a possible switch to a microservices-oriented one.
As summary advice, start by keeping things as separate as possible, and define a language to talk about your business model. These choices let you change a lot, without too much effort, when new needs arrive during the inevitable evolution of your software. A "Hexagonal" architecture is a good starting point for doing that with either choice (microservices vs monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.
I will give my answer from the DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api (ideally a very thin layer of controllers). The controller creates queries and commands and pushes them onto a common channel (see MediatR).
Application This is your orchestration layer. It contains the definitions of queries and commands and their handlers. For queries, you interact directly with the infrastructure layer. For commands, you interact with the domain and then persist through repositories in the infrastructure layer.
Domain Depending on your business logic and complexity, this layer contains all your business models.
Infrastructure It contains mostly two types of objects, providers and repositories. Providers should be used with queries and return DAOs. Repositories should be used wherever the domain is involved, ideally with commands in CQRS. Repositories should always receive and return only domain objects.
So, having set the base context about the different layers in clean architecture, the answer to your original question is: I would put third-party interactions in the provider layer. For example, if you need to connect to a user microservice, I would create a UserProvider in the providers folder of the infrastructure layer and consume it through an interface.
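As a rough sketch of how these layers could fit together with MediatR, the query, provider, and DTO names below are invented for the example:

```csharp
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using MediatR;

// Infrastructure: hypothetical provider interface that hides the user micro-service call.
public interface IUserProvider
{
    Task<UserDto> GetUserAsync(int userId, CancellationToken cancellationToken);
}

public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Application: query definition and its handler; the handler depends only on the interface.
public class GetUserQuery : IRequest<UserDto>
{
    public int UserId { get; set; }
}

public class GetUserQueryHandler : IRequestHandler<GetUserQuery, UserDto>
{
    private readonly IUserProvider _userProvider;

    public GetUserQueryHandler(IUserProvider userProvider)
    {
        _userProvider = userProvider;
    }

    public Task<UserDto> Handle(GetUserQuery request, CancellationToken cancellationToken)
    {
        return _userProvider.GetUserAsync(request.UserId, cancellationToken);
    }
}

// Api: a thin controller just pushes the query onto the MediatR pipeline.
public class UsersController : ApiController
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public async Task<IHttpActionResult> Get(int id)
    {
        var user = await _mediator.Send(new GetUserQuery { UserId = id });
        return Ok(user);
    }
}
```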
The reason I need loosely-coupled WCF is that Entity Framework is tightly-coupled. When I say loosely-coupled, I mean there's no need to instantiate the database context or add a WCF service reference. It just relies on the web configuration or some .ini file, which does not require recompilation when developers need to change servers, IP addresses or service URLs.
Instead, the MVC side (say, a controller) will just send a request message and then get the response data from the WCF service. But we still cannot do without models based on the database (since we need them in IntelliSense for the view markup), which is where WCF will get the data. Let's say we already have those database object classes; we would create some repository that binds the WCF data to the MVC models.
What I mean by a WCF web service is one that ONLY contains messages, with no more passing of object references, because that's the newer SOA definition. It makes more sense to pass messages instead of objects.
Is this a better approach in terms of scalability and performance? I don't mean to offend the Entity Framework fans.
It is an entirely valid approach to define a WCF web service in terms of message schemas which just use basic types, so that clients need know nothing about WCF in order to use the service. WCF would be useless for interop with other platforms (e.g. Java) otherwise.
Understand that WCF is a general and powerful framework for implementing communication over a variety of transport protocols. It can be equally effectively used for raw XML messaging as for programming in terms of objects. Object serialisation and deserialisation is an optional extra of the framework, not a requirement. (There is really no such thing as "passing of object reference" - ultimately it is an XML infoset which travels across the communication channel. Also, Entity Framework is not part of WCF - it is a distinct ORM Framework which you can use with WCF if you want, but that's your choice.)
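For example, a minimal sketch of a message-centric WCF contract of the kind described above; the operation and member names are invented for illustration:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Request and response messages use only basic types, so any platform can consume them.
[DataContract]
public class GetUserRequest
{
    [DataMember]
    public int UserId { get; set; }
}

[DataContract]
public class GetUserResponse
{
    [DataMember]
    public string FullName { get; set; }

    [DataMember]
    public string City { get; set; }
}

[ServiceContract]
public interface IUserService
{
    [OperationContract]
    GetUserResponse GetUser(GetUserRequest request);
}
```

Clients only need the message schemas; nothing about the contract forces them to know WCF or Entity Framework.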
Scalability and performance are entirely orthogonal to the design of the service in terms of its data and operation contracts. You should feel free to adopt whatever approach to defining your services is best for your application. If that's XML messages, that's fine; don't let anyone tell you otherwise.
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do: everything they can do through our provided web interface they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way lies potential performance issues for your main website. Never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time; there are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on the project's complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and keep being updated for the next 5 years....
In my current project (which is big), I first started with shared entities across all layers; then I discovered that I needed separate entities for presentation, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or was following some rules; I just couldn't make it all work with a single set of classes across the layers... Draw your own conclusions.
P.S. There are tools that can help you map between your objects, like AutoMapper and ValueInjecter.
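For instance, a minimal AutoMapper sketch for mapping an EF entity to an API model; the User and UserDto types are placeholders:

```csharp
using AutoMapper;

public class User            // EF entity (placeholder)
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string PasswordHash { get; set; } // stays internal, never leaves the service
}

public class UserDto         // API model (placeholder)
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingExample
{
    public static UserDto ToDto(User user)
    {
        // Configure once at application startup in real code; shown inline here for brevity.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<User, UserDto>());
        var mapper = config.CreateMapper();
        return mapper.Map<UserDto>(user);
    }
}
```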
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
Several "parts" (a WinForms app for exmaple) of my project use a DAL that I coded based on L2SQL.
I'd like to throw in several WebApps into the mix, but the issue is that the DAL "offers" much more data than the WebApps need. Way more.
Would it be OK if I wrapped the data that the websites need within a web-service, and instead of the website connecting directly to the DAL it would go through the web-service which in turn would access the DAL?
I feel like that would add a lot of overhead, but on the other hand, I definitely don't like the feeling of knowing that the WebApps have the "capabilities" of accessing much more data than they actually need.
Any input would be greatly appreciated.
Thank you very much for the help.
You can either create web services, or add a repository layer that presents only the data that your applications require. A repository has the additional benefit of being a decoupling layer, making it easier to unit test your application (by providing a mock repository).
If you plan on eventually creating different frontends (say, a web UI and a WPF or Silverlight UI), then web services make a lot of sense, since they provide a common data foundation to build on, and can be accessed across tiers.
If your data access layer were pulling all data as IQueryable, then you would be able to query your DAL and drill down your db calls with more precision.
See the very brief blog entry I wrote on Repository and Service layers using Linq to SQL. My article is built around MVC but the concept of Repository and Service layers would work just fine with WebForms, WinForms, Web Services, etc.
Again, the key here is to have your Repository or your DAL return objects as IQueryable, so that you wait until the last possible moment to actually commit to requesting data.
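A minimal sketch of that idea, assuming a LINQ to SQL style DataContext; the types here are placeholders rather than a real generated context:

```csharp
using System.Linq;

public class User
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// Stand-in for the generated LINQ to SQL DataContext.
public class MyDataContext
{
    public IQueryable<User> Users { get; set; }
}

// Repository: exposes IQueryable so no query executes until the caller materializes it.
public class UserRepository
{
    private readonly MyDataContext _context;

    public UserRepository(MyDataContext context)
    {
        _context = context;
    }

    public IQueryable<User> GetUsers()
    {
        return _context.Users;
    }
}

// Service layer for a specific app: narrows the query to exactly what that app needs.
public class WebUserService
{
    private readonly UserRepository _repository;

    public WebUserService(UserRepository repository)
    {
        _repository = repository;
    }

    public IQueryable<string> GetActiveUserNames()
    {
        // The filter and projection are translated to SQL only when enumerated.
        return _repository.GetUsers()
                          .Where(u => u.IsActive)
                          .Select(u => u.Name);
    }
}
```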
Your structure would look something like this
Domain Layer
Repository Layer (IQueryable)
Service layer for Web App
Website
Service layer for Desktop App
Desktop App
Service layer for Web Services
Web Service
Inside your Service layer is where you customize the specific calls based on the application you're developing for. This allows for greater security and configuration on a per-app basis while maintaining a complete repository that doesn't need to be modified until you swap out your ORM (if you ever decide you need to).
There is nothing inherently wrong with having more than you need in this case. The entire .NET 4 Client Profile contains over 50MB of assemblies, classes, etc. I might use 5% of it in my entire career. That doesn't mean I don't appreciate having all of it available in case I need it.
If you plan to provide the DAL to developers who should not have access to portions of the data, write a wrapper or derive a new DAL. I would avoid the services route unless you're confident you can accommodate the overhead.
Sounds like you are on the right track. If many applications are going to use this data, you gain a few advantages by having services with DTOs.
If the domain model changes, just the mapping to the DTO needs to change. You can isolate the consuming application from these changes.
Less data over the wire
You can isolate your applications from the implementation of the DAL.
You can expose different services (maybe different DTOs) for different applications if it is necessary to restrict what parts of the object model should be exposed.
The objective is to build a service that I will then consume via jQuery and a standards-based web front-end, mobile device "fat clients," and very likely a WPF desktop application.
It seems like WCF would be a good option, but I've never built a RESTful service with WCF, so I'm not sure where to even begin on that approach.
The other option I'm thinking about is using ASP.NET MVC: adding some custom routes, a few controller actions, and using different views to push out JSON, XML, and other return types.
This project is mostly a learning exercise for myself, and I'd like to spend some extra time and do it "right" so I have a better understanding of how the pieces fit together.
So my question is this, which approach should I use to build this RESTful service, and what are some advantages of doing it that way?
Normally, I would say WCF for any kind of hosted service, but in the specific case of RESTful services using JSON as the serialization mechanism, I prefer ASP.NET MVC (which I will refer to as ASP.NET for the remainder of this answer).
One of the first reasons is because of the routing mechanism. In WCF, you have to define it on the contract, which is all well and good, but if you have to make quick changes to your routing, from my point of view, it's much easier to do them using the routing mechanism in ASP.NET.
Also, to the point above, if you have multiple services exposed over multiple interfaces in WCF, it's hard to get a complete image of your URL structure (which is important), whereas in ASP.NET you (typically) have all of the route assignments in one place.
The second thing about ASP.NET is that you are going to have access to all of the intrinsic objects that ASP.NET is known for (Request, Response, Server, etc, etc), which is essential when exposing an HTTP-specific endpoint (which is what you are creating). Granted, you can use many of these same things in WCF, but you have to specifically tell WCF that you are doing so, and then design your services with that in mind.
Finally, through personal experience, I've found that the DataContractJsonSerializer doesn't handle DateTimeOffset values too well, and it is the type that you should use over DateTime when working with a service (over any endpoint) which can be called by people over multiple timezones. In ASP.NET, there is a different serializer that you can use, or if you want, you can create your own ActionResult which uses a custom serializer for you. I personally prefer the JSON.Net serializer.
One of the nice things about the JSON.Net serializer and ASP.NET is that you can use anonymous types with it, if you are smart. If you create a static generic method on a non-generic type which then delegates to an internal generic type, you can use type inference to easily use anonymous types for your serialized return values (assuming they are one-offs; of course, if you have a structure that is returned consistently, you should define it and use that).
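A minimal sketch of that pattern: a static generic factory on a non-generic type delegating to an internal generic ActionResult that serializes with JSON.Net (the class names are invented):

```csharp
using System.Web.Mvc;
using Newtonsoft.Json;

// Static generic method on a non-generic type; type inference lets callers pass anonymous types.
public static class JsonNet
{
    public static ActionResult Result<T>(T data)
    {
        return new JsonNetResult<T>(data);
    }
}

// Internal generic ActionResult that serializes the payload with JSON.Net.
internal class JsonNetResult<T> : ActionResult
{
    private readonly T _data;

    public JsonNetResult(T data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/json";
        response.Write(JsonConvert.SerializeObject(_data));
    }
}

// Usage inside a controller action (the anonymous type is inferred as T):
// return JsonNet.Result(new { user.Id, user.Name, LastSeen = DateTimeOffset.UtcNow });
```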
It should also be mentioned that you don't have to completely discount WCF when developing a RESTful service. If you are pushing an Atom or RSS feed out from your service, then the classes in the System.ServiceModel.Syndication namespace are of massive help in the construction and serialization of those feeds. Creating a simple subclass of the ActionResult class that takes an instance of SyndicationFeed and serializes it to the output stream when the ActionResult is executed is quite simple.
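For instance, a minimal sketch of such an ActionResult for Atom feeds (the class name is made up; an RSS variant would use Rss20FeedFormatter instead):

```csharp
using System.ServiceModel.Syndication;
using System.Web.Mvc;
using System.Xml;

// ActionResult that writes a SyndicationFeed to the response as Atom 1.0.
public class AtomFeedResult : ActionResult
{
    private readonly SyndicationFeed _feed;

    public AtomFeedResult(SyndicationFeed feed)
    {
        _feed = feed;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/atom+xml";

        using (var writer = XmlWriter.Create(response.Output))
        {
            new Atom10FeedFormatter(_feed).WriteTo(writer);
        }
    }
}
```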
Here is a thought that may help you make the decision between ASP.NET MVC and WCF. In the scenarios you describe, do you expect to need to use a protocol other than HTTP?
WCF is designed to be transport protocol agnostic and so it is very different than ASP.NET. It has channels and bindings, messages, service contracts, data contracts and behaviours. It provides very little in the way of guidance when it comes to building distributed applications. What it gives you is a clean slate to build on.
ASP.NET MVC is naturally an HTTP-based framework. It deals with HTTP verbs, media types, URLs, response headers and request headers.
The question is which model is closer to what you are trying to build?
Now, you mentioned REST. If you really do want to build your distributed applications following the REST constraints, then you would do better to start with OpenRasta. It will guide you down that path.
You can do REST in ASP.NET MVC and you can do it in WCF, but with those solutions you will not fall into the pit of success ;-)
Personally, I am not crazy about implementing REST services in WCF. I find the ASP.NET MVC framework a more natural programming model for this.
The implementer of http://atomsite.net/ originally implemented the AtomPub specification in WCF and then rewrote the entire service using ASP.NET MVC. His experience echoed my comment above: for a pure REST service, ASP.NET MVC is the way to go.
The only exception would be if I wanted to potentially expose a service in both a RESTful and a non-RESTful way, or if I were exposing an existing WCF service via REST.