RESTful service layer with MVC - C#

I need advice on creating an architecture where I want an API layer between the UI layer and the business layer. The UI layer should only consume REST services for displaying data.
The reason for doing this is that we need to expose the same services to other clients like iPad, Android, etc.
Now my questions are:
1) Do we need dependency injection in this case? (I don't think so, because we are not going to use any references at the UI layer. The only thing we do is manipulate the JSON returned by the service.)
2) Will it hurt performance?
3) Is this the right approach?
Any help will be appreciated. Thanks.

We're doing roughly the same thing now.
1) No, you don't need it at the UI layer if it only consumes JSON from the service.
2) No. Twitter is API-first and they seem to be doing OK. Technically the extra hop adds some overhead, but it also means you can scale horizontally, so that overhead can easily be counteracted.
3) You have multiple UI clients, so it seems like a decent, viable solution.
Security: Basic Authentication
It's the easiest to set up, but be aware the token is reversible, so use HTTPS to encrypt the communication.
The HTTP Authorization header containing the username and password is sent with every request to the API level.
You could use sessions instead, but that requires a bit more setup.
There are plenty of how-tos on setting up basic authentication in C# and Web API.
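For illustration, a minimal client-side sketch (the base address and route are placeholders, not your real API) that attaches the Basic credentials to each request over HTTPS:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ApiClient
{
    // Hypothetical endpoint; replace with your real API base address.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/")
    };

    public static async Task<string> GetOrdersJsonAsync(string userName, string password)
    {
        // The Basic token is just base64("user:password"), so HTTPS is a must.
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{userName}:{password}"));

        var request = new HttpRequestMessage(HttpMethod.Get, "api/orders");
        request.Headers.Authorization = new AuthenticationHeaderValue("Basic", token);

        var response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```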

The way I created an API was:
Project 1: WebAPI, serving as a portal to fetch data
Project 2: Class Library, providing services to the WebAPI layer.
Project 3: Class Library, providing data to my services layer using EF.
Now, different controllers in the Web API project require different service objects (from Project 2) to work with. I had to provide constructors for those controllers using dependency injection; for this I used Autofac.
For you, your business layer would be Project 2.
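As a rough sketch of that wiring (the IOrderService/OrderService names are placeholders for whatever your Project 2 exposes), Autofac's Web API integration can be registered like this:

```csharp
using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

// Placeholder service from the class library (Project 2).
public interface IOrderService { }
public class OrderService : IOrderService { }

public static class DependencyConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();

        // Register all Web API controllers in this assembly (Project 1).
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

        // Register services from the class library; names are illustrative.
        builder.RegisterType<OrderService>().As<IOrderService>().InstancePerRequest();

        var container = builder.Build();
        config.DependencyResolver = new AutofacWebApiDependencyResolver(container);
    }
}
```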
Data flowing through one more project layer might take some time, and you will need to set up exception handling and logging again in the API layer, but I don't think performance should be a big problem here.

In my experience I've seen such a platform-oriented approach: providing mSOA to N clients. The architectural solution was a Facade that hid all the complex Business Layer requests while providing UI-agnostic processing.
Will it hurt performance?
Not necessarily, since the facade knows how to handle all the required sub-system requests. The clients just know that they need a single JSON contract to get the job done, not which services to call or how many. By doing so we get much better and simpler communication. Take a look at the Mediation (intra-communication) pattern.
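Loosely sketched, with purely hypothetical sub-services, the facade exposes one contract and hides the individual calls from every client:

```csharp
using System.Threading.Tasks;

// Hypothetical sub-systems hidden behind the facade.
public interface ICustomerService { Task<string> GetNameAsync(int id); }
public interface IOrderService   { Task<int> GetOpenOrderCountAsync(int id); }

// Single JSON contract that all clients (web, iPad, Android) consume.
public class CustomerSummary
{
    public string Name { get; set; }
    public int OpenOrders { get; set; }
}

public class CustomerFacade
{
    private readonly ICustomerService _customers;
    private readonly IOrderService _orders;

    public CustomerFacade(ICustomerService customers, IOrderService orders)
    {
        _customers = customers;
        _orders = orders;
    }

    // Clients never see the two sub-calls, only the combined result.
    public async Task<CustomerSummary> GetSummaryAsync(int customerId)
    {
        return new CustomerSummary
        {
            Name = await _customers.GetNameAsync(customerId),
            OpenOrders = await _orders.GetOpenOrderCountAsync(customerId)
        };
    }
}
```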


Microservices design part in WebApi

Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (here are the interfaces and implementations of the microservice calls)
We want to keep the controllers thin, so we are using query handlers and command handlers to handle the respective operations.
However, we use external microservices to get the data, and we call them from the query handlers.
All the HTTP client construction and calls are abstracted in them. The response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it does not compose multiple services.
What should we call this part of our solution? Proxy or something else? Any good example of thin controllers and microservice architecture?
We named this part ApiGateways, but it does not compose multiple services. What should we call this part of our solution? Proxy or something else? Any good example of thin controllers and microservice architecture?
Assumption:
From the image you attached, I see the Command Handler and Query Handler are calling "external/micro-services". I guess that by "external/micro-services" you mean you are calling another micro-service from your current micro-service's handler (Command or Query), and that these "external/micro-services" are part of your architecture, deployed on the same cluster, and not some external system that just exposes a public API?
If this is correct I will try to answer based on this assumption.
API Gateway would probably be a misleading name in this case, as the concept of an API Gateway is something different from what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object-oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling.
What you are actually trying to do is call another micro-service B from a Command or Query Handler in your micro-service A. This is internal micro-service communication that should not go through an API Gateway, since that is the approach for outside calls. By "outside calls" I mean frontend applications or public API clients that call your micro-services; in that case, you would use an API Gateway.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway". If you want to go the full CQRS way, you could make it a direct call to the other service's Command or Query and use a name like "QueryGate" or "CommandGate" or similar.
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementations of the microservice calls)
This sounds reasonable except for the point about the API Gateway, which I described above. Of course, it is hard for me to tell based on the limited information I have about your project. To give you a more precise suggestion I would need to know whether you use DDD or not, how you use CQRS, and other details.
However, we use external microservices to get the data, and we call them from the query handlers. All the HTTP client construction and calls are abstracted in them. The response is converted to a view model and passed back to the query handler.
You could extract all the code/logic that handles the cross micro-service communication over HTTP or other protocols, general response handling, and similar into a core library and include it in each of your micro-services as a package. In this way you reuse the solution across all your micro-services. You can extend that and add all core domain-agnostic things (like data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) to that or other shared libraries. This way each micro-service only has to focus on the part of the domain it is supposed to handle.
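As a loose illustration of such a shared package (class names and routes are made up), each micro-service could reuse a small typed wrapper around HttpClient instead of rebuilding the plumbing:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Lives in a shared core library referenced by every micro-service.
public class ServiceClient
{
    private readonly HttpClient _http;

    // The HttpClient is expected to have its BaseAddress configured by the caller.
    public ServiceClient(HttpClient http)
    {
        _http = http;
    }

    // One place for serialization, error handling and (later) retries/logging.
    public async Task<T> GetAsync<T>(string relativeUrl)
    {
        var response = await _http.GetAsync(relativeUrl);
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<T>(json);
    }
}

// Usage inside a query handler (hypothetical micro-service B endpoint):
// var user = await serviceClient.GetAsync<UserViewModel>("api/users/42");
```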
I think CQRS is the right choice to keep the reading and writing operations decoupled.
Integration with third-party systems (if that's the case) needs some attention.
Do not call these services directly from your handlers; this could lead to various performance and/or maintainability issues.
You have to keep these integrations very well separated, because they are outside your domain. They may be subject to inefficiencies, changes, or any number of problems out of your control.
One solution that I can recommend is a "middleware" service.
In your application context this can be another service (also REST, for example) whose only task is to talk with the external systems, acting as a single point of integration between your domain and the external environment. This can be built from scratch or using a commercial/open-source solution like (just as an example) this.
This leads to many benefits, some of which are:
The middleware is a single mockable point during integration testing of your application.
You can change the middleware implementation in the future without touching your handlers.
Of course, changing third-party providers won't affect your domain services.
The middleware is the single point dedicated to managing third-party service interruptions.
Your services remain agnostic of the outside world.
Focusing on these questions can be useful when designing your integration middleware service:
Which types of third-party data do they provide? Are they updated in real time? This might help you figure out whether to introduce a cache into your integration service.
Can the third parties be subject to frequent interruptions? Then you must ensure that your system tolerates any disruption of the external services; in other words, you must ensure a certain resilience in your services. There are many techniques for doing that.
Do you really need to query these third-party services all the time? A more or less sophisticated cache could speed up your services a lot (see the sketch below).
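If a cache does turn out to be worthwhile, here is a minimal sketch assuming Microsoft.Extensions.Caching.Memory and a stand-in for the third-party call (the key, expiry, and provider are all illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CachedRatesProvider
{
    private readonly IMemoryCache _cache;
    private readonly Func<Task<decimal>> _fetchFromThirdParty; // stands in for the real third-party call

    public CachedRatesProvider(IMemoryCache cache, Func<Task<decimal>> fetchFromThirdParty)
    {
        _cache = cache;
        _fetchFromThirdParty = fetchFromThirdParty;
    }

    public async Task<decimal> GetRateAsync()
    {
        // Serve from cache and only hit the external system when the entry expires.
        return await _cache.GetOrCreateAsync("exchange-rate", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _fetchFromThirdParty();
        });
    }
}
```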
Finally, it is also very important to understand whether the need for a microservices-oriented system is a real and immediate one.
Because these architectures are more expensive and complex than classic ones, it might be reasonable to start by building a monolith and then move towards a more segmented solution later.
Thinking of (organizing) your system as many bounded contexts does not prevent you from creating a good monolith, and at the same time it prepares you for a possible switch to a microservices-oriented one.
As a summary: start by keeping things as separate as possible and define a language to speak about your business model. This lets you change a lot, without too much effort, when new needs arrive during the inevitable evolution of your software. A "hexagonal" architecture is a good starting point for both choices (microservices vs. monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.
I will give my answer from the DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api: ideally a very thin layer of controllers. The controller will create queries and commands and push them onto a common channel (refer to MediatR).
Application: this will be your orchestration layer. It contains the definitions of queries and commands and their handlers. For queries, you interact directly with your infrastructure layer. For commands, you interact with the domain and then save changes through repositories in the infrastructure.
Domain: depending on your business logic and complexity, this layer will contain all your business models.
Infrastructure: it will mostly contain two types of objects, providers and repositories. Providers should be used with queries and will return DAOs. Repositories should be used wherever the domain is involved, ideally with commands in CQRS. Repositories should always receive and return only domain objects.
So, having set this base context about the different layers of the clean architecture, the answer to your original question is: I would put third-party interactions in the provider layer. For example, if you need to connect to a user microservice, I would create a UserProvider in the provider folder of the infrastructure layer and consume it through an interface.
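A rough sketch of that layout, with hypothetical names throughout: the query handler depends only on an IUserProvider interface, and the infrastructure layer implements it against the user microservice:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// DAO returned by the provider (not a domain object).
public class UserDao
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Abstraction the query handler depends on.
public interface IUserProvider
{
    Task<UserDao> GetUserAsync(int id);
}

// Infrastructure implementation that talks to the user microservice.
public class UserProvider : IUserProvider
{
    private readonly HttpClient _http; // BaseAddress points at the user microservice

    public UserProvider(HttpClient http) => _http = http;

    public async Task<UserDao> GetUserAsync(int id)
    {
        var json = await _http.GetStringAsync($"api/users/{id}");
        return JsonConvert.DeserializeObject<UserDao>(json);
    }
}

// Query handler in the application layer only sees the interface.
public class GetUserQueryHandler
{
    private readonly IUserProvider _users;

    public GetUserQueryHandler(IUserProvider users) => _users = users;

    public Task<UserDao> Handle(int userId) => _users.GetUserAsync(userId);
}
```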

MVC as a presentation layer only

I have a WebAPI web service, which acts as a business logic layer for client applications (WinForms and Mobile).
Now I want to create an MVC application which will act as a presentation layer only, and I am having doubts whether this architecture makes sense or whether it breaks MVC concepts.
If it makes sense, what is the right/correct way for the MVC application (as a presentation layer) to interact with the WebAPI service (as a business logic layer)?
I would appreciate it if anyone could give me some code examples.
It's fine if you use MVC this way; your controllers can access the Web API and serve the data to the views.
You might also consider AngularJS for the views/templates; the controllers there can call the Web API for data.
While I think the other answers are accurate, here are some other concerns you may want to think about.
First, your WebAPI is probably where your business logic is implemented. Indeed, it may already deal with:
Business-related exceptions
Validation
Operations available
etc.
Your API is what should not change, unless the business rule behind a certain functionality changes too.
What I want to point out here is one thing:
Keep your user interface completely independent from your API.
The risk of using an MVC app with a WebApi
All the code together = multiple reasons to change the same thing
By using an MVC app, you could be tempted to package the WebAPI and the MVC app in the same solution; you would also be able to deploy everything together. But doing it this way, you may end up with a big bunch of code where the parts do not evolve at the same speed (i.e. the user interface will change often, but should the API change every time a UI fix is needed? No. And will every change to the API impact the UI? No.)
All code together enables shortcuts
What I mean by that is that if everything is packaged together, a developer could be tempted to call some method directly instead of going through the API, which should be the only valid facade. Any shortcut taken will lead to code duplication, bugs, validation errors, etc.
Again: do not package your MVC app with your API.
Solutions
Use a JavaScript framework
The other suggestions are good. AngularJS, ReactJS, and EmberJS are good frameworks (there are others; pick the one that fits your needs). Again, this is a good choice for your architecture because you create a clear separation between your UI app and your API app, which are separate concerns. Your business logic will be well protected, and you will be sure that your code is only called via HTTP, the only valid facade of your API. In other words, you make sure nobody takes shortcuts.
Use a .NET MVC app in its own project
If you still want to use .NET MVC, I would suggest that you call your API via HTTP: no shortcuts. I would create a separate solution with its own MVC project where calls to the API are made using HttpClient or something like RestSharp. What you want here is to avoid binding your UI to your API code. You want to bind your UI to the contract defined by the API facade (the API controllers), not their implementation.
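A minimal sketch of that approach (the base address and model are made up): the MVC controller only knows the HTTP contract, never the API assemblies:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;
using Newtonsoft.Json;

public class ProductsController : Controller
{
    // In a real app, inject and configure this; the base address is a placeholder.
    private static readonly HttpClient Api = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/")
    };

    public async Task<ActionResult> Index()
    {
        // Call the Web API over HTTP, never its assemblies directly.
        var json = await Api.GetStringAsync("api/products");
        var model = JsonConvert.DeserializeObject<ProductViewModel[]>(json);
        return View(model);
    }
}

// View model bound to the API's contract, not to its EF entities.
public class ProductViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```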
I think it is better, if it is possible in your situation, to use one of the JavaScript MVC frameworks.
I think AngularJS, ReactJS, or EmberJS would be the best fit for your purpose. I don't think that calling ASP.NET MVC actions and then making another call to the Web API from there is a good idea, IMHO.

database access from multiple applications

I have a Windows Forms application (C#) and an ASP.NET web application which both access a SQL Server database. I want to centralize the database access. Which methodologies should I follow? What is the common approach to this issue?
Writing DAL and model libraries and using them in both applications?
Writing a WCF service including the DAL model and using this service with both applications?
None of the above?
Can you give me any ideas?
Thank you.
I would go with the WCF approach. Keep in mind that when (not if, when) you have to make changes that pertain to one app, but not the other (yet), you will have to account for that in the common layer, so using interfaces may make your life a little easier.
The cleanest way is to wrap the DB with a WCF service.
If you don't write large amounts of data in one go you can use a WCF Data Service; this directly wraps an Entity Framework model and you can configure access to tables and methods in various ways.
What you want is to have one place where the DB is accessed, so that if there is an issue, you can fix it in one location, for instance.
Furthermore, if you want to log all calls to a particular table, for instance, the only way to make sure that happens is by centralizing all calls to the DB this way and not allowing anybody direct access to the DB.
Wrap the service, then keep the connection string secret.
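As a rough sketch (the contract and types are illustrative), both the WinForms and ASP.NET clients would only ever see something like this, never the database itself:

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    IList<CustomerDto> GetCustomers();

    [OperationContract]
    void UpdateCustomer(CustomerDto customer);
}

// Implementation lives with the service; the connection string never leaves the server.
public class CustomerService : ICustomerService
{
    public IList<CustomerDto> GetCustomers()
    {
        // Query the database (e.g. via EF) and map the results to DTOs here.
        return new List<CustomerDto>();
    }

    public void UpdateCustomer(CustomerDto customer)
    {
        // Validate and persist the change here.
    }
}
```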
I think using the SOA approach is really better (WCF or web services with a DAL layer), because this way you don't need to ship your DAL DLL with the Windows Forms exe. Also, all changes to your data model will automatically apply to both UI clients.
Remember that this can cause its own problems:
Security concerns, so that your services cannot be accessed directly by URL, allowing someone to run your methods.
Maintenance concerns, because changes in the data layer that need to affect only one interface will be more difficult to control and need to be planned better in advance (with the creation of new methods specific to a certain interface).
A decrease in performance, because HTTP access is always more costly than direct communication with a DLL.
The risk of losing communication with the server, something that is expected with ASP.NET but requires additional handling in the Windows Forms client to behave properly in these cases.
Option 1 seems simpler and I would do the same.
Option 2 with WCF will add additional code to your product and hence more maintenance. It also means an additional layer.
Corporate programmers like the second option (WCF service including DAL).

Designing an API: Use the Data Layer objects or copy/duplicate?

Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way we do: everything they can do through our provided web interface they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
The problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning, backwards compatibility, or other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface of the API. That way lies potential performance issues for your main website, never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I first started with shared entities across all layers, then I discovered that I needed separate entities for presentation, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or was following some rules; I just couldn't make everything work with a single set of classes across layers. Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
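For instance, a small AutoMapper sketch (the entity and model names are illustrative) takes care of the property copying and keeps persistence-only fields from leaking out:

```csharp
using AutoMapper;

// EF entity (persistence layer).
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalNotes { get; set; } // not exposed to API clients
}

// API model exposed by the web service.
public class CustomerApiModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingExample
{
    public static CustomerApiModel ToApiModel(CustomerEntity entity)
    {
        // In a real app, build the configuration once at startup, not per call.
        var config = new MapperConfiguration(cfg =>
            cfg.CreateMap<CustomerEntity, CustomerApiModel>());

        // Unmapped members (InternalNotes) simply never leave the service boundary.
        return config.CreateMapper().Map<CustomerApiModel>(entity);
    }
}
```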
I would just suck it up and create an API specifically aimed at the needs of the application. It doesn't make much sense to expose what amounts to the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.

Linq To SQL, WebServices, Websites - Planning it all

Several "parts" (a WinForms app for exmaple) of my project use a DAL that I coded based on L2SQL.
I'd like to throw in several WebApps into the mix, but the issue is that the DAL "offers" much more data than the WebApps need. Way more.
Would it be OK if I wrapped the data that the websites need within a web-service, and instead of the website connecting directly to the DAL it would go through the web-service which in turn would access the DAL?
I feel like that would add a lot of overhead, but on the other hand, I definitely don't like the feeling of knowing that the WebApps have the "capabilities" of accessing much more data than they actually need.
Any input would be greatly appreciated.
Thank you very much for the help.
You can either create web services, or add a repository layer that presents only the data that your applications require. A repository has the additional benefit of being a decoupling layer, making it easier to unit test your application (by providing a mock repository).
If you plan on eventually creating different frontends (say, a web UI and a WPF or Silverlight UI), then web services make a lot of sense, since they provide a common data foundation to build on, and can be accessed across tiers.
If your data access layer pulled all data as IQueryable, you would be able to query your DAL and narrow your DB calls with more precision.
See the very brief blog entry I wrote on Repository and Service layers using Linq to SQL. My article is built around MVC but the concept of Repository and Service layers would work just fine with WebForms, WinForms, Web Services, etc.
Again, the key here is to have your Repository or your DAL return an object as IQueryable, whereby you wait until the last possible moment to actually commit to requesting data.
Your structure would look something like this:
Domain Layer
Repository Layer (IQueryable)
    Service layer for Web App
        Website
    Service layer for Desktop App
        Desktop App
    Service layer for Web Services
        Web Service
Inside your service layer is where you customize the specific calls based on the application you're developing for. This allows for greater security and configuration on a per-app basis while maintaining a complete repository that doesn't need to be modified until you swap out your ORM (if you ever decide you need to).
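Loosely sketched (the entities and filters are placeholders): the repository stays generic and returns IQueryable, and each app's service layer narrows the query before anything is materialized:

```csharp
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsPublished { get; set; }
}

// Repository layer: returns IQueryable, commits to nothing yet.
public interface IProductRepository
{
    IQueryable<Product> Products { get; }
}

// Service layer for the web app: narrows the query for that client only.
public class WebProductService
{
    private readonly IProductRepository _repository;

    public WebProductService(IProductRepository repository) => _repository = repository;

    public IQueryable<Product> GetPublishedProducts()
    {
        // The filter composes into the final query; data is fetched at the last possible moment.
        return _repository.Products.Where(p => p.IsPublished);
    }
}
```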
There is nothing inherently wrong with having more than you need in this case. The entire .NET 4 Client Profile contains over 50MB of assemblies, classes, etc. I might use 5% of it in my entire career. That doesn't mean I don't appreciate having all of it available in case I need it.
If you plan to provide the DAL to developers that should not have access to portions of the data, write a wrapper or derive a new DAL. I would avoid the services route unless you're confident you can accommodate for the overhead.
Sounds like you are on the right track. If many applications are going to use this data, you gain a few advantages by having services with DTOs:
If the domain model changes, only the mapping to the DTO needs to change. You can isolate the consuming applications from these changes.
Less data over the wire
You can isolate your applications from the implementation of the DAL.
You can expose different services (maybe different DTOs) for different applications if it is necessary to restrict what parts of the object model should be exposed.
