I am building a web site in C# using ASP.NET MVC.
How can I ensure that no unauthorized persons can access my methods?
What I mean is that I want to make sure that only admins can create articles on my page. If I put this logic in the method that actually adds the article to the database, wouldn't I have business logic in my data layer?
Is it good practice to have a separate security layer that always sits between the data layer and the business layer?
The problem is that if I protect things at a higher level I will have to add checks in many places, and it is more likely that I miss one place and users can bypass security.
Thanks!
Authorize filters (as pmarflee said) are sort of the canonical example of how to secure your controllers, though that doesn't always satisfy your requirements (e.g. if you're also exposing your model through other means, such as a WCF service).
The more global and flexible approach is to require a security service somewhere (your choice where, but commonly in either the controller or repository base) and then pass in a user context somehow (either through params or the constructor). Yes, that means you have to be sure to call that service in each action, but it's pretty hard to avoid that unless you decide to go with some sort of aspect-oriented programming container.
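For illustration, here is a minimal sketch of that second approach in classic ASP.NET MVC, assuming a hypothetical ISecurityService that you would define yourself (the names are illustrative, not part of the framework, and constructor injection assumes a DI container is configured):

using System.Web.Mvc;

// Hypothetical security abstraction; not part of ASP.NET MVC itself.
public interface ISecurityService
{
    bool CanCreateArticles(string userName);
}

public class ArticleController : Controller
{
    private readonly ISecurityService security;

    public ArticleController(ISecurityService security)
    {
        this.security = security;
    }

    [HttpPost]
    public ActionResult Create(string title, string body)
    {
        // The check must be repeated in every action that needs protection,
        // unless you move it into a filter or an AOP interceptor.
        if (!security.CanCreateArticles(User.Identity.Name))
            return new HttpUnauthorizedResult();

        // ... hand the data to the business layer to persist the article ...
        return RedirectToAction("Index");
    }
}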
Have a look at this post, which explains how to use action filters to provide authorization on controller actions.
For your problem there is policy-based authorization: https://learn.microsoft.com/en-US/aspnet/core/security/authorization/policies?view=aspnetcore-6.0
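For example, a rough sketch of wiring up such a policy in ASP.NET Core 6 (the policy name and the role requirement are just examples; cookie authentication is used here only as a placeholder scheme):

using Microsoft.AspNetCore.Authentication.Cookies;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie();
builder.Services.AddAuthorization(options =>
{
    // "CanCreateArticles" and the Admin role are example names.
    options.AddPolicy("CanCreateArticles", policy => policy.RequireRole("Admin"));
});

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.Run();

// On the controller or action you then declare:
// [Authorize(Policy = "CanCreateArticles")]
// public IActionResult Create(...) { ... }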
Hi, I am trying to create a project skeleton that uses the CQRS pattern and some external services. Below is the structure of the solution.
WebApi
Query Handlers
Command Handlers
Repository
ApiGateways (here are the interfaces and implementations of the microservice calls)
We want to keep the controllers thin, so we are using query handlers and command handlers to handle the respective operations.
However, we use external microservices to get the data; we call them from the query handlers.
All the HTTP client construction and calls are abstracted in them. The response is converted to a view model and passed back to the query handler.
We named this part ApiGateways, but it is not composed from multiple services.
What should we call this part of our solution? Proxy, or something else? Are there any good examples of thin controllers and microservice architecture?
We name it as API Gateways. But it is not composed from multiple
services. How do we call this part in our solution? Proxy or
something? Any good example for thin controllers and microservice
architecture
Assumption:
From the image you attached, I see that the Command Handler and Query Handler are calling "external/micro-services". I guess by "external/micro-services" you mean that you are calling another micro-service from your current micro-service's handlers (Command and Query), and that these "external/micro-services" are part of your architecture, deployed on the same cluster, and not some external system that just exposes a public API?
If this is correct, I will try to answer based on that assumption.
API Gateway would probably be a misleading name in this case, as the concept of an API Gateway is something different than what you are trying to do here.
API Gateway per definition:
Quote from here:
An API Gateway is a server that is the single entry point into the
system. It is similar to the Facade pattern from object-oriented
design. The API Gateway encapsulates the internal system architecture
and provides an API that is tailored to each client. It might have
other responsibilities such as authentication, monitoring, load
balancing, caching, request shaping and management, and static
response handling.
What you are actually trying to do is call another micro-service B from the Command or Query Handler of your micro-service A. This is internal micro-service communication that should not be done through an API Gateway, as that is the approach for calls coming from outside. By "outside calls" I mean, for example, frontend application or public API calls that hit your micro-services; in that case you would use API Gateways.
A better name for this component would be something like "CrossMicroServiceGateway" or "InterMicroServiceGateway". If you want to go the full CQRS way, you could model it as a direct call to another Command or Query and then use a name like "QueryGate" or "CommandGate" or similar.
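If it helps, a tiny illustrative sketch of what such "gate" abstractions could look like (these interfaces are not an established pattern, just one way of making the idea explicit in code):

using System.Threading.Tasks;

// Purely illustrative names for the "gate" idea.
public interface IQueryGate
{
    Task<TResult> AskAsync<TResult>(object query);
}

public interface ICommandGate
{
    Task SendAsync(object command);
}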
Other suggestions:
WebApi
Query Handlers
Command Handlers
Repository
API Gateways (here are the interfaces and implementations of
microservice calls)
This sounds reasonable except for the point about the API Gateway, which I described above. Of course, it is hard for me to tell based on the limited information I have about your project. To give you a more precise suggestion here I would need to know whether you use DDD or not, how you use CQRS, and other details.
However, we use external microservices to get the data; we are calling
them from Query handlers. All the HTTP client construction and calls
will be abstracted in them. The response will be converted to a view
model and passed back to the Query handler.
You could extract all the code/logic that handles the cross-micro-service communication over HTTP or other protocols, general response handling, and similar concerns into a core library and include it in each of your micro-services as a package. This way you reuse the solution across all your micro-services. You can extend that and move other domain-agnostic things (like data access or repository base classes, wrappers around HTTP, unit-test infrastructure setup, and similar) into that or other shared libraries. Each micro-service can then focus only on the part of the domain it is supposed to handle.
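As a rough sketch of what such a shared package could contain (the class names, endpoint, and view model are hypothetical; ReadFromJsonAsync comes from the System.Net.Http.Json package):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Reusable base that the shared library could provide.
public abstract class ServiceClientBase
{
    private readonly HttpClient http;

    protected ServiceClientBase(HttpClient http)
    {
        this.http = http;
    }

    protected async Task<T> GetAsync<T>(string relativeUrl)
    {
        var response = await http.GetAsync(relativeUrl);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<T>();
    }
}

// A micro-service-specific client then only maps endpoints to view models.
public class UserViewModel
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class UserServiceClient : ServiceClientBase
{
    public UserServiceClient(HttpClient http) : base(http) { }

    public Task<UserViewModel> GetUserAsync(Guid id)
        => GetAsync<UserViewModel>($"api/users/{id}");
}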
I think CQRS is the right choice to keep the reading and writing operations decoupled.
The integration with third-party systems (if that is your case) needs some attention.
Do not call these services directly from your handlers; this could lead to various performance and/or maintainability issues.
You have to keep these integrations well separated, because they are outside your domain. They may be subject to inefficiencies, changes, or any number of problems outside your control.
One solution that I could recommend is a "Middleware" service.
In your application context this can be another service (also exposed over REST, for example) whose sole task is to talk (and it alone) with the external systems, acting as a single point of integration between your domain and the external environment. This can be built from scratch or with a commercial/open-source solution like (just as an example) this. A small sketch of the idea follows the list of benefits below.
This leads to many benefits, some of which are:
A middleware is a single mockable point during integration tests of your application.
You can change the middleware implementation in the future without touching your handlers.
Of course, changing 3rd-party providers won't affect your domain services.
The middleware is the single point dedicated to managing 3rd-party service interruptions.
Your services remain agnostic of the outside world.
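To make the idea concrete, here is a minimal, purely illustrative sketch of the domain-facing surface of such a middleware component: the handlers depend only on this interface, while the implementation (the integration service itself, or a thin client for it) owns all the 3rd-party details such as authentication, retries, and response mapping.

using System.Threading.Tasks;

// Hypothetical types; your domain would define its own.
public class ExchangeRate
{
    public string CurrencyCode { get; set; }
    public decimal Rate { get; set; }
}

public interface IExternalSystemsGateway
{
    Task<ExchangeRate> GetExchangeRateAsync(string currencyCode);
}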
Focusing on these questions can be useful when designing your integration middleware service:
Which types of 3rd-party data do they provide, and how fresh does it need to be? This might help you figure out whether to introduce a cache into your integration service.
Can the 3rd party be subject to frequent interruptions? Then you must ensure that your system can tolerate any disruption of the external services. In other words, you must ensure a certain resilience in your services. There are many techniques to do that.
Do you really need to query these 3rd-party services all the time? Maybe a more or less sophisticated cache could speed up your services a lot.
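As one hedged example of combining those resilience and caching techniques, here is a sketch using the Polly library for retries and IMemoryCache for a short-lived cache (the timings, cache key, and endpoint are illustrative only, not recommendations):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;
using Polly;
using Polly.Retry;

public class ResilientThirdPartyClient
{
    private readonly HttpClient http;
    private readonly IMemoryCache cache;

    // Retry up to 3 times with exponential backoff on transient HTTP failures.
    private readonly AsyncRetryPolicy retryPolicy =
        Policy.Handle<HttpRequestException>()
              .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public ResilientThirdPartyClient(HttpClient http, IMemoryCache cache)
    {
        this.http = http;
        this.cache = cache;
    }

    public Task<string> GetRatesAsync()
        => cache.GetOrCreateAsync("rates", async entry =>
        {
            // Cache the 3rd-party response for a few minutes to avoid hammering it.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return await retryPolicy.ExecuteAsync(() => http.GetStringAsync("api/rates"));
        });
}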
Finally, it is also very important to understand whether a microservices-oriented system is a real and immediate need.
Because these architectures are more expensive and complex than classic ones, it might be reasonable to start by building a monolithic system and then move towards a more segmented solution later.
Thinking of (organizing) your system as many "bounded contexts" does not prevent you from creating a good monolithic system, and at the same time it prepares you for a possible switch to a microservices-oriented one.
As summary advice, start by keeping things as separate as possible and define a language to speak about your business model. This lets you change a lot without too much effort when needs arise during the inevitable evolution of your software. "Hexagonal" architecture is a good starting point for this with either choice (microservices vs. monolith).
Recently, Netflix posted a nice article about this architecture with a lot of ideas for a fresh start.
I will give my answer from a DDD and clean architecture perspective. Ideally, your application should have the following layers.
Api (ideally a very thin layer of controllers). The controllers will create queries and commands and push them onto a common channel (see MediatR).
Application This will be your orchestration layer. It will contain the definitions of queries and commands and their handlers. For queries, you will interact directly with your infrastructure layer. For commands, you will interact with the domain and then save through repositories in the infrastructure layer.
Domain Depending on your business logic and complexity, this layer will contain all your business models.
Infrastructure This will contain mostly two types of objects, Providers and Repositories. Providers should be used with queries and will return DAOs. Repositories should be used wherever the domain is involved, ideally for commands in CQRS. Repositories should always receive and return only domain objects.
So after setting the base context about the different layers in clean architecture, the answer to your original question is: I would put third-party interactions in the provider layer. For example, if you need to connect to a user microservice, I would create a UserProvider in the providers folder of the infrastructure layer and consume it through an interface.
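A minimal sketch of that idea with hypothetical names (the interface is what the query handler depends on; the implementation lives under Infrastructure/Providers; GetFromJsonAsync comes from the System.Net.Http.Json package):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical DAO returned by the provider.
public class UserDao
{
    public Guid Id { get; set; }
    public string Email { get; set; }
}

public interface IUserProvider
{
    Task<UserDao> GetUserAsync(Guid id);
}

public class UserProvider : IUserProvider
{
    private readonly HttpClient http;

    public UserProvider(HttpClient http) => this.http = http;

    public Task<UserDao> GetUserAsync(Guid id)
        => http.GetFromJsonAsync<UserDao>($"api/users/{id}");
}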
I'm using EF Core 3.0 and I'd like to implement an ABAC system to control rights in my app. To do this I'd like to do all the permissions work in just one layer and control it using some decorators on the controllers. The idea is to roughly follow this example. I think using the new UseAuthorization method will also be helpful.
I am still designing the solution and I have an issue. Currently my controllers have actions such as the following (the AuthorisationFilter is not implemented yet; it's precisely what I'm working on):
// GET api/project/:id
[HttpGet("{projectId}")]
[AuthorisationFilter]
public async Task<ProjectDTO> GetProjectById(int projectId)
{
    return await this.projectService.GetProject(projectId);
}
and I also have some others that return all projects:
// GET api/project/projects
[HttpGet("projects")]
[AuthorisationFilter]
public async Task<IEnumerable<ProjectDTO>> GetAllProjects()
{
    return await this.projectService.GetAllProjects();
}
Now, in the first case, my authorization filter should simply decide, according to some attributes, whether a certain user is able to access this project or not. Clear.
However, in the second case, it could be that one user can see some projects and a different user can see some different projects, and I don't know exactly what my authorisation filter should return. Allow or deny? If I deny, I lose control of what happens next.
I understand the authorisation filter is not the place to build the conditions that generate the SQL query, but I don't like simply allowing the action and losing control of the permissions either. In other words: if there is a bug in the implementation of GetAllProjects which returns more projects than the authorised ones, I should not send those projects to the user.
Hence: how should the authorisation layer work? Should I filter the valid projectIds there and then call GetAllProjects with that list as an argument?
In a nutshell: is there a way put all the rights control in a single layer?
The authorization layer should provide details about what the user can do, but enforcing that is up to the individual components.
Your ProjectService needs to know what the user is authorized to do and enforce it.
If your authorization layer decides that, it becomes very involved in pretty much everything your application does, as it needs to know far too much about each controller action, the database access, and whatever else; that will not be very maintainable.
What if one of your services decides to access a 3rd-party service over a 3rd-party SDK? It would make sense for your, say, MyTwitterService to enforce that rather than a generic authorization layer.
Usually you pass in the context of the user (usually some sort of "rights" the user has) and the ProjectService decides what to return (or fails if the rights are insufficient or invalid).
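As a sketch of that idea, assuming the "rights" boil down to a set of project ids the user may read (all types here are hypothetical stand-ins for yours):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class ProjectDTO
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UserContext
{
    public HashSet<int> ReadableProjectIds { get; set; } = new HashSet<int>();
}

public interface IProjectRepository
{
    Task<IEnumerable<ProjectDTO>> GetAllProjects();
}

public class ProjectService
{
    private readonly IProjectRepository repository;

    public ProjectService(IProjectRepository repository) => this.repository = repository;

    public async Task<IEnumerable<ProjectDTO>> GetAllProjects(UserContext user)
    {
        // Even if the query below returned too much, the filter here keeps
        // unauthorized projects from ever reaching the controller.
        var projects = await repository.GetAllProjects();
        return projects.Where(p => user.ReadableProjectIds.Contains(p.Id));
    }
}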
Our ASP.NET Core 2.2 app needs to implement the front-controller design pattern to select an arbitrary controller based on the app's internal logic. We are not as concerned with selecting an action because we intend to be as RESTful as possible with GET, POST, DELETE, etc. Here is an example of what we want to do:
Given a request to http://example.com/DomainObjectX/, one customer's business rules might use the DomainObjectXController, but another customer's business rules might provide CustomDomainObjectXController. We do not want to redirect the request but simply use a different controller to handle the same URL.
Can custom middleware choose an arbitrary controller? I cannot find any examples where middleware does all the routing or passes the request along to the default route. My google-fu does not get me there.
Or should we use the application/controller model to select a controller based on our app's business rules? Can someone point to an example that does that?
Or should we implement our own MatcherPolicy? I have not seen any examples that do this.
Any of these (and others) would probably work, but I'm not sure how/when to specify the arbitrary controller. So many things to learn ...
I need advice on creating an architecture where I want an API layer between the UI layer and the Business Layer. The UI layer should only consume REST services for displaying data.
The reason for doing this is that we need to expose the same services to other clients like iPad, Android, etc.
Now my questions are:
1) Do we need dependency injection in this case? (I don't think so, because we are not going to use any references in the UI layer. The only thing is, we are manipulating the JSON returned by the service.)
2) Will it hurt performance?
3) Is this the right approach?
Any help will be appreciated. Thanks
We're doing roughly the same thing now.
1) No, you can't.
2) No. Twitter is API-first and they seem to be doing OK. I guess technically it will cost a little, but it also means you can scale horizontally, so the extra hop's overhead can easily be counteracted.
3) You have multiple UI clients, so it seems like a decent, viable solution.
Security
Security: Basic Authentication
It's the easiest to set up, but be aware the token is reversible, so use HTTPS to encrypt the communication.
The HTTP Authorization header containing the username and password is sent with every request to the API level.
You could use sessions, but that requires a bit more setup.
There are plenty of how-tos on setting up basic authentication in C# and Web API.
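For the client side, a minimal sketch of building the Basic Authorization header by hand with HttpClient (the URL and credentials are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class BasicAuthExample
{
    public static async Task CallApiAsync()
    {
        // The value is only Base64-encoded (reversible), which is why HTTPS is a must.
        var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("username:password"));

        using var client = new HttpClient { BaseAddress = new Uri("https://example.com/") };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);

        var response = await client.GetAsync("api/projects");
        response.EnsureSuccessStatusCode();
    }
}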
The way I created an API was:
Project 1 : WebAPI serving as a portal to fetch data
Project 2 : Class Library, providing services to the WebAPI layer.
Project 3 : Class Library, providing data to my services layer using EF.
Now, different controllers in the Web API project require different service objects (from Project 2) to work with. I had to provide constructors for those controllers using DI; for this I used Autofac.
For you, your business layer would be Project 2.
Data flowing through one more project layer might take some time, and you will need to set up exception handling and logging again in the API layer. I don't think performance should be a big problem here.
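For reference, a rough sketch of what the Autofac wiring for a Web API project can look like (the commented registrations stand in for your Project 2 and Project 3 types; the RegisterApiControllers/AutofacWebApiDependencyResolver pieces come from the Autofac.WebApi2 package):

using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

// Call IocConfig.Register() from Application_Start in Global.asax.
public static class IocConfig
{
    public static void Register()
    {
        var builder = new ContainerBuilder();

        // Resolve constructor dependencies for all Web API controllers.
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

        // Illustrative registrations for the service (Project 2) and data (Project 3) layers:
        // builder.RegisterType<ArticleService>().As<IArticleService>().InstancePerRequest();
        // builder.RegisterType<ArticleRepository>().As<IArticleRepository>().InstancePerRequest();

        var container = builder.Build();
        GlobalConfiguration.Configuration.DependencyResolver =
            new AutofacWebApiDependencyResolver(container);
    }
}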
In my experience I've seen such a platform-oriented approach: providing mSOA to N clients. The architectural solution was a Facade that hid all the complex Business Layer requests while providing UI-agnostic processing.
Will it hurt performance?
Not necessarily, since it knows how to handle all the required sub-system requests. All the clients just know that they need a single JSON contract to get the job done, not which (or how many) services to call. By doing so, we get much better and simpler communication. Take a look at the Mediation (intra-communication) pattern.
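As a purely illustrative sketch of such a Facade surface (all names are hypothetical): clients see one coarse-grained contract, while the implementation fans out to the underlying business services.

using System.Threading.Tasks;

// Hypothetical coarse-grained contract exposed to all UI clients.
public class DashboardView
{
    public string ProfileJson { get; set; }
    public string OrdersJson { get; set; }
}

public interface IClientFacade
{
    // One call per client scenario; the facade composes the sub-system results.
    Task<DashboardView> GetDashboardAsync(string userId);
}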
I usually write use cases for all the software that I develop. For each use case I generally write a controller which directs the flow (implements a use case).
I have recently started developing web apps using ASP.NET MVC. One of the best practices of ASP.NET MVC is to keep as little logic as possible in the controllers. I am not able to figure out how to change my design to reflect this.
I basically want a way to encapsulate my use cases.
I think having a fat model and a skinny controller is generally good practice in any language, not specifically .NET MVC. Check out this nice article that goes through a sample scenario showing the advantages of a fat model in Ruby on Rails (but the ideas apply to any language).
For representing the use-cases in your code, I think a much better place for them is in test-cases rather than the controller.
Push as much business logic to your models and helper classes as possible, and use controllers mainly for handling URL calls and instantiating the relevant models, retrieving data from them, and pushing data to the views. Views and controllers should have as few decisions to make as possible.
Create a business component to encapsulate use cases. For instance if you have a leave management system you would have use cases like apply for a leave, approve a leave request, reject a leave request, etc. For this you can create a business component (class) called Leave Manager with methods (functions/operations) like "Apply", "Approve", "Reject", etc. These methods will encapsulate your use cases. These methods would take your business entities and data store classes as input and execute the use case.
public class LeaveManager
{
    // Each method encapsulates one use case; the bodies would use your
    // business entities and data store classes.
    public int Apply(DateTime from, DateTime to) => throw new NotImplementedException();
    public bool Approve(int leaveApplicationId, int approverId) => throw new NotImplementedException();
    public bool Reject(int leaveApplicationId, int approverId) => throw new NotImplementedException();
}
You can then use this business component in your controllers to execute the use case by supplying the required parameters.
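For example, a hypothetical controller using the business component above (the action and routing details are illustrative; constructor injection assumes a DI container is configured):

using System.Web.Mvc;

public class LeaveController : Controller
{
    private readonly LeaveManager leaveManager;

    public LeaveController(LeaveManager leaveManager)
    {
        this.leaveManager = leaveManager;
    }

    [HttpPost]
    public ActionResult Approve(int leaveApplicationId, int approverId)
    {
        // The controller only translates the request into a use-case call.
        bool approved = leaveManager.Approve(leaveApplicationId, approverId);
        return RedirectToAction(approved ? "Approved" : "Rejected");
    }
}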