Make a method of the Business Layer secure: best practice / best pattern - C#

We are using ASP.NET with a lot of AJAX "Page Method" calls.
The web services defined in the page invoke methods from our BusinessLayer.
To prevent hackers from calling the Page Methods directly, we want to implement some security in the BusinessLayer.
We are struggling with two different issues.
First one:
public List<Employees> GetAllEmployees()
{
    // do stuff
}
This method should only be callable by authorized users with the role "HR".
Second one:
public Order GetMyOrder(int orderId)
{
    // do stuff
}
This method should only be callable by the owner of the order.
I know it's easy to implement the security for each method like:
public List<Employees> GetAllEmployees()
{
    // check if the user is in role HR
}
or
public Order GetMyOrder(int orderId)
{
    // check if order.Owner == user
}
What I'm looking for is a pattern/best practice to implement this kind of security in a generic way (without coding the if/then/else every time).
I hope you get what I mean :-)

User #mdma describes a bit about Aspect Oriented Programming. For this you will need to use an external library (such as the excellent PostSharp), because .NET doesn't have much AOP functionality. However, .NET already has an AOP mechanism for role-based security that can solve part of your problem. Look at the following example of standard .NET code:
[PrincipalPermission(SecurityAction.Demand, Role = "HR")]
public List<Employees> GetAllEmployees()
{
    // do stuff
}
The PrincipalPermissionAttribute is part of the System.Security.Permissions namespace and has been part of .NET since .NET 1.0. I've been using it for years to implement role-based security in my web applications. A nice thing about this attribute is that the .NET JIT compiler does all the weaving for you in the background, and you can even define it at the class level. In that case all members of that type inherit the attribute and its security settings.
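For instance, a class-level demand protects every member at once (a minimal sketch; the HrService class is hypothetical):

using System.Collections.Generic;
using System.Security.Permissions;

// Every member of this class now demands the "HR" role.
[PrincipalPermission(SecurityAction.Demand, Role = "HR")]
public class HrService
{
    public List<Employees> GetAllEmployees()
    {
        // do stuff
        return new List<Employees>();
    }

    public List<Employees> GetFormerEmployees()
    {
        // do stuff
        return new List<Employees>();
    }
}

The demand is checked against Thread.CurrentPrincipal, which ASP.NET sets to the authenticated user of the current request.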
Of course it has its limitations. Your second code sample can't be implemented using the .NET role-based security attribute. I don't think you can get around some custom security check in that method, or a call into an internal security library:
public Order GetMyOrder(int orderId)
{
    Order o = GetOrderInternal(orderId);
    BusinessSecurity.ValidateOrderForCurrentUser(o);
    return o;
}
Of course you can use an AOP framework, but you would still have to write a framework-specific attribute that in turn calls your own security layer. That only becomes useful when such an attribute replaces multiple method calls, for instance when code has to be wrapped in try/catch/finally statements. When all you replace is a simple method call, there isn't much difference between a single method call and a single attribute, IMO.
When you are returning a collection of objects and want to filter out all objects for which the current user doesn't have the proper rights, LINQ expression trees can come in handy:
public Order[] GetAllOrders()
{
    IQueryable<Order> orders = GetAllOrdersInternal();
    orders = BusinessSecurity.ApplySecurityOnOrders(orders);
    return orders.ToArray();
}
static class BusinessSecurity
{
    public static IQueryable<Order> ApplySecurityOnOrders(
        IQueryable<Order> orders)
    {
        // HttpContext.Current.User is the IPrincipal of the current request.
        var user = HttpContext.Current.User;

        if (user.IsInRole("Administrator"))
        {
            return orders;
        }

        return
            from order in orders
            where order.Customer.User.Name == user.Identity.Name
            select order;
    }
}
When your O/RM supports LINQ through expression trees (as NHibernate, LINQ to SQL, and Entity Framework do), you can write such a security method once and apply it everywhere. The nice thing about this is that the query sent to your database will always be optimal; in other words, no more records are retrieved than needed.
UPDATE (years later):
I used this attribute for a long time in my code base, but several years back I came to the conclusion that attribute-based AOP has terrible downsides. For instance, it hinders testability: since the security code is woven into the normal code, you can't run ordinary unit tests without impersonating a valid user. This is brittle and should not be a concern of the unit test (the unit test itself would violate the Single Responsibility Principle). Besides that, it forces you to litter your code base with the attribute.
So instead of using the PrincipalPermissionAttribute, I now apply cross-cutting concerns like security by wrapping code with decorators. This makes my applications much more flexible and much easier to test. I've written several articles about this technique over the last couple of years (for instance this one and this one).
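As an illustration of the decorator approach, here is a minimal sketch (the IEmployeeService interface and the role check are assumptions based on the question's first example, not code from the linked articles):

using System.Collections.Generic;
using System.Security;
using System.Security.Principal;

public interface IEmployeeService
{
    List<Employees> GetAllEmployees();
}

// The decorator adds the security check; the decorated service stays clean.
public class SecureEmployeeServiceDecorator : IEmployeeService
{
    private readonly IEmployeeService decoratee;
    private readonly IPrincipal currentUser;

    public SecureEmployeeServiceDecorator(
        IEmployeeService decoratee, IPrincipal currentUser)
    {
        this.decoratee = decoratee;
        this.currentUser = currentUser;
    }

    public List<Employees> GetAllEmployees()
    {
        if (!this.currentUser.IsInRole("HR"))
            throw new SecurityException("User is not in role HR.");

        return this.decoratee.GetAllEmployees();
    }
}

Normal unit tests can now target the undecorated service directly, and the decorator itself can be tested with a fake IPrincipal.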

One "best practice" is to implement Security an aspect. This keeps the security rules separate from the primary business logic, avoiding hard-coding and making it easy to change the security rules in different environments.
The articles below list 7 ways of implementing aspects while keeping the code separate. One approach that is simple and doesn't change your business logic interface is to use a proxy. This exposes the same interface as you have currently, yet allows an alternative implementation that can decorate the existing one. The security requirements can be injected into this interface, using either hard-coding or custom attributes. The proxy intercepts method calls to your business layer and invokes the appropriate security checks. Implementing interception via proxies is described in detail in Decouple Components by Injecting Custom Services into your Object's Invocation Chain. Other AOP approaches are given in Understanding AOP in .NET.
Here's a forum post discussing security as an aspect, with an implementation using advice and security attributes. The end result is:
public static class Roles
{
    public const string ROLE_ADMIN = "Admin";
    public const string ROLE_CONTENT_MANAGER = "Content Manager";
    public const string ROLE_HR = "HR";
}

// business method
[Security(Roles.ROLE_HR)]
public List<Employee> GetAllEmployees();
You can put the attribute directly on your business method (tight coupling), or create a service proxy with these attributes, so the security details are kept separate.

If you are using SOA, you can create a Security Service, and each action (method) will send its context (UserId, OrderId, etc.). The Security Service knows the business security rules.
The scheme may be something like this:
UI -> Security -> BLL -> DAL
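A minimal sketch of such a Security Service (the interface and method names are hypothetical, just to make the idea concrete):

// Each business action passes its own context; the rules live in one place.
public interface ISecurityService
{
    void DemandRole(string role);                   // e.g. "HR" for GetAllEmployees
    void DemandOrderOwner(int userId, int orderId); // e.g. for GetMyOrder
}

The BLL calls into this service before doing its work, so the rules stay out of the business methods themselves.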

Related

Working with concrete type of a base parameter passed in a strategy method

I ran into a major architectural problem.
CONTEXT
I'm trying to build an ASP.NET Core microservice application that implements the strategy pattern.
The application communicates with other microservices.
I have a main entity that aggregates all the information I need to work with; let's call it "MainContext". The goal is that this entity is loaded and built only once (as we need to get that information from other microservices) and then processed throughout the whole application.
public class MainContext
{
    public DeterminerAttribute Attribute { get; set; }
    public OtherContextA ContextA { get; set; }
    public OtherContextB ContextB { get; set; }
}
As you can see, the MainContext aggregates other contexts. These 'OtherContexts' are base classes that have their own child classes. The child classes differ somewhat and have different types and numbers of fields.
The application builds the MainContext in one separate place. The process looks something like this:
We get a specific attribute from another microservice and use this attribute as the determiner in a switch expression. The attribute is also saved in the MainContext.
In the switch expression we load the specific implementations of the OtherContextA and OtherContextB classes and wrap them up in their base classes. This step is important, as I don't want to ask other services for information I don't need.
The method returns the MainContext with all information loaded, ready to use (see the sketch below).
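A minimal sketch of what that build step could look like, with hypothetical loader names and the DeterminerAttribute treated as an enum-like value for illustration:

public async Task<MainContext> BuildMainContextAsync()
{
    // Fetched once from another microservice; it determines everything else.
    var attribute = await GetDeterminerAttributeAsync();

    // The concrete contexts are loaded per attribute but stored as base types.
    return attribute switch
    {
        DeterminerAttribute.Foo => new MainContext
        {
            Attribute = attribute,
            ContextA = await LoadFooContextAAsync(), // FooContextA : OtherContextA
            ContextB = await LoadFooContextBAsync(), // FooContextB : OtherContextB
        },
        DeterminerAttribute.Bar => new MainContext
        {
            Attribute = attribute,
            ContextA = await LoadBarContextAAsync(),
            ContextB = await LoadBarContextBAsync(),
        },
        _ => throw new InvalidOperationException($"Unknown attribute: {attribute}"),
    };
}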
Then I use the strategy pattern, because different contexts require different treatment.
THE PROBLEM
The strategies share the same interface, and thus must implement the same methods with the same signatures. In my case, there is only one method, which looks something like this:
public class SomeStrategyToProcessContext : StrategyInterface
{
    public async Task ProcessContext(MainContext mainContext, ...);
}
Now, in the strategies, I want to work with the concrete implementations of the contexts. That makes sense, as I KNOW, being the programmer who made this mess, that the strategies are chosen based on the same attribute I used to load the contexts, and they should therefore work with the concrete implementations, as I need the data stored in them. But this:
var concreteContext = (OtherConcreteContextA) mainContext.ContextA;
is considered bad practice, AFAIK.
Obviously, the base classes hold only basic, unspecific data. In the strategy classes, I want to provide access only to the NEEDED data, no more, no less.
My question is: is there a safe and sustainable way of implementing this within the OOP (or another) paradigm? I want to avoid the casting, as it breaks the abstraction and contradicts every programming principle I've learned. Any advice, even if it's harsh and/or suggests changing the whole architecture, is as good as gold. Thanks!

How do I deal with two situations that could be candidates for a strategy pattern solution?

I am designing a client that will call methods based on certain inputs. I will be sending in a billing-system enum and calling an endpoint to determine which billing system is appropriate for an existing patient. Once I have the billing system, I have to check what type of operation I need to perform and make an API call based on the billing system.
For example, if I need to update a patient record and the patient is in BillingSystemA, I need to call a PUT-based method of the API for BillingSystemA.
I need to have CRUD methods for each billing system.
Selecting between the two billing systems and allowing for future growth made me think that the strategy pattern was a good fit. Strategy seems to work for the billing system, but what about the CRUD operations?
I have a BillingStrategy abstract class that has Create, Update, Get, and Delete methods, but I need those methods to work against a variety of types. Can I just make the methods generic, like T Create<T> or bool Update<T>, or do I need a strategy within a strategy to manage this? I've analyzed myself into a corner and could use some advice.
Here's a rough illustration. I invented a lot of the specifics, and the names aren't so great. I tend to revisit names as I refactor. The main point is to illustrate how we can break up the problem into pieces.
This assumes that there are classes for Patient and Treatment and an enum for InsuranceType. The goal is to bill a patient for a treatment and determine where to send the bill based on the patient's insurance.
Here's a class:
public class PatientBilling
{
    private readonly IBillingHandlerByInsuranceSelector _billingHandlerSelector;
    private readonly IBillingHandler _directPatientBilling;

    public PatientBilling(
        IBillingHandlerByInsuranceSelector billingHandlerSelector,
        IBillingHandler directPatientBilling)
    {
        _billingHandlerSelector = billingHandlerSelector;
        _directPatientBilling = directPatientBilling;
    }

    public void BillPatientForTreatment(Patient patient, Treatment treatment)
    {
        var billingHandler = _billingHandlerSelector.GetBillingHandler(patient.Insurance);
        var result = billingHandler.BillSomeone(patient, treatment);

        if (!result.Accepted)
        {
            _directPatientBilling.BillSomeone(patient, treatment);
        }
    }
}
and a few interfaces:
public interface IBillingHandler
{
    BillSomeoneResult BillSomeone(Patient patient, Treatment treatment);
}

public interface IBillingHandlerByInsuranceSelector
{
    IBillingHandler GetBillingHandler(InsuranceType insurance);
}
As you can see this will rely heavily on dependency injection. This class is simple because it doesn't know anything at all about the different insurance types.
All it does is:
Select a billing handler based on the insurance type.
Try to submit the bill to the insurance.
If it's rejected, bill the patient directly.
It doesn't know or care how any of that billing is implemented. It could be a database call, an API call, or anything else. That makes this class very easy to read and test. We've deferred whatever isn't related to this class. That's going to make it easier to solve future problems one at a time.
The implementation of IBillingHandlerByInsuranceSelector can be an abstract factory that will create and return the correct implementation of IBillingHandler according to the patient's insurance. (I'm glossing over that but there's plenty of information on how to create abstract factories with dependency injection containers.)
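For instance, a minimal sketch of that selector (the dictionary-based wiring is an assumption; a real container registration can be more elegant):

using System;
using System.Collections.Generic;

public class BillingHandlerByInsuranceSelector : IBillingHandlerByInsuranceSelector
{
    private readonly IReadOnlyDictionary<InsuranceType, IBillingHandler> handlers;

    // The DI container supplies one registered handler per insurance type.
    public BillingHandlerByInsuranceSelector(
        IReadOnlyDictionary<InsuranceType, IBillingHandler> handlers)
    {
        this.handlers = handlers;
    }

    public IBillingHandler GetBillingHandler(InsuranceType insurance)
    {
        if (!this.handlers.TryGetValue(insurance, out var handler))
            throw new NotSupportedException($"No billing handler for {insurance}.");

        return handler;
    }
}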
In a sense we could say that the first part of this problem is solved (although we're likely to refactor some more.) The reason why is that we can write unit tests for it, and any of the work specific to one insurance type or another will be in different classes.
Next we can write those insurance-specific implementations. Suppose one of the insurance types is WhyCo, and now we need to create an IBillingHandler for them. We're essentially going to repeat the same process.
For the sake of illustration, let's say that submitting a bill to WhyCo is done in two steps. First we have to make a request to check eligibility, and then we have to submit the bill. Maybe other insurance APIs do this in one step. That's okay, because no two implementations have to have anything in common with each other. They just implement the interface.
At this point we're dealing with the specifics of one particular insurance company, so somewhere in here we'll need to convert our Patient and Treatment information into whatever data they expect to receive.
public class WhyCoBillingHandler : IBillingHandler
{
    private readonly IWhyCoService _whyCoService;

    public WhyCoBillingHandler(IWhyCoService whyCoService)
    {
        _whyCoService = whyCoService;
    }

    public BillSomeoneResult BillSomeone(Patient patient, Treatment treatment)
    {
        // populate this from the patient and treatment
        WhyCoEligibilityRequest eligibilityRequest = ...;

        var eligibility = _whyCoService.CheckEligibility(eligibilityRequest);
        if (!eligibility.IsEligible)
            return new BillSomeoneResult(false, eligibility.Reason);

        // create the bill
        WhyCoBillSubmission bill = ...;
        _whyCoService.SubmitBill(bill);

        return new BillSomeoneResult(true);
    }
}

public interface IWhyCoService
{
    WhyCoEligibilityResponse CheckEligibility(WhyCoEligibilityRequest request);
    void SubmitBill(WhyCoBillSubmission bill);
}
At this point we still haven't written any code that talks to the WhyCo API. That makes WhyCoBillingHandler easy to unit test. Now we can write an implementation of IWhyCoService that calls the actual API. We can write unit tests for WhyCoBillingHandler and integration tests for the implementation of IWhyCoService.
(Perhaps it would have been better if translating our Patient and Treatment data into what they expect happened even closer to the concrete implementation.)
At each step we're writing pieces of the code, testing them, and deferring parts for later. The API class might be the last step in implementing WhyCo billing. Then we can move on to the next insurance company.
At each step we also decide how much should go into each class. Suppose we have to write a private method, and that method ends up being so complicated that it's bigger than the public method that calls it and it's hard to test. That might be where we replace that private method with another dependency (abstraction) that we inject into the class.
Or we might realize up front that some new functionality should be separated into its own class, and we can just start off with that.
The reason why I illustrated it this way is this:
I've analyzed myself into a corner
It's easy to become paralyzed when our code has to do so many things. This helps to avoid paralysis because it continually gives us a path forward. We write part of it to depend on abstractions, and then that part is done (sort of.) Then we implement those abstractions. The implementations require more abstractions, and we repeat (writing unit tests all the way in between.)
This doesn't enforce best practices and principles, but it gently guides us toward them. We're writing small, single-responsibility classes. They depend on abstractions. We're defining those abstractions (interfaces, in this case) from the perspective of the classes that need them, which leads to interface segregation. And each class is easy to unit test.
Some will point out that it's easy to get carried away with all the abstractions and create too many interfaces and too many layers of abstraction, and they are correct. But that's okay. At every single step we're likely to go a little off balance one way or the other.
As you can see, the problems that occur when we have to deal with the differences between billing systems become simpler. We just create every implementation differently.
Strategy seems to work for the billing system, but what about the CRUD operations?
The fact that they all have different CRUD operations is fine. We've made components similar where they need to be similar (the interfaces through which we interact with them) but the internal implementations can be as different as they need to be.
We've also sidestepped the question of which design patterns to use, except that IBillingHandlerByInsuranceSelector is an abstract factory. That's okay too, because we don't want to start off too focused on a design pattern. If this were a real application, I'd assume that a lot of what I'm doing would need to be refactored. Because the classes are small, unit tested, and depend on abstractions, it's easier to introduce design patterns when their use becomes obvious. When that happens we can likely isolate them to the classes that need them. Or, if we realize that we've gone in the wrong direction, it's still easier to refactor. That's certain to happen, unless we can see the future.
It's worth taking some time up front to understand the various implementation details to make sure that the way you have to work with them lines up with the code you're writing. (In other words, we still need to spend some time on design.) For example, if your billing providers don't give you immediate responses - you have to wait hours or days - then code that models it as an immediate response wouldn't make sense.
One more benefit is that this helps you to work in parallel with other developers. If you've determined that a given abstraction is a good start for your insurance companies and maybe you've written a few, then it's easy to hand off other ones to other developers. They don't have to think about the whole system. They can just write an implementation of an interface, creating whatever dependencies they need to.

C# IoC Instantiation when the injected objects are conditional

I have an IoC question that for the moment is abstract. I have not yet chosen an IoC framework or started coding. I am still mentally planning the methods I am going to use for an imminent project.
My coding style generally follows this pattern:
A Processor of some kind is instantiated and passed a Business Object.
The processor in turn will instantiate a Validator to validate that the passed business object is valid for the given process.
If the Business Object is found to be valid, then a Persistence Object will be instantiated. The Persistence object is responsible for transformations such as encryption, caching, and grouping multiple requests together in a single transaction for object graphs.
Then, the Persistence object instantiates a DataLayer that has the job of persisting the Business Object to the database, or pulling it from the database as the case may be (or a text file, or a web service, wherever the data may live).
My ideal structure is that a Processor knows about a Validator and a Persistence object, but not an AccessLayer. A Persistence object knows about an AccessLayer, but cannot directly instantiate or invoke a Processor. This way there are clearly defined layers that can be separated as necessary.
Finally, this process is agnostic to input and output, and does not change based on the application type. In other words, I could use the same Processor to add a business object in a web app as I would in a desktop app. Obviously, the Model/View/Controller would change depending on the app type, but the rules for adding or selecting a business object remain universal.
My problem is this: I don't like that my AccessLayer in turn needs to pull the connection string from the config file, for instance. Maybe I want my users to be able to specify a config file or a DB table for settings. Having the access layer check the config file to see whether it should use the config file is circular and silly. And the AccessLayer cannot likewise call a Persistence object to pull the settings, or query the application framework to see whether it is a web app with a Web.config or a desktop app with DB settings.
So I was thinking that the best thing for me to do is to use an IoC container of some kind. I could then inject whatever settings I needed. This could also allow me to mock objects for testing, which is another difficult (but not impossible) task with my current method. So from my reading, my vague Processor implementation would look like this:
public class VagueProcessor
{
    public VagueProcessor(
        IValidator validator,
        IPersistence persistence,
        IAccessLayer accessLayer,
        ISettings settings) { ... }
}
Here is my snag. In the application I am planning, the Business Objects have a variety of implementations, each with its own configurable rules. Say one BO is for the state of CA and another for the state of NY, and both states have their own special rules to be validated by their governing bodies. So the validator could be a CAValidator or a NYValidator, depending on the state of the Business Object.
OK, so my question after all that preamble and backstory is this: in this scenario, would I pass a ValidatorFactory to the Processor, and have the Factory instantiate the appropriate type of Validator based on the state of the Business Object? And if so, would I register each type with the IoC container, or just the Factory?
Thanks for your thoughts on this matter!!
That's a vague question, as you don't have a problem yet, only the idea.
From what I understand from your question, I'd say:
The IoC container solves the problem of creating the new object, not exactly deciding which object to create. In most IoC containers you can, at some level, choose the implementation you're asking for, but in your case the logic looks very application-centric, and no IoC container will help you decide which one to use. In that case, you should indeed have a factory passed to your processor, where you can ask for something like factory.CreateValidatorFrom(myBusinessObject).
Internally, that factory can still use DI to instantiate each component. If you use .NET Core DI, for example, you can pass an IServiceProvider to the factory and call serviceProvider.GetService<CAValidator>() inside it. All DI providers have an object like that.
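A minimal sketch of that arrangement, assuming .NET Core's built-in container (the ValidatorFactory class, the BusinessObject type, and its State property are hypothetical):

using System;
using Microsoft.Extensions.DependencyInjection;

public class ValidatorFactory
{
    private readonly IServiceProvider serviceProvider;

    public ValidatorFactory(IServiceProvider serviceProvider)
    {
        this.serviceProvider = serviceProvider;
    }

    // The factory owns the application-centric decision;
    // the container owns construction and dependencies.
    public IValidator CreateValidatorFrom(BusinessObject bo)
    {
        switch (bo.State)
        {
            case "CA": return serviceProvider.GetRequiredService<CAValidator>();
            case "NY": return serviceProvider.GetRequiredService<NYValidator>();
            default: throw new NotSupportedException($"No validator for state {bo.State}.");
        }
    }
}

Registration then covers each validator plus the factory itself, e.g. services.AddTransient<CAValidator>(), services.AddTransient<NYValidator>(), and services.AddSingleton<ValidatorFactory>().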
So, in a sense, the factory and the DI container can co-exist, and each of them solves part of the problem. If you're using DI, you shouldn't ever have to instantiate the actual class. That makes it easier for each validator to have its own dependencies, and you don't have to care how to get them.
And yes, in that case you'd register each validator in the DI container, and also the factory. In cases like this, you can easily loop through all of them via reflection and register them dynamically by name or interface, if that is bothering you.
And in the end, if you're using .NET Core, I strongly suggest you simply use the built-in DI. It's simple and good enough for most cases.
Validation is a cross-cutting concern, so typically the validation service doesn't know about the details of the object it is validating. It only knows about its boolean valid state and how to get the validation errors that are typically displayed on the UI.
As a cross-cutting concern, the validation rules are abstracted from the services that read them. This is usually done via an interface and/or .NET attributes.
public class ValidateMe : IValidatableObject
{
    [Required]
    public bool Enable { get; set; }

    [Range(1, 5)]
    public int Prop1 { get; set; }

    [Range(1, 5)]
    public int Prop2 { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (!this.Enable)
        {
            // Return a valid result here: I don't care whether Prop1 and Prop2
            // are out of range if the whole object is not "enabled".
            yield break;
        }

        // Check whether Prop1 and Prop2 meet their range requirements here
        // and return accordingly.
        if (this.Prop1 < 1 || this.Prop1 > 5)
            yield return new ValidationResult("Prop1 is out of range.", new[] { nameof(this.Prop1) });

        if (this.Prop2 < 1 || this.Prop2 > 5)
            yield return new ValidationResult("Prop2 is out of range.", new[] { nameof(this.Prop2) });
    }
}
The validation service then only needs a mechanism to process the rules (returning true/false for each rule) in order to ensure all of them are valid, and a way to retrieve the errors for display.
The validation service can do all of this simply by being passed the model (the runtime state).
if (validationService.IsValid(model))
{
    // persist
}
This can also be done using a proxy pattern to ensure that validation always happens when the interface and/or attributes are available to process.
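As a minimal sketch (the service class and its IsValid/GetErrors shape are hypothetical; Validator and ValidationContext are the standard DataAnnotations types), such a mechanism can be as small as:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Processes the attributes ([Required], [Range], ...) as well as
// IValidatableObject.Validate, given only the runtime model.
public class DataAnnotationsValidationService
{
    public bool IsValid(object model)
    {
        return Validator.TryValidateObject(
            model, new ValidationContext(model), null, validateAllProperties: true);
    }

    public IReadOnlyList<ValidationResult> GetErrors(object model)
    {
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(
            model, new ValidationContext(model), results, validateAllProperties: true);
        return results;
    }
}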
NOTE: The term Business Object implies that you want to build some sort of Smart Object framework, using objects that know how to save and retrieve their own state (internally implementing CRUD). This sort of design doesn't lend itself to DI very well. That isn't to say you can't use DI and a Smart Object design at the same time; it is just more difficult to build, more difficult to test, and then more difficult to maintain.
A design that uses models to abstract the runtime state of the application away from the services that use those models makes for an easier path. A design that I have found works pretty well for some applications is Command Query Segregation, which turns every update or request for data into its own object. It works well with a proxy or a decorator pattern to implement cross-cutting concerns. It sounds strange if you are used to working with smart objects, but a loosely coupled design like this is simpler to test, which makes it just as reliable. And since query and command classes are used like
var productDetails = this.queryProcessor.Execute(new GetProductDetailsQuery
{
    ProductId = id
});
Or
// This command executes a long and complicated workflow,
// but this is all that is done inside of the action method
var command = new AddToCartCommand
{
    ProductId = model.Id,
    Quantity = model.Qty,
    Selections = model.Selections,
    ShoppingCartId = this.anonymousIdAccessor.AnonymousID
};

this.addToCartHandler.Handle(command);
it is almost as easy to use. You can even break different steps of a complicated workflow out into their own commands, so the workflow can be tested and verified at each step of the way, which is something that is difficult to do in a smart object design.
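The plumbing behind calls like these is small. Here is a minimal sketch of the shapes implied by the queryProcessor and addToCartHandler usage above (names assumed, not a specific library):

// A query declares the result type it produces.
public interface IQuery<TResult> { }

public interface IQueryProcessor
{
    TResult Execute<TResult>(IQuery<TResult> query);
}

// One handler per command; cross-cutting concerns such as validation
// can be wrapped around this interface with decorators.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class ProductDetails { /* ... */ }

public class GetProductDetailsQuery : IQuery<ProductDetails>
{
    public int ProductId { get; set; }
}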

Should I abstract the validation framework from Domain layer?

I am using FluentValidation to validate my service operations. My code looks like:
using FluentValidation;

public interface IUserService
{
    void Add(User user);
}

public class UserService : IUserService
{
    private readonly IUserRepository userRepository;

    public void Add(User user)
    {
        new UserValidator().ValidateAndThrow(user);
        userRepository.Save(user);
    }
}
UserValidator implements FluentValidation's AbstractValidator<User>.
DDD says that the domain layer has to be technology independent.
What I am doing is using a validation framework instead of custom exceptions.
Is it a bad idea to put a validation framework in the domain layer?
Should it be abstracted away, just like the repository?
Well, I see a few problems with your design even if you shield your domain from the framework by declaring an IUserValidator interface.
At first, it seems as if that would lead to the same abstraction strategy as for the Repository and other infrastructure concerns, but there's a huge difference in my opinion.
When using repository.Save(...), you actually do not care at all about the implementation from the domain perspective, because how to persist things is not a domain concern.
However, invariant enforcement is a domain concern, and you shouldn't have to dig into infrastructure details (the UserValidator can now be seen as such) to see what the invariants consist of. That's basically what you will end up doing if you go down that path, since the rules would be expressed in the framework's terms and would live outside the domain.
Why would it live outside?
domain -> IUserRepository
infrastructure -> HibernateUserRepository
domain -> IUserValidator
infrastructure -> FluentUserValidator
Always-valid entities
Perhaps there's a more fundamental issue with your design, and you wouldn't even be asking that question if you adhered to this school of thought: always-valid entities.
From that point of view, invariant enforcement is the responsibility of the domain entity itself, and an entity shouldn't even be able to exist without being valid. Therefore, invariant rules are simply expressed as contracts, and exceptions are thrown when these are violated.
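For instance, a minimal sketch of an always-valid entity (the name rule is just an example invariant):

using System;

public class User
{
    public string Name { get; private set; }

    public User(string name)
    {
        // Contract: a User cannot exist without a name.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A user must have a name.", nameof(name));

        this.Name = name;
    }
}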
The reasoning behind this is that a lot of bugs come from the fact that objects end up in a state they should never have been in. To expose an example I've read from Greg Young:
Let's propose we now have a SendUserCreationEmailService that takes a UserProfile ... how can we rationalize in that service that Name is not null? Do we check it again? Or more likely ... you just don't bother to check and "hope for the best" - you hope that someone bothered to validate it before sending it to you. Of course using TDD one of the first tests we should be writing is that if I send a customer with a null name that it should raise an error. But once we start writing these kinds of tests over and over again we realize ... "wait if we never allowed name to become null we wouldn't have all of these tests" - Greg Young, commenting on http://jeffreypalermo.com/blog/the-fallacy-of-the-always-valid-entity/
Now don't get me wrong: obviously you cannot enforce all validation rules that way, since some rules are specific to certain business operations, which prohibits that approach (e.g. saving draft copies of an entity). But those rules aren't to be viewed the same way as invariant enforcement, which covers rules that apply in every scenario (e.g. a customer must have a name).
Applying the always-valid principle to your code
If we now look at your code and try to apply the always-valid approach, we clearly see that the UserValidator object doesn't have its place:
public class UserService : IUserService
{
    public void Add(User user)
    {
        // We couldn't even have made it this far with an invalid User
        new UserValidator().ValidateAndThrow(user);
        userRepository.Save(user);
    }
}
Therefore, there's no place for FluentValidation in the domain at this point. If you still aren't convinced, ask yourself how you would integrate value objects. Will you have a UsernameValidator to validate a Username value object every time it's instantiated? Clearly, that doesn't make any sense, and the use of value objects would be quite hard to integrate with the non-always-valid approach.
How do we report all errors back when exceptions are thrown then?
That's actually something I struggled with, and I kept asking myself that question for a while (and I'm still not entirely convinced about what I'll be saying).
Basically, what I've come to understand is that it isn't the job of the domain to collect and return errors; that's a UI concern. If invalid data makes its way up to the domain, it just throws an exception.
Therefore, frameworks like FluentValidation will find their natural home in the UI and will be validating view models rather than domain entities.
I know, it seems hard to accept that there will be some level of duplication, but this is mainly because you are probably a full-stack developer like me, dealing with both the UI and the domain, when in fact those can and probably should be viewed as entirely different projects. Also, just like the view model and the domain model, view model validation and domain validation might be similar, but they serve different purposes.
Also, if you're still concerned about being DRY, someone once told me that code reuse is also "coupling", and I think that fact is particularly important here.
Dealing with deferred validation in the domain
I will not re-explain those here, but there are various approaches to deal with deferred validation in the domain, such as the Specification pattern and the Deferred Validation approach described by Ward Cunningham in his Checks pattern language. If you have the Implementing Domain-Driven Design book by Vaughn Vernon, you can also read pages 208-215.
It's always a question of trade-offs
Validation is an extremely hard subject, and the proof is that as of today people still don't agree on how it should be done. There are so many factors, but in the end what you want is a solution that is practical, maintainable, and expressive. You cannot always be a purist and must accept the fact that some rules will be broken (e.g. you might have to leak some unobtrusive persistence details into an entity in order to use your ORM of choice).
Therefore, if you think that you can live with the fact that some FluentValidation details make it into your domain and that it's more practical that way, well, I can't really tell if it will do more harm than good in the long run, but I wouldn't do it.
The answer to your question depends on what kind of validation you want to put into the validator class. Validation can be part of the domain model, and in your case you've implemented it with FluentValidation; I don't see any problems with that. The key thing about a domain model is that you can use it everywhere, for example if your project contains a web part, an API, or integrations with other subsystems. Each module references your domain model, and it works the same for all of them.
If I understood it correctly, I see no problem whatsoever in doing this, as long as it is abstracted as an infrastructure concern, just like your repo abstracts the persistence technology.
As an example, I have created for my projects an IObjectValidator that returns validators by object type, and a static implementation of it, so that I'm not coupled to the technology itself.
public interface IObjectValidator
{
    void Validate<T>(T instance, params string[] ruleSet);
    Task ValidateAsync<T>(T instance, params string[] ruleSet);
}
And then I implemented it with Fluent Validation just like this:
public class FluentValidationObjectValidator : IObjectValidator
{
    private readonly IDependencyResolver dependencyResolver;

    public FluentValidationObjectValidator(IDependencyResolver dependencyResolver)
    {
        this.dependencyResolver = dependencyResolver;
    }

    public void Validate<T>(T instance, params string[] ruleSet)
    {
        var validator = this.dependencyResolver
            .Resolve<IValidator<T>>();

        var result = ruleSet.Length == 0
            ? validator.Validate(instance)
            : validator.Validate(instance, ruleSet: ruleSet.Join());

        if (!result.IsValid)
            throw new ValidationException(MapValidationFailures(result.Errors));
    }

    public async Task ValidateAsync<T>(T instance, params string[] ruleSet)
    {
        var validator = this.dependencyResolver
            .Resolve<IValidator<T>>();

        var result = ruleSet.Length == 0
            ? await validator.ValidateAsync(instance)
            : await validator.ValidateAsync(instance, ruleSet: ruleSet.Join());

        if (!result.IsValid)
            throw new ValidationException(MapValidationFailures(result.Errors));
    }

    private static List<ValidationFailure> MapValidationFailures(
        IEnumerable<FluentValidationResults.ValidationFailure> failures)
    {
        return failures
            .Select(failure =>
                new ValidationFailure(
                    failure.PropertyName,
                    failure.ErrorMessage,
                    failure.AttemptedValue,
                    failure.CustomState))
            .ToList();
    }
}
Please note that I have also abstracted my IoC container with an IDependencyResolver so that I can use whatever implementation I want (I'm using Autofac at the moment).
So here is some bonus code for Autofac ;)
public class FluentValidationModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // registers type validators
        builder.RegisterGenerics(typeof(IValidator<>));

        // registers the Object Validator and configures
        // the Ambient Singleton container
        builder
            .Register(context =>
                SystemValidator.SetFactory(() =>
                    new FluentValidationObjectValidator(
                        context.Resolve<IDependencyResolver>())))
            .As<IObjectValidator>()
            .InstancePerLifetimeScope()
            .AutoActivate();
    }
}
The code could be missing some of my helpers and extensions, but I believe it would be more than enough to get you going.
I hope I have helped :)
EDIT:
Since some fellow coders prefer not to use the "service locator anti-pattern", here is a very simple example of how to remove it and still be happy :)
The code provides a dictionary property that should be filled with all your validators, keyed by type.
public class SimpleFluentValidationObjectValidator : IObjectValidator
{
    public SimpleFluentValidationObjectValidator()
    {
        this.Validators = new Dictionary<Type, IValidator>();
    }

    public Dictionary<Type, IValidator> Validators { get; private set; }

    public void Validate<T>(T instance, params string[] ruleSet)
    {
        var validator = this.Validators[typeof(T)];

        if (ruleSet.Length > 0) // no ruleset option for this example
            throw new NotImplementedException();

        var result = validator.Validate(instance);

        if (!result.IsValid)
            throw new ValidationException(MapValidationFailures(result.Errors));
    }

    public Task ValidateAsync<T>(T instance, params string[] ruleSet)
    {
        throw new NotImplementedException();
    }

    private static List<ValidationFailure> MapValidationFailures(
        IEnumerable<FluentValidationResults.ValidationFailure> failures)
    {
        return failures
            .Select(failure =>
                new ValidationFailure(
                    failure.PropertyName,
                    failure.ErrorMessage,
                    failure.AttemptedValue,
                    failure.CustomState))
            .ToList();
    }
}

Reflection-based injection vs. dynamic proxy: Practical considerations?

I'm working on some framework-ish code designed to execute a huge number of operations (hundreds of thousands), all of which use the same basic components, but need to accept operation-specific configuration data from an external source.
Assume for the moment that there's a configuration repository which, given the appropriate list of setting names, knows how to load these settings efficiently and store them in a type like the following:
public interface IConfiguration
{
    dynamic Get(string key);
    void Set(string key, dynamic value);
}
What I'm planning to do is implement either some fluent mapping syntax or just decorate the component classes with attributes, like so:
public class MyComponent : IActivity
{
    [Configuration("Threshold")]
    public virtual int Threshold { get; set; }

    [Configuration("SomeKey", Persistence = ConfigPersistence.Save)]
    public virtual string SomeSetting { get; set; }
}
You get the picture... hopefully. What's important to note is that some properties actually need to be saved back to the repository, so conventional DI libraries don't work here; and even if they did, they're blunt instruments not designed to spin up hundreds of thousands of components and load/save millions of attributes. In other words, I don't think I'm reinventing the wheel, but if somebody wants to try to convince me otherwise, feel free.
Anyway, I'm considering two possible options to handle the "injection" of configuration data into these component instances:
1. Plain vanilla reflection: scan the type for configuration attributes and save the member info (along with the config key) in a static dictionary. Then use reflection methods such as PropertyInfo.SetValue and PropertyInfo.GetValue for the injection and extraction (for lack of a better term). This is similar to the approach used by most DI libraries (see the sketch after this list).
2. Use a dynamic proxy such as Castle and hook up an interceptor to the decorated properties, so that instead of referencing private/autogenerated fields, they reference the IConfiguration instance (i.e. the get method calls IConfiguration.Get and the set method calls IConfiguration.Set). This is similar to the approach used by NHibernate and other ORMs.
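A sketch of what option 1 could look like (names assumed; the ConfigurationAttribute shape is inferred from the [Configuration(...)] usage above):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Reflection;

// Assumed minimal shape of the attribute from the example above
// (the Persistence flag is omitted for brevity).
[AttributeUsage(AttributeTargets.Property)]
public class ConfigurationAttribute : Attribute
{
    public string Key { get; }
    public ConfigurationAttribute(string key) { Key = key; }
}

public static class ConfigurationInjector
{
    // Scanned once per type and cached in a static dictionary.
    private static readonly ConcurrentDictionary<Type, (PropertyInfo Property, string Key)[]> map =
        new ConcurrentDictionary<Type, (PropertyInfo Property, string Key)[]>();

    public static void Inject(object component, IConfiguration configuration)
    {
        foreach (var (property, key) in GetMap(component.GetType()))
            property.SetValue(component, (object)configuration.Get(key));
    }

    public static void Extract(object component, IConfiguration configuration)
    {
        foreach (var (property, key) in GetMap(component.GetType()))
            configuration.Set(key, property.GetValue(component));
    }

    private static (PropertyInfo Property, string Key)[] GetMap(Type type) =>
        map.GetOrAdd(type, t => t
            .GetProperties()
            .Select(p => (Property: p, Attribute: p.GetCustomAttribute<ConfigurationAttribute>()))
            .Where(x => x.Attribute != null)
            .Select(x => (x.Property, x.Attribute.Key))
            .ToArray());
}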
The full implementation may end up being a fair amount of work, so I don't want to go too far down the wrong path before realizing I missed something.
So my question is, what are the pros/cons of either approach, and what are the pitfalls I need to avoid? I'm thinking in broad terms of performance, maintainability, idiot-proofing, etc.
Or, alternatively, are there other, quicker paths to this goal, preferably which don't have steep learning curves?
A dynamic proxy is the much better approach. Define a "configuration" interceptor that injects the value from the configuration into your component (preferably lazily). Using a dynamic proxy, I'd also implement IDisposable on the proxied component, so that when the object is disposed or GC'd, it persists its configuration values according to the Persistence flag set in your attribute.
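A minimal sketch of such an interceptor with Castle DynamicProxy (reusing the ConfigurationAttribute stub sketched in the question; lazy loading and the dispose-time save are left out):

using System.Reflection;
using Castle.DynamicProxy;

public class ConfigurationInterceptor : IInterceptor
{
    private readonly IConfiguration configuration;

    public ConfigurationInterceptor(IConfiguration configuration)
    {
        this.configuration = configuration;
    }

    public void Intercept(IInvocation invocation)
    {
        var method = invocation.Method;
        bool isGetter = method.IsSpecialName && method.Name.StartsWith("get_");
        bool isSetter = method.IsSpecialName && method.Name.StartsWith("set_");

        if (isGetter || isSetter)
        {
            // Property accessors are named get_X/set_X; strip the prefix.
            var property = invocation.TargetType.GetProperty(method.Name.Substring(4));
            var attribute = property?.GetCustomAttribute<ConfigurationAttribute>();

            if (attribute != null)
            {
                if (isGetter)
                    invocation.ReturnValue = (object)configuration.Get(attribute.Key);
                else
                    configuration.Set(attribute.Key, invocation.Arguments[0]);
                return;
            }
        }

        invocation.Proceed(); // not a configuration-backed member
    }
}

A component would then be created with something like new ProxyGenerator().CreateClassProxy<MyComponent>(new ConfigurationInterceptor(config)); this is also why the decorated properties in the question are virtual.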
