I am using FluentValidation to validate my service operations. My code looks like:
using FluentValidation;

public interface IUserService
{
    void Add(User user);
}

public class UserService : IUserService
{
    private readonly IUserRepository userRepository;

    public void Add(User user)
    {
        new UserValidator().ValidateAndThrow(user);
        userRepository.Save(user);
    }
}
UserValidator inherits from FluentValidation.AbstractValidator<User>.
DDD says that the domain layer has to be technology independent.
What I am doing is using a validation framework instead of custom exceptions.
Is it a bad idea to put a validation framework in the domain layer? Or should I abstract it behind an interface, just like the repository abstraction?
Well, I see a few problems with your design, even if you shield your domain from the framework by declaring an IUserValidator interface.
At first it seems as if that would lead to the same abstraction strategy as for the Repository and other infrastructure concerns, but in my opinion there's a huge difference.
When using repository.Save(...), you do not actually care about the implementation from the domain perspective, because how to persist things is not a domain concern.
However, invariant enforcement is a domain concern, and you shouldn't have to dig into infrastructure details (the UserValidator can now be seen as such) to see what those invariants consist of. That's basically what you will end up doing if you go down that path, since the rules would be expressed in the framework's terms and would live outside the domain.
Why would it live outside?
domain -> IUserRepository
infrastructure -> HibernateUserRepository
domain -> IUserValidator
infrastructure -> FluentUserValidator
Always-valid entities
Perhaps there's a more fundamental issue with your design, and you wouldn't even be asking that question if you adhered to that school of thought: always-valid entities.
From that point of view, invariant enforcement is the responsibility of the domain entity itself, which therefore shouldn't even be able to exist without being valid. Invariant rules are simply expressed as contracts, and exceptions are thrown when these are violated.
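For illustration, a minimal sketch of what that could look like for the User entity from the question (the guard clause is an assumption, not code from the question):
public class User
{
    public string Name { get; private set; }

    public User(string name)
    {
        // Invariant: a user must always have a name, so the entity
        // cannot even be constructed in an invalid state.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A user must have a name.", "name");

        this.Name = name;
    }
}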
The reasoning behind this is that a lot of bugs come from the fact that objects end up in a state they should never have been in. To expose an example I've read from Greg Young:
Let's propose we now have a SendUserCreationEmailService that takes a
UserProfile ... how can we rationalize in that service that Name is
not null? Do we check it again? Or more likely ... you just don't
bother to check and "hope for the best" you hope that someone bothered
to validate it before sending it to you. Of course using TDD one of
the first tests we should be writing is that if I send a customer with
a null name that it should raise an error. But once we start writing
these kinds of tests over and over again we realize ... "wait if we
never allowed name to become null we wouldn't have all of these tests" - Greg Young commenting on http://jeffreypalermo.com/blog/the-fallacy-of-the-always-valid-entity/
Now don't get me wrong: obviously you cannot enforce all validation rules that way, since some rules are specific to certain business operations, which prohibits that approach (e.g. saving draft copies of an entity). But those rules aren't to be viewed the same way as invariant enforcement, which covers rules that apply in every scenario (e.g. a customer must have a name).
Applying the always-valid principle to your code
If we now look at your code and try to apply the always-valid approach, we clearly see that the UserValidator object doesn't have its place.
public class UserService : IUserService
{
    public void Add(User user)
    {
        // We couldn't even make it this far with an invalid User,
        // so the UserValidator call is simply gone.
        userRepository.Save(user);
    }
}
Therefore, there's no place for FluentValidation in the domain at this point. If you still aren't convinced, ask yourself how you would integrate value objects. Would you have a UsernameValidator to validate a Username value object every time it's instantiated? Clearly, that doesn't make any sense, and value objects would be quite hard to integrate with the non-always-valid approach.
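To contrast, a rough sketch of an always-valid Username value object (hypothetical, for illustration only):
public class Username
{
    public string Value { get; private set; }

    public Username(string value)
    {
        // The value object validates itself on construction;
        // no external UsernameValidator is needed.
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("A username cannot be empty.", "value");

        this.Value = value;
    }
}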
How do we report all errors back when exceptions are thrown then?
That's actually something I struggled with, and I've been asking myself that question for a while (and I'm still not entirely convinced about what I'll be saying).
Basically, what I've come to understand is that it isn't the job of the domain to collect and return errors; that's a UI concern. If invalid data makes its way up to the domain, it just throws at you.
Therefore, frameworks like FluentValidation will find their natural home in the UI and will be validating view models rather than domain entities.
I know, it seems hard to accept that there will be some level of duplication, but this is mainly because you are probably a full-stack developer like me, dealing with both the UI and the domain, when in fact those can, and probably should, be viewed as entirely different projects. Also, just like the view model and the domain model, view model validation and domain validation might be similar, but they serve different purposes.
Also, if you're still concerned about being DRY, someone once told me that code reuse is also "coupling" and I think that fact is particularly important here.
Dealing with deferred validation in the domain
I will not re-explain those here, but there are various approaches to deal with deferred validations in the domain such as the Specification pattern and the Deferred Validation approach described by Ward Cunningham in his Checks pattern language. If you have the Implementing Domain-Driven Design book by Vaughn Vernon, you can also read from pages 208-215.
It's always a question of trade-offs
Validation is an extremely hard subject, and the proof is that, as of today, people still don't agree on how it should be done. There are so many factors, but in the end you want a solution that is practical, maintainable and expressive. You cannot always be a purist and must accept that some rules will be broken (e.g. you might have to leak some unobtrusive persistence details into an entity in order to use your ORM of choice).
Therefore, if you think that you can live with the fact that some FluentValidation details make it into your domain, and that it's more practical that way, well, I can't really tell whether it will do more harm than good in the long run, but I wouldn't do it.
The answer to your question depends on what kind of validation you want to put into the validator class. Validation can be part of the domain model, and in your case you've implemented it with FluentValidation; I don't see any problems with that. The key thing about a domain model is that you can use it everywhere, for example if your project contains a web part, an API, or integrations with other subsystems. Each module references your domain model, and it works the same for all of them.
If I understood it correctly, I see no problem whatsoever in doing this, as long as it is abstracted as an infrastructure concern, just like your repo abstracts the persistence technology.
As an example, I have created for my projects an IObjectValidator that returns validators by object type, and a static implementation of it, so that I'm not coupled to the technology itself.
public interface IObjectValidator
{
    void Validate<T>(T instance, params string[] ruleSet);
    Task ValidateAsync<T>(T instance, params string[] ruleSet);
}
And then I implemented it with Fluent Validation just like this:
public class FluentValidationObjectValidator : IObjectValidator
{
    private readonly IDependencyResolver dependencyResolver;

    public FluentValidationObjectValidator(IDependencyResolver dependencyResolver)
    {
        this.dependencyResolver = dependencyResolver;
    }

    public void Validate<T>(T instance, params string[] ruleSet)
    {
        var validator = this.dependencyResolver
            .Resolve<IValidator<T>>();

        // Join() is a custom string[] extension (one of the helpers
        // mentioned at the end of this answer).
        var result = ruleSet.Length == 0
            ? validator.Validate(instance)
            : validator.Validate(instance, ruleSet: ruleSet.Join());

        if(!result.IsValid)
            throw new ValidationException(MapValidationFailures(result.Errors));
    }

    public async Task ValidateAsync<T>(T instance, params string[] ruleSet)
    {
        var validator = this.dependencyResolver
            .Resolve<IValidator<T>>();

        var result = ruleSet.Length == 0
            ? await validator.ValidateAsync(instance)
            : await validator.ValidateAsync(instance, ruleSet: ruleSet.Join());

        if(!result.IsValid)
            throw new ValidationException(MapValidationFailures(result.Errors));
    }

    private static List<ValidationFailure> MapValidationFailures(
        IEnumerable<FluentValidationResults.ValidationFailure> failures)
    {
        return failures
            .Select(failure =>
                new ValidationFailure(
                    failure.PropertyName,
                    failure.ErrorMessage,
                    failure.AttemptedValue,
                    failure.CustomState))
            .ToList();
    }
}
Please note that I have also abstracted my IoC container with an IDependencyResolver, so that I can use whatever implementation I want (I'm using Autofac at the moment).
So here is some bonus code for Autofac ;)
public class FluentValidationModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // registers type validators
        builder.RegisterGenerics(typeof(IValidator<>));

        // registers the Object Validator and configures
        // the Ambient Singleton container
        builder
            .Register(context =>
                SystemValidator.SetFactory(() =>
                    new FluentValidationObjectValidator(
                        context.Resolve<IDependencyResolver>())))
            .As<IObjectValidator>()
            .InstancePerLifetimeScope()
            .AutoActivate();
    }
}
The code could be missing some of my helpers and extensions, but I believe it would be more than enough to get you going.
I hope I have helped :)
EDIT:
Since some fellow coders prefer not to use the "service locator anti-pattern", here is a very simple example of how to remove it and still be happy :)
The code provides a dictionary property that should be filled with all your validators by Type.
public class SimpleFluentValidationObjectValidator : IObjectValidator
{
    public SimpleFluentValidationObjectValidator()
    {
        this.Validators = new Dictionary<Type, IValidator>();
    }

    public Dictionary<Type, IValidator> Validators { get; private set; }

    public void Validate<T>(T instance, params string[] ruleSet)
    {
        var validator = this.Validators[typeof(T)];

        if(ruleSet.Length > 0) // no ruleset option for this example
            throw new NotImplementedException();

        var result = validator.Validate(instance);

        if(!result.IsValid)
            throw new ValidationException(MapValidationFailures(result.Errors));
    }

    public Task ValidateAsync<T>(T instance, params string[] ruleSet)
    {
        throw new NotImplementedException();
    }

    private static List<ValidationFailure> MapValidationFailures(
        IEnumerable<FluentValidationResults.ValidationFailure> failures)
    {
        return failures
            .Select(failure =>
                new ValidationFailure(
                    failure.PropertyName,
                    failure.ErrorMessage,
                    failure.AttemptedValue,
                    failure.CustomState))
            .ToList();
    }
}
I have an IoC question that for the moment is abstract: I have not yet chosen an IoC framework, nor started coding. I am still mentally planning the methods I am going to use for an imminent project.
My coding style generally follows this pattern:
A Processor of some kind is instantiated and passed a Business Object.
The processor in turn will instantiate a Validator to validate that the passed business object is valid for the given process.
If the Business Object is found to be valid, then a Persistence Object will be instantiated. The Persistence object is responsible for transformations such as encryption, caching, and grouping multiple requests together in a single transaction for object graphs.
Then, the persistence object instantiates a DataLayer that will have the job of persisting the Business Object to the database, or pulling it from the database as the case may be (or a text file, or a web service, wherever the data may live).
My ideal structure is that a Processor knows about a Validator and a Persistence object, but not an AccessLayer. A persistence object knows about an access layer, but cannot directly instantiate or invoke a process. This way there are clearly defined layers that can be separated as necessary.
Finally, this process is agnostic to input or output and unchanged regardless of the application type. In other words, I could use the same Processor to add a business object in a web app as I would in a desktop app. Obviously, the Model/View/Controller would change depending on the app type, but the rules for adding or selecting a business object remain universal.
My problem is this. I don't like that my AccessLayer in turn needs to pull the connection string from the config file, for instance. Maybe I want my users to be able to specify a config file or a Db Table for settings. Having the access layer check the config file to see if it should use the config file is circular and silly. And the Access Layer cannot likewise call a Persistence object to pull the settings, or query the Application Framework to see if it is a web app with a Web.Config or a desktop app with DbSettings.
So I was thinking that the best thing for me to do is to use an IoC container of some kind. I could then inject whatever settings I needed. This could also allow me to mock objects for testing, which is another difficult (but not impossible) task with my current method. So from my reading, my vague Processor implementation would look like this:
public class VagueProcessor
{
    public VagueProcessor(
        IValidator validator,
        IPersistence persistence,
        IAccessLayer accessLayer,
        ISettings settings) { ... }
}
Here is my snag. In the application I am planning, the Business Objects have a variety of implementations, each with their own configurable rules. Say one BO is for the state of CA and another for the state of NY, and both states have their own special rules to be validated by their governing bodies. So the validator could be a CAValidator or a NYValidator, just depending on the state of the Business Object.
Ok, so my question after all that preamble and backstory is this: in this scenario, would I pass a ValidatorFactory to the Processor and the Factory would instantiate the appropriate type of Validator based on the state of the Business Object? And if so, would I register each type with the IoC container, or just the Factory?
Thanks for your thoughts on this matter!!
That's a vague question as you don't have a problem yet, only the idea.
From what I understand from your question, I'd say:
The IoC container solves the problem of creating the new object, not exactly deciding which object to create. In most IoC containers you can, at some level, choose the implementation you're asking for, but in your case the logic looks very application-centric, and no IoC container will help you decide which one to use. In that case, you should indeed have a factory passed to your processor where you can ask something like factory.CreateValidatorFrom(myBusinessObject).
Internally, that factory can still use DI to instantiate each component. If you use .NET Core DI, for example, you can pass an IServiceProvider to the factory and call serviceProvider.GetService<CAValidator>() inside it. All DI providers will have an object like that.
So, in a sense, the factory and the DI can co-exist, and each of them solves part of the problem. If you're using DI, you shouldn't ever have to instantiate the actual class. That will make it easier for each validator to have its own dependencies, and you won't have to care how to get them.
And yes, in that case you'd register each validator in the DI container, and also the factory. In cases like this, you can easily loop through all of them via reflection and register them dynamically by name or interface, if that is bothering you.
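As a rough sketch of how the factory and the built-in DI could cooperate (CAValidator and NYValidator come from your question; everything else here is an assumption):
using System;
using Microsoft.Extensions.DependencyInjection;

public class ValidatorFactory
{
    private readonly IServiceProvider serviceProvider;

    public ValidatorFactory(IServiceProvider serviceProvider)
    {
        this.serviceProvider = serviceProvider;
    }

    public IValidator CreateValidatorFrom(BusinessObject businessObject)
    {
        // The application-centric decision lives in the factory;
        // the container only resolves the chosen implementation.
        switch (businessObject.State)
        {
            case "CA": return this.serviceProvider.GetRequiredService<CAValidator>();
            case "NY": return this.serviceProvider.GetRequiredService<NYValidator>();
            default: throw new NotSupportedException("No validator for state " + businessObject.State);
        }
    }
}

// Registration: each validator and the factory itself go into the container.
// services.AddTransient<CAValidator>();
// services.AddTransient<NYValidator>();
// services.AddSingleton<ValidatorFactory>();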
And in the end, if you're using .NET Core, I strongly suggest simply using the built-in DI. It's simple and good enough for most cases.
Validation is a crosscutting concern, so typically the validation service doesn't know about the details of the object it is validating. It only knows about its boolean valid state and how to get validation errors that are typically displayed on the UI.
As a crosscutting concern, the validation rules are abstracted from the services that read them. This is usually done via an interface and/or .NET attributes.
public class ValidateMe : IValidatableObject
{
    [Required]
    public bool Enable { get; set; }

    [Range(1, 5)]
    public int Prop1 { get; set; }

    [Range(1, 5)]
    public int Prop2 { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (!this.Enable)
        {
            /* Return a valid result here.
             * I don't care if Prop1 and Prop2 are out of range
             * if the whole object is not "enabled".
             */
            yield break;
        }

        /* Check if Prop1 and Prop2 meet their range requirements
         * here and return accordingly.
         */
        if (this.Prop1 < 1 || this.Prop1 > 5)
            yield return new ValidationResult("Prop1 is out of range.", new[] { "Prop1" });

        if (this.Prop2 < 1 || this.Prop2 > 5)
            yield return new ValidationResult("Prop2 is out of range.", new[] { "Prop2" });
    }
}
The validation service then only needs to have a mechanism to process the rules (returning a true/false for each rule) in order to ensure all of them are valid, and a way to retrieve the errors for display.
The validation service can do all of this by simply passing the model (the runtime state) to the service.
if (validationService.IsValid(model))
{
    // persist
}
This can also be done using a proxy pattern to ensure that it always happens if the interface and/or attributes are available to process.
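For example, a sketch of how such a proxy could guarantee the check (IRepository, IValidationService and the exception type are all assumed names, not from any particular framework):
public class ValidatingRepositoryProxy<T> : IRepository<T>
{
    private readonly IRepository<T> inner;
    private readonly IValidationService validationService;

    public ValidatingRepositoryProxy(IRepository<T> inner, IValidationService validationService)
    {
        this.inner = inner;
        this.validationService = validationService;
    }

    public void Save(T model)
    {
        // The proxy runs the crosscutting validation before delegating
        // to the real repository, so no caller can forget the check.
        if (!this.validationService.IsValid(model))
            throw new InvalidOperationException("The model failed validation.");

        this.inner.Save(model);
    }
}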
NOTE: The term Business Object implies that you want to build some sort of Smart Object Framework using objects that know how to save and retrieve their own state (internally implementing CRUD). This sort of design doesn't lend itself to DI very well. That isn't to say you can't use DI and a Smart Object design at the same time, it is just more difficult to build, more difficult to test, and more difficult to maintain.
A design that uses models to abstract the runtime state of the application away from the services that use those models makes for an easier path. A design that I have found works pretty well for some applications is Command Query Segregation, which turns every update or request for data into its own object. It works well with a proxy or a decorator pattern for implementing crosscutting concerns. It sounds strange if you are used to working with smart objects, but a loosely coupled design like this is simpler to test, which makes it just as reliable. And since query and command classes are used like
var productDetails = this.queryProcessor.Execute(new GetProductDetailsQuery
{
    ProductId = id
});
Or
// This command executes a long and complicated workflow,
// but this is all that is done inside of the action method
var command = new AddToCartCommand
{
    ProductId = model.Id,
    Quantity = model.Qty,
    Selections = model.Selections,
    ShoppingCartId = this.anonymousIdAccessor.AnonymousID
};

this.addToCartHandler.Handle(command);
it is almost as easy to use. You can even break out different steps of a complicated workflow into their own commands, so they can be tested and verified at each step of the way, which is something that is difficult to do with a smart object design.
Considering this class with business logic:
public static class OrderShipper
{
    public static void ShipOrder(Order order)
    {
        AuthorizationHelper.AuthorizedUser();

        using (new PerformanceProfiler())
        {
            OperationRetryHelper.HandleWithRetries(() => ShipOrderInTransaction(order));
        }
    }

    private static void ShipOrderInTransaction(Order order)
    {
        using (var transaction = new TransactionHelper())
        {
            ShipOrderInternal(order);
            transaction.Commit();
        }
    }

    private static void ShipOrderInternal(Order order)
    {
        // lots of business logic
    }
}
The class contains some business logic and executes some crosscutting concerns as well. Although there is no doubt that this class violates the Open/Closed Principle, does it violate the Single Responsibility Principle?
I'm in doubt, since the class itself is not responsible for authorizing the user, for profiling the performance and for handling the transaction.
There is no question that this is poor design, since the class is still (statically) depending on those crosscutting concerns. But still: is it violating the SRP? If so, why?
It's a good question, though the title is slightly misleading (it's unlikely you can build an application without "calling into other code"). Remember that the SOLID principles are more guidelines than absolute rules that must be followed; if you take SRP to its logical conclusion, you will end up with one method per class. The way to minimise the impact of crosscutting concerns is to create a facade that is as easy as possible to use. In your example you have done this well: each crosscutting concern only takes one line.
Another way to achieve this is through AOP, which is possible in C# by using PostSharp or through IoC interception.
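For illustration, one way to hand-roll such interception in modern .NET is System.Reflection.DispatchProxy (this sketch and its profiling logic are my own, not from the question):
using System;
using System.Diagnostics;
using System.Reflection;

public class ProfilingProxy<T> : DispatchProxy where T : class
{
    private T decorated;

    // T must be an interface type for DispatchProxy to work.
    public static T Wrap(T decorated)
    {
        var proxy = Create<T, ProfilingProxy<T>>();
        ((ProfilingProxy<T>)(object)proxy).decorated = decorated;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // Every call on the proxied interface is timed here, keeping
        // the profiling concern out of the business class itself.
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return targetMethod.Invoke(this.decorated, args);
        }
        finally
        {
            stopwatch.Stop();
            Console.WriteLine("{0} took {1} ms", targetMethod.Name, stopwatch.ElapsedMilliseconds);
        }
    }
}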
There is nothing wrong with a class method coordinating other classes' activities; that's not breaking the SRP. You would break it if the logic of those classes were part of OrderShipper.
I'm not sure what PerformanceProfiler does, but it is the only component that looks weird in there.
Let's make it more visible by converting the class to a command:
// Command pattern
public class ShipOrder
{
    private readonly ITransactionFactory _transactionFactory;

    public ShipOrder(ITransactionFactory factory)
    {
        if (factory == null) throw new ArgumentNullException("factory");

        _transactionFactory = factory;
    }

    [PrincipalPermission(SecurityAction.Demand, Role = "User")]
    public void Execute(Order order)
    {
        if (order == null) throw new ArgumentNullException("order");

        using (new PerformanceProfiler())
        {
            HandleWithRetries(() => ShipOrderInTransaction(order));
        }
    }

    private void ShipOrderInTransaction(Order order)
    {
        using (var transaction = _transactionFactory.Create())
        {
            ShipOrderInternal(order);
            transaction.Commit();
        }
    }

    protected void ShipOrderInternal(Order order)
    {
        // business logic which is divided into different protected methods
    }
}
Hence you can call it using:
var cmd = new ShipOrder(transactionFactory);
cmd.Execute(order);
That's pretty solid.
Yes, it does break SRP, at least according to the class name.
the class itself is not responsible for authorizing the user, for profiling the performance and for handling the transaction.
You are answering yourself: it should contain only the order-shipping logic. And it shouldn't be static (why is it static?!).
The solution provided by @jgauffin is a possibility, although I'm not entirely convinced that the OrderShipper should know about a transaction, rather than simply being part of one. Also, the performance profiler, IMO, has no place in this class, but with only this information I can't suggest a solution. Profiling, though, is a crosscutting concern, and it might be better handled outside of this class, perhaps with an attribute.
Btw, using a message-driven approach (as suggested by @jgauffin) should allow the infrastructure to provide profiling and reliability (HandleWithRetries) support.
Assuming I have a list of financial transactions, I need to execute a list of validation rules against those transactions. An example would be: I have a transaction to purchase a product; however, first I need to validate that the account in the transaction has enough available funds, that the product is not sold out, etc. As a result of these many rules, the transaction will be marked as rejected and an error code will be specified.
Naturally I am thinking towards fronting my rules with an interface, allowing the executing code to roll through the rules executing each one until the first one rejects the transaction.
Each rule will need to be configured with parameters (e.g. ValidateMinimumBalance will need to know that minimumBalance = 30). The result of executing a rule can be as simple as setting the rejection code and the error code on the transaction object, or it can be as complicated as automatically modifying multiple other properties of the transaction.
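For example, I picture a configured rule looking roughly like this (a rough sketch; all of the type and property names are only illustrative):
public interface IValidationRule
{
    // Returns false after marking the transaction rejected.
    bool Validate(Transaction transaction);
}

public class ValidateMinimumBalance : IValidationRule
{
    private readonly decimal minimumBalance;

    public ValidateMinimumBalance(decimal minimumBalance)
    {
        this.minimumBalance = minimumBalance; // e.g. 30
    }

    public bool Validate(Transaction transaction)
    {
        if (transaction.Account.AvailableFunds < this.minimumBalance)
        {
            transaction.Rejected = true;
            transaction.ErrorCode = "INSUFFICIENT_FUNDS";
            return false;
        }

        return true;
    }
}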
My basic understanding of design patterns points me to either the Strategy or the Command pattern, but I am not entirely sure which one is better suited for this scenario.
Command Pattern
Each command will implement some sort of IValidate interface
The constructor of the command will take an instance of the transaction as the receiver in order to be able to read/validate the transaction as well as modify aspects of it. The constructor will also take an array of key/value pairs as parameters for the validation logic.
When I try to picture how the Strategy pattern fits this scenario, it looks very similar. In most examples the strategy is a simple object with a single method; however, in my case the strategy will need a reference to the transaction as well as the validation parameters.
Strategy is used more to swap out algorithms; it's not really for chaining validations. If you are going to have one validation per type, then you could use Strategy. If you find yourself needing multiple validators, or needing to reuse validators, then you are going to have to either find another way to do it (i.e. Chain of Responsibility) or use CoR within your strategy.
I would actually answer "other". I think a combination of the Chain of Responsibility pattern and the Composite pattern, or a Decorator for the validators, is much better suited to your needs.
Here is an example implementation at a high level.
Chain of Responsibility
The design would revolve around something like:
abstract class Handler
{
    protected Handler next;

    public Handler(Handler h)
    {
        this.next = h;
    }

    public abstract bool Validate(Request request);
    public abstract void Handle(Request request);
}

class CoreLogic : Handler
{
    public CoreLogic(Handler handle) : base(handle)
    {
    }

    public override bool Validate(Request request)
    {
        return true;
    }

    public override void Handle(Request request)
    {
        if (this.Validate(request))
        {
            if (next != null)
            {
                next.Handle(request);
            }
        }
    }
}

class ValidateBalance : Handler
{
    public ValidateBalance(Handler handle) : base(handle)
    {
    }

    public override bool Validate(Request request)
    {
        return true;
    }

    public override void Handle(Request request)
    {
        if (this.Validate(request))
        {
            if (next != null)
            {
                next.Handle(request);
            }
        }
    }
}

class MainApp
{
    static void Main()
    {
        Handler h = new ValidateBalance(new CoreLogic(null));
        h.Handle(new Request());
    }
}
Other useful links:
Chain of Responsibility (Wikipedia)
A Strategy would be something used to "parameterize" a Command (telling it how parts of the operation should be executed).
When I try to picture how the Strategy Pattern fits this scenario it looks very similar.
Similar? It should look identical.
The distinction is one of how the context and delegation work. In principle, a Command is the "active" agent, while a Strategy is injected into some active agent. That distinction is pretty subtle.
It barely changes the design. What does change is the expectation.
Command objects (more-or-less) stand alone. They're built to do their work, and then they can vanish. No one cares about them any more. Perhaps they also use the Memento pattern, and have some future life, but perhaps not.
Strategy objects (more-or-less) live with the object into which they're injected. A Strategy would be part of some larger object, and could be replaced by a different implementation without breaking or changing anything else.
But the essential interface is largely the same.
In most examples the strategy is a simple object with a single method,
Those are poor examples.
however in my case the strategy will need a reference to the transaction as well as validation parameters.
Not unusual. Nothing wrong with it.
but I am not entirely sure which one is better suited for this scenario
Neither :)
I strongly recommend looking at the Interpreter pattern. Your validator rules are really just predicates formulated over your transactions, and it's quite possible that soon you will need to combine these rules with AND, OR, NOT, etc.
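A minimal sketch of rules as combinable predicates (the leaf rule names in the usage comment are hypothetical):
public interface IRule
{
    bool IsSatisfiedBy(Transaction transaction);
}

public class AndRule : IRule
{
    private readonly IRule left;
    private readonly IRule right;

    public AndRule(IRule left, IRule right)
    {
        this.left = left;
        this.right = right;
    }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        // Composite node of the interpreter: combines two sub-rules.
        return this.left.IsSatisfiedBy(transaction)
            && this.right.IsSatisfiedBy(transaction);
    }
}

public class NotRule : IRule
{
    private readonly IRule inner;

    public NotRule(IRule inner)
    {
        this.inner = inner;
    }

    public bool IsSatisfiedBy(Transaction transaction)
    {
        return !this.inner.IsSatisfiedBy(transaction);
    }
}

// Usage (hypothetical leaf rules):
// var rule = new AndRule(new HasSufficientFunds(30), new NotRule(new ProductSoldOut()));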
I have to design a data validation framework, which basically breaks down into these components:
- Data Accessors: what's the best way to deal with these?
- Object Builders: how should I prepare for future object structures?
- Validators (Strategy pattern)
I have to apply some rules to the data, but I don't know what that data set will look like in the future.
So after a lot of thinking I am confused about whether the Rules should know what the object looks like, or whether it is possible to keep the Rules and the data independent (I have a feeling it is, but I don't know how). I am finding it hard to design the abstraction for the data set.
Any clues as to what direction I should think in?
language - C# (.NET)
platform - Windows
EDIT: exact question
In the Strategy pattern, is it possible for the Context to hold a generic object, and for the strategy to deal with it without knowing how the object is constructed?
In a validation framework, you usually have a set of "out-of-the-box" rules that don't know anything about what the object/entity looks like. For example, you might have a NotNullRule to check that a given property is not null:
// code is not REAL code!
var user = new User({ username = null, email = "hello@test.com" });
var notNullRule = new NotNullRule(typeof(User).GetProperty("username"), user);
var errors = notNullRule.Check();
Debug.Assert(errors[0] == "Property Username cannot be null");
It's common to use attributes to set up which validation strategy to use on which properties of a class. See this example here.
Validation frameworks usually let you create custom rules too, which might be domain-specific. For example:
public class CustomerIsEligibleForDiscount : Rule
{
    public void Check() { ... }
}
Hope this helps.
If you define an abstract/base class that calls an abstract ValidateRules() method when necessary, then you can implement ValidateRules() within each class that inherits from the abstract/base class.
Declaring the method as abstract in the base class enforces its implementation in any derived class.
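A minimal sketch of that idea (the names are illustrative, not from the question):
public abstract class ValidatedBusinessObject
{
    public void Save()
    {
        // Template method: the base class guarantees the rules run
        // before persistence; derived classes only supply the rules.
        this.ValidateRules();
        this.Persist();
    }

    protected abstract void ValidateRules();
    protected abstract void Persist();
}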
Is there any reason you can't use one of the many existing C# Validation Frameworks as a starting point? They tend to have conventions built in for handling these concerns.
http://validationframework.codeplex.com/
http://xval.codeplex.com/
http://msdn.microsoft.com/en-us/library/aa480193.aspx
http://www.asp.net/mvc/tutorials/validation-with-the-data-annotation-validators-cs
We are using ASP.NET with a lot of AJAX "Page Method" calls.
The WebServices defined in the Page invoke methods from our BusinessLayer.
To prevent hackers from calling the Page Methods, we want to implement some security in the BusinessLayer.
We are struggling with two different issues.
First one:
public List<Employees> GetAllEmployees()
{
    // do stuff
}
This method should only be called by authorized users with the role "HR".
Second one:
public Order GetMyOrder(int orderId)
{
    // do stuff
}
This Method should only be called by the owner of the Order.
I know it's easy to implement the security for each method like:
public List<Employees> GetAllEmployees()
{
    // check if the user is in Role HR
}
or
public Order GetMyOrder(int orderId)
{
    // check if the order.Owner = user
}
What I'm looking for is some pattern/best practice for implementing this kind of security in a generic way (without coding the if/then/else every time).
I hope you get what I mean :-)
User @mdma describes a bit about aspect-oriented programming. For this you will need to use an external library (such as the great PostSharp), because .NET doesn't have much AOP functionality. However, .NET already has an AOP mechanism for role-based security that can solve part of your problem. Look at the following example of standard .NET code:
[PrincipalPermission(SecurityAction.Demand, Role = "HR")]
public List<Employees> GetAllEmployees()
{
    // do stuff
}
The PrincipalPermissionAttribute is part of the System.Security.Permissions namespace and has been part of .NET since .NET 1.0. I've been using it for years to implement role-based security in my web applications. A nice thing about this attribute is that the .NET JIT compiler does all the weaving for you in the background, and you can even define it at the class level. In that case all members of that type will inherit the attribute and its security settings.
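Note that the attribute demands the role from the current thread's principal, so that principal must be set up first (ASP.NET does this for you once a user is authenticated). A minimal illustration:
using System.Security.Principal;
using System.Threading;

// Normally done by the framework per request; shown explicitly here.
Thread.CurrentPrincipal = new GenericPrincipal(
    new GenericIdentity("jane"), new[] { "HR" });

// GetAllEmployees() now succeeds; for a caller without the "HR" role
// the attribute throws a SecurityException instead.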
Of course it has its limitations. Your second code sample can't be implemented using the .NET role-based security attribute. I think you can't really get around doing some custom security checks in that method, or calling some internal security library.
public Order GetMyOrder(int orderId)
{
    Order o = GetOrderInternal(orderId);

    BusinessSecurity.ValidateOrderForCurrentUser(o);

    return o;
}
Of course you can use an AOP framework, but you would still have to write a framework-specific attribute that would again call your own security layer. This only gets useful when such an attribute replaces multiple method calls, for instance when you would otherwise have to put code inside try/catch/finally statements. When all you'd be doing is a simple method call, there wouldn't be much difference between a single method call and a single attribute, IMO.
When you are returning a collection of objects and want to filter out all objects for which the current user doesn't have the proper rights, LINQ expression trees can come in handy:
public Order[] GetAllOrders()
{
    IQueryable<Order> orders = GetAllOrdersInternal();

    orders = BusinessSecurity.ApplySecurityOnOrders(orders);

    return orders.ToArray();
}
static class BusinessSecurity
{
    public static IQueryable<Order> ApplySecurityOnOrders(
        IQueryable<Order> orders)
    {
        // Membership.GetUser() returns the currently logged-in user.
        var user = Membership.GetUser();

        if (Roles.IsUserInRole("Administrator"))
        {
            return orders;
        }

        return
            from order in orders
            where order.Customer.User.Name == user.UserName
            select order;
    }
}
When your O/RM supports LINQ through expression trees (such as NHibernate, LINQ to SQL, and Entity Framework), you can write such a security method once and apply it everywhere. The nice thing about this is that the query sent to your database will always be optimal; in other words, no more records will be retrieved than needed.
UPDATE (years later):
I used this attribute for a long time in my code base, but several years back I came to the conclusion that attribute-based AOP has terrible downsides. For instance, it hinders testability: since the security code is woven in with the normal code, you can't run normal unit tests without impersonating a valid user. This is brittle, and it should not be a concern of the unit test (the unit test itself would then violate the Single Responsibility Principle). Besides that, it forces you to litter your code base with that attribute.
So instead of using the PrincipalPermissionAttribute, I rather apply cross-cutting concerns like security by wrapping code with decorators. This makes my application much more flexible and much easier to test. I've written several articles about this technique over the last couple of years (for instance this one and this one).
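To give an idea of the direction, here is my own sketch of such a decorator, assuming a generic command handler abstraction (this is not code from the linked articles, and a real implementation would look the required role up per command type rather than hard-coding it):
using System.Security;
using System.Security.Principal;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class SecurityCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;
    private readonly IPrincipal currentUser;

    public SecurityCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated, IPrincipal currentUser)
    {
        this.decorated = decorated;
        this.currentUser = currentUser;
    }

    public void Handle(TCommand command)
    {
        // The check wraps the real handler instead of being woven into
        // it, so the handler itself stays testable in isolation.
        // "HR" is hard-coded here for brevity.
        if (!this.currentUser.IsInRole("HR"))
            throw new SecurityException("User is not allowed to execute this command.");

        this.decorated.Handle(command);
    }
}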
One "best practice" is to implement Security an aspect. This keeps the security rules separate from the primary business logic, avoiding hard-coding and making it easy to change the security rules in different environments.
The article below lists 7 ways of implementing aspects and keeping the code separate. One approach that is simple and doesn't change your business logic interface is to use a proxy. This exposes the same interface as you have currently, yet allows an alternative implementation, which can decorate the existing implementation. The security requirements can be injected into this interface, using either hard-coding or custom attributes. The proxy intercepts method calls to your business layer and invokes the appropriate security checks. Implementing interception via proxies is described in detail here - Decouple Components by Injecting Custom Services into your Object's Invocation Chain. Other AOP approaches are given in Understanding AOP in .NET.
Here's a forum post discussing security as an aspect, with an implementation using advice and security attributes. The end result is:
public static class Roles
{
    public const string ROLE_ADMIN = "Admin";
    public const string ROLE_CONTENT_MANAGER = "Content Manager";
    public const string ROLE_HR = "HR"; // referenced by the example below
}

// business method
[Security(Roles.ROLE_HR)]
public List<Employee> GetAllEmployees();
You can put the attribute directly on your business method (tight coupling), or create a service proxy with these attributes, so the security details are kept separate.
If you are using SOA, you can create a Security Service, and each action (method) will send its context (UserId, OrderId, etc.). The Security Service knows about the business security rules.
The scheme may be something like this:
UI -> Security -> BLL -> DAL
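A rough sketch of what that service's contract could look like (illustrative only; all names are assumptions):
public class SecurityContext
{
    public string Action { get; set; }
    public int UserId { get; set; }
    public int? OrderId { get; set; }
}

public interface ISecurityService
{
    // Throws when the current user may not perform the action
    // described by the context; BLL methods call this first.
    void Demand(SecurityContext context);
}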