Getting attribute context in C#

Basically I'm trying to implement a poor man's Aspect Oriented Programming in C#. I had thought about using a ContextAttribute, but those seem to be bound only at the class level. Is there any way I can apply an attribute so that it receives the same parameters as the method it annotates, or some way to access the context in which it fired?
I have this code
public void AddUser(User user)
{
    var errors = DataAnnotationsValidationRunner.GetErrors(user);
    if (errors.Any())
        throw new RulesException(errors);
    users.Add(user);
}
from which I would like to extract the first three lines, so that I'd end up with something like:
[Validated]
public void AddUser(User user)
{
    users.Add(user);
}

I think you are missing a third component. Most AOP implementations (e.g. Aspect#) rely on a proxy or interceptor to actually execute the code. In your scenario, you lack the component needed to 1) know the attribute exists on the method, and 2) trigger the mechanism (or act as it) that executes the code within the attribute.
Fortunately, there are already many (fairly) simple solutions available in open source. The simplest option I can think of would be to use a compile-time weaver like PostSharp. Grab a copy of that, and in the samples you'll find several examples of exactly what you are trying to do (you'd be interested in the OnMethodInvocationAspect).
The end result is that your code looks exactly like it does in the sample you provided, yet it's also running the code you wish.
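To give a feel for the shape, here's a rough sketch of such an aspect using PostSharp's OnMethodBoundaryAspect (a close relative of OnMethodInvocationAspect; exact type and method names vary between PostSharp versions). It reuses DataAnnotationsValidationRunner and RulesException from the question:

[Serializable]
public class ValidatedAttribute : OnMethodBoundaryAspect
{
    // Runs before the decorated method body. args.Arguments exposes the
    // runtime arguments - exactly the "context" the question asks for.
    public override void OnEntry(MethodExecutionArgs args)
    {
        foreach (var argument in args.Arguments)
        {
            var errors = DataAnnotationsValidationRunner.GetErrors(argument);
            if (errors.Any())
                throw new RulesException(errors);
        }
    }
}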

I don't know exactly what your solution should look like, but in C# attributes do not execute code unless you query for them (as far as I know). And if you query for the attribute, you also have the context. So in my opinion there is something wrong with your strategy.
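To illustrate that point, here is a minimal sketch of querying the attribute yourself; the invoker helper is invented for this example (it assumes using System, System.Linq and System.Reflection, and it only works when a method group such as service.AddUser is passed, since the attribute must sit on the invoked method):

public static class ValidatedInvoker
{
    public static void Invoke<T>(Action<T> action, T argument)
    {
        // Delegate.Method is the MethodInfo of the target, so the attribute
        // and the argument (the "context") are both available here.
        bool validated = action.Method
            .GetCustomAttributes(typeof(ValidatedAttribute), true)
            .Any();

        if (validated)
        {
            var errors = DataAnnotationsValidationRunner.GetErrors(argument);
            if (errors.Any())
                throw new RulesException(errors);
        }

        action(argument);
    }
}

Usage would then be ValidatedInvoker.Invoke(service.AddUser, user); instead of calling service.AddUser(user); directly.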

Related

How to use C# FluentValidation ValidationContext.RootContextData

I'm new to FluentValidation and am trying to create a validator that accepts some context/parameters at validate time. I've created a custom validator, and in its constructor I have something like:
RuleFor(request => request.someField).Custom((request, context) => {
    var foo = context.ParentContext.RootContextData["someDependency"];
});
And in the calling code I do:
var validator = new FooValidator();
var context = new ValidationContext<SomeRequest>(request);
context.RootContextData["someDependency"] = someDependency;
validator.Validate(context);
which causes:
System.Collections.Generic.KeyNotFoundException: The given key 'someDependency' was not present in the dictionary.
Any ideas? The reason I want to pass in some context parameters is that they come from the database. If I instead passed them into the validator constructor, they might be out of date by the time Validate is called. I also don't want to fetch from the database in the validator constructor, because I need to fetch the same data before/after Validate is called, and database caching is not possible in this scenario, so I'd like to avoid the unnecessary roundtrips. I've read and am doing what seems to be the same as what is described at https://docs.fluentvalidation.net/en/latest/advanced.html#root-context-data
As mentioned in my comment on the question, the code looks sound, but it's likely failing at the MVC validation pipeline stage and never making it to your Validate invocation. In that earlier pipeline you haven't added your dependency to the dictionary, so it barfs.
There are probably a couple of ways to solve it. My first thought would be to introduce a rule set so that this rule only executes server-side as part of your Validate invocation. There's a whole section on rule sets in the docs which covers it pretty well. You may need to combine it with a CustomizeValidator attribute so that the rule set doesn't get executed in the MVC validation pipeline (I've never had to when using a server-side rule set, but I mention it for completeness).
The nice thing with this is that you probably won't need to change much of your existing code; you've mentioned you load a number of dependencies into the validation context so it could be a good fit.
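A rough sketch of the rule-set idea, reusing the names from the question (note that PropertyChain and RulesetValidatorSelector live in FluentValidation.Internal, and the exact overloads for selecting rule sets vary between FluentValidation versions):

public class FooValidator : AbstractValidator<SomeRequest>
{
    public FooValidator()
    {
        // Rules inside a named rule set are skipped by the implicit MVC
        // validation pass and only run when requested by name.
        RuleSet("ServerSide", () =>
        {
            RuleFor(request => request.someField).Custom((value, context) =>
            {
                var someDependency =
                    context.ParentContext.RootContextData["someDependency"];
                // ... validate value against someDependency ...
            });
        });
    }
}

// Calling code: populate the context, then ask for the rule set by name.
var context = new ValidationContext<SomeRequest>(
    request,
    new PropertyChain(),
    new RulesetValidatorSelector(new[] { "ServerSide" }));
context.RootContextData["someDependency"] = someDependency;
var result = validator.Validate(context);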
Another approach that looks good, but one I haven't tried myself, would be to populate the validation context in a BeforeMvcValidation validation interceptor. The value of this option depends on how you gather those dependencies and whether they are used for anything other than validation. Based on your description, it would probably require more effort than a rule set as well.

Setting up SpecFlow to pass in identifiers

I am new to C# and am trying to use SpecFlow the way I used Gherkin: giving a unique name to an item and then passing that name into the step definition. My question is how to add the identifier when I create an object, so I can refer to the object without having to pass in its actual name every time I create a step.
So, for instance the code would look something like this:
[When(#"I click the (.*) button")]
public void ClickTheButton(string ButtonName)
{
driver.Click(ButtonName)
//ButtonName would be a string that would call to the ID for the ADD button
}
I want to be able to put in something like "Add" (so the line would read "When I click the ADD button") and then have the code search for the "ADD" identifier.
I know that this is possible in Ruby/Cucumber by using a DOM and then passing in XML with gherkin names. In Ruby/Cucumber the object would look something like this:
<button gherkin_name="ADD" key="id" value="add_button_12685"/>
However, I am finding absolutely no way of doing that in C# with SpecFlow and this is something that I really need to be able to do.
Is there a way to do this at all? All I'm really trying to do is link a handle/parameter name that business users could actually use to a page object, like you can in Ruby/Cucumber, without making the user know the code in the background. Incidentally, the names of the objects are almost exactly like the gherkin line I added, so they are very awkward for a user to write. This is why I'd like to have just an identifier for the user.
Thanks in advance for your help.
EDIT: I realise now I was not clear enough in my original post, so perhaps some background will help. I am using Selenium WebDriver to test a website that has hundreds of items on it. Writing a different step for every single item on every single page would be exceedingly tedious and time-consuming. Because many items share exactly the same characteristics (for instance, a single page can have some 50 buttons that all behave similarly, and the site is dozens of pages), writing a single method for testing them seems the most logical idea. Identifying these items with an identifier the business could use would cut down on bulk inside the steps and the number of steps written, and would increase the likelihood that business users would feel comfortable using the code, which is the end goal.
You can do what you want if you are using the PageObject pattern and have a Buttons property (probably on a base PageObject class) which exposes the available buttons as a collection (which can be done via reflection). Then you can just do something like:
[When(#"I click the (.*) button")]
public void ClickTheButton(string ButtonName)
{
myPage.Buttons.First(button=>button.Name==ButtonName).Click;
}
But I would take what AutomatedChaos said into consideration and not use this in a step in the gherkin; instead, keep it as a helper method, something like this:
[When(#"I add a widget")]
public void AddAWidget(string ButtonName)
{
ClickTheButton("Add")
}
private void ClickTheButton(string ButtonName)
{
myPage.Buttons.First(button=>button.Name==ButtonName).Click;
}
Your Buttons property doesn't have to be done with reflection; the simplest implementation is something like this:
public IEnumerable<IWebElement> Buttons
{
    get
    {
        yield return AddButton;
        yield return RemoveButton;
        yield return SomeOtherButton;
        // etc.
    }
}
But using reflection means that as you add buttons to the page object you don't need to remember to add them to this property; they will be found automatically.
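As a sketch, a reflection-based version might look like this (assuming using System.Linq and System.Reflection, and that the buttons are exposed as public IWebElement properties on the page object):

public IEnumerable<IWebElement> Buttons
{
    get
    {
        // Find every public IWebElement property declared on this page
        // object and return its current value.
        return GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => typeof(IWebElement).IsAssignableFrom(p.PropertyType))
            .Select(p => (IWebElement)p.GetValue(this, null));
    }
}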
SpecFlow is only the BDD framework. It will not drive browsers itself; you need to install additional packages that drive the browser.
With C#, you have a few options:
Selenium, the best known, which works with the Page Object pattern you are accustomed to.
Fluent Automation, an up-and-coming library that works as a wrapper on top of Selenium and makes the interfacing easier (more natural language).
CodedUI, Microsoft's web and UI test solution that comes natively with the Visual Studio Test edition.
On a personal note, I consider Selenium (with or without Fluent Automation) the best fit to work with SpecFlow (comparison).
If you want to install Selenium or other packages, you can use the NuGet package manager to easily search for, install and update packages.
Lastly, have you considered using more domain-specific Gherkin phrases, like When I add a Wabberjock instead of When I press the Add button? This is where the power of BDD lies: exposing the intention while hiding the implementation details.

Identify implementations of base class in an array

I have the following problem: I have a set of engines which automatically (listening to events) control my model. The following picture shows the general class diagram:
Now I have a client which knows the EngineFacade, and I want to set the Active property of Engine2 from the client, but neither the client nor the EngineFacade knows which of the three engines is Engine2.
There are two ways, but I don't like either of them:
Check if one of the engines is of type Engine2 - if another class does the same task but is named differently, I have to change it in the EngineBuilder AND in the EngineFacade.
Check with an identifier string - I don't really like magic strings.
What I know on the client side is that there is, or should be, an engine which handles the grid. But I don't know more than that.
Maybe I have to choose between the two devils, but maybe one of you has a better solution.
You could use an attribute on the implementation of Engine2, like so:
[AttributeUsage(AttributeTargets.Class)]
public class HandlesGridAttribute : Attribute { }
Which you then apply to your derivation:
[HandlesGrid]
public class Engine2 : EngineBase { ... }
Then, in your client, check for the attribute:
IEnumerable<EngineBase> bases = ...;

// Get all the implementations which handle the grid.
IEnumerable<EngineBase> handlesGrid = bases
    .Where(b => b.GetType()
        .GetCustomAttributes(typeof(HandlesGridAttribute), true)
        .Any());

// Set the active property.
foreach (EngineBase b in handlesGrid)
    b.Active = true;
The major drawback here (which may or may not apply to you) is that you can't change the value at runtime (since the attribute is baked in at compile time). If your engine is not dynamic in this way, then the attribute is the right way to go.
If you need to change whether a derivation can perform this action at runtime, though, then you have to fall back to your second option: code constructs that identify what the attributes of the engine are. Mind you, it doesn't have to be a string (I don't like that either); it can be something more structured that will give you the information you're looking for.
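As one hedged sketch of such a structured alternative (all names here are invented), a small capability interface avoids both the magic string and the compile-time-only limitation of the attribute:

public interface IHandlesGrid
{
    // True when this engine currently handles the grid; unlike an
    // attribute, this can change at runtime.
    bool HandlesGrid { get; }
}

public class Engine2 : EngineBase, IHandlesGrid
{
    public bool HandlesGrid
    {
        get { return true; }
    }
}

// Client side: activate every engine that reports the capability.
foreach (EngineBase b in bases)
{
    var handler = b as IHandlesGrid;
    if (handler != null && handler.HandlesGrid)
        b.Active = true;
}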

Is it bad to have "specifications" for a controller/method specified in routing code?

I'm designing an alternative MVC framework for ASP.NET. Part of my goal for the framework is to have as little "magic" as possible. The only bit of reflection I have is for binding things like the form, query string, etc. to a plain ol' class (with some optional attributes for ignore, conversion, etc.). As such, I do absolutely no class/method detection. Everything must be very explicit. I've gone through about three iterations of API "shape". The first two achieved my goal of having no magic, but they were very verbose and not easy to read... and controllers usually had to do the heavy lifting the MVC framework should do.
So, now in this third iteration I'm trying really hard to get it right. One slightly controversial thing I do differently is the routing code. Because everything is explicit and reflection is discouraged, I can't search for some attribute in the controller to resolve a route. Everything must be specified at the route level. In the first iteration this wasn't done, but it made for extremely cumbersome and verbose controllers...
Right now, I have this fluent API for specifying routes. It has gone a bit beyond what I first imagined though and now functions as a sort of way to specify what a controller's method is capable of and what it should accept.
On to the actual code. The implementation is irrelevant. The only thing you really need to know is that there are a LOT of generic types involved. So, here is a quick sample of some routing:
var router = new Router(...);
var blog = router.Controller(() => new BlogController());

blog.Handles("/blog/index").With((ctrl) => ctrl.Index());

// model defaults to a dictionary built from the route parameters
blog.Handles("/blog/{id}")
    .With((ctrl, model) => ctrl.View(model["id"]))
    .WhereRouteLike((r) => r["id"].IsInteger());

// here model would be of type BlogPost; could also substitute
// UsingRouteModel, UsingQueryStringModel, etc.
blog.Handles("/blog/new")
    .UsingFormModel(() => new BlogPost())
    .With((ctrl, model) => ctrl.NewPost(model));
There are also some other methods that could be implemented, such as WhereModelIsLike or some such, that do verification on the model. However, does this kind of "specification" belong in the routing layer? What are the limits of what should be specified in the routing layer? What should be left to the controller to validate?
Am I making the routing layer worry about too much?
I think the routing is way too verbose. I wouldn't want to write that kind of code for 20 controllers, especially because it is really repetitive.
The problem I see here is that even default cases require verbose declarations. Those verbose declarations should only be needed for special cases.
It is expressive and readable, but you might want to consider packaging advanced features up.
Have a look at the following specification - and that's just for a single action in a single controller:
blog.Handles("/blog/new")
.UsingFormModel(() => new BlogPost())
.With((ctrl, model) => ctrl.NewPost(model))
.WhereModelIsLike(m => m.Status == PostStatus.New);
One way to slightly reduce the amount of code would be to allow the specification of a root folder:
var blog=router.Controller(() => new BlogController(), "/blog");
blog.Handles("index").Wi..
blog.Handles("{id}").Wit..
blog.Handles("new").Usin..
Another idea to reduce the code for default cases would be to introduce one interface per default action; the controller implements the interfaces for the actions it supports.
Something like this, maybe:
public interface ISupportIndex
{
    void Index();
}

public interface ISupportSingleItem
{
    void View(int id);
}
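For illustration, a controller opting in to both default actions might then look like this (a sketch with the bodies elided):

public class BlogController : ISupportIndex, ISupportSingleItem
{
    public void Index()
    {
        // render the list of posts
    }

    public void View(int id)
    {
        // render a single post
    }
}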
Now you could provide methods like blog.HandlesIndex(); and blog.HandlesSingleItem();.
Those methods return the same thing as your existing methods, so the result can be refined further.
They could be designed as extension methods that are only available if the controller actually implements the interface. To achieve this, the return type of router.Controller would need to be a covariant interface with the controller as generic parameter, i.e. something like this:
IControllerRoute<out TController>
For example, the extension method HandlesIndex would be implemented like this:
public static IRouteHandler HandlesIndex(
    this IControllerRoute<ISupportIndex> route)
{
    // Note: this makes use of the "root" suggested above;
    // it only specifies "index", not "/someroot/index".
    return route.Handles("index").With(x => x.Index());
}
Because these extension methods work on IControllerRoute&lt;ISupportIndex&gt;, they are only offered when the controller actually supports the action.
The route for the blog controller could look like this:
blog.HandlesIndex();
blog.HandlesSingleItem();
// Uses short version for models with default constructor:
blog.HandlesNew<BlogPost>().UsingFormModel();
// The version for models without default constructor could look like this:
//blog.HandlesNew<BlogPost>().UsingFormModel(() => new BlogPost(myDependency));
Adding validation rules could be done a little more concisely, too:
blog.HandlesNew<BlogPost>().UsingFormModel()
.When(m => m.Status == PostStatus.New);
If the specification is more complex, it could be packaged in its own class that implements IModelValidation. That class is then used like this:
blog.HandlesNew<BlogPost>().UsingFormModel()
.WithValidator<NewBlogPostValidation>();
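Such a class might look like this (IModelValidation and its IsValid member are hypothetical, mirroring the inline When predicate above):

public class NewBlogPostValidation : IModelValidation<BlogPost>
{
    // Same predicate as the inline .When(...) rule shown earlier.
    public bool IsValid(BlogPost model)
    {
        return model.Status == PostStatus.New;
    }
}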
All of my suggestions are just ways to make your current approach easier to handle, so I guess up till now I haven't really answered your actual question. I'll do that now:
I like my controllers as clean as possible. Putting validation rules on the route looks very good to me, because the controller action can then assume it is only called with valid data. I would continue with this approach.
Yes, IMHO routing should not contain logic about the model, or even the view.
If you look at the lightweight web frameworks out there now (Nancy etc.), the routing concept does not include things like view link generation. It is entirely about mapping the URI template to the controller. This takes a lot of the "magic" out of the ASP.NET implementation.
https://github.com/NancyFx/Nancy/wiki/Defining-routes
However, the Nancy approach still requires some "framework" code to understand which routes are available. So, it doesn't fit your requirements exactly.

.NET refactoring: DRY, dual inheritance, data access and separation of concerns

Back story:
So I've been stuck on an architecture problem for the past couple of nights on a refactor I've been toying with. Nothing important, but it's been bothering me. It's actually an exercise in DRY - an attempt to take it to such an extreme that the DAL architecture is completely DRY. It's a completely philosophical/theoretical exercise.
The code is based in part on one of @JohnMacIntyre's refactorings, which I recently convinced him to blog about at http://whileicompile.wordpress.com/2010/08/24/my-clean-code-experience-no-1/. I've modified the code slightly, as I tend to, in order to take it one level further - usually just to see what extra mileage I can get out of a concept... anyway, my reasons are largely irrelevant.
Part of my data access layer is based on the following architecture:
abstract public class AppCommandBase : IDisposable { }
This contains basic stuff, like creation of a command object and cleanup after the AppCommand is disposed of. All of my command base objects derive from this.
abstract public class ReadCommandBase<T, ResultT> : AppCommandBase
This contains basic stuff that affects all read-commands - specifically in this case, reading data from tables and views. No editing, no updating, no saving.
abstract public class ReadItemCommandBase<T, FilterT> : ReadCommandBase<T, T> { }
This contains some more basic generic stuff - like the definition of methods required to read a single item from a table in the database, where the table name, key field name and field list names are defined as required abstract properties (to be defined by the derived class).
public class MyTableReadItemCommand : ReadItemCommandBase<MyTableClass, int?> { }
This contains specific properties that define my table name, the list of fields from the table or view, the name of the key field, a method to parse the data out of the IDataReader row into my business object and a method that initiates the whole process.
Now, I also have this structure for my ReadList...
abstract public class ReadListCommandBase<T> : ReadCommandBase<T, IEnumerable<T>> { }
public class MyTableReadListCommand : ReadListCommandBase<MyTableClass> { }
The difference is that the list classes contain properties that pertain to list generation (i.e. PageStart, PageSize, Sort) and return an IEnumerable, vs. the return of a single data object (which just requires a filter that identifies a unique record).
Problem:
I hate that I've got a bunch of properties in my MyTableReadListCommand class that are identical to those in my MyTableReadItemCommand class. I've thought about moving them to a helper class, but while that may centralize the member contents in one place, I'd still have identical members in each of the classes that merely point to the helper class, which I still dislike.
My first thought was that dual inheritance would solve this nicely, and even though I agree dual inheritance is usually a code smell, it would solve this issue very elegantly. So, given that .NET doesn't support dual inheritance, where do I go from here?
Perhaps a different refactor would be more suitable... but I'm having trouble wrapping my head around how to sidestep this problem.
If anyone needs a full code base to see what I'm harping on about, I've got a prototype solution on my DropBox at http://dl.dropbox.com/u/3029830/Prototypes/Prototype%20-%20DAL%20Refactor.zip. The code in question is in the DataAccessLayer project.
P.S. This isn't part of an ongoing active project, it's more a refactor puzzle for my own amusement.
Thanks in advance folks, I appreciate it.
Separate the result processing from the data retrieval. Your inheritance hierarchy is already more than deep enough at ReadCommandBase.
Define an interface IDatabaseResultParser. Implement ItemDatabaseResultParser and ListDatabaseResultParser, both with a constructor parameter of type ReadCommandBase (and maybe convert that to an interface too).
When you call IDatabaseResultParser.Value() it executes the command, parses the results and returns a result of type T.
Your commands focus on retrieving the data from the database and returning it as tuples of some description (actual Tuples, or an array of arrays, etc.); your parser focuses on converting the tuples into objects of whatever type you need. See NHibernate's IResultTransformer for an idea of how this can work (and it's probably a better name than IDatabaseResultParser too).
Favor composition over inheritance.
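To make that concrete, here is a minimal sketch of the composition (all types are illustrative; the command is reduced to a delegate that produces raw rows so the example stands alone):

using System;
using System.Collections.Generic;
using System.Linq;

public interface IDatabaseResultParser<TResult>
{
    TResult Value();
}

// Composes a row source with a row-mapping function to yield one item.
public class ItemResultParser<T> : IDatabaseResultParser<T>
{
    private readonly Func<IEnumerable<object[]>> execute;
    private readonly Func<object[], T> map;

    public ItemResultParser(Func<IEnumerable<object[]>> execute,
                            Func<object[], T> map)
    {
        this.execute = execute;
        this.map = map;
    }

    // Executes the command and maps the single returned row.
    public T Value()
    {
        return map(execute().Single());
    }
}

// Same row source and mapping function, but yields a list instead -
// no parallel inheritance hierarchy required.
public class ListResultParser<T> : IDatabaseResultParser<IEnumerable<T>>
{
    private readonly Func<IEnumerable<object[]>> execute;
    private readonly Func<object[], T> map;

    public ListResultParser(Func<IEnumerable<object[]>> execute,
                            Func<object[], T> map)
    {
        this.execute = execute;
        this.map = map;
    }

    public IEnumerable<T> Value()
    {
        return execute().Select(map).ToList();
    }
}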
Having looked at the sample, I'll go even further...
Throw away AppCommandBase - it adds no value to your inheritance hierarchy, as all it does is check that the connection is not null and open, and create a command.
Separate query building from query execution and result parsing - now you can greatly simplify the query execution implementation, as it is either a read operation that returns an enumeration of tuples or a write operation that returns the number of rows affected.
Your query builder could be wrapped up in one class that includes paging/sorting/filtering; however, it may be easier to build some form of limited structure around these so you can separate paging, sorting and filtering. If I were doing this I wouldn't bother building the queries; I would simply write the SQL inside an object that allowed me to pass in some parameters (effectively stored procedures in C#).
So now you have IDatabaseQuery / IDatabaseCommand / IResultTransformer and almost no inheritance =)
I think the short answer is that, in a system where multiple inheritance has been outlawed "for your protection", strategy/delegation is the direct substitute. Yes, you still end up with some parallel structure, such as the property for the delegate object. But it is minimized as much as possible within the confines of the language.
But let's step back from the simple answer and take a wider view...
Another big alternative is to refactor the larger design structure so that you inherently avoid this situation, where a given class consists of the composite behaviors of multiple "sibling" or "cousin" classes above it in the inheritance tree. To put it more concisely, refactor to an inheritance chain rather than an inheritance tree. This is easier said than done; it usually requires abstracting very different pieces of functionality.
The challenge you'll have in taking this tack that I'm recommending is that you've already made a concession in your design: You're optimizing for different SQL in the "item" and "list" cases. Preserving this as is will get in your way no matter what, because you've given them equal billing, so they must by necessity be siblings. So I would say that your first step in trying to get out of this "local maximum" of design elegance would be to roll back that optimization and treat the single item as what it truly is: a special case of a list, with just one element. You can always try to re-introduce an optimization for single items again later. But wait till you've addressed the elegance issue that is vexing you at the moment.
But you have to acknowledge that any optimization for anything other than the elegance of your C# code is going to put a roadblock in the way of that elegance. This trade-off, just like the time-space trade-off of algorithm design, is fundamental to the very nature of programming.
As Kirk mentioned, this is the delegation pattern. When I do this, I usually construct an interface that is implemented by both the delegator and the delegated class. This reduces the perceived code smell, at least for me.
I think the simple answer is... since .NET doesn't support multiple inheritance, there is always going to be some repetition when creating objects of a similar type. .NET simply does not give you the tools to reuse some classes in a way that would facilitate perfect DRY.
The not-so-simple answer is that you could use code generation tools, instrumentation, CodeDOM, and other techniques to inject the members you want into the classes you want. It still creates duplication in memory, but it would simplify the source code (at the cost of added complexity in your code-injection framework).
This may seem as unsatisfying as the other solutions; however, if you think about it, that's really what languages that support MI are doing behind the scenes: hooking up delegation systems that you can't see in source code.
The question comes down to: how much effort are you willing to put into making your source code simple? Think about that; it's rather profound.
I haven't looked deeply at your scenario, but I have some thoughts on the dual-hierarchy problem in C#. To share code in a dual hierarchy, we need a different construct in the language: either a mixin, a trait, or a role (as in Perl 6). C# makes it very easy to share code via inheritance (which is not the right operator for code reuse), and very laborious to share code via composition (you know, you have to write all that delegation code by hand).
There are ways to get a kind of mixin in C#, but it's not ideal.
The Oxygene language (an Object Pascal for .NET) also has an interesting interface delegation feature that can generate all that delegating code for you.
