Let's assume that our system can perform actions, and that an action requires some parameters to do its work.
I have defined the following base class for all actions (simplified for your reading pleasure):
public abstract class BaseBusinessAction<TActionParameters>
    where TActionParameters : IActionParameters
{
    protected BaseBusinessAction(TActionParameters actionParameters)
    {
        if (actionParameters == null)
            throw new ArgumentNullException("actionParameters");
        this.Parameters = actionParameters;
        if (!ParametersAreValid())
            throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
    }

    protected TActionParameters Parameters { get; private set; }

    protected abstract bool ParametersAreValid();

    public void CommonMethod() { ... }
}
Only a concrete implementation of BaseBusinessAction knows how to validate the parameters passed to it, which is why ParametersAreValid is abstract. However, I want the base class constructor to enforce that the parameters passed are always valid, so I've added a call to ParametersAreValid in the constructor and I throw an exception when it returns false. So far so good, right? Well, no.
Code analysis is telling me to "not call overridable methods in constructors" which actually makes a lot of sense because when the base class's constructor is called
the child class's constructor has not yet been called, and therefore the ParametersAreValid method may not have access to some critical member variable that the
child class's constructor would set.
So the question is this: How do I improve this design?
Do I add a Func<TActionParameters, bool> parameter to the base class constructor? If I did:
public class MyAction : BaseBusinessAction<MyParameters>
{
    public MyAction(MyParameters actionParameters, bool something)
        : base(actionParameters, ValidateIt)
    {
        this.something = something;
    }

    private bool something;

    // must be static to be referenced in the constructor initializer,
    // which also means it cannot read instance state such as 'something'
    public static bool ValidateIt(MyParameters actionParameters)
    {
        return actionParameters != null; // illustrative check only
    }
}
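For reference, the base class constructor would then have to accept the delegate; a sketch (not part of my current code):

protected BaseBusinessAction(TActionParameters actionParameters, Func<TActionParameters, bool> validate)
{
    if (actionParameters == null)
        throw new ArgumentNullException("actionParameters");
    if (validate == null || !validate(actionParameters))
        throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
    this.Parameters = actionParameters;
}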
This would work because ValidateIt is static, but I don't know... Is there a better way?
Comments are very welcome.
This is a common design challenge in an inheritance hierarchy - how to perform class-dependent behavior in the constructor. The reason code analysis tools flag this as a problem is that the constructor of the derived class has not yet had an opportunity to run at this point, and the call to the virtual method may depend on state that has not been initialized.
So you have a few choices here:
Ignore the problem. If you believe that implementers should be able to write a parameter validation method without relying on any runtime state of the class, then document that assumption and stick with your design.
Move validation logic into each derived class constructor, and have the base class perform only the most basic validations it must (null checks, etc.). This duplicates the logic across derived classes, which is unsettling, and it opens the door for a derived class to forget to perform the necessary setup or validation logic.
Provide an Initialize() method of some kind that has to be called by the consumer (or a factory for your type) to ensure that this validation is performed after the type is fully constructed. This may not be desirable, since it requires that anyone who instantiates your class remember to call the initialization method - something you would expect a constructor to handle. Often, a Factory can help avoid this problem - it would be the only one allowed to instantiate your class, and would call the initialization logic before returning the instance to the consumer.
If validation does not depend on instance state, factor the validator into a separate type, which you could even make part of the generic class signature. You could then instantiate the validator in the constructor and pass the parameters to it. Each derived class could define a nested validator class with a default constructor and place all parameter validation logic there. A code example of this pattern is provided below.
When possible, have each constructor perform the validation. But this isn't always desirable. In that case I personally prefer the factory pattern, because it keeps the implementation straightforward and it also provides an interception point where other behavior can be added later (logging, caching, etc.). However, sometimes factories don't make sense, and in that case I would seriously consider the fourth option of creating a stand-alone validator type.
Here's the code example:
public interface IParamValidator<TParams>
    where TParams : IActionParameters
{
    bool ValidateParameters(TParams parameters);
}

public abstract class BaseBusinessAction<TActionParameters, TParamValidator>
    where TActionParameters : IActionParameters
    where TParamValidator : IParamValidator<TActionParameters>, new()
{
    protected BaseBusinessAction(TActionParameters actionParameters)
    {
        if (actionParameters == null)
            throw new ArgumentNullException("actionParameters");
        // delegate detailed validation to the supplied IParamValidator
        var paramValidator = new TParamValidator();
        // you may want to implement the throw inside the validator
        // so additional detail can be added...
        if (!paramValidator.ValidateParameters(actionParameters))
            throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
        this.Parameters = actionParameters;
    }

    protected TActionParameters Parameters { get; private set; }
}

public class MyAction : BaseBusinessAction<MyActionParams, MyAction.MyActionValidator>
{
    // nested validator class; it must be as accessible as MyAction
    // because it appears in the base class signature
    public class MyActionValidator : IParamValidator<MyActionParams>
    {
        public MyActionValidator() { } // default constructor
        // implement appropriate validation logic
        public bool ValidateParameters(MyActionParams parameters) { return true; /*...*/ }
    }
}
If you are deferring to the child class to validate the parameters anyway, why not simply do this in the child class constructor? I understand the principle you are striving for, namely, to enforce that any class deriving from your base class validates its parameters. But even then, users of your base class could implement a version of ParametersAreValid() that just returns true, in which case the class has abided by the letter of the contract, but not the spirit.
For me, I usually put this kind of validation at the beginning of whatever method is being called. For example,
public MyAction(MyParameters actionParameters, bool something)
    : base(actionParameters)
{
    #region Pre-Conditions
    if (actionParameters == null) throw new ArgumentNullException();
    // Perform additional validation here...
    #endregion Pre-Conditions

    this.something = something;
}
I hope this helps.
I would recommend applying the Single Responsibility Principle to the problem. It seems that the Action class should be responsible for one thing; executing the action. Given that, the validation should be moved to a separate object which is responsible only for validation. You could possibly use some generic interface such as this to define the validator:
public interface IParameterValidator<TActionParameters>
{
    bool Validate(TActionParameters parameters);
}
You can then add this to your base constructor, and call the validate method there:
protected BaseBusinessAction(IParameterValidator<TActionParameters> validator, TActionParameters actionParameters)
{
    if (actionParameters == null)
        throw new ArgumentNullException("actionParameters");
    this.Parameters = actionParameters;
    if (!validator.Validate(actionParameters))
        throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
}
There is a nice hidden benefit to this approach: it allows you to more easily re-use validation rules that are common across actions. If you're using an IoC container, you can also easily add binding conventions to automatically bind IParameterValidator implementations based on the type of TActionParameters.
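A container-agnostic sketch of what such a convention might look like (ValidatorRegistry and ResolveValidator are hypothetical names, not part of any particular container):

using System;
using System.Linq;
using System.Reflection;

public static class ValidatorRegistry
{
    // scan an assembly for a concrete IParameterValidator<TParams> and instantiate it
    public static IParameterValidator<TParams> ResolveValidator<TParams>(Assembly assembly)
    {
        var contract = typeof(IParameterValidator<>).MakeGenericType(typeof(TParams));
        var impl = assembly.GetTypes()
            .FirstOrDefault(t => !t.IsAbstract && contract.IsAssignableFrom(t));
        return impl == null ? null : (IParameterValidator<TParams>)Activator.CreateInstance(impl);
    }
}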
I had a very similar issue in the past, and I ended up moving the logic that validates parameters into the appropriate ActionParameters class. This approach works out of the box if your parameter classes line up with your BusinessAction classes.
If this is not the case, it gets more painful. You have the following options (I would personally prefer the first one):
Wrap all the parameters in IValidatableParameters. The implementations will line up with the business actions and will provide their own validation (see the sketch after this list)
Just suppress this warning
Move this check into the derived classes, but then you end up with code duplication
Move this check to the method that actually uses the parameters (but then your code fails later)
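A minimal sketch of that first option, reusing the base class shape from the question (the IValidatableParameters name and its IsValid method are illustrative):

public interface IValidatableParameters : IActionParameters
{
    bool IsValid();
}

public abstract class BaseBusinessAction<TActionParameters>
    where TActionParameters : IValidatableParameters
{
    protected BaseBusinessAction(TActionParameters actionParameters)
    {
        if (actionParameters == null)
            throw new ArgumentNullException("actionParameters");
        // no virtual call on 'this' - the parameters validate themselves
        if (!actionParameters.IsValid())
            throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
        this.Parameters = actionParameters;
    }

    protected TActionParameters Parameters { get; private set; }
}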
Why not do something like this:
public abstract class BaseBusinessAction<TActionParameters>
    where TActionParameters : IActionParameters
{
    protected abstract TActionParameters Parameters { get; }
    protected abstract bool ParametersAreValid();
    public void CommonMethod() { ... }
}
Now the concrete class has to worry about the parameters and ensuring their validity. I would just have CommonMethod call ParametersAreValid before doing anything else.
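A sketch of what that guard might look like (the choice of InvalidOperationException here is just an assumption):

public void CommonMethod()
{
    if (!ParametersAreValid())
        throw new InvalidOperationException("Valid parameters must be supplied");
    // ... the actual work ...
}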
How about moving the validation to a more common location in the logic? Instead of running the validation in the constructor, run it on the first (and only the first) call to the method. That way, other developers could construct the object, then change or fix the parameters before executing the action.
You could do this by altering the getter/setter for the Parameters property, so anything that uses the parameters would validate them on first use.
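A rough sketch of that idea, reusing ParametersAreValid from the question and guarding the first use with a simple flag (names illustrative):

private TActionParameters parameters;
private bool validated;

protected TActionParameters Parameters
{
    get
    {
        if (!validated)
        {
            if (!ParametersAreValid())
                throw new InvalidOperationException("Valid parameters must be supplied");
            validated = true;
        }
        return parameters;
    }
    set
    {
        parameters = value;
        validated = false; // re-validate after any change
    }
}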
Where are the parameters anticipated to be used: from within CommonMethod? It is not clear why the parameters must be valid at the time of instantiation rather than at the time of use, so you might choose to leave it up to the derived class to validate the parameters before use.
EDIT - Given what I know, the problem seems to be one of special work needed on construction of the class. That, to me, speaks of a Factory class used to build instances of BaseBusinessAction, which would call the virtual Validate() on each instance as it builds it.
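A minimal factory sketch along those lines (all names are hypothetical, and it assumes the validation method is made visible to the factory, e.g. internal instead of protected):

public static class BusinessActionFactory
{
    public static TAction Create<TAction, TParams>(TParams parameters)
        where TAction : BaseBusinessAction<TParams>
        where TParams : IActionParameters
    {
        // the action no longer validates in its constructor...
        var action = (TAction)Activator.CreateInstance(typeof(TAction), parameters);
        // ...the factory validates the fully constructed instance instead
        if (!action.ParametersAreValid())
            throw new ArgumentException("Valid parameters must be supplied", "parameters");
        return action;
    }
}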
According to MSDN's design guide for constructors,
"If you don’t explicitly declare any constructors on a type, many languages (such as C#) will automatically add a public default constructor. (Abstract classes get a protected constructor.)
Adding a parameterized constructor to a class prevents the compiler from adding the default constructor. This often causes accidental breaking changes."
Why not:
"If you don’t explicitly declare any default constructors on a type, many languages (such as C#) will automatically add a public default constructor. (Abstract classes get a protected constructor.)"
What is the reason behind this?
Because not all classes should be constructed parameterless.
Consider a class that is designed to implement the interface between your application and a file on disk. It would be very inconvenient having to handle the case where the object is constructed without specifying which file to manage.
As such, since the main point of creating a non-static class is that you want to create objects of it, you're spared having to add an empty parameterless constructor if that is all you want to have.
Once you start adding constructors at all, then the automagic is disabled and no default constructor will be provided.
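To illustrate with a hypothetical type along the lines of the file example above:

class FileManager
{
    // declaring this constructor suppresses the automatic parameterless one
    public FileManager(string path) { /* open the file at 'path'... */ }
}

// var broken = new FileManager();          // compile error: no parameterless constructor
var manager = new FileManager("data.txt");  // the file to manage must be specified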
If I define a custom constructor, it means my object needs initialising in a specific way, e.g.:
class Customer
{
    public Customer(string name) { this.Name = name; }
    public string Name { get; }
}
If the compiler also added a public Customer() constructor, you could bypass the requirement to initialise a customer with a name.
If no constructor is present, there is no way to new up an instance of the class.
So, when you provide a constructor, there is at least one way to construct the class. If no constructor at all is provided, one is provided by default, so that you can actually build the class.
This answers the question of why the default constructor exists, but not why it disappears as soon as you define a constructor of your own.
If a default constructor were still provided when you have already defined a constructor of your own, it could lead to unintended consumption of the class. An example of this has been pointed out in another answer, but just as another:
public class Foo
{
    private readonly IDbConnection _dbConnection;

    public Foo(IDbConnection dbConnection)
    {
        if (dbConnection == null)
            throw new ArgumentNullException(nameof(dbConnection));
        _dbConnection = dbConnection;
    }

    public Whatever Get()
    {
        var thingyRaw = _dbConnection.GetStuff(); // GetStuff is illustrative
        Whatever thingy = null; // pretend some transformation occurred on thingyRaw to get thingy
        return thingy;
    }
}
If a default constructor were automatically created for the above class, it would be possible to construct it without its IDbConnection dependency. That is not intended behavior, and so no default constructor is supplied.
I'm wondering if there's a way to hook to an event whenever an object is instantiated.
If there isn't, is there a way to retrieve the object an attribute is attached to when the attribute is instantiated?
What I want to do is give some of my classes a custom attribute and whenever a class with this attribute is instantiated, run some code for it.
Of course, I could simply place the code in each of those classes' constructors, but that's a lot of copy and pasting, and I could easily forget to copy that code into one or two classes. And of course, it would be very convenient for end users, as all they would have to do is add my attribute to their classes and not worry about remembering to add that bit of code to their constructors.
I actually can't do a base class because all of those objects already have a base.
Thanks in advance.
Here's an example of what I'd like to do. Either use the attribute's constructor or have an event handler for object instantiation.
public class MySuperAttribute : Attribute
{
    public MySuperAttribute()
    {
        // Something akin to this or the event in Global
        // (TheTargetObject does not actually exist - an attribute does not know its target)
        Global.AddToList(this.TheTargetObject);
    }
}

[MySuperAttribute]
public class MyLabel : System.Windows.Forms.Label
{
}

public static class Global
{
    public static void AddToList(Object obj)
    {
        // Add the object to a list
    }

    // Some pseudo-hook into the instantiation of any object from the assembly
    private static void Assembly_ObjectInstantiated(Object obj)
    {
        if (obj.GetType().GetCustomAttributes(typeof(MySuperAttribute), true).Length != 0)
            AddToList(obj);
    }
}
There is no easy way to hook object instantiation externally, except perhaps with some debugging API, and that is for a good reason: it would make your code harder to maintain and understand for other people.
Attributes won't work, because an instance of an attribute is not actually created until it is requested via reflection, and an attribute is attached to a type, not to an instance.
But you may well put the code in a base class and derive all the other classes from it, although it is also not good practice to pass a half-initialized instance to other methods. If the class inherits from ContextBoundObject, you can assign a custom implementation of ProxyAttribute to it and override all operations on it.
If you can't create a common base class (when your types inherit from different types), you can always create the instance with a custom method like this one:
public static T Create<T>() where T : new()
{
    var inst = new T();
    Global.AddToList(inst);
    return inst;
}
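Usage would then look like this, assuming MyLabel from the question has an accessible parameterless constructor:

var label = Create<MyLabel>(); // registered in Global's list as a side effect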
However, seeing as you inherit from form controls, their instantiation is probably controlled by the designer. I am afraid there is no perfect solution in this case.
I have a situation where I create an object called EntryEvent from data I receive. That data has to be parsed. The base event is supposed to kick off parsing of the data that was received through the constructor and given to the object. The subtype knows how to parse that specific data set. Now, when compiling said code, I get warning CA2214 telling me that the constructor contains a call chain to a virtual method. While it may be bad to have unforeseen consequences, I do not know how else to get the required behavior: parse the received event without having to call an additional "Parse" method from the outside.
The code in question is:
public abstract class BaseEvent
{
    protected BaseEvent(object stuff)
    {
        this.ParseEvent();
    }

    protected abstract void ParseEvent();
}

public class EntryEvent : BaseEvent
{
    public EntryEvent(object stuff)
        : base(stuff)
    {
    }

    protected override void ParseEvent()
    {
        // Parse event
    }
}
According to MSDN (emphasis is mine):
When a virtual method is called, the actual type that executes the method is not selected until run time. When a constructor calls a virtual method, it is possible that the constructor for the instance that invokes the method has not executed.
So in my opinion you have these options (at least):
1) Do not disable the warning globally, but suppress it for your specific class(es), documenting the intended behavior (assuming you take extra care to deal with this scenario). That is not so bad if it is limited to a few classes in a very controlled environment; after all, warnings are not errors, and they may be ignored.
2) Remove the virtual method call from the base class constructor but leave the abstract method declaration there. Developers will have to implement the method and call it in their own constructors, and to do that safely they will need to mark their classes as sealed. Finally, add to the class/method documentation that the method must be called inside the derived class constructor and that the class must be sealed to do so.
They can forget that call, but you may add (for DEBUG builds) a check when properties or methods are accessed, for example by forcing, as part of the class interface, a specific flag to be set. If they forget to set the flag, or they forget to call the method, an exception will be raised ("This object has not been built, ParseEvent() must be called in the derived class constructor.").
I don't like this method very much because it adds extra complexity, but if your class hierarchy is too big (so you feel you can't use #1), or lazy initialization (described in #3) is not applicable, then it may be a working solution. I'd also consider changing the design to introduce a factory method that invokes ParseEvent() on each fully constructed object.
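A rough sketch of such a debug-only guard (the flag and helper names are illustrative):

public abstract class BaseEvent
{
    private bool _parsed; // must be set by the derived class constructor via ParseEvent()

    protected BaseEvent(object stuff)
    {
    }

    // derived classes call this at the end of their ParseEvent() implementation
    protected void MarkParsed()
    {
        _parsed = true;
    }

    // every property or method that needs parsed data calls this first
    protected void EnsureParsed()
    {
#if DEBUG
        if (!_parsed)
            throw new InvalidOperationException(
                "This object has not been built, ParseEvent() must be called in the derived class constructor.");
#endif
    }

    protected abstract void ParseEvent();
}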
3) Change your design a little bit: defer parsing to when it's needed. For example:
public abstract class BaseEvent
{
    public DateTime TimeStamp
    {
        get
        {
            if (_timestamp == null)
                ParseEvent();
            return _timestamp.Value;
        }
        protected set { _timestamp = value; }
    }

    protected BaseEvent(object stuff)
    {
    }

    protected abstract void ParseEvent();

    private DateTime? _timestamp;
}
The last example is for illustration purposes only; you may want to use Lazy<T> to do the same task in a more concise, clear and thread-safe way. Of course, in reality you'll have more fields/properties, and parsing will probably provide all the values in one shot (then you just need a flag; there is no need for a Nullable/special value on each field). This is the approach I'd prefer, even if it's more verbose.
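For illustration, a sketch of the same class using Lazy<T>, assuming for simplicity that parsing yields a single timestamp:

public abstract class BaseEvent
{
    private readonly Lazy<DateTime> _timestamp;

    protected BaseEvent(object stuff)
    {
        // the virtual call is only captured here, not executed; it runs
        // on first access to TimeStamp, after construction is complete
        _timestamp = new Lazy<DateTime>(() => ParseTimeStamp(stuff));
    }

    public DateTime TimeStamp
    {
        get { return _timestamp.Value; }
    }

    protected abstract DateTime ParseTimeStamp(object stuff);
}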
This is a bit of an odd OOP question. I want to create a set of objects (known at design time) that each have certain functions associated with them.
I can either do this by giving my objects properties that can contain 'delegates':
public class StateTransition {
    public Func<bool> Condition { get; set; }
    public Action ActionToTake { get; set; }
    public Func<bool> VerifyActionWorked { get; set; }
}

StateTransition foo = new StateTransition {
    Condition = () => { /* ... */ return true; },
    // etc
};
Alternatively I can use an abstract class and implement this for each object I want to create:
public abstract class StateTransition {
    public abstract bool Condition();
    public abstract void ActionToTake();
    public abstract bool VerifyActionWorked();
}

class Foo : StateTransition {
    public override bool Condition() { /* ... */ }
    // etc
}
Foo f = new Foo();
I realise the practical consequences (creating at design time vs run time) of these two methods are quite different.
How can I choose which method is appropriate for my application?
The first approach looks more suited to events than raw delegates, but... whatever.
The key factor between them is: who controls what happens?
If the caller can legitimately do anything there, then the event approach would be fine. The system doesn't force you to subclass a Button just to add what happens when you click it, after all (although you can do it that way).
If the "things that can happen" are pretty controlled, and you wouldn't want every caller doing different things, then a sub-class approach is more suitable. This also avoids the need for every caller to have to tell it what to do, when the "things to do" might actually be a very small number of options. The base-type approach also gives the ability to control the subclasses, for example by only having an internal contructor on the base-class (so that only types in the same assembly, or in assemblies noted via [InternalsVisibleTo(...)], can subclass it).
You could also combine the two (override vs event) via:
public class StateTransition {
    public event Func<bool> Condition;
    protected virtual bool OnCondition() {
        var handler = Condition;
        return handler == null ? false : handler();
    }

    public event Action ActionToTake;
    protected virtual void OnActionToTake() {
        var handler = ActionToTake;
        if (handler != null) handler();
    }

    public event Func<bool> VerifyActionWorked;
    protected virtual bool OnVerifyActionWorked() {
        var handler = VerifyActionWorked;
        return handler == null ? true : handler();
    }

    // TODO: think about default return values
}
Another thing to consider with the delegate/event approach is: what do you do if a delegate is null? If you need all 3, then demanding all 3 in a constructor would be a good idea.
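For example, a constructor that demands all three delegates might look like this sketch:

public class StateTransition {
    private readonly Func<bool> condition;
    private readonly Action actionToTake;
    private readonly Func<bool> verifyActionWorked;

    public StateTransition(Func<bool> condition, Action actionToTake, Func<bool> verifyActionWorked) {
        if (condition == null) throw new ArgumentNullException("condition");
        if (actionToTake == null) throw new ArgumentNullException("actionToTake");
        if (verifyActionWorked == null) throw new ArgumentNullException("verifyActionWorked");
        this.condition = condition;
        this.actionToTake = actionToTake;
        this.verifyActionWorked = verifyActionWorked;
    }
}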
The delegate solution would be useful if you:
want to create objects dynamically, i.e. choose the implementation of each method depending on some condition.
want to change the implementation during the lifetime of the object.
For other cases I would recommend the object oriented approach.
How can I choose which method is appropriate for my application?
Does your application require you to define new transition objects that have additional, different properties or methods? Then making new subclasses and overriding methods ("Polymorphism") is better.
Or.
Does your application require transition objects that only change method behaviour? Then method delegates or events are better.
Summary
Overriding Methods ("Polymorphism") is better when your application requires to add different features, like properties or methods, for different subclasses, not just changing the implementation of methods.
Provided that this is more an opinion than a rock-solid answer...
I think that the two would be more or less equivalent if you had just one delegate or abstract/virtual method. After all, you can think of a delegate as a handy shortcut to avoid creating and implementing an interface for just one method.
In this case, where you have three methods, the base class approach would be the most practical.
To make the two things completely equivalent, you could use a non-abstract base class with empty virtual methods, so that a derived class not overriding a method is the same as a null delegate property.
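A sketch of that equivalence (the default return values here are arbitrary):

public class StateTransition {
    public virtual bool Condition() { return true; }          // like a null Condition delegate
    public virtual void ActionToTake() { }                    // like a null Action delegate
    public virtual bool VerifyActionWorked() { return true; } // like a null VerifyActionWorked delegate
}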
If you are saying that your objects are known at design time, then it sounds like you do not need to alter the behavior of objects dynamically (at run time). So I think there is no reason for you to use the 'delegate' approach.
Solution 1 has more moving parts, which allows for finer-grained separation of concerns. You could have one object decide what the Condition for a given StateTransition should be, another object define the ActionToTake, and so on. Or you could have one object decide them all, but based on different criteria. Not the most useful approach most of the time IMO - especially considering the slight additional complexity cost.
In solution 2, each StateTransition derivative is a cohesive whole, the way it checks the condition cannot be separated from the way it does the action or verifies it.
Both solutions can be used to accomplish Inversion of Control - in other words, they both allow you to say that StateTransition's direct consumer will not control which flavor of StateTransition it is going to use but that the decision will instead be delegated to an external object.
How can I choose which method is appropriate for my application?
The method with delegates makes the code more functional, because it moves from classic Object Oriented Programming techniques like inheritance and polymorphism to Functional Programming techniques like passing functions and using closures.
I tend to use the method with delegates everywhere I can, because:
I prefer composition over inheritance
I find the method with delegates requires less code to write
For example, a concrete StateTransition instance can be created in 5 lines of code from delegates and closures using the standard .NET object initializer syntax:
Dim PizzaTransition As New StateTransition With {
    .Condition = Function() Pizza.Baked,
    .ActionToTake = Sub() Chef.Move(Pizza, Plate),
    .VerifyActionWorked = Function() Plate.Contains(Pizza)
}
I also find it easy to build a Fluent API around such a class, with a set of additional methods implemented as extension methods or inside the class.
For example, if methods Create, When, Do and Verify are added to the StateTransition class:
Public Class StateTransition
    Public Property Condition As Func(Of Boolean)
    Public Property ActionToTake As Action
    Public Property VerifyActionWorked As Func(Of Boolean)

    Public Shared Function Create() As StateTransition
        Return New StateTransition
    End Function

    ' When and Do are reserved words in VB, so they are escaped with brackets
    Public Function [When](Condition As Func(Of Boolean)) As StateTransition
        Me.Condition = Condition
        Return Me
    End Function

    Public Function [Do](Action As Action) As StateTransition
        Me.ActionToTake = Action
        Return Me
    End Function

    Public Function Verify(Check As Func(Of Boolean)) As StateTransition
        Me.VerifyActionWorked = Check
        Return Me
    End Function
End Class
Then method chaining can also be used to create a concrete StateTransition instance:
Dim PizzaTransition = StateTransition.Create.
    [When](Function() Pizza.Baked).
    [Do](Sub() Chef.Move(Pizza, Plate)).
    Verify(Function() Plate.Contains(Pizza))
Is it possible to define an Interface with optional implementation methods? For example I have the following interface definition as IDataReader in my core library:
public interface IDataReader<T> {
    void StartRead(T data);
    void Stop();
}
However, in my current implementations, the Stop() method has never been used or implemented. In all my implementation classes, this method has to be implemented with throw new NotImplementedException() as the default:
class MyDataReader : IDataReader<MyData> {
    ...
    public void Stop()
    {
        // this no-op implementation looks like unfinished code
        throw new NotImplementedException();
    }
}
Of course, I can remove the exception-throwing code and leave the method empty.
When I designed this data reader interface, I thought it should provide a way to stop the reading process. Maybe we will use Stop() sometime in the future.
Anyway, I am not sure if it is possible to make this Stop() method an optional implementation method. The only way I can think of is to define two interfaces, one with Stop() and one without, such as IDataReader and IDataReader2. Another option is to break this one into two interfaces like this:
interface IDataReader<T> {
    void StartRead(T data);
}

interface IStop {
    void Stop();
}
Then, at the call sites, I have to cast or use as IStop to check whether my implementation supports the Stop() method:
reader.StartRead(myData);
....
// somewhere, when I need to stop the reader
IStop stoppable = reader as IStop;
if (stoppable != null) stoppable.Stop();
...
Still, I have to write that code. Any suggestions? I am not sure if there is any way to define optional implementation methods in an interface in .NET or C#.
Interesting. I'll have to quote you here:
"However, in my current implementations, the Stop() method has never been used or implemented. In all my implementation classes, this method has to be implemented with throw new NotImplementedException() as the default."
If this is the case, then you have two options:
Remove the Stop() method from the interface. If it isn't used by every implementor of the interface, it clearly does not belong there.
Instead of an interface, convert it to an abstract base class with an empty virtual Stop() method. This way there is no need to override Stop() until you actually need to.
Update: The only way I think methods can be made optional is to assign a method to a variable (of a delegate type matching the method's signature) and then check whether the delegate is null before attempting to call it.
This is usually done for event handlers, wherein the handler may or may not be present, and can be considered optional.
For info, another approach fairly common in the BCL is a Supports* member on the same interface, i.e.
bool SupportsStop {get;}
void Stop();
(there are examples of this in IBindingList, for instance).
I'm not pretending that it is "pure" or anything, but it works - though it means you now have two members to implement per feature, not one. Separate interfaces (IStoppableReader, for example) may be preferable.
For info, if the behavior is common to all implementations, then you can use extension methods; for a trivial example:
public static void AddRange<T>(this IList<T> list, IEnumerable<T> items) {
    foreach (T item in items) list.Add(item);
}
(or the equivalent for your interface). If you provide a more specialized version against the concrete type, it will take precedence (but only if the caller knows the variable as the concrete type, not the interface). So with the above, anyone knowingly using a List<T> still uses List<T>'s own AddRange; but if they have a List<T> and only know about it as IList<T>, the extension method is used.
If the method is inappropriate for your implementation, throw InvalidOperationException just like most iterators do when you call Reset on them. An alternative is NotSupportedException which tends to be used by System.IO. The latter is more logical (as it has nothing to do with the current state of the object, just its concrete type) but the former is more commonly used in my experience.
However, it's best to only put things into an interface when you actually need them - if you're still in a position where you can remove Stop, I would do so if I were you.
There's no unified support for optional interface members in the language or the CLR.
If no classes in your code actually implement Stop(), and you don't have definite plans to do so in the future, then you don't need it in your interface. Otherwise, if some but not all of your objects are "stoppable", then the correct approach is indeed to make it a separate interface such as IStoppable, and the clients should then query for it as needed.
If your implementation does not implement the interface method Stop, then it obviously breaks the contract that comes with your interface. Either implement the Stop method appropriately (not by throwing an exception and not by leaving it empty) or redesign your interface (that is, change the contract).
Best Regards
C# version 4 (or vNext) is considering default implementations for interfaces - I heard that on Channel 9 a few months ago ;).
Interfaces with default implementations would behave somewhat like abstract base classes. Given that you can inherit multiple interfaces, this could mean that C# might get multiple inheritance in the form of interfaces with default implementations.
Until then you might get away with extension methods...
Or your type could make use of delegates:
interface IOptionalStop
{
    Action Stop { get; }
}

public class WithStop : IOptionalStop
{
    #region IOptionalStop Members

    public Action Stop
    {
        get;
        private set;
    }

    #endregion

    public WithStop()
    {
        this.Stop =
            delegate
            {
                // we are going to stop, honest!
            };
    }
}

public class WithoutStop : IOptionalStop
{
    #region IOptionalStop Members

    public Action Stop
    {
        get;
        private set;
    }

    #endregion
}

public class Program
{
    public static string Text { get; set; }

    public static void Main(string[] args)
    {
        var a = new WithStop();
        a.Stop();

        var o = new WithoutStop();
        // Stop is null and we cannot actually call it
        o.Stop(); // throws NullReferenceException
    }
}
}