C# base misuse?

I quite often find this kind of code in my company...
class Base
{
    public int Property
    {
        get; set;
    }
}

class Derived : Base
{
    public Derived()
    {
        base.Property = 0xAFFE;
    }
}
And I often argue that this kind of use of base is "wrong".
I argue that "this.Property" would be "correct" (or simply "Property = 0xAFFE;").
I argue that one could later refactor (making Property virtual and overriding it).
But my arguments don't seem to convince anyone. Can you help with arguments? Or am I (completely) wrong?
Thanks.

I think that if Property in your example is not virtual, it doesn't matter whether you use base or this.
If it is virtual, though, and overridden in a derived class, you'll see differences in behavior.
I personally tend to never use base or this when setting properties like this (plain Property = ... is the same as specifying this). Only in specific situations (like overriding a virtual method and calling the base implementation) do I use those keywords.
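To illustrate the difference described above, here is a minimal sketch reusing the Base/Derived names from the question (the backing field and the doubling behavior are made up for illustration): once Property is virtual and overridden, base.Property skips the override while this.Property dispatches to it.
class Base
{
    public virtual int Property { get; set; }
}

class Derived : Base
{
    private int _backing;

    // The override changes what "Property" means for Derived instances.
    public override int Property
    {
        get { return _backing; }
        set { _backing = value * 2; }   // arbitrary change in behavior
    }

    public Derived()
    {
        base.Property = 1;   // calls Base's setter, bypassing the override
        this.Property = 1;   // calls the override above; _backing becomes 2
    }
}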

So, we have at least three ways to say the same thing:
Property = 0xAFFE;
this.Property = 0xAFFE;
base.Property = 0xAFFE;
And, as usual, there are minor aspects that can tip the decision about whether such a line is correct or wrong.
The first version is the most relaxed: Property could be a real property, a local variable declared above, or a parameter passed in from an outer scope. Moreover, in some cases you can write Property = Property and the compiler is smart enough to understand what you mean (bad practice anyway).
The second version makes an assumption: Property belongs to this class, period. Although some code analyzers will raise a hint that the "syntax can be simplified", this is a perfectly normal way of doing things.
The third one makes an even tighter assumption: Property is inherited from an ancestor. So even if you override it locally, that syntax guarantees the assignment goes through the base class's implementation.
To be honest, in most cases the last two forms are interchangeable. Do you really see a point in enforcing yours?

Should I make the method virtual or abstract?

I have an abstract class that does its own internal validation. It has another method that allows subclasses to do additional validation checks. Currently, I've made the method abstract.
protected abstract bool ValidateFurther();
However, I'm seeing quite a number of subclasses being forced to override it just to return true. I'm considering making the method virtual instead.
protected virtual bool ValidateFurther() => true;
Is it bad to assume in the abstract class that validation is going to pass? I'm worried that subclass authors may not notice the method and end up not overriding it even when they need to. Which is the more suitable approach here?
You could add another layer into your design.
public abstract class Base
{
    protected abstract bool ValidateFurther();
}

public abstract class BaseWithValidation : Base
{
    protected override bool ValidateFurther() => true;
}
If a significant subset of your derived classes should just return true, derive them from BaseWithValidation to avoid repeating the code everywhere; derive everything else from Base.
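A brief usage sketch of that layered design (the concrete class names and the Amount check are made up): classes with no extra checks inherit the default, while classes that do validate are still forced by the compiler to implement the method.
// Inherits the "always valid" default; no boilerplate override needed.
public class SimpleRecord : BaseWithValidation
{
}

// Derives from Base directly, so the compiler forces a real implementation.
public class PaymentRecord : Base
{
    public decimal Amount { get; set; }

    protected override bool ValidateFurther() => Amount > 0;
}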
An abstract method means you only want to define the behavior to be followed and let subclasses supply the implementation.
A virtual method means you define a method with an initial implementation, but allow it to be overridden.
So maybe you can explain your context more, and then we can discuss it!
It's okay to make the method virtual and define a default implementation returning true.
protected virtual bool ValidateFurther() => true;
The difference between abstract and virtual is that an abstract method defines only the signature of a method and the implementation is left to the derived classes (similar to an interface). All child classes are required to implement the logic of an abstract method. See the documentation for more details.
A virtual method, on the other hand, requires you to implement the logic, and if needed you can override that method to add to or extend the logic. Because there is default logic, your derived classes are not required to override it. See the documentation for more details.
Basically, it is fine to implement the method as virtual and return true by default.
FYI: from C# 8 you can also have default implementations in interfaces.
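A minimal sketch of that C# 8 feature (the interface and member names here are made up for illustration): the interface supplies a default body, so implementers only provide the member when they actually need different behavior.
public interface IValidatable
{
    // C# 8 default interface implementation: implementers may omit this.
    bool ValidateFurther() => true;
}

public class Order : IValidatable
{
    // Uses the default ValidateFurther from the interface.
}

public class Invoice : IValidatable
{
    public decimal Total { get; set; }

    // Provides its own implementation instead of the default.
    public bool ValidateFurther() => Total >= 0;
}
Note that the default body is only reachable through the interface type, e.g. ((IValidatable)new Order()).ValidateFurther().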
The short answer
If this class's (and all of its derived classes') purpose does not always necessitate validation, then you should go with virtual; otherwise abstract.
In other words, is validation a cornerstone of this class's purpose? (yes = abstract, no = virtual)
I suspect that virtual is the better approach here, but not for the reason you're thinking it is. The rest of this answer elaborates on why your reasoning isn't the deciding factor here, and what actually is the deciding factor.
Your reasoning
I'm seeing quite a number of subclasses being forced to override it just to return true.
I suspect you're succumbing to the programmer's reflex: "I see this repeated and must write code to avoid this repetition!"
While that is generally a good reflex, it can be misapplied when you start applying it to things that merely happen to look the same rather than things that express the same functional purpose.
The example I tend to use to address that point is the following:
public class Book
{
    public string Title { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class EmployeeJob
{
    public string Title { get; set; }
    public DateTime CreatedOn { get; set; }
}
There is definitely value to abstracting the CreatedOn property, as these entities are both audited data entities. The CreatedOn property is part of that audited entity, and its existence in both Book and EmployeeJob stems from these classes both being audited entities.
If a change is made to audited entities (e.g. they no longer track creation date), then that change needs to automatically persist to all audited entities. When you use shared logic, that automatically happens.
But does Title need to be abstracted into shared logic? No. There is no functional overlap here. Yes, these properties have the same name and type, but they share no common logic whatsoever. They just happen to be equal to each other right now, but they are not tied to one another.
If a change is made to one Title property (e.g. it now becomes a Guid FK to a table of job titles), that change doesn't automatically reflect on the other (e.g. a book title would still just be a string). Implementing these Title properties using shared logic would actually cause a problem down the line instead of solving one.
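As a sketch of what abstracting only the shared audit data might look like (the AuditedEntity name is made up for illustration): CreatedOn moves into a common base because both classes are audited entities, while Title stays on each class because the overlap is coincidental.
using System;

public abstract class AuditedEntity
{
    // Shared audit data: a change here (e.g. adding CreatedBy)
    // automatically applies to every audited entity.
    public DateTime CreatedOn { get; set; }
}

public class Book : AuditedEntity
{
    public string Title { get; set; }   // a book title: stays a string
}

public class EmployeeJob : AuditedEntity
{
    public string Title { get; set; }   // a job title: free to change shape later
}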
In short: sometimes programmers seek more patterns than they need. Or if you allow me to quote Jurassic Park...
The deciding factor
I'm considering to make the method virtual.
Whether you make it abstract or virtual depends on one specific consideration (not DRY, as addressed above): do you wish to provide a default implementation, or would you prefer to force every consumer (i.e. every derived class) to decide on the implementation of this method for themselves?
Neither of these is objectively better than the other; it's a matter of which fits your current scenario best.
I'm seeing quite a number of subclasses being forced to override it just to return true.
I infer from this that you're essentially skipping validation in these classes, so in this case I would opt for the virtual approach, since this class's (and all of its derived classes') purpose does not always necessitate validation (again, that is my interpretation based on your explanation).
In other words, is validation a cornerstone of this class's purpose? (yes = abstract, no = virtual). You didn't specify your class or its purpose, so I can't make that final call.

Why "New" Keyword in C# [duplicate]

I was looking at this blog post and had the following questions:
Why do we need the new keyword? Is it just to specify that a base class method is being hidden? I mean, why do we need it at all? If we don't use the override keyword, aren't we already hiding the base class method?
Why is the default in C# to hide and not override? Why have the designers implemented it this way?
Good questions. Let me re-state them.
Why is it legal to hide a method with another method at all?
Let me answer that question with an example. You have an interface from CLR v1:
interface IEnumerable
{
    IEnumerator GetEnumerator();
}
Super. Now in CLR v2 you have generics and you think "man, if only we'd had generics in v1 I would have made this a generic interface. But I didn't. I should make something compatible with it now that is generic so that I get the benefits of generics without losing backwards compatibility with code that expects IEnumerable."
interface IEnumerable<T> : IEnumerable
{
    IEnumerator<T> .... uh oh
What are you going to call the GetEnumerator method of IEnumerable<T>? Remember, you want it to hide GetEnumerator on the non-generic base interface. You never want that thing to be called unless you're explicitly in a backwards-compat situation.
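This is roughly how the real interface resolves it (a simplified sketch, with the variance annotation and other details omitted): the generic GetEnumerator is marked new so that it deliberately hides the non-generic one.
interface IEnumerable<T> : IEnumerable
{
    // Hides IEnumerable.GetEnumerator; callers working with IEnumerable<T>
    // get the strongly typed enumerator by default.
    new IEnumerator<T> GetEnumerator();
}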
That alone justifies method hiding. For more thoughts on justifications of method hiding see my article on the subject.
Why does hiding without "new" cause a warning?
Because we want to bring it to your attention that you are hiding something and might be doing it accidentally. Remember, you might be hiding something accidentally because of an edit to the base class done by someone else, rather than by you editing your derived class.
Why is hiding without "new" a warning rather than an error?
Same reason. You might be hiding something accidentally because you've just picked up a new version of a base class. This happens all the time. FooCorp makes a base class B. BarCorp makes a derived class D with a method Bar, because their customers like that method. FooCorp sees that and says hey, that's a good idea, we can put that functionality on the base class. They do so and ship a new version of Foo.DLL, and when BarCorp picks up the new version, it would be nice if they were told that their method now hides the base class method.
We want that situation to be a warning and not an error because making it an error means that this is another form of the brittle base class problem. C# has been carefully designed so that when someone makes a change to a base class, the effects on code that uses a derived class are minimized.
Why is hiding and not overriding the default?
Because virtual override is dangerous. Virtual override allows derived classes to change the behaviour of code that was compiled to use base classes. Doing something dangerous like making an override should be something you do consciously and deliberately, not by accident.
If the method in the derived class is preceded with the new keyword, the method is defined as being independent of the method in the base class.
However, if you specify neither new nor override, the resulting behavior is the same as if you had specified new, but you will get a compiler warning (because you may not be aware that you are hiding a method of the base class, or indeed you may have wanted to override it and merely forgot to include the keyword).
So it helps you avoid mistakes and state explicitly what you intend to do, which makes the code more readable and easier to understand.
It is worth noting that the only effect of new in this context is to suppress a warning. There is no change in semantics.
So one answer is: We need new to signal to the compiler that the hiding is intentional and to get rid of the warning.
The follow-up question is: if you won't or can't override a method, why would you introduce another method with the same name? Hiding is in essence a name conflict, and you would of course avoid it in most cases.
The only good reason I can think of for intentional hiding is when a name is forced upon you by an interface.
In C#, members are non-virtual by default, meaning that you cannot override them (unless they are marked with the virtual or abstract keywords), partly for performance reasons. The new modifier is used to explicitly hide an inherited member.
If overriding were the default, without having to specify the override keyword, you could accidentally override some method of your base class just because the names happen to match.
The C# compiler's strategy is to emit warnings if something could go wrong, just to be safe; so if overriding were the default, there would have to be a warning for each overridden method - something like 'warning: check whether you really want to override'.
My guess is that it is mainly due to multiple interface inheritance. With discrete interfaces it is quite possible that two distinct interfaces use the same method name. Allowing the new keyword lets you provide these different implementations in one class, instead of having to create two distinct classes.
Updated ... Eric gave me an idea on how to improve this example.
public interface IAction1
{
    int DoWork();
}

public interface IAction2
{
    string DoWork();
}

public class MyBase : IAction1
{
    public int DoWork() { return 0; }
}

public class MyClass : MyBase, IAction2
{
    public new string DoWork() { return "Hi"; }
}

class Program
{
    static void Main(string[] args)
    {
        var myClass = new MyClass();
        var ret0 = myClass.DoWork();              // Hi
        var ret1 = ((IAction1)myClass).DoWork();  // 0
        var ret2 = ((IAction2)myClass).DoWork();  // Hi
        var ret3 = ((MyBase)myClass).DoWork();    // 0
        var ret4 = ((MyClass)myClass).DoWork();   // Hi
    }
}
As noted, method/property hiding makes it possible to change things about a method or property that could not readily be changed otherwise. One situation where this can be useful is allowing an inherited class to have read-write properties which are read-only in the base class. For example, suppose a base class has a bunch of read-only properties called Value1-Value40 (of course, a real class would use better names). A sealed descendant of this class has a constructor that takes an object of the base class and copies the values from there; the class does not allow them to be changed after that. A different, inheritable descendant declares read-write properties called Value1-Value40 which, when read, behave the same as the base class versions but, when written, allow the values to be changed. The net effect is that code which wants an instance of the base class that it knows will never change can create a new object of the read-only class, which can copy data from a passed-in object without having to worry whether that object is read-only or read-write.
One annoyance with this approach--perhaps someone can help me out--is that I don't know of a way to both shadow and override a particular property within the same class. Do any of the CLR languages allow that (I use vb 2005)? It would be useful if the base class object and its properties could be abstract, but that would require an intermediate class to override the Value1 to Value40 properties before a descendant class could shadow them.
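A minimal sketch of the hiding-to-widen-access idea described above, trimmed to a single Value1 property (the class names are made up): the derived class hides the read-only property with a new read-write one that stores into the same protected field.
public class ReadOnlyValues
{
    protected int _value1;

    protected ReadOnlyValues() { }
    public ReadOnlyValues(ReadOnlyValues source) { _value1 = source._value1; }

    public int Value1 { get { return _value1; } }
}

public class MutableValues : ReadOnlyValues
{
    // Hides the base read-only property; reads behave the same,
    // but writes are now allowed.
    public new int Value1
    {
        get { return _value1; }
        set { _value1 = value; }
    }
}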

How do I block the new modifier?

I have a property in a base class that I don't want overridden for any reason. It assigns an ID to the class for use with a ThreadQueue I created. I see no reason whatsoever for anyone to override it. I was wondering how I can block anyone from attempting to override it short of them changing its modifier.
private int _threadHostID = 0;

public int ThreadHostID
{
    get
    {
        if (_threadHostID == 0)
        {
            _threadHostID = ThreadQueue.RequestHostID();
        }
        return _threadHostID;
    }
}
Edit: totally forgot the language: C#.
Edit2: It is not virtual or overriding anything else so please no sealed.
First off: "Overriding" refers to virtual overriding. You are talking about creating hiding methods, not overriding methods.
I have a property in a base class that I don't want hidden
You are free to want that, but you are going to have to learn to live with the disappointment of not getting what you want.
I see no reason whatsoever for anyone to hide it.
Then there won't be a problem, will there? If no one could possibly want to hide it, then they won't hide it. You're basically saying "I have an object of no value to anyone; how do I keep someone from stealing it?" Well, if it is of no value, then no one is going to want to steal it, so why would you spend money on a safe to protect something that no one wants to steal in the first place?
If there is no reason for someone to hide or override your method then no one will. If there is a reason for someone to hide or override your method, then who are you to tell them not to? You are providing a base class; you are the servant of the derived class author, not their master.
Now, sometimes being a good servant means building something that resists misuse, is robust, and reasonably priced. I encourage people to build sealed classes, for example. Designing secure, robust, inheritable classes that meet the real needs of inheritors is expensive and difficult.
But if you are going to create a robust unsealed base class designed for inheritance, why try to stop the derived class author from hiding, if they have a reason to do so? It cannot possibly hurt the base class. The only people it could hurt are the users of the derived class, and those people are the derived class author's problem, not yours.
There is no way to stop member hiding. If you don't make the member virtual or abstract, then a derived class cannot override it properly anyway; hiding isn't polymorphic.
If a derived class hides it using the new modifier, then its author is opening up problems for themselves, because any code that uses a reference to the base class will never touch the derived member. So basically, all code that utilises the "base class"-ness of the type hierarchy bypasses member hiding anyway.
The sealed keyword on a member only works when a derived type overrides a base member and doesn't want it to be overridden further... I'm not sure how it interacts with the new modifier, though. Most likely the member hiding will still be allowed, but it will still have the same static-type problem.
Your job is done by not making the member virtual or abstract; if someone wants to hide members, they are responsible for anything that breaks because they decided to abuse the design.
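A small sketch of the point about base-class references bypassing hidden members, loosely reusing the ThreadHostID property from the question (the Derived class and the constant values are made up):
public class ThreadHost
{
    public int ThreadHostID { get { return 42; } }   // simplified stand-in
}

public class Derived : ThreadHost
{
    public new int ThreadHostID { get { return -1; } }   // the hiding member
}

class Demo
{
    static void Main()
    {
        Derived d = new Derived();
        ThreadHost h = d;

        System.Console.WriteLine(d.ThreadHostID);   // -1: hiding member, via the derived type
        System.Console.WriteLine(h.ThreadHostID);   // 42: base member, via the base-class reference
    }
}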
I think you should not worry about this. If you don't mark it virtual, you are making it clear that it is not intended to be redefined, and in fact a derived class that hides it (without the "new" modifier) gets a warning:
Warning: [...] hides inherited member [...].
Use the new keyword if hiding was intended
If you have this fear, you should worry about any method you write in a non-sealed class. So your job is just to make sure that the design of your class is consistent and clear; anyone who wants to inherit from it should know better than to go and redefine non-virtual properties/methods. You cannot completely shield yourself from other people's stupidity :).
As far as I can tell, you apparently can't do that on a property level. However, if you seal the class:
public class Base
{
    public int ID { get; set; }
}

public sealed class Child : Base
{
    /// blah
}
then ...
public class Grandchild : Child
{
    public int ID { get; set; }
}
will produce a compiler error on the class definition, so using new doesn't even come into play.
Not an exact solution to your problem, but it does keep others from extending or interfering with your API.
Does it actually matter if someone does put a 'new' implementation in? I'm assuming you will always be referring to the base class in any code that uses that property, since that is where it is declared; and since it's not virtual or an override, calls through the base class will never dispatch to a 'new' implementation anyway.

Why not make everything 'virtual'? [duplicate]

Possible Duplicate:
Why C# implements methods as non-virtual by default?
I'm speaking primarily about C# and .NET 3.5, but I wonder in general what the benefits are of not treating everything as "virtual" - which is to say that a method called on an instance of a child class always executes the child-most version of that method. In C#, this is not the case if the parent method is not marked with the "virtual" modifier. Example:
public class Parent
{
    public void NonVirtual() { Console.WriteLine("Non-Virtual Parent"); }
    public virtual void Virtual() { Console.WriteLine("Virtual Parent"); }
}

public class Child : Parent
{
    public new void NonVirtual() { Console.WriteLine("Non-Virtual Child"); }
    public override void Virtual() { Console.WriteLine("Virtual Child"); }
}

public class Program
{
    public static void Main(string[] args)
    {
        Child child = new Child();
        Parent parent = new Child();
        var anon = new Child();

        child.NonVirtual();             // => Child
        parent.NonVirtual();            // => Parent
        anon.NonVirtual();              // => Child
        ((Parent)child).NonVirtual();   // => Parent

        child.Virtual();                // => Child
        parent.Virtual();               // => Child
        anon.Virtual();                 // => Child
        ((Parent)child).Virtual();      // => Child
    }
}
What exactly are the benefits of the non-virtual behavior observed above? The only thing I could think of was "What if the author of Parent doesn't want his method to be virtual?", but then I realized I couldn't think of a good use case for that. One might argue that the behavior of the class depends on how a non-virtual method operates - but then it seems to me that there is some poor encapsulation going on, or that the method should be sealed.
Along the same lines, it seems like 'hiding' is normally a bad idea. After all, if a Child class and its methods were created, it was presumably done for a specific reason: to specialize the Parent. And if Child implements (and hides the parent's) NonVirtual(), it is very easy not to get what many might consider the "expected" behavior of calling Child::NonVirtual(). (I say "expected" because it is sometimes easy not to notice that 'hiding' is happening.)
So, what are the benefits of not allowing everything to have "virtual" behavior? What is a good use-case for hiding a non-virtual parent if it's so easy to get unexpected behavior?
If anyone is curious as to why I pose this question: I was recently examining the Castle Project's DynamicProxy library. The one main hurdle in using it is that any method (or property) you want to proxy has to be virtual, and that isn't always an option for developers (if we don't have control over the source). Not to mention that the purpose of DynamicProxy is to avoid coupling between your proxied class and whatever behavior you are trying to achieve with the proxy (such as logging, or perhaps a memoization implementation). Yet by requiring virtual methods to accomplish this, what you get instead is a thin but obscure coupling of DynamicProxy to all the classes it is proxying. Imagine you have a ton of methods labeled virtual even though they are never inherited or overridden; any other developer looking at the code might wonder, "why are these even virtual? Let's change them back."
Anyway, the frustration there led me to wonder what the benefits of non-virtual are, when it seems that having everything virtual might have been clearer (IMO, I suppose) and perhaps(?) more beneficial.
EDIT: Labeling as community wiki, since it seems like a question that might have subjective answers
Because you don't want people overriding methods that you haven't designed the class for. It takes a significant effort to make sure it is safe to override a method or even derive from a class. It's much safer to make it non-virtual if you haven't considered what might happen.
Eric Lippert covers this here, on method hiding
In many cases, it is crucial for a class to function properly that a given method has a specific behavior. If the method is overridden in an inherited class, there is no guarantee that the override will correctly implement the expected behavior. You should only mark a method virtual if your class is specifically designed for inheritance and will support a method with a different implementation. Designing for inheritance is not easy; there are many cases where incorrectly overriding a method will break the class's internal behavior.
Simple: the entire point of a class is to encapsulate some kind of abstraction. For example, we want an object that behaves as a text string.
Now, if everything had been virtual, I would be able to do this:
// Hypothetical: this does not compile in real C#, because string is
// sealed and Trim is not virtual - which is exactly the point.
class MessedUpString : String
{
    public override string Trim() { throw new Exception(); }
}
and then pass this to some function that expects a string. And the moment they try to trim that string, it explodes.
The string no longer behaves as a string. How is that ever a good thing?
If everything is made virtual, you're going to have a hard time enforcing class invariants. You allow the class abstraction to be broken.
By default, a class should encapsulate the rules and behaviors that it is expected to follow. Everything you make virtual is in principle an extensibility hook; the function can be changed to do anything whatsoever. That only makes sense in a few cases, when the behavior is genuinely meant to be user-defined.
The reason classes are useful is that they allow us to ignore the implementation details. We can simply say "this is a string object, I know it is going to behave as a string. I know it will never violate any of these guarantees". If that guarantee can not be maintained, the class is useless. You might as well just make all data members public and move the member methods outside the class.
Do you know the Liskov Substitution Principle?
Anywhere an object of base class B is expected, you should be able to pass an object of derived class D. That is one of the most fundamental rules of object-oriented programming. We need to know that derived classes will still work when we upcast them to the base class and pass them to a function that expect the base class. That means we have to make some behavior fixed and unchangeable.
One key benefit of a non-virtual method is that it can be bound at compile time. That is, the compiler can be sure which actual method is to be called when the method is used in code.
The actual method to be called cannot be known at compile time if the method is declared virtual, since the reference may point to a subtype that has overridden it. Hence there is a small overhead at runtime while the actual method to call is resolved.
In a framework, a non-virtual member can be called with a known range of expected outputs; if the method were virtual, an override could produce a result that was never tested for. Keeping methods non-virtual gives framework actions predictable results.

providing abstract class member variables from a subclass

What is the 'correct' way of providing a value in an abstract class from a concrete subclass?
ie, should I do this:
abstract class A {
    private string m_Value;

    protected A(string value) {
        m_Value = value;
    }

    public string Value {
        get { return m_Value; }
    }
}

class B : A {
    B() : base("string value") {}
}
or this:
abstract class A {
    protected A() { }

    public abstract string Value { get; }
}

class B : A {
    B() {}

    public override string Value {
        get { return "string value"; }
    }
}
or something else?
And should different things be done if the Value property is only used in the abstract class?
I usually prefer the first approach because it requires less code in child classes.
However, I admit that the semantics of the second approach are clearer in a subtle way. Overriding the property says "this property's implementation is part of my identity." Passing a constructor argument has a different connotation: "I'm setting a value in this other thing, which just happens to be my base class." It implies composition (has-a) rather than inheritance (is-a).
And should different things be done if
the Value property is only used in the
abstract class?
In this case, you should definitely use the first (constructor-oriented) approach so you can hide that implementation detail from subclasses.
Likewise if you need to use the value in the constructor; as Marc mentioned this is an actual technical reason to use the first approach. Though it wouldn't matter in this specific scenario, if someone later modifies the property override to use some other member variable in the derived class, you might have a subtle bug on your hands.
It depends; does the base class need the value in its constructor? If so, the override approach may be a bad idea (calling virtual members from a constructor doesn't work very well, because the derived class's constructor hasn't run yet). Otherwise, either is OK.
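A small sketch of that pitfall, assuming the base constructor wants to use Value (the names reuse the A/B classes from the question, and the field layout is made up): the virtual dispatch reaches B's override before B's constructor body has run, so the state the override relies on is not initialized yet.
abstract class A {
    protected A() {
        // Runs B's override while B is still being constructed.
        System.Console.WriteLine(Value ?? "<null>");
    }

    public abstract string Value { get; }
}

class B : A {
    private readonly string _value;

    public B() {
        _value = "string value";   // assigned only after A's constructor has finished
    }

    public override string Value {
        get { return _value; }     // returns null when called from A's constructor
    }
}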
I think the second idiom is better, as it is more manageable (if your base class needs multiple values defined by a derived class, the constructor can get messy). It is also clearer where the information comes from: if you see the Value property, you know it is defined in a subclass. In the first example, you have to track down the m_Value field, which could also be modified elsewhere in the base class.
I think it's pretty much the same; choose one way and stick to it for consistency.
Both your solutions force the derived class to provide a value, which is good; a possible alternative, in case a value should not be required:
abstract class A {
    public string Value {
        get;
        protected set;
    }
}
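A derived class can then set the value only when it actually has one (a minimal sketch; the B class and the literal are just for illustration):
class B : A {
    public B() {
        Value = "string value";   // allowed: the setter is protected
    }
}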
My personal preference is your first option (the constructor parameter), because I think it's the clearer one, but it's really a matter of taste.
It depends.
I will use the first way if I need to modify Value in the abstract class.
I will use the second way if I need to derive many classes from A and, somewhere, I need to treat the derived classes as the base abstract class.
If neither of the above is true, I will use the second approach, which is more manageable and cleaner.
If Value is only used in the abstract class, I will declare it as a private field instead of a property.
One major advantage of the second case is that it allows a subclass to define behaviour for the Value property that may be more complex than a simple scalar value. For example, you might want to compute the Value based on other fields that the subclass defines. With the first approach, that is an impossibility, but the second approach allows for that.
I prefer the second. It lets you provide a value without adding an actual field to the class if the value is constant or can be calculated at runtime. The less state you have (fewer fields) the more maintainable you'll likely find the code to be.
