Why not make everything 'virtual'? [duplicate] - c#

Possible Duplicate: Why C# implements methods as non-virtual by default?
I'm speaking primarily about C#, .NET 3.5, but wonder in general what the benefits are of not considering everything "virtual" - which is to say that a method called in an instance of a child class always executes the child-most version of that method. In C#, this is not the case if the parent method is not labeled with the "virtual" modifier. Example:
public class Parent
{
    public void NonVirtual() { Console.WriteLine("Non-Virtual Parent"); }
    public virtual void Virtual() { Console.WriteLine("Virtual Parent"); }
}

public class Child : Parent
{
    public new void NonVirtual() { Console.WriteLine("Non-Virtual Child"); }
    public override void Virtual() { Console.WriteLine("Virtual Child"); }
}

public class Program
{
    public static void Main(string[] args)
    {
        Child child = new Child();
        Parent parent = new Child();
        var anon = new Child();

        child.NonVirtual();           // => Child
        parent.NonVirtual();          // => Parent
        anon.NonVirtual();            // => Child
        ((Parent)child).NonVirtual(); // => Parent

        child.Virtual();              // => Child
        parent.Virtual();             // => Child
        anon.Virtual();               // => Child
        ((Parent)child).Virtual();    // => Child
    }
}
What exactly are the benefits of the non-virtual behavior observed above? The only thing I could think of was "What if the author of Parent doesn't want his method to be virtual?" but then I realized I couldn't think of a good use case for that. One might argue that the behavior of the class depends on how a non-virtual method operates - but then it seems to me that there is some poor encapsulation going on, or that the method should be sealed.
Along these same lines, it seems like 'hiding' is normally a bad idea. After all, if a Child class and its methods were created, presumably it was done for a specific reason: to change the Parent's behavior. And if Child implements (and hides the parent's) NonVirtual(), it is super easy to miss what many might consider the "expected" behavior of calling Child::NonVirtual(). (I say "expected" because it is sometimes easy not to notice that 'hiding' is happening.)
So, what are the benefits of not allowing everything to have "virtual" behavior? What is a good use-case for hiding a non-virtual parent if it's so easy to get unexpected behavior?
If anyone is curious as to why I pose this question - I was recently examining the Castle Project's DynamicProxy library. The one main hurdle in using it is that any method (or property) you want to proxy has to be virtual. And this isn't always an option for developers (if we don't have control over the source). Not to mention that the purpose of DynamicProxy is to avoid coupling between your proxied class and whatever behavior you are trying to achieve with the proxy (such as logging, or perhaps a memoization implementation). By forcing virtual methods to accomplish this, what is instead achieved is a thin but obtuse coupling of DynamicProxy to all the classes it is proxying. Imagine: you have a ton of methods labeled virtual even though they are never inherited and overridden, so any other developer looking at the code might wonder "why are these even virtual? Let's change them back."
Anyway, the frustration there led me to wonder what the benefits are of non-virtual, when it seems having everything virtual might have been more clear (IMO, I suppose) and perhaps(?) have more benefits.
EDIT: Labeling as community wiki, since it seems like a question that might have subjective answers

Because you don't want people overriding methods that you haven't designed the class for. It takes a significant effort to make sure it is safe to override a method or even derive from a class. It's much safer to make it non-virtual if you haven't considered what might happen.

Eric Lippert covers this here, on method hiding

In many cases, it is crucial for a class to function properly that a given method has a specific behavior. If the method is overridden in an inherited class, there is no guarantee that it will correctly implement the expected behavior. You should only mark a method virtual if your class is specifically designed for inheritance and will support a method with a different implementation. Designing for inheritance is not easy; there are many cases where incorrectly overriding a method will break the class's internal behavior.

Simple: the entire point of a class is to encapsulate some kind of abstraction. For example, we want an object that behaves as a text string.
Now, if everything had been virtual, I would be able to do this:
class MessedUpString : String
{
    public override string Trim() { throw new Exception(); }
}
and then pass this to some function that expects a string. And the moment they try to trim that string, it explodes.
The string no longer behaves as a string. How is that ever a good thing?
If everything is made virtual, you're going to have a hard time enforcing class invariants. You allow the class abstraction to be broken.
By default, a class should encapsulate the rules and behaviors that it is expected to follow. Everything you make virtual is in principle an extensibility hook; the function can be changed to do anything whatsoever. That only makes sense in a few cases, when we have behavior that is actually user-defined.
The reason classes are useful is that they allow us to ignore the implementation details. We can simply say "this is a string object, I know it is going to behave as a string. I know it will never violate any of these guarantees". If that guarantee can not be maintained, the class is useless. You might as well just make all data members public and move the member methods outside the class.
Do you know the Liskov Substitution Principle?
Anywhere an object of base class B is expected, you should be able to pass an object of derived class D. That is one of the most fundamental rules of object-oriented programming. We need to know that derived classes will still work when we upcast them to the base class and pass them to a function that expects the base class. That means we have to make some behavior fixed and unchangeable.
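To make that concrete, here is a minimal sketch (Account, Debit, and the amounts are made-up names for illustration): because Debit is non-virtual, any function written against Account can rely on its exact behavior, no matter which derived class is actually passed in.

using System;

class Account
{
    public decimal Balance { get; protected set; } = 100m;

    // Non-virtual: the debit rules are part of the contract that every
    // Account, derived or not, is guaranteed to honor.
    public void Debit(decimal amount)
    {
        if (amount < 0 || amount > Balance)
            throw new ArgumentOutOfRangeException(nameof(amount));
        Balance -= amount;
    }
}

class SavingsAccount : Account { } // cannot weaken Debit's guarantees

class Demo
{
    // Written against the base class; by the LSP it must work for any derived class.
    static void Process(Account account) => account.Debit(10m);

    static void Main()
    {
        Process(new SavingsAccount()); // substitutable: behaves exactly like an Account
    }
}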

One key benefit of a non-virtual method is that it can be bound at compile time. That is, the compiler can be sure which actual method is to be called when a method is used in code.
The actual method to be called cannot be known at compile time if that method is declared virtual, since the reference may actually point to a sub-type that has overridden it. Hence there is a small overhead at runtime when the actual method to call needs to be resolved.
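A small sketch of the difference (type names are illustrative): the non-virtual call is bound at compile time, the virtual call is dispatched through the object's runtime type, and a sealed override hands certainty back to the compiler and JIT.

using System;

class Shape
{
    public void Describe() => Console.WriteLine("shape");     // bound at compile time
    public virtual void Draw() => Console.WriteLine("shape"); // resolved at runtime
}

class Circle : Shape
{
    // sealed tells the compiler and JIT that no further override exists,
    // so calls through a Circle reference can be devirtualized again.
    public sealed override void Draw() => Console.WriteLine("circle");
}

class Demo
{
    static void Main()
    {
        Shape s = new Circle();
        s.Describe(); // "shape"  - direct call to Shape.Describe
        s.Draw();     // "circle" - virtual dispatch finds Circle.Draw
    }
}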

In a framework, a non-virtual member can be called with a known range of expected outputs; if the method were virtual, it could produce a result that was never tested for. Keeping methods non-virtual gives framework actions predictable, tested results.

Related

C# base misuse?

I quite often find this kind of code in my company...
class Base
{
    public int Property { get; set; }
}

class Derived : Base
{
    public Derived()
    {
        base.Property = 0xAFFE;
    }
}
And, I often argue that this kind of use of base is "wrong".
I argue that "this.Property" would be "correct" (or simply "Property = 0xAFFE;").
I argue that one could refactor (making Property virtual and overriding it).
But my arguments don't seem to convince. Can you help with arguments? Or am I (completely) wrong?
Thanks.
I think that, if Property in your example is not virtual, it doesn't matter whether you use base or this.
If it is virtual, though, and overridden in an inherited class, you'll have differences in behavior.
I personally tend to never use base or this when setting properties like this (which would be the same as specifying this). Only in specific situations (like overriding a virtual method and calling the base implementation) do I use those keywords.
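To make that difference concrete, here is a small sketch, a variation of the code above with Property made virtual: this.Property dispatches to the override, while base.Property calls the base accessor directly.

class Base
{
    public virtual int Property { get; set; }
}

class Derived : Base
{
    public override int Property
    {
        get => base.Property;
        set => base.Property = value * 2; // overridden behavior: doubles on write
    }

    public void SetViaThis() { this.Property = 10; } // virtual dispatch: stored value becomes 20
    public void SetViaBase() { base.Property = 10; } // non-virtual call: stored value becomes 10
}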
So, we have at least 3 ways to say the same:
Property = 0xAFFE;
this.Property = 0xAFFE;
base.Property = 0xAFFE;
And as usual, there are minor aspects that can tip the decision about whether this line is correct or wrong.
The first version is the most relaxed. Property can be a real property, a variable declared above, or a parameter passed in from an outer scope. Moreover, in some cases you can write Property = Property and the compiler will be smart enough to understand you (bad practice anyway).
The second version makes an assumption: Property belongs to the class. Period. Although some code analyzers will raise a hint ("syntax can be simplified"), this is a normal way of doing things.
The third one makes an even tighter assumption: Property comes from an ancestor. So even if you override it locally, that syntax guarantees that the assignment goes through the ancestor's implementation.
To be honest, in most cases the last two forms are interchangeable. Do you really see a point in enforcing yours?

When inheriting a base class, is there a reason why C# defaults to new instead of override?

I know the difference between override and new (or believe I do, anyway), and there are several questions describing the difference between the two, but my question is whether there is a particular reason why C# defaults to the new behavior (with a warning), instead of defaulting to override?
public class Base
{
    public virtual string GetString() => "Hello from Base";
}

public class Child : Base
{
    public string GetString() => "Hello from Child";
}
...
var childAsBase = (Base)new Child();
Console.WriteLine(childAsBase.GetString());
...
c:\>dotnet run
child.cs(5,23): warning CS0114: 'Child.GetString()' hides inherited member 'Base.GetString()'.
To make the current member override that implementation, add the override keyword.
Otherwise add the new keyword. [C:\IPutAllMyProjectsInMyRootFolder.csproj]
Hello from Base
I can imagine that it is to get the same behavior whether the inherited method is marked virtual or not, but at the same time, declaring it virtual is saying "override me", so defaulting to override seems reasonable to me.
Another reason that crossed my mind is the cost of using a virtual function table, but that seems like a scary reason in the sense that what I as a coder want the code to do should be more important than saving CPU cycles. But perhaps back when the language was invented, that was not the case?
When a C# language design decision that involves type hierarchies seems unusual to you, a good technique is to ask yourself the question "what would happen if someone changed my base class without telling me?" C# was carefully designed to mitigate the costs of brittle base class failures, and this is one.
Let's first consider the case where a shadowing method has the override keyword.
This indicates to the compiler that the derived class author and the base class author are cooperating. The base class author made an overridable method, which is a super dangerous thing to do. An overridable method means that you cannot write a test case which tests all possible behaviours of that method! Overrideable-ness of a method must be designed in, and so you are required to say that a method is virtual (or abstract).
If we see an override modifier then we know that both the base class and derived class authors are taking responsibility for the correctness and safety of this dangerous extension point, and have successfully communicated with each other to agree upon the contract.
Let's next consider the case where a shadowing method has the new keyword. Again, we know that the derived class author has examined the base class, and has determined that the shadowed method, whether virtual or not, does not meet the needs of the derived class consumers, and has deliberately made the dangerous decision to have two methods that have the same signature.
That then leaves us with the situation where the shadowing method has neither override nor new. We have no evidence that the author of the derived class knows about the method in the base class. In fact we have evidence to the contrary; if they knew about a virtual base class method, they would have overridden it to match the contract of the virtual method, and if they knew about a non-virtual base class method then they would have deliberately made the dangerous decision to shadow it.
How could this situation arise? Only two ways come to mind.
First, the derived class author has insufficiently studied their base class and is ignorant of the existence of the method they've just shadowed, which is a horrible position to be in. The derived class inherits the behaviours of the base class and can be used in scenarios where the invariants of the base class are required to be maintained! We must warn ignorant developers that they are doing something extremely dangerous.
Second, the derived class is recompiled after a change to the base class. Now the derived class author is not ignorant of the base class as it was originally written, as they designed their derived class, and as they tested their derived class. But they are ignorant of the fact that the base class has changed.
Again, we must warn ignorant developers that something has happened that they need to make an important decision about: to override if possible, or to acknowledge the hiding, or to rename or delete the derived class method.
This then justifies why a warning must be given when a shadowing method is marked neither new nor override. But that wasn't your question. Your question was "why default to new?"
Well, suppose you are the compiler developer. Here are your choices when the compiler is faced with a shadowing method that lacks new and override:
Do nothing; give no warning or error, and choose a behaviour. If the code breaks due to a brittle base class failure, too bad. You should have looked at your base class more carefully. Plainly we can do better than this.
Make it an error. Now a base class author can break your build by changing a member of a base class. This is not a terrible idea, but we must now weigh the cost of desired build breaks -- because they found a bug -- against the costs of unwanted build breaks -- where the default behaviour is desired -- against the cost of ignoring the warning accidentally and introducing a bug.
This is a tricky call and there are arguments on all sides. Introducing a warning is a reasonable compromise position; you can always turn on "warnings are errors", and I recommend that you do.
Make it a warning, and make it override if the base method is overridable, and shadowing if the base method is not overridable. Not only is this inconsistent, but we've just introduced another kind of brittle base class failure. Do you see it? What if the base class author changes their method from non-virtual to virtual, or vice-versa? That would cause accidentally-shadowing methods to change from overriding to shadowing, or vice-versa.
But let's leave that aside for the moment. What are the other consequences of automatically overriding if possible? Remember, the premise of the scenario is that the overriding is accidental and the derived class author is ignorant of the implementation details, the invariants, and the public surface area of the base class.
Automatically changing behaviour of all callers of the base class method seems insanely dangerous compared with the danger of changing the behaviours of only those callers that call the shadowing method via a receiver of the derived type.
Make it a warning, and default to shadowing, not overriding. This choice is safer in general, it avoids a second kind of brittle base failure, it avoids build breaks, callers of the method with base class receivers get the behaviour that their test cases expect, and callers of the method with derived class receivers get the behaviour they expect.
All design choices are the results of carefully weighing many mutually incompatible design goals. The designers of C# were particularly concerned with large teams working on versioned software components where base classes could change in unexpected ways, and teams might not communicate those changes well to each other.
Another reason that crossed my mind is the cost of using a virtual function table, but that seems like a scary reason in the sense that what I as a coder want the code to do should be more important than saving CPU cycles. But perhaps back when the language was invented, that was not the case?
Virtual methods introduce costs; the obvious cost is the extra table jump at runtime and the code needed to get to it. There are also less obvious costs like: the jitter can't inline virtual calls to non-sealed methods, and so on.
But as you note, the reason to make non-virtual the default is not primarily for performance. The primary reason is that virtualization is incredibly dangerous and needs to be carefully designed in. The invariants that must be maintained by derived classes that override methods need to be documented and communicated. Proper design of type hierarchies is expensive, and making it opt-in lowers costs and increases safety. Frankly, I wish sealed was the default as well.

Why "New" Keyword in C# [duplicate]

I was looking at this blog post and had following questions:
Why do we need the new keyword? Is it just to specify that a base class method is being hidden? I mean, why do we need it? If we don't use the override keyword, aren't we hiding the base class method anyway?
Why is the default in C# to hide and not override? Why have the designers implemented it this way?
Good questions. Let me re-state them.
Why is it legal to hide a method with another method at all?
Let me answer that question with an example. You have an interface from CLR v1:
interface IEnumerable
{
    IEnumerator GetEnumerator();
}
Super. Now in CLR v2 you have generics and you think "man, if only we'd had generics in v1 I would have made this a generic interface. But I didn't. I should make something compatible with it now that is generic so that I get the benefits of generics without losing backwards compatibility with code that expects IEnumerable."
interface IEnumerable<T> : IEnumerable
{
    IEnumerator<T> .... uh oh
What are you going to call the GetEnumerator method of IEnumerable<T>? Remember, you want it to hide GetEnumerator on the non-generic base interface. You never want that thing to be called unless you're explicitly in a backwards-compat situation.
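This is essentially how the BCL resolved it (variance elided): the generic interface redeclares the method with the same name and uses new to make the hiding explicit.

// Approximately the declaration in System.Collections.Generic:
public interface IEnumerable<T> : IEnumerable
{
    // 'new' deliberately hides IEnumerable.GetEnumerator: callers holding an
    // IEnumerable<T> get the generic enumerator, while old code that upcasts
    // to the non-generic IEnumerable still gets the old one.
    new IEnumerator<T> GetEnumerator();
}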
That alone justifies method hiding. For more thoughts on justifications of method hiding see my article on the subject.
Why does hiding without "new" cause a warning?
Because we want to bring it to your attention that you are hiding something and might be doing it accidentally. Remember, you might be hiding something accidentally because of an edit to the base class done by someone else, rather than by you editing your derived class.
Why is hiding without "new" a warning rather than an error?
Same reason. You might be hiding something accidentally because you've just picked up a new version of a base class. This happens all the time. FooCorp makes a base class B. BarCorp makes a derived class D with a method Bar, because their customers like that method. FooCorp sees that and says hey, that's a good idea, we can put that functionality on the base class. They do so and ship a new version of Foo.DLL, and when BarCorp picks up the new version, it would be nice if they were told that their method now hides the base class method.
We want that situation to be a warning and not an error because making it an error means that this is another form of the brittle base class problem. C# has been carefully designed so that when someone makes a change to a base class, the effects on code that uses a derived class are minimized.
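To make the FooCorp/BarCorp story concrete, here is an illustrative sketch of BarCorp's code after picking up version 2 of Foo.DLL:

using System;

// FooCorp's base class, version 2. (Version 1 had no Bar method.)
public class B
{
    public void Bar() => Console.WriteLine("FooCorp's Bar");
}

// BarCorp's derived class, originally written against version 1.
public class D : B
{
    // Recompiling against version 2 now produces warning CS0108:
    // 'D.Bar()' hides inherited member 'B.Bar()'.
    // Use the new keyword if hiding was intended.
    public void Bar() => Console.WriteLine("BarCorp's Bar");
}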
Why is hiding and not overriding the default?
Because virtual override is dangerous. Virtual override allows derived classes to change the behaviour of code that was compiled to use base classes. Doing something dangerous like making an override should be something you do consciously and deliberately, not by accident.
If the method in the derived class is preceded with the new keyword, the method is defined as being independent of the method in the base class.
However, if you don't specify either new or override, the resulting output is the same as if you had specified new, but you will get a compiler warning (as you may not be aware that you are hiding a method in the base class, or indeed you may have wanted to override it and merely forgot to include the keyword).
So it helps you avoid mistakes and show explicitly what you want to do, and it makes for more readable code that others can easily understand.
It is worth noting that the only effect of new in this context is to suppress a warning. There is no change in semantics.
So one answer is: we need new to signal to the compiler that the hiding is intentional and to get rid of the warning.
The follow-up question is: if you won't / can't override a method, why would you introduce another method with the same name? Because hiding is in essence a name conflict, and you would of course avoid it in most cases.
The only good reason I can think of for intentional hiding is when a name is forced upon you by an interface.
In C#, members are sealed by default, meaning that you cannot override them (unless they are marked with the virtual or abstract keywords), partly for performance reasons. The new modifier is used to explicitly hide an inherited member.
If overriding were the default without specifying the override keyword, you could accidentally override a method of your base class just due to name equality.
The .NET compiler strategy is to emit warnings if something could go wrong, just to be safe, so in this case, if overriding were the default, there would have to be a warning for each overridden method - something like "warning: check if you really want to override".
My guess is that it is mainly due to multiple interface inheritance. Using discrete interfaces, it is very possible that two distinct interfaces use the same method signature. Allowing the use of the new keyword lets you create these different implementations with one class, instead of having to create two distinct classes.
Updated ... Eric gave me an idea on how to improve this example.
public interface IAction1
{
    int DoWork();
}

public interface IAction2
{
    string DoWork();
}

public class MyBase : IAction1
{
    public int DoWork() { return 0; }
}

public class MyClass : MyBase, IAction2
{
    public new string DoWork() { return "Hi"; }
}

class Program
{
    static void Main(string[] args)
    {
        var myClass = new MyClass();
        var ret0 = myClass.DoWork();             // Hi
        var ret1 = ((IAction1)myClass).DoWork(); // 0
        var ret2 = ((IAction2)myClass).DoWork(); // Hi
        var ret3 = ((MyBase)myClass).DoWork();   // 0
        var ret4 = ((MyClass)myClass).DoWork();  // Hi
    }
}
As noted, method/property hiding makes it possible to change things about a method or property which could not readily be changed otherwise. One situation where this can be useful is allowing an inherited class to have read-write properties which are read-only in the base class. For example, suppose a base class has a bunch of read-only properties called Value1-Value40 (of course, a real class would use better names). A sealed descendant of this class has a constructor that takes an object of the base class and copies the values from there; the class does not allow them to be changed after that. A different, inheritable descendant declares read-write properties called Value1-Value40 which, when read, behave the same as the base class versions but, when written, allow the values to be changed. The net effect is that code which wants an instance of the base class that it knows will never change can create a new object of the read-only class, which can copy data from a passed-in object without having to worry about whether that object is read-only or read-write.
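Here is a trimmed-down sketch of that pattern with a single value instead of forty (all names are made up):

public class ReadableHolder
{
    protected int _value1;
    public ReadableHolder(int value1) { _value1 = value1; }
    public int Value1 => _value1; // read-only here
}

// Code that needs a value it knows will never change asks for this type.
public sealed class ImmutableHolder : ReadableHolder
{
    public ImmutableHolder(ReadableHolder source) : base(source.Value1) { }
}

public class MutableHolder : ReadableHolder
{
    public MutableHolder(int value1) : base(value1) { }

    // Shadows the base property to add a setter; reads behave identically.
    public new int Value1
    {
        get => _value1;
        set => _value1 = value;
    }
}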
One annoyance with this approach--perhaps someone can help me out--is that I don't know of a way to both shadow and override a particular property within the same class. Do any of the CLR languages allow that (I use vb 2005)? It would be useful if the base class object and its properties could be abstract, but that would require an intermediate class to override the Value1 to Value40 properties before a descendant class could shadow them.

Should I mark all methods virtual?

In Java you can mark a method as final to make it impossible to override.
In C# you have to mark a method as virtual to make it possible to override.
Does that mean that in C# you should mark all methods virtual (except a few that you don't want to be overridden), since most likely you don't know in what way your class can be inherited?
In C# you have to mark a method as virtual to make it possible to override. Does that mean that in C# you should mark all methods virtual (except a few that you don't want to be overridden), since most likely you don't know in what way your class can be inherited?
No. If the language designers thought that virtual should have been the default then it would have been the default.
Overridability is a feature, and like all features it has costs. The costs of an overridable method are considerable: there are big design, implementation and testing costs, particularly if there is any "sensitivity" to the class; virtual methods are ways of introducing untested third-party code into a system, and that has a security impact.
If you don't know how you intend your class to be inherited then don't publish your class because you haven't finished designing it yet. Your extensibility model is definitely something you should know ahead of time; it should deeply influence your design and testing strategy.
I advocate that all classes be sealed and all methods be non-virtual until you have a real-world customer-focussed reason to unseal or to make a method virtual.
Basically your question is "I am ignorant of how my customers intend to consume my class; should I therefore make it arbitrarily extensible?" No; you should become knowledgeable! You wouldn't ask "I don't know how my customers are going to use my class, so should I make all my properties read-write? And should I make all my methods read-write properties of delegate type so that my users can replace any method with their own implementation?" No, don't do any of those things until you have evidence that a user actually needs that capability! Spend your valuable time designing, testing and implementing features that users actually want and need, and do so from a position of knowledge.
In my opinion the currently accepted answer is unnecessarily dogmatic.
Fact is that when you don't mark a method as virtual, others cannot override its behaviour and when you mark a class as sealed others cannot inherit from the class. This can cause substantial pain. I don't know how many times I cursed an API for marking classes sealed or not marking methods virtual simply because they did not anticipate my use case.
Theoretically it might be the correct approach to only allow overriding methods and inheriting classes which are meant to be overridden and inherited but in practice it's impossible to foresee every possible scenario and there really isn't a good reason to be so closed in.
If you don't have a very good reason, then don't mark classes as sealed.
If your library is meant to be consumed by others, then at least try to mark the main methods of a class which contain the behaviour as virtual.
One way to make the call is to look at the name of the method or property. A GetLength() method on a List does exactly what the name implies and doesn't leave much room for interpretation, so marking it as virtual is probably unnecessary. Marking the Add method as virtual is far more useful, as someone could create a special List which only accepts some objects via the Add method, etc. Another example is custom controls. You would want to make the main drawing method virtual so others can use the bulk of the behaviour and just change the look, but you probably wouldn't override the X and Y properties.
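As an aside, List<T>.Add is not actually virtual; the BCL's designed-for-inheritance collection is Collection<T>, whose mutation methods funnel through protected virtual members. A sketch of that "special List" idea built on it:

using System;
using System.Collections.ObjectModel;

// A list that only accepts non-empty strings.
public class NonEmptyStringList : Collection<string>
{
    protected override void InsertItem(int index, string item)
    {
        if (string.IsNullOrEmpty(item))
            throw new ArgumentException("Empty strings are not allowed.", nameof(item));
        base.InsertItem(index, item); // both Add and Insert route through here
    }
}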
In the end you often don't have to make that decision right away. In an internal project where you can easily change the code anyway I wouldn't worry about these things. If a method needs to be overridden you can always make it virtual when this happens.
By contrast, if the project is an API or library which is consumed by others and slow to update, it certainly pays off to think about which classes and methods might be useful to extend. In this case I think it's better to be open rather than strictly closed.
No! Because you don't know how your class will be inherited, you should only mark a method as virtual if you know that you want it to be overridden.
No. Only methods that you want derived classes to specify should be virtual.
Virtual is not the C# counterpart of final; sealed is.
To prevent further overriding of a virtual method in C#, you use sealed:
public class MyBase
{
    public virtual void MyFinalMethod() { }
}
public class MyClass : MyBase
{
    public sealed override void MyFinalMethod() { } // no further overrides allowed
}
Yes you should.
I wish to give a different answer from most of the others.
This is a flaw in C#. A defect. A mistake in its design.
You can see that when comparing to Java, where all methods are "virtual" unless specified otherwise ("final").
Of course, if there is a class "Rectangle" with an "Area" method, and you wish to have your own class that represents a "Rectangle" with margins, you wish to take advantage of the existing class with all of its properties and methods; you just want to add a "margin" property that adds some value to the regular rectangle area. And if the Area method in Rectangle is not marked virtual, you are doomed.
Imagine, please, a method that takes an array of Rectangles and returns the sum of the areas of all the rectangles.
Some can be regular rectangles and some can have margins.
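A sketch of the trap being described (illustrative names): with Area non-virtual, the derived class can only hide it, and a method summing through base-class references silently ignores the margins.

using System;
using System.Linq;

class Rectangle
{
    public double Width  { get; set; }
    public double Height { get; set; }
    public double Area() => Width * Height; // not virtual: cannot be overridden
}

class MarginRectangle : Rectangle
{
    public double Margin { get; set; }
    // Hiding is the only option, and it only kicks in through a
    // MarginRectangle reference.
    public new double Area() => (Width + 2 * Margin) * (Height + 2 * Margin);
}

class Demo
{
    static void Main()
    {
        Rectangle[] shapes = { new Rectangle { Width = 2, Height = 3 },
                               new MarginRectangle { Width = 2, Height = 3, Margin = 1 } };
        // Calls Rectangle.Area for both elements; the margins are ignored.
        Console.WriteLine(shapes.Sum(r => r.Area())); // 12, not 26
    }
}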
Now read back the answer that is marked "correct", which describes a "security issue" or "testing". Those concerns are meaningless compared to the inability to override.
I am not surprised that others answered "no".
I am surprised that the authors of C# couldn't see that while basing their language on Java.
We can conjure up reasons for/against either camp, but that's entirely useless.
In Java there are millions of unintended non-final public methods, but we hear very few horror stories.
In C# there are millions of sealed public methods, and we hear very few horror stories.
So it is not a big deal - the need to override a public method is rare, so it's moot either way.
This reminds me of another argument - whether a local variable should be final by default. It is a pretty good idea, but we cannot exaggerate how valuable it is. There are billions of local variables that could be, but are not, final, and it has never been shown to be an actual problem.
Making a method virtual will generally slow down any code that needs to call it. This slowdown will usually be insignificant but may in some cases be quite large (among other things, because non-virtual method calls may be inlined, which may in turn allow the optimizer to eliminate unnecessary operations). It's not always possible to predict the extent to which virtual calls may affect execution speed, and one should generally avoid doing things which will make code slower except when there's a discernible benefit.
The performance benefit of making methods non-virtual is probably sufficient in many cases to justify having methods be non-virtual by default, but when classes are designed to be inherited most methods should be virtual and unsealed; the primary usage for non-virtual or sealed methods should be as wrappers for other (possibly protected) virtual methods (code that wants to change the underlying behavior should override the appropriate virtual rather than the wrapper).
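A minimal sketch of that wrapper pattern (Processor and ProcessCore are hypothetical names): the public non-virtual method enforces the invariant exactly once, and the protected virtual core is the sanctioned extension point.

using System;

public class Processor
{
    // Non-virtual wrapper: derived classes cannot bypass the validation,
    // so the invariant holds for every subclass.
    public void Process(string input)
    {
        if (input == null) throw new ArgumentNullException(nameof(input));
        ProcessCore(input);
    }

    // The designed extension point; overrides inherit the guarantee
    // that input is never null.
    protected virtual void ProcessCore(string input)
    {
        Console.WriteLine($"Default processing of '{input}'");
    }
}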
There are frequently non-performance-related reasons for marking classes as sealed or limiting inheritance to other classes within the assembly. Among other things, if a class is externally inheritable, all members with protected scope are effectively added to its public API, and any changes to their behavior in the base class may break any derived classes that rely upon that behavior. On the other hand, if a class is inheritable, making its methods virtual doesn't really increase its exposure. If anything, it may reduce derived classes' reliance upon base class internals by allowing them to completely "bury" aspects of the base class implementation that are no longer relevant in the derived class (e.g. if the members of List<T> were virtual, a derived class which overrode them all could use an array of arrays to hold things, avoiding large-object-heap issues, and wouldn't have to try to keep the private array used by List<T> consistent with the array-of-arrays).
No, you should not mark all methods as virtual. You should consider how your class could be inherited. If the class should not be inherited, then mark it sealed and obviously the members should not be virtual. If your class is likely to be inherited, you really should maximize the ability to override the behavior. Therefore generously use virtual everywhere in such classes unless you have a reason not to.

How do I block the new modifier?

I have a property in a base class that I don't want overridden for any reason. It assigns an ID to the class for use with a ThreadQueue I created. I see no reason whatsoever for anyone to override it. I was wondering how I can block anyone from attempting to override it short of them changing its modifier.
private int _threadHostID = 0;

public int ThreadHostID
{
    get
    {
        if (_threadHostID == 0)
        {
            _threadHostID = ThreadQueue.RequestHostID();
        }
        return _threadHostID;
    }
}
Edit: totally forgot the language: C#.
Edit2: It is not virtual or overriding anything else so please no sealed.
First off: "Overriding" refers to virtual overriding. You are talking about creating hiding methods, not overriding methods.
I have a property in a base class that I don't want hidden
You are free to want that, but you are going to have to learn to live with the disappointment of not getting what you want.
I see no reason whatsoever for anyone to hide it.
Then there won't be a problem, will there? If no one could possibly want to hide it, then they won't hide it. You're basically saying "I have an object of no value to anyone; how do I keep someone from stealing it?" Well, if it is of no value, then no one is going to want to steal it, so why would you spend money on a safe to protect something that no one wants to steal in the first place?
If there is no reason for someone to hide or override your method then no one will. If there is a reason for someone to hide or override your method, then who are you to tell them not to? You are providing a base class; you are the servant of the derived class author, not their master.
Now, sometimes being a good servant means building something that resists misuse, is robust, and reasonably priced. I encourage people to build sealed classes, for example. Designing secure, robust, inheritable classes that meet the real needs of inheritors is expensive and difficult.
But if you are going to create a robust unsealed base class designed for inheritance, why try to stop the derived class author from hiding, if they have a reason to do so? It cannot possibly hurt the base class. The only people it could hurt are the users of the derived class, and those people are the derived class author's problem, not yours.
There is no way to stop member hiding. If you don't make the member virtual or abstract, then a derived class cannot override it properly anyway; hiding isn't polymorphic.
If a derived class hides it using the new modifier, then they are opening up problems for themselves, as any code that uses a reference to the base class will not touch the derived member. So basically, all code that utilises the "base class"-ness of the type hierarchy will bypass the member hiding anyway.
The sealed keyword only works if a derived type overrides a base member and doesn't want it to be overridden further... not sure how it plays with the new modifier, though. Most likely the member hiding will still be allowed, but with the same direct-type problem.
Your task is done by not making the method virtual or abstract; if a person wants to hide members, then they are responsible for anything that breaks because they decided to abuse the design.
I think you should not worry about this. If you don't write it as virtual, then you are making it clear that it is not intended to be overridden, and in fact whoever hides it (without the "new" modifier) will receive a warning:
Warning: [...] hides inherited member [...].
Use the new keyword if hiding was intended
If you have this fear, you should worry about any method that you write in a non-sealed class. So the job for you is just to make sure that the design of your class is consistent and clear; if someone wants to inherit it, they should not be so rash as to just go and redefine non-virtual properties/methods. You cannot completely shield yourself from others' stupidity :).
As far as I can tell, you apparently can't do that on a property level. However, if you seal the class:
public class Base
{
    public int ID { get; set; }
}

public sealed class Child : Base
{
    // blah
}
then ...
public class Grandchild : Child
{
    public int ID { get; set; }
}
will throw an error on the class definition, so using new doesn't even come into play.
Not an exact solution to your problem, but it does keep others from extending or interfering with your API.
Does it actually matter if someone does put a 'new' implementation in? I'm assuming you will always be referring to the base class in any code using that property, since that is where it is declared; and since it's not virtual or overridden, it won't polymorphically call up to a 'new' implementation anyway.
