In Java you can mark a method as final to make it impossible to override.
In C# you have to mark a method as virtual to make it possible to override.
Does that mean that in C# you should mark all methods virtual (except the few you don't want overridden), since most likely you don't know in what way your class might be inherited?
No. If the language designers thought that virtual should have been the default then it would have been the default.
Overridability is a feature, and like all features it has costs. The costs of an overridable method are considerable: there are big design, implementation and testing costs, particularly if there is any "sensitivity" to the class; virtual methods are a way of introducing untested third-party code into a system, and that has a security impact.
If you don't know how you intend your class to be inherited then don't publish your class because you haven't finished designing it yet. Your extensibility model is definitely something you should know ahead of time; it should deeply influence your design and testing strategy.
I advocate that all classes be sealed and all methods be non-virtual until you have a real-world customer-focussed reason to unseal or to make a method virtual.
Basically your question is "I am ignorant of how my customers intend to consume my class; should I therefore make it arbitrarily extensible?" No; you should become knowledgeable! You wouldn't ask "I don't know how my customers are going to use my class, so should I make all my properties read-write? And should I make all my methods read-write properties of delegate type so that my users can replace any method with their own implementation?" No, don't do any of those things until you have evidence that a user actually needs that capability! Spend your valuable time designing, testing and implementing features that users actually want and need, and do so from a position of knowledge.
In my opinion the currently accepted answer is unnecessarily dogmatic.
The fact is that when you don't mark a method as virtual, others cannot override its behaviour, and when you mark a class as sealed, others cannot inherit from the class. This can cause substantial pain. I don't know how many times I have cursed an API for marking classes sealed or not marking methods virtual simply because the authors did not anticipate my use case.
Theoretically it might be the correct approach to only allow overriding methods and inheriting classes that are meant to be overridden and inherited, but in practice it's impossible to foresee every possible scenario, and there really isn't a good reason to be so closed.
If you don't have a very good reason, don't mark classes as sealed.
If your library is meant to be consumed by others, then at least try to mark as virtual the main methods of a class that carry its behaviour.
One way to make the call is to look at the name of the method or property. A GetLength() method on a list does exactly what the name implies and doesn't leave much room for interpretation; changing its implementation would likely not be very transparent, so marking it as virtual is probably unnecessary. Marking the Add method as virtual is far more useful, as someone could create a special list that only accepts some objects via the Add method, and so on. Another example is custom controls: you would want to make the main drawing method virtual so others can reuse the bulk of the behaviour and just change the look, but you probably wouldn't override the X and Y properties.
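To illustrate the Add case, here is a minimal sketch of such a special list; CustomList&lt;T&gt;, FilteredList&lt;T&gt; and the predicate are hypothetical names of my own, not from the framework:

using System;
using System.Collections.Generic;

public class CustomList<T>
{
    private readonly List<T> items = new List<T>();

    // The extension point: derived lists may change what Add means.
    public virtual void Add(T item) => items.Add(item);
}

public class FilteredList<T> : CustomList<T>
{
    private readonly Func<T, bool> accepts;

    public FilteredList(Func<T, bool> accepts) => this.accepts = accepts;

    // Only items that satisfy the predicate reach the base implementation.
    public override void Add(T item)
    {
        if (accepts(item))
            base.Add(item);
    }
}

The derived class reuses all of the base behaviour and only narrows what may be added; none of this is possible if Add is non-virtual.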
In the end you often don't have to make that decision right away. In an internal project where you can easily change the code anyway I wouldn't worry about these things. If a method needs to be overridden you can always make it virtual when this happens.
By contrast, if the project is an API or library consumed by others and slow to update, it certainly pays off to think about which classes and methods might be useful to extend. In that case I think it's better to be open rather than strictly closed.
No! Because you don't know how your class will be inherited, you should only mark a method as virtual if you know that you want it to be overridden.
No. Only methods that you want derived classes to specify should be virtual.
virtual is not the opposite of final. To prevent further overriding of a virtual method in C#, you mark it sealed:
public class MyBase { public virtual void MyFinalMethod() { } }

public class MyClass : MyBase
{
    // sealed: this override cannot itself be overridden further
    public sealed override void MyFinalMethod() { /* ... */ }
}
Yes, you should.
I wish to give a different answer from most of the others.
This is a flaw in C#. A defect. A mistake in its design.
You can see that when comparing to Java, where all methods are "virtual" unless specified otherwise ("final").
Of course, suppose there is a class Rectangle with a method Area, and you wish to have your own class that represents a rectangle with margins. You wish to take advantage of the existing class with all of its properties and methods, and you just want to add a Margin property that adds some value to the regular rectangle area. If the Area method in Rectangle is not marked virtual, you are doomed.
Imagine a method that takes an array of Rectangles and returns the sum of the areas of all the rectangles.
Some can be regular rectangles and some can have margins.
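A rough sketch of that scenario (the class names and the exact margin arithmetic are only my illustration):

using System.Linq;

public class Rectangle
{
    public double Width { get; set; }
    public double Height { get; set; }

    // If this were not virtual, MarginRectangle could not participate below.
    public virtual double Area() => Width * Height;
}

public class MarginRectangle : Rectangle
{
    public double Margin { get; set; }

    // The margin adds some value to the regular rectangle area.
    public override double Area()
        => (Width + 2 * Margin) * (Height + 2 * Margin);
}

public static class Geometry
{
    // Sums the areas polymorphically: plain and margined rectangles mix freely.
    public static double TotalArea(Rectangle[] rectangles)
        => rectangles.Sum(r => r.Area());
}

TotalArea works only because Area is virtual; with a non-virtual Area, the margined rectangles would silently report the wrong value.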
Now read back the answer that is marked "correct", which describes "security issues" and "testing". Those are meaningless compared to the inability to override.
I am not surprised that others answered "no".
I am surprised that the authors of C# couldn't see that while basing their language on Java.
We can conjure up reasons for/against either camp, but that's entirely useless.
In Java there are millions of unintended non-final public methods, but we hear very few horror stories.
In C# there are millions of sealed public methods, and we hear very few horror stories.
So it is not a big deal - the need to override a public method is rare, so it's moot either way.
This reminds me of another argument - whether a local variable should be final by default. It is a pretty good idea, but we shouldn't exaggerate how valuable it is. There are billions of local variables that could be, but are not, final, and it has never been shown to be an actual problem.
Making a method virtual will generally slow down any code that needs to call it. This slowdown will usually be insignificant but may in some cases be quite large (among other things, because non-virtual method calls may be inlined, which may in turn allow the optimizer to eliminate unnecessary operations). It's not always possible to predict the extent to which virtual calls may affect execution speed, and one should generally avoid doing things which will make code slower except when there's a discernible benefit for doing so.
The performance benefit of making methods non-virtual is probably sufficient in many cases to justify having methods be non-virtual by default, but when classes are designed to be inherited most methods should be virtual and unsealed; the primary usage for non-virtual or sealed methods should be as wrappers for other (possibly protected) virtual methods (code that wants to change the underlying behavior should override the appropriate virtual rather than the wrapper).
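A minimal sketch of that wrapper idiom, with hypothetical names:

using System;

public class Repository
{
    // Non-virtual public wrapper: the stable contract, argument checks, etc.
    public void Save(object entity)
    {
        if (entity == null) throw new ArgumentNullException(nameof(entity));
        SaveCore(entity);
    }

    // The intended extension point; overriders change the underlying behavior.
    protected virtual void SaveCore(object entity)
    {
        // default persistence logic would go here
    }
}

Callers always go through Save, so the argument check cannot be bypassed, while derived classes change behavior only through SaveCore.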
There are frequently non-performance-related reasons for marking classes as sealed or limiting inheritance to other classes within the assembly. Among other things, if a class is externally inheritable, all members with protected scope are effectively added to its public API, and any changes to their behavior in the base class may break any derived classes that rely upon that behavior. On the other hand, if a class is inheritable anyway, making its methods virtual doesn't really increase its exposure. If anything, it may reduce derived classes' reliance upon base class internals by allowing them to completely "bury" aspects of the base class implementation that are no longer relevant in the derived class (e.g. if the members of List<T> were virtual, a derived class which overrode them all could use an array of arrays to hold things, avoiding large-object-heap issues, and wouldn't have to try to keep the private array used by List<T> consistent with the array-of-arrays).
No, you should not mark all methods as virtual. You should consider how your class could be inherited. If the class should not be inherited, then mark it sealed and obviously the members should not be virtual. If your class is likely to be inherited, you really should maximize the ability to override the behavior. Therefore generously use virtual everywhere in such classes unless you have a reason not to.
Related
I know the difference between override and new (or believe I do, anyway), and there are several questions describing the difference between the two, but my question is whether there is a particular reason why C# defaults to the new behavior (with a warning) instead of defaulting to override.
public class Base
{
    public virtual string GetString() => "Hello from Base";
}

public class Child : Base
{
    public string GetString() => "Hello from Child";
}
...
var childAsBase = (Base)new Child();
Console.WriteLine(childAsBase.GetString());
...
c:\>dotnet run
child.cs(5,23): warning CS0114: 'Child.GetString()' hides inherited member 'Base.GetString()'.
To make the current member override that implementation, add the override keyword.
Otherwise add the new keyword. [C:\IPutAllMyProjectsInMyRootFolder.csproj]
Hello from Base
I can see that it might be to get the same behavior whether the inherited method is marked virtual or not, but at the same time, declaring a method virtual is saying "override me", so defaulting to override seems reasonable to me.
Another reason that crossed my mind is the cost of using a virtual function table, but that seems like a scary reason, in the sense that what I as a coder want the code to do should be more important than saving CPU cycles. But perhaps back when the language was invented, that was not the case?
When a C# language design decision that involves type hierarchies seems unusual to you, a good technique is to ask yourself the question "what would happen if someone changed my base class without telling me?" C# was carefully designed to mitigate the costs of brittle base class failures, and this is one.
Let's first consider the case where a shadowing method has the override keyword.
This indicates to the compiler that the derived class author and the base class author are cooperating. The base class author made an overridable method, which is a super dangerous thing to do. An overridable method means that you cannot write a test case which tests all possible behaviours of that method! Overridability must be designed in, and so you are required to say that a method is virtual (or abstract).
If we see an override modifier then we know that both the base class and derived class authors are taking responsibility for the correctness and safety of this dangerous extension point, and have successfully communicated with each other to agree upon the contract.
Let's next consider the case where a shadowing method has the new keyword. Again, we know that the derived class author has examined the base class, and has determined that the shadowed method, whether virtual or not, does not meet the needs of the derived class consumers, and has deliberately made the dangerous decision to have two methods that have the same signature.
That then leaves us with the situation where the shadowing method has neither override nor new. We have no evidence that the author of the derived class knows about the method in the base class. In fact we have evidence to the contrary; if they knew about a virtual base class method, they would have overridden it to match the contract of the virtual method, and if they knew about a non-virtual base class method then they would have deliberately made the dangerous decision to shadow it.
How could this situation arise? Only two ways come to mind.
First, the derived class author has insufficiently studied their base class and is ignorant of the existence of the method they've just shadowed, which is a horrible position to be in. The derived class inherits the behaviours of the base class and can be used in scenarios where the invariants of the base class are required to be maintained! We must warn ignorant developers that they are doing something extremely dangerous.
Second, the derived class is recompiled after a change to the base class. Now the derived class author is not ignorant of the base class as it was originally written, as they designed their derived class, and as they tested their derived class. But they are ignorant of the fact that the base class has changed.
Again, we must warn ignorant developers that something has happened that they need to make an important decision about: to override if possible, or to acknowledge the hiding, or to rename or delete the derived class method.
This then justifies why a warning must be given when a shadowing method is marked neither new nor override. But that wasn't your question. Your question was "why default to new?"
Well, suppose you are the compiler developer. Here are your choices when the compiler is faced with a shadowing method that lacks new and override:
Option 1: Do nothing; give no warning or error, and choose a behaviour. If the code breaks due to a brittle base class failure, too bad. You should have looked at your base class more carefully. Plainly we can do better than this.
Option 2: Make it an error. Now a base class author can break your build by changing a member of a base class. This is not a terrible idea, but we must now weigh the cost of desired build breaks -- because they found a bug -- against the costs of unwanted build breaks -- where the default behaviour is desired -- against the cost of ignoring the warning accidentally and introducing a bug.
This is a tricky call and there are arguments on all sides. Introducing a warning is a reasonable compromise position; you can always turn on "warnings are errors", and I recommend that you do.
Option 3: Make it a warning, and make it override if the base method is overridable, and shadowing if the base method is not overridable. Not only is this inconsistent, but we've just introduced another kind of brittle base class failure. Do you see it? What if the base class author changes their method from non-virtual to virtual, or vice versa? That would cause accidentally-shadowing methods to change from overriding to shadowing, or vice versa.
But let's leave that aside for the moment. What are the other consequences of automatically overriding if possible? Remember, the premise of the scenario is that the overriding is accidental and the derived class author is ignorant of the implementation details, the invariants, and the public surface area of the base class.
Automatically changing behaviour of all callers of the base class method seems insanely dangerous compared with the danger of changing the behaviours of only those callers that call the shadowing method via a receiver of the derived type.
Option 4: Make it a warning, and default to shadowing, not overriding. This choice is safer in general: it avoids a second kind of brittle base class failure, it avoids build breaks, callers of the method with base class receivers get the behaviour that their test cases expect, and callers of the method with derived class receivers get the behaviour they expect.
All design choices are the results of carefully weighing many mutually incompatible design goals. The designers of C# were particularly concerned with large teams working on versioned software components where base classes could change in unexpected ways, and teams might not communicate those changes well to each other.
Another reason that crossed my mind is the cost of using a virtual function table, but that seems like a scary reason, in the sense that what I as a coder want the code to do should be more important than saving CPU cycles. But perhaps back when the language was invented, that was not the case?
Virtual methods introduce costs; the obvious cost is the extra table jump at runtime and the code needed to get to it. There are also less obvious costs like: the jitter can't inline virtual calls to non-sealed methods, and so on.
But as you note, the reason to make non-virtual the default is not primarily for performance. The primary reason is that virtualization is incredibly dangerous and needs to be carefully designed in. The invariants that must be maintained by derived classes that override methods need to be documented and communicated. Proper design of type hierarchies is expensive, and making it opt-in lowers costs and increases safety. Frankly, I wish sealed was the default as well.
I have a project where quite a few functions and property getters will be defined abstractly. My question is: should I use an abstract class for this (with each function throwing NotImplementedException), or should I just use an interface? Or should I use both, making an interface and then an abstract class implementing that interface?
Note, even though all of these functions and such may be defined, it does not mean they will all be used in all use cases. For instance, AddUser in an authentication class may be defined in an interface, but not ever used in a website due to closed user sign up.
In general, the question of whether to use inheritance or an interface can be answered by thinking about it this way:
When thinking about hypothetical implementing classes, is it a case where these types are what I'm describing, or is it a case where these types can be or can do what I'm describing?
Consider, for example, the IEnumerable<T> interface. The classes that implement IEnumerable<T> are all different classes. They can be an enumerable structure, but they're fundamentally something else (a List<T> or a Dictionary<TKey, TValue> or a query, etc.)
On the other hand, look at the System.IO.Stream class. While the classes that inherit from that abstract class are different (FileStream vs. NetworkStream, for example), they are both fundamentally streams--just different kinds. The stream functionality is at the core of what defines these types, versus just describing a portion of the type or a set of behaviors that they provide.
Often you'll find it beneficial to do both; define an interface that defines your behavior, then an abstract class that implements it and provides core functionality. This will allow you to, if appropriate, have the best of both worlds: an abstract class for inheriting from when the functionality is core, and an interface to implement when it isn't.
Also, bear in mind that it's still possible to provide some core functionality on an interface through the use of extension methods. While this doesn't, strictly speaking, put any actual instance code on the interface (since that's impossible), you can mimic it. This is how the LINQ-to-Objects query functions work on IEnumerable<T>, by way of the static Enumerable class that defines the extension methods used for querying generic IEnumerable<T> instances.
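For example, here is a small sketch of that technique in the same spirit as Enumerable; IAuthenticator and ValidateOrThrow are hypothetical names, not framework types:

using System;

public interface IAuthenticator
{
    bool Validate(string user, string password);
}

public static class AuthenticatorExtensions
{
    // Callable as auth.ValidateOrThrow(...), as if declared on the interface,
    // even though the interface itself carries no implementation.
    public static void ValidateOrThrow(this IAuthenticator auth,
                                       string user, string password)
    {
        if (!auth.Validate(user, password))
            throw new UnauthorizedAccessException("Invalid credentials.");
    }
}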
As a side note, you don't need to throw any NotImplementedExceptions. If you define a function or property as abstract, then you don't need to (and, in fact, cannot) provide a function body for it within the abstract class; the inheriting classes will be forced to provide a method body. They might throw such an exception, but that's not something you need to worry about (and is true of interfaces as well).
Personally, I think it depends on what the "type" is defining.
If you're defining a set of behaviors, I would recommend an interface.
If, on the other hand, the type really defines a "type", then I'd prefer an abstract class. I would recommend leaving the methods abstract instead of providing an empty behavior, though.
Note, even though all of these functions and such may be defined, it does not mean they will all be used in all use cases.
If this is true, you should consider breaking this up into multiple abstract classes or interfaces. Having "inappropriate" methods in the base class/interface really is a violation of the Liskov Substitution Principle, and a sign of a design flaw.
If you're not providing any implementation, use an interface; otherwise, use an abstract class. If there are some methods that may not be implemented in subclasses, it might make sense to create an intermediate abstract class to do the legwork of throwing NotSupportedException or similar.
One advantage of abstract classes is that one can add to an abstract class new class members whose default implementation can be expressed in terms of existing class members, without breaking existing inheritors of that class. By contrast, if any new members are added to an interface, every implementation of that interface must be modified to add the necessary functionality.
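A short sketch of that advantage, using a hypothetical Shape class; the new member is expressed entirely in terms of the existing abstract member:

public abstract class Shape
{
    public abstract double Area();

    // Added in a later version without breaking any existing inheritor,
    // because its default implementation only uses Area().
    public bool IsLargerThan(Shape other) => Area() > other.Area();
}

Had Shape been an interface instead, adding IsLargerThan would have forced every implementation to be modified.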
It would be very nice if .NET allowed an interface to include default implementations for properties, methods, and events which did not make any use of object fields. From a technical standpoint, I would think such a thing could be accomplished without too much difficulty by having for each interface a list of default vtable entries which could be used with implementations that don't define all vtable slots. Unfortunately, nothing like that ability exists in .NET.
Abstract classes should be used when you can provide a partial implementation. Use interfaces when you don't want to provide any implementation at all - just definition.
In your question, it sounds like there is no implementation, so go with an interface.
Also, rather than throwing NotImplementedException you should declare your method/property with the abstract keyword so that all inheritors have to provide an implementation.
@Earlz I think this part of the question is directly related to the best way to 'attack' this problem: "Note, even though all of these functions and such may be defined, it does not mean they will all be used in all use cases."
What you should aim at is minimizing the number of such functions so that it becomes irrelevant (or at least not that important) which of the two you use. So improve the design as much as you can and you will see that it really doesn't matter which way you go.
Better yet, post a high-level view of what you are trying to do and let's see if we can come up with something nice together. More brains working towards a common goal will produce a better answer/design.
Another pattern that works in some situations is to create a base class that is not abstract. It has a set of public methods that define the API; each of these calls a protected method that is overridable.
This allows the derived class to pick and choose what methods it needs to implement.
So, for instance:
public class UserServiceBase // illustrative containing class
{
    // Public API: a stable, non-virtual entry point.
    public void AddUser(object user)
    {
        AddUserCore(user);
    }

    // Derived classes override only the pieces they need.
    protected virtual void AddUserCore(object user)
    {
        // no implementation in base
    }
}
Say I have a class with a method that I want to be protected and internal, meaning that only derived classes in the same assembly would be able to call it.
Since protected internal means protected or internal, you have to make a choice. What do you choose in this case - protected or internal?
Personally I would choose protected. If subclasses in your own assembly are good enough to call the method, why wouldn't a subclass in another assembly? Perhaps you could refactor the functionality into a separate (internal) class altogether.
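A rough sketch of that refactoring, with hypothetical names; the sensitive logic moves into an internal collaborator that nothing outside the assembly can see:

internal class AccountInternals
{
    // Only code in this assembly can even see this type, let alone call it.
    internal void Reconcile(Account account) { /* shared logic */ }
}

public class Account
{
    private readonly AccountInternals internals = new AccountInternals();

    // Derived classes (anywhere) see only this thin protected entry point.
    protected void Reconcile() => internals.Reconcile(this);
}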
You really need to think objectively about the purpose of the method. Internal accessibility almost always feels wrong to me, mostly because of my experience trying to derive from controls or classes in the .NET Framework, where I ran into a brick wall because someone decided to mark a class or method as internal. The original author never noticed that not having access to that method made it much harder to implement a subclass.
EDIT
To clarify, internal accessibility for a class is very useful and I wasn't implying internal in general is bad. My point was that internal methods on an otherwise public class seems wrong to me. A properly designed base class should not give an unfair advantage to derived classes in the same assembly.
I want that only derived classes in the assembly would be able to call it.
Well then, you have two choices. You can make it protected, and whenever one of your customers extends your class and calls your method and you find out about it, you can write them a sternly worded letter telling them to please stop doing that. Or you can make it internal, and do code reviews of your coworkers' code to ensure that they don't use the method they're not supposed to use.
My guess is that the latter is the cheaper and easier thing to do. I'd make it internal.
I believe the right choice is internal. This way you prevent code outside your assembly from calling the method, which leaves you only to be careful to call it from derived classes alone. It is easier to be careful in the assembly you write than to hope other people will be careful when they use it.
It's such a quirky decision that protected internal means protected OR internal. For this precise case I would use internal. The reason is that if encapsulation is going to be broken, I would rather it be broken by me than by someone not under my control.
I think the answer varies based on your needs.
If I were you, I would do something like this:
public class YourClass
{
    protected class InnerClass
    {
        internal void YourMethod()
        {
            // Your Code
        }
    }
}
I have two basic interface-related concepts that I need to have a better understanding of.
1) How do I use interfaces if I only want to use some of the interface methods in a given class? For example, my FriendlyCat class inherits from Cat and implements ICatSounds. ICatSounds exposes MakeSoftPurr(), MakeLoudPurr() and MakePlayfulMeow(). But it also exposes MakeHiss() and MakeLowGrowl(), both of which I don't need for my FriendlyCat class. When I try to implement only some of the methods exposed by the interface, the compiler complains that the others (which I don't need) have not been implemented.
Is the answer to this that I must create an interface that contains only the methods I want to expose? For example, from my CatSounds class, I would create IFriendlyCatSounds? If this is true, then what happens when I want to use the other methods in another situation? Do I need to create another custom-tailored interface? This doesn't seem like good design to me. It seems like I should be able to create an interface with all of the relevant methods (ICatSounds) and then pick and choose which methods I am using based on the implementation (FriendlyCat).
2) My second question is pretty basic, but still a point of confusion for me. When I implement the interface (using Shift + Alt + F10) I get the interface's methods with "throw new NotImplementedException();" in the body. What else do I need to be doing besides referencing the interface method that I want to expose in my class? I am sure this is a big conceptual oops, but similar to inheriting from a base class, I want to gain access to the methods exposed by the interface without adding to or changing them. What is the compiler expecting me to implement?
-- EDIT --
I understand #1 now, thanks for your answers, but I still need further elaboration on #2. My initial understanding was that an interface was a reflection of the fully designed methods of a given class. Is that wrong? So, if ICatSounds has MakeSoftPurr() and MakeLoudPurr(), then both of those functions exist in CatSounds and do what they imply. Then this:
public class FriendlyCat : Cat, ICatSounds
{
    ...

    // explicit interface implementations take no access modifier
    void ICatSounds.MakeLoudPurr()
    {
        throw new NotImplementedException();
    }

    void ICatSounds.MakeSoftPurr()
    {
        throw new NotImplementedException();
    }
}
is really a reflection of code that already exists, so why am I implementing anything? Why can't I do something like:
FriendlyCat fcat = new FriendlyCat();
fcat.MakeSoftPurr();
If the answer is, as I assume it will be, that the method has no code and therefore will do nothing, then: if I want these methods to behave exactly as the methods in the class for which the interface is named, what do I do?
Thanks again in advance...
An interface is a contract. You have to provide at least stubs for all of its methods, so designing a good interface is a balancing act between having lots of little interfaces (and having to use several of them to get anything done) and having large, complex interfaces that you only use (or implement) parts of. There is no hard and fast rule for how to choose.
But you do need to keep in mind that once you ship your first version of the code, it becomes a lot more difficult to change your interfaces. It's best to think at least a little bit ahead when you design them.
As for implementation, it's pretty common to see code that stubs out the methods that aren't written yet and throws NotImplementedException. You don't really want to ship NotImplementedException in most cases, but it's a good way to get around the problem of the code not compiling because you haven't implemented required parts of the interface yet.
There's at least one example in the framework of "deliberately" not implementing all of an interface's contract in a class: ReadOnlyCollection<T>
Since this class implements IList<T>, it has to have an "Insert" method, which makes no sense in a read-only collection.
The way Microsoft have implemented it is quite interesting. Firstly, they implement the method explicitly, something like this:
public class ReadOnlyCollection<T> : IList<T>
{
    // explicit implementation: no access modifier allowed here
    void IList<T>.Insert(int index, T item)
    {
        throw new NotSupportedException();
    }

    /* ... rest of IList<T> implemented normally */
}
This means that users of ReadOnlyCollection<T> don't see the Insert method in IntelliSense - they would only see it if the collection were cast to IList<T> first.
Having to do this is really a hint that your interface hierarchy is a bit messed up and needs refactoring, but it's an option if you have no control over the interfaces (or need backwards compatibility, which is probably why MS decided to take this route in the framework).
You have to implement all the methods in your interface. Create two interfaces, IHappyCatSounds and IMeanCatSounds, split out those methods. Don't implement IMeanCatSounds in FriendlyCat, because a friendly cat is not a mean cat. You have to think about an interface as a contract. When you write the interface, you are guaranteeing that every class that implements the interface will have those members.
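A sketch of that split (assuming the Cat base class from your question; bodies elided):

public interface IHappyCatSounds
{
    void MakeSoftPurr();
    void MakeLoudPurr();
    void MakePlayfulMeow();
}

public interface IMeanCatSounds
{
    void MakeHiss();
    void MakeLowGrowl();
}

// FriendlyCat now only promises the sounds it can actually make.
public class FriendlyCat : Cat, IHappyCatSounds
{
    public void MakeSoftPurr() { /* purr softly */ }
    public void MakeLoudPurr() { /* purr loudly */ }
    public void MakePlayfulMeow() { /* meow playfully */ }
}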
It throws a NotImplementedException because you haven't implemented it yet. The compiler is expecting you to provide the code that runs when the cat purrs, meows or hisses. An interface doesn't have code in it; it's nothing more than a contract for any class that implements it, so you can't really "access the code" the interface implements, because the interface doesn't implement any code. You write the code when you implement the interface.
For example:
// this is the interface, or the "contract". It guarantees
// that anything that implements IMeowingCat will have a void
// method named Meow that takes no parameters.
public interface IMeowingCat
{
    void Meow();
}

// this class, which implements IMeowingCat, is the "interface
// implementation". *You* write the code in here.
public class MeowingCat : IMeowingCat
{
    public void Meow()
    {
        Console.WriteLine("Meow. I'm hungry");
    }
}
I'd strongly suggest picking up a copy of The Object-Oriented Thought Process and reading it through in its entirety. It's short, but it should help you clear things up.
For starters, though, I'd read this and this.
Imagine that you could "pick and choose." For example, suppose you were allowed to not implement ICatSounds.MakeHiss() on FriendlyCat. Now what happens when a user of your classes writes the following code?
public ICatSounds GetCat()
{
    return new FriendlyCat();
}

ICatSounds cat = GetCat();
cat.MakeHiss();
The compiler has to let this pass: after all, GetCat is returning an ICatSounds, it's being assigned to an ICatSounds variable and ICatSounds has a MakeHiss method. But what happens when the code runs? .NET finds itself calling a method that doesn't exist.
This would be bad if it were allowed to happen. So the compiler requires you to implement all the methods in the interface. Your implementation is allowed to throw exceptions, such as NotImplementedException or NotSupportedException, if you want to: but the methods have to exist; the runtime has to be able to at least call them, even if they blow up.
See also Liskov Substitution Principle. Basically, the idea is that if FriendlyCat is an ICatSounds, it has to be substitutable anywhere an ICatSounds is used. A FriendlyCat without a MakeHiss method is not substitutable because users of ICatSounds could use the MakeHiss method but users of FriendlyCat couldn't.
A few thoughts:
Interface Segregation Principle. Interfaces should be as small as possible, and only contain things that cannot be separated. Since MakePlayfulMeow() and MakeHiss() are not intrinsically tied together, they should be on two separate interfaces.
You're running into a common problem with deep inheritance trees, especially of the type of inheritance you're describing. Namely, there are commonly three objects that have three different behaviors in common, only none of them share the same set. So a Lion might Lick() and Roar(), a Cheetah might Meow() and Lick(), and an AlienCat might Roar() and Meow(). In this scenario, there's no clear inheritance hierarchy that makes sense. Because of situations like these, it often makes more sense to separate the behaviors into separate classes, and then create aggregates that combine the appropriate behaviors.
Consider whether that's the right design anyway. You normally don't tell a cat to purr; you do something to it that causes it to purr. So instead of MakePlayfulMeow() as a method on the cat, maybe it makes more sense to have a Show(Thing) method on the cat, and if the cat sees a Toy object, it can decide to emit an appropriate sound. In other words, instead of thinking of your program as manipulating objects, think of your program as a series of interactions between objects. In this type of design, interfaces often end up looking less like 'things that can be manipulated' and more like 'messages that an object can send'.
Consider something closer to a data-driven, discoverable approach rather than a more static approach. Instead of Cat.MakePlayfulMeow(), it might make more sense to have something like Cat.PerformAction(new PlayfulMeowAction()). This gives an easy way of having a more generic interface, which can still be discoverable (Cat.GetPossibleActions()), and helps solve some of the 'lions can't purr' issues common in deep inheritance hierarchies (see the sketch after this list).
Another way of looking at things is to not make interfaces necessarily match class definitions 1:1. Consider a class to define what something is, and an interface as something to describe its capabilities. So whether FriendlyCat should inherit from something is a reasonable question, but the interfaces it exposes should be a description of its capabilities. This is slightly different, but not totally incompatible, from the idea of 'interfaces as message declarations' that I suggested in the third point.
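Here is a rough sketch of the data-driven approach mentioned in the fourth thought above; every name in it is hypothetical:

using System.Collections.Generic;

public interface ICatAction
{
    void PerformOn(Cat cat);
}

public class PlayfulMeowAction : ICatAction
{
    public void PerformOn(Cat cat) { /* emit a playful meow */ }
}

public class Cat
{
    private readonly List<ICatAction> possibleActions = new List<ICatAction>();

    // Discoverable: callers can ask what this particular cat can do.
    public IEnumerable<ICatAction> GetPossibleActions() => possibleActions;

    // Generic: new actions can be added without changing Cat's interface.
    public void PerformAction(ICatAction action) => action.PerformOn(this);
}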
Possible Duplicate:
Why C# implements methods as non-virtual by default?
I'm speaking primarily about C# and .NET 3.5, but I wonder in general what the benefits are of not considering everything "virtual" - which is to say, having a method called on an instance of a child class always execute the child-most version of that method. In C#, this is not the case if the parent method is not labeled with the "virtual" modifier. Example:
public class Parent
{
    public void NonVirtual() { Console.WriteLine("Non-Virtual Parent"); }
    public virtual void Virtual() { Console.WriteLine("Virtual Parent"); }
}

public class Child : Parent
{
    public new void NonVirtual() { Console.WriteLine("Non-Virtual Child"); }
    public override void Virtual() { Console.WriteLine("Virtual Child"); }
}

public class Program
{
    public static void Main(string[] args)
    {
        Child child = new Child();
        Parent parent = new Child();
        var anon = new Child();

        child.NonVirtual();           // => Child
        parent.NonVirtual();          // => Parent
        anon.NonVirtual();            // => Child
        ((Parent)child).NonVirtual(); // => Parent

        child.Virtual();              // => Child
        parent.Virtual();             // => Child
        anon.Virtual();               // => Child
        ((Parent)child).Virtual();    // => Child
    }
}
What exactly are the benefits of the non-virtual behavior observed above? The only thing I could think of was "What if the author of Parent doesn't want his method to be virtual?", but then I realized I couldn't think of a good use case for that. One might argue that the behavior of the class depends on how a non-virtual method operates - but then it seems to me that there is some poor encapsulation going on, or that the method should be sealed.
Along these same lines, it seems like 'hiding' is normally a bad idea. After all, if a Child object and its methods were created, it was presumably done for a specific reason: to override the Parent. And if Child implements (and hides the parent's) NonVirtual(), it is super easy not to get what many might consider the "expected" behavior of calling Child::NonVirtual(). (I say "expected" because it is sometimes easy not to notice that 'hiding' is happening.)
So, what are the benefits of not allowing everything to have "virtual" behavior? What is a good use-case for hiding a non-virtual parent if it's so easy to get unexpected behavior?
If anyone is curious as to why I pose this question: I was recently examining the Castle Project's DynamicProxy library. The one main hurdle in using it is that any method (or property) you want to proxy has to be virtual, and this isn't always an option for developers (if we don't have control over the source). Not to mention that the purpose of DynamicProxy is to avoid coupling between your proxied class and whatever behavior you are trying to achieve with the proxy (such as logging, or perhaps a memoization implementation). By forcing virtual methods, what is instead achieved is a very thin but obtuse coupling of DynamicProxy to all the classes it is proxying. Imagine: you have a ton of methods labeled virtual even though they are never inherited and overridden, so any other developer looking at the code might wonder "why are these even virtual? let's change them back".
Anyway, the frustration there led me to wonder what the benefits of non-virtual are, when it seems having everything virtual might have been clearer (IMO, I suppose) and perhaps(?) more beneficial.
EDIT: Labeling as community wiki, since it seems like a question that might have subjective answers
Because you don't want people overriding methods that you haven't designed the class for. It takes a significant effort to make sure it is safe to override a method or even derive from a class. It's much safer to make it non-virtual if you haven't considered what might happen.
Eric Lippert covers this here, on method hiding
In many cases, it is crucial for a class to function properly that a given method have a specific behavior. If the method is overridden in an inherited class, there is no guarantee that it will correctly implement the expected behavior. You should only mark a method virtual if your class is specifically designed for inheritance and will support a method with a different implementation. Designing for inheritance is not easy; there are many cases where incorrectly overriding a method will break the class's internal behavior.
Simple: the entire point of a class is to encapsulate some kind of abstraction. For example, we want an object that behaves as a text string.
Now, if everything had been virtual, I would be able to do this:
// hypothetical: only possible if String were unsealed and Trim were virtual
class MessedUpString : String
{
    public override string Trim() { throw new Exception(); }
}
and then pass this to some function that expects a string. And the moment they try to trim that string, it explodes.
The string no longer behaves as a string. How is that ever a good thing?
If everything is made virtual, you're going to have a hard time enforcing class invariants. You allow the class abstraction to be broken.
By default, a class should encapsulate the rules and behaviors that it is expected to follow. Everything you make virtual is in principle an extensibility hook, the function can be changed to do anything whatsoever. That only makes sense in a few cases, when we have behavior that is actually user-defined.
The reason classes are useful is that they allow us to ignore the implementation details. We can simply say "this is a string object, I know it is going to behave as a string. I know it will never violate any of these guarantees". If that guarantee can not be maintained, the class is useless. You might as well just make all data members public and move the member methods outside the class.
Do you know the Liskov Substitution Principle?
Anywhere an object of base class B is expected, you should be able to pass an object of derived class D. That is one of the most fundamental rules of object-oriented programming. We need to know that derived classes will still work when we upcast them to the base class and pass them to a function that expect the base class. That means we have to make some behavior fixed and unchangeable.
One key benefit of a non-virtual method is that it can be bound at compile time. That is, the compiler can be sure which actual method is to be called when a method is used in code.
The actual method to be called cannot be known at compile time if that method is declared virtual, since the reference may actually point to a subtype that has overridden it. Hence there is a small overhead at runtime when the actual method to call must be resolved.
In a framework, a non-virtual member can be called with a known range of expected outputs; if the method were virtual, it could produce a result that was never tested for. Keeping methods non-virtual gives framework actions predictable results.