Why in C# am I required to specify the access modifier of the method I'm overriding if I'm not changing it? Wouldn't it be simpler and more logical not to specify any access modifier at all in this situation?
(Just to clarify: I write this question not because I think that I'm smarter than the language designers, but because I'm sure that they had a good reason that I can't understand yet.)
Edit: I'm not asking about why we can't change the access modifier, but rather about why we have to redundantly specify it.
While I'm not one of the language designers, I think it's entirely reasonable to force you to specify it:
Anyone reading the code immediately knows the access without having to check the original declaration
A change to the access of the original declaration is a breaking change this way, making it much more obvious to the overriding code
It makes a method which happens to override a higher declaration consistent with non-overriding methods. (It would be odd for a method with no access modifier to be sometimes public and sometimes private, depending on override.)
Basically, redundancy is sometimes a good thing :)
To illustrate Jon's point in code, consider the following:
class LevelOne
{
    public virtual void SayHi()
    {
        Console.WriteLine("Hi!");
    }
}

class LevelTwo : LevelOne
{
    // hypothetically omitting the access modifier, as the question proposes
    override void SayHi()
    {
        Console.WriteLine("Hi!");
    }
}

class LevelThree : LevelTwo
{
    override void SayHi()
    {
        Console.WriteLine("Hi!");
    }
}
Ad infinitum. Now imagine you're at n-depth of derived classes, you have to follow the inheritance hierarchy all the way back to the original base class to find out the access modifier of the method. That's just plain annoying!
It makes the code easier to understand for another developer who didn't write it.
When you read override, you know the method is polymorphic.
Sorry if the question sounds confusing. What I mean is that if I have a class that has a method that does a bunch of calculations and then returns a value, I can either make that method public (which gives my other classes access), or I can make it private and make a public get method.
Something like this:
public int publicmethod()
{
    return privatemethod();
}

private int privatemethod()
{
    // do stuff
    int value = 0; // result of the calculations
    return value;
}
Is this a futile exercise or does it provide additional program security?
Well, there is no additional security here. However, such a usage can sometimes make sense.
For example, the private and public method may have different semantics.
// base class
public virtual void BuyFood()
{
    BuyPizza();
    BuyCoke();
}

private void BuyPizza()
{
    // ...
}

// derived class
public override void BuyFood()
{
    BuyChopSuey();
}

private void BuyChopSuey()
{
    // ...
}
So your implementation is just calling a private method -- but, importantly, you expose the semantics: your BuyFood operation is just BuyChopSuey(). Your code says, "in this class, buying food is just buying chop suey", in a clear way. You are able to add BuyTsingtaoBeer() into BuyFood() at any time without changing the semantics of either method.
It is completely redundant. It does not provide anything except another name for the same thing and another indirection for readers to follow. Simply make a single implementation, and make it public. On the same note, getX() { return x; } setX(T newX) { x = newX; } does not encapsulate anything, at best it's future-proofing.
You may end up implementing a particular function required by an interface in a single line, largely delegating to (possibly private) methods which exist for other good reasons. This is different, and more justified (but again, if it's only return someMethod(); you should probably abolish the private implementation and assume the common name). A particular case is when you need to implement two methods which do the same thing (e.g. from separate interfaces).
I think either way is fine; it's more a matter of style, assuming the method doesn't change the state of the class. If you have a class that has a bunch of properties and very few methods, it probably makes more sense to define another property. If you have a lot of methods in the class but few properties, then a method is more consistent with your overall class design.
If the method changes a bunch of other class variables, then I'd expose it as a public method instead of a property.
I don't think either way, property or method, is necessarily more secure. It depends on what checks you do - is the caller allowed to perform the calculation? Are all variables used in the calculations within acceptable ranges? Etc. All of these checks can be performed whether you are using a property or a method.
Well, actually the question is: what code do I want to be able to call this method?
Any code in general, even from other assemblies? Make the method public.
Any code from the same assembly? Make it internal.
Only code from this class? Make it private.
Having a private method directly aliased to a public method only makes the private method callable from the outside, which contradicts its private status.
If the method only does some calculation and doesn't use or change anything in the object, make it a public static method:
public static int CalculationMethod(int input)
{
    // do stuff
    int value = input * 2; // placeholder for the real calculation
    return value;
}
That way any code can use the method without having to create an instance of the class:
int result = ClassName.CalculationMethod(42);
Instead of public consider internal, which would give access only to code in the same assembly.
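For example, a minimal sketch of the internal variant, reusing the ClassName wrapper from above (the calculation body is just a placeholder):
public class ClassName
{
    // callable from any code in the same assembly, but not from other assemblies
    internal static int CalculationMethod(int input)
    {
        // do stuff
        return input * 2; // placeholder calculation
    }
}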
Every so often, I run into a case where I want a collection of classes all to possess similar logic. For example, maybe I want both a Bird and an Airplane to be able to Fly(). If you're thinking "strategy pattern", I would agree, but even with strategy, it's sometimes impossible to avoid duplicating code.
For example, let's say the following apply (and this is very similar to a real situation I recently encountered):
Both Bird and Airplane need to hold an instance of an object that implements IFlyBehavior.
Both Bird and Airplane need to ask the IFlyBehavior instance to Fly() when OnReadyToFly() is called.
Both Bird and Airplane need to ask the IFlyBehavior instance to Land() when OnReadyToLand() is called.
OnReadyToFly() and OnReadyToLand() are private.
Bird inherits Animal and Airplane inherits PeopleMover.
Now, let's say we later add Moth, HotAirBalloon, and 16 other objects, and let's say they all follow the same pattern.
We're now going to need 20 copies of the following code:
private IFlyBehavior _flyBehavior;

private void OnReadyToFly()
{
    _flyBehavior.Fly();
}

private void OnReadyToLand()
{
    _flyBehavior.Land();
}
Two things I don't like about this:
It's not very DRY (the same nine lines of code are repeated over and over again). If we discovered a bug or added a BankRight() to IFlyBehavior, we would need to propagate the changes to all 20 classes.
There's not any way to enforce that all 20 classes implement this repetitive internal logic consistently. We can't use an interface because interfaces only permit public members. We can't use an abstract base class because the objects already inherit base classes, and C# doesn't allow multiple inheritance (and even if the classes didn't already inherit classes, we might later wish to add a new behavior that implements, say, ICrashable, so an abstract base class is not always going to be a viable solution).
What if...?
What if C# had a new construct, say pattern or template or [fill in your idea here], that worked like an interface, but allowed you to put private or protected access modifiers on the members? You would still need to provide an implementation for each class, but if your class implemented the PFlyable pattern, you would at least have a way to enforce that every class had the necessary boilerplate code to call Fly() and Land(). And, with a modern IDE like Visual Studio, you'd be able to automatically generate the code using the "Implement Pattern" command.
Personally, I think it would make more sense to just expand the meaning of interface to cover any contract, whether internal (private/protected) or external (public), but I suggested adding a whole new construct first because people seem to be very adamant about the meaning of the word "interface", and I didn't want semantics to become the focus of people's answers.
Questions:
Regardless of what you call it, I'd like to know whether the feature I'm suggesting here makes sense. Do we need some way to handle cases where we can't abstract away as much code as we'd like, due to the need for restrictive access modifiers or for reasons outside of the programmer's control?
Update
From AakashM's comment, I believe there is already a name for the feature I'm requesting: a Mixin. So, I guess my question can be shortened to: "Should C# allow Mixins?"
The problem you describe could be solved using the Visitor pattern (everything can be solved using the Visitor pattern, so beware!).
The visitor pattern lets you move the implementation logic towards a new class. That way you do not need a base class, and a visitor works extremely well over different inheritance trees.
To sum up:
New functionality does not need to be added to all different types
The call to the visitor can be pulled up to the root of each class hierarchy
For a reference, see the Visitor pattern
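For example, here's a minimal sketch of how it could look for the flying example (the visitor class and method names are illustrative, not from the question):
public class Animal { }
public class PeopleMover { }

// all flying-related logic lives in the visitor rather than in each class
public interface IFlyVisitor
{
    void Visit(Bird bird);
    void Visit(Airplane airplane);
}

public class FlyVisitor : IFlyVisitor
{
    public void Visit(Bird bird) { /* flying logic for birds */ }
    public void Visit(Airplane airplane) { /* flying logic for airplanes */ }
}

// each class only needs a trivial Accept method, regardless of its base class
public class Bird : Animal
{
    public void Accept(IFlyVisitor visitor) { visitor.Visit(this); }
}

public class Airplane : PeopleMover
{
    public void Accept(IFlyVisitor visitor) { visitor.Visit(this); }
}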
Can't we use extension methods for this?
// extension methods must be declared in a static class
public static class FlyBehaviorExtensions
{
    public static void OnReadyToFly(this IFlyBehavior flyBehavior)
    {
        flyBehavior.Fly();
    }
}
This mimics the functionality you wanted (or Mixins)
Visual Studio already offers this in 'poor man's form' with code snippets. Also, with the refactoring tools a la ReSharper (and maybe even the native refactoring support in Visual Studio), you get a long way in ensuring consistency.
[EDIT: I didn't think of extension methods; that approach brings you even further (you only need to keep the _flyBehaviour as a private variable). This makes the rest of my answer probably obsolete...]
However, just for the sake of the discussion: how could this be improved? Here's my suggestion.
One could imagine something like the following being supported by a future version of the C# compiler:
// keyword 'pattern' marks the code as eligible for inclusion in other classes
pattern WithFlyBehaviour
{
    private IFlyBehavior _flyBehavior;

    private void OnReadyToFly()
    {
        _flyBehavior.Fly();
    }

    [patternmethod]
    private void OnReadyToLand()
    {
        _flyBehavior.Land();
    }
}
Which you could then use like this:
// probably the attribute syntax can not be reused here, but you get the point
[UsePattern(WithFlyBehaviour)]
class FlyingAnimal
{
    private bool _readyToFly;

    public void SetReadyToFly(bool ready)
    {
        _readyToFly = ready;
        if (ready) OnReadyToFly(); // OnReadyToFly() callable, although not explicitly present in FlyingAnimal
    }
}
Would this be an improvement? Probably. Is it really worth it? Maybe...
You just described aspect-oriented programming.
One popular AOP implementation for C# seems to be PostSharp (Main site seems to be down/not working for me though, this is the direct "About" page).
To follow up on the comment: I'm not sure if PostSharp supports it, but I think you are talking about this part of AOP:
Inter-type declarations provide a way to express crosscutting concerns affecting the structure of modules. Also known as open classes, this enables programmers to declare in one place members or parents of another class, typically in order to combine all the code related to a concern in one aspect.
Could you get this sort of behavior by using the new ExpandoObject in .NET 4.0?
Scala traits were developed to address this kind of scenario. There's also some research to include traits in C#.
UPDATE: I created my own experiment to have roles in C#. Take a look.
I would use extension methods to implement the behaviour, as the code below shows.
Let Bird and Airplane implement an interface IFlyer that exposes an IFlyBehavior property:
public interface IFlyer
{
    IFlyBehavior FlyBehavior { get; set; }
}

public class Bird : IFlyer
{
    public IFlyBehavior FlyBehavior { get; set; }
}

public class Airplane : IFlyer
{
    public IFlyBehavior FlyBehavior { get; set; }
}
Create an extension class for IFlyer
public static class IFlyerExtensions
{
    public static void OnReadyToFly(this IFlyer flyer)
    {
        flyer.FlyBehavior.Fly();
    }

    public static void OnReadyToLand(this IFlyer flyer)
    {
        flyer.FlyBehavior.Land();
    }
}
So I recently ran into this C# statement at work:
public new string SomeFunction(int i)
{
    return base.SomeFunction(i);
}
I searched the web but I think I can find a better answer here.
Now, I'm guessing that all this does is return a new string with the same value as the string returned by the call to base.SomeFunction(i)... is this correct?
Also, does this feature exist in other languages (java specifically)?
EDIT:
In my specific case, base.SomeFunction is protected and NOT virtual... does this make a difference? Thanks
No, it means that it's hiding SomeFunction in the base class rather than overriding it. If there weren't a method in the base class with the same signature, you'd get a compile-time error (because you'd be trying to hide something that wasn't there!)
See this question for more information. (I don't think this is a duplicate question, as it's about what "new" is for at all rather than just talking about the warning when it's absent.)
Duplicate example from my answer on that question though, just to save the clickthrough...
Here's an example of the difference between hiding a method and overriding it:
using System;

class Base
{
    public virtual void OverrideMe()
    {
        Console.WriteLine("Base.OverrideMe");
    }

    public virtual void HideMe()
    {
        Console.WriteLine("Base.HideMe");
    }
}

class Derived : Base
{
    public override void OverrideMe()
    {
        Console.WriteLine("Derived.OverrideMe");
    }

    public new void HideMe()
    {
        Console.WriteLine("Derived.HideMe");
    }
}

class Test
{
    static void Main()
    {
        Base x = new Derived();
        x.OverrideMe();
        x.HideMe();
    }
}
The output is:
Derived.OverrideMe
Base.HideMe
'new' is the member-hiding keyword. From the docs:
When used as a modifier, the new keyword explicitly hides a member inherited from a base class. When you hide an inherited member, the derived version of the member replaces the base-class version. Although you can hide members without the use of the new modifier, the result is a warning. If you use new to explicitly hide a member, it suppresses this warning and documents the fact that the derived version is intended as a replacement.
The intent behind your sample code is to make the function public in the child, even though it was protected in the base. The language doesn't let you make a class member more visible in the child, so this instead declares a new function that happens to have the same name. This hides the base function, but then again, the caller wouldn't have had access to that one in the first place, while this function calls the one in the base.
In short, the code is a bit of a hack, but it does make sense. It's probably a hint that the base might need its functionality refactored, though.
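To make the shape of that pattern concrete, here's a minimal sketch (BaseThing and ChildThing are hypothetical names):
public class BaseThing
{
    protected string SomeFunction(int i)
    {
        return i.ToString();
    }
}

public class ChildThing : BaseThing
{
    // not an override: a new public method with the same name,
    // forwarding to the protected base implementation
    public new string SomeFunction(int i)
    {
        return base.SomeFunction(i);
    }
}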
I have the following snippet of code that's generating the "Use new keyword if hiding was intended" warning in VS2008:
public double Foo(double param)
{
    return base.Foo(param);
}
The Foo() function in the base class is protected and I want to expose it to a unit test by putting it in wrapper class solely for the purpose of unit testing. I.e. the wrapper class will not be used for anything else. So one question I have is: is this accepted practice?
Back to the new warning. Why would I have to new the overriding function in this scenario?
The new just makes it absolutely clear that you know you are stomping over an existing method. Since the existing code was protected, it isn't as big a deal - you can safely add the new to stop it moaning.
The difference comes when your method does something different; any variable that references the derived class and calls Foo() would do something different (even with the same object) from one that references the base class and calls Foo():
SomeDerived obj = new SomeDerived();
obj.Foo(); // runs the new code
SomeBase objBase = obj; // still the same object
objBase.Foo(); // runs the old code
This could obviously have an impact on any existing code that knows about SomeDerived and calls Foo() - i.e. it is now running a completely different method.
Also, note that you could mark it protected internal, and use [InternalsVisibleTo] to provide access to your unit test (this is the most common use of [InternalsVisibleTo]); then your unit tests can access it directly without the derived class.
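A minimal sketch of that approach (the test assembly name MyProject.Tests is hypothetical):
// in the production assembly (e.g. in AssemblyInfo.cs or any source file):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProject.Tests")]

public class SomeBase
{
    // visible to derived classes, to this assembly, and (via the attribute) to MyProject.Tests
    protected internal double Foo(double param)
    {
        return param * 2; // placeholder implementation
    }
}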
The key is that you're not overriding the method. You're hiding it. If you were overriding it, you'd need the override keyword (at which point, unless it's virtual, the compiler would complain because you can't override a non-virtual method).
You use the new keyword to tell both the compiler and anyone reading the code, "It's okay, I know this is only hiding the base method and not overriding it - that's what I meant to do."
Frankly I think it's rarely a good idea to hide methods - I'd use a different method name, like Craig suggested - but that's a different discussion.
You're changing the visibility without changing the name. Call your function TestFoo and it will work. Yes, IMHO it's acceptable to subclass for this reason.
You'll always find some tricky situations where the new keyword can be used for hiding, although it can be avoided most of the time.
However, recently I really needed this keyword, mainly because the language lacks a proper syntax feature for completing an existing accessor. For instance:
If you consider an old-fashioned class like:
KeyedCollection<TKey, TItem>
You will notice that the accessor for accessing the items through an index is:
TItem this[Int32 index] { get; set; }
It has both { get; set; }, and they are of course mandatory due to the inheritance from ICollection<T> and Collection<T>, but there is only one { get; } for accessing the items through their keys (I have some guesses about this design and there are plenty of reasons for it, so please note that I picked KeyedCollection<TKey, TItem> just for illustration purposes).
Anyway, there is only one getter for key access:
TItem this[TKey key] { get; }
But what if I want to add { set; } support? Technically speaking it's not that stupid, especially if you keep reasoning from the former definition of the property; it's just a method... The only way is to explicitly implement another dummy interface, but when you want to make it implicit you have to come up with the new keyword: I'm hiding the accessor definition, keeping the base get; definition and just adding a set accessor stuffed with some personal logic to make it work.
I think for this very specific scenario the keyword is perfectly applicable, in particular since no change is brought to the { get; } part.
public new TItem this[TKey key]
{
    get { return base[key]; }
    set { /* ... */ }
}
That's pretty much the only trick to avoid this sort of warning, because the compiler is suggesting that you might be hiding something without realizing it.
I have a class with some abstract methods, but I want to be able to edit a subclass of that class in the designer. However, the designer can't edit the subclass unless it can create an instance of the parent class. So my plan is to replace the abstract methods with stubs and mark them as virtual - but then if I make another subclass, I won't get a compile-time error if I forget to implement them.
Is there a way to mark the methods so that they have to be implemented by subclasses, without marking them as abstract?
Well you could do some really messy code involving #if - i.e. in DEBUG it is virtual (for the designer), but in RELEASE it is abstract. A real pain to maintain, though.
But other than that: basically, no. If you want designer support it can't be abstract, so you are left with "virtual" (presumably with the base method throwing a NotImplementedException).
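A rough sketch of that #if approach (the class and method names are hypothetical):
using System;

public abstract class ViewBase
{
#if DEBUG
    // designer-friendly stub in DEBUG builds
    public virtual void Render()
    {
        throw new NotImplementedException();
    }
#else
    // enforced at compile time in RELEASE builds
    public abstract void Render();
#endif
}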
Of course, your unit tests will check that the methods have been implemented, yes? ;-p
Actually, it would probably be quite easy to test via generics - i.e. have a generic test method of the form:
[Test]
public void TestFoo() {
    ActualTest<Foo>();
}

[Test]
public void TestBar() {
    ActualTest<Bar>();
}

static void ActualTest<T>() where T : SomeBaseClass, new() {
    T obj = new T();
    // Assert something involving obj
}
You could use the reference to implementation idiom in your class.
public class DesignerHappy
{
    private ADesignerHappyImp imp_;

    public int MyMethod()
    {
        return imp_.MyMethod();
    }

    public int MyProperty
    {
        get { return imp_.MyProperty; }
        set { imp_.MyProperty = value; }
    }
}

public abstract class ADesignerHappyImp
{
    public abstract int MyMethod();
    public int MyProperty { get; set; }
}
DesignerHappy just exposes the interface you want but forwards all the calls to the implementation object. You extend the behavior by sub-classing ADesignerHappyImp, which forces you to implement all the abstract members.
You can provide a default implementation of ADesignerHappyImp, which is used to initialize DesignerHappy by default and expose a property that allows you to change the implementation.
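For example, a possible sketch of that default implementation (DefaultDesignerHappyImp is a hypothetical name):
public class DefaultDesignerHappyImp : ADesignerHappyImp
{
    public override int MyMethod()
    {
        return 0; // harmless, designer-friendly default
    }
}

// and inside DesignerHappy, something along these lines:
//     private ADesignerHappyImp imp_ = new DefaultDesignerHappyImp();
//
//     public ADesignerHappyImp Implementation
//     {
//         get { return imp_; }
//         set { imp_ = value; }
//     }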
Note that "DesignMode" is not set in the constructor. It's set after VS parses the InitializeComponent() method.
I know it's not quite what you are after, but you could make all of your stubs in the base class throw a NotImplementedException. Then, if any of your subclasses have not overridden them, you would get a runtime exception when the method in the base class gets called.
The Component class contains a boolean property called "DesignMode" which is very handy when you want your code to behave differently in the designer than at runtime. May be of some use in this case.
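For example, a minimal sketch of checking DesignMode (the component and method names are placeholders):
using System.ComponentModel;

public class MyComponent : Component
{
    public void DoWork()
    {
        // Component.DesignMode is true while the component is hosted in the Visual Studio designer
        if (DesignMode)
        {
            return; // skip runtime-only behaviour in the designer
        }

        // real runtime logic here
    }
}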
As a general rule, if there's no way in a language to do something, that generally means there's a good conceptual reason not to do it.
Sometimes this will be the fault of the language designers - but not often. Usually I find they know more about language design than I do ;-)
In this case you want an un-overridden virtual method to produce a compile-time error (rather than a run-time one). Basically an abstract method, then.
Making virtual methods behave like abstract ones is just going to create a world of confusion for you further down the line.
On the other hand, VS plug-in design is often not quite at the same level (that's a little unfair, but certainly less rigour is applied than at the language design stage - and rightly so). Some VS tools, like the class designer and current WPF editors, are nice ideas but not really complete - yet.
In the case that you're describing I think you have an argument not to use the class designer, not an argument to hack your code.
At some point (maybe in the next VS) they'll tidy up how the class designer deals with abstract classes, and then you'll have a hack with no idea why it was coded that way.
It should always be the last resort to hack your code to fit the designer, and when you do, try to keep the hacks minimal. I find that it's usually better to have concise, readable code that makes sense quickly than Byzantine code that works in the current broken tools.
To use MS as an example...
Microsoft does this with the user control templates in Silverlight. #if is perfectly acceptable, and it is doubtful that the tooling will work around it anytime soon. IMHO