Possible Duplicate:
Why C# implements methods as non-virtual by default?
It would be much less work to mark which methods are NOT overridable instead of which ones are, because (at least for me), when you're designing a class, you don't care whether its subclasses will override your methods or not...
So why are methods in C# not virtual by default? What is the reasoning behind this?
Anders Hejlsberg answered that question in this interview and I quote:
There are several reasons. One is performance. We can observe that as people write code in Java, they forget to mark their methods final. Therefore, those methods are virtual. Because they're virtual, they don't perform as well. There's just performance overhead associated with being a virtual method. That's one issue.
A more important issue is versioning. There are two schools of thought about virtual methods. The academic school of thought says, "Everything should be virtual, because I might want to override it someday." The pragmatic school of thought, which comes from building real applications that run in the real world, says, "We've got to be real careful about what we make virtual."
When we make something virtual in a platform, we're making an awful lot of promises about how it evolves in the future. For a non-virtual method, we promise that when you call this method, x and y will happen. When we publish a virtual method in an API, we not only promise that when you call this method, x and y will happen. We also promise that when you override this method, we will call it in this particular sequence with regard to these other ones and the state will be in this and that invariant.
Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.
You should care which members can be overridden in derived classes.
Deciding which methods to make virtual should be a deliberate, well-thought-out decision - not something that happens automatically - the same as any other decisions regarding the public surface of your API.
Beyond the design and clarity reasons, a non-virtual method is also technically better for a couple of reasons:
Virtual methods take longer to call (because the runtime needs to navigate through the virtual lookup table to find the actual method to call)
Virtual methods can't be inlined (because the compiler doesn't know at compile time which method will eventually be called)
Therefore, unless you specifically intend for the method to be overridden, it is better for it to be non-virtual.
Convention? Nothing more, I would think. I know Java automatically makes methods virtual, while C# does not, so there's clearly some disagreement at some level as to what's better. Personally, I prefer the C# default - consider that overriding methods is a lot less common than not overriding them, so it would seem more concise to define virtual methods explicitly.
See also the answer of Anders Hejlsberg (the inventor of C#) at A Conversation with Anders Hejlsberg, Part IV.
To paraphrase Eric Lippert, one of the guys who designed C#:
So your code doesn't get accidentally broken when the source code you received from a third party changes. In other words, to prevent the Brittle Base Class problem.
If a method is virtual, it's because you (supposedly) made the conscious decision to allow the method to be replaced, and designed, tested and documented around that. What happens if, say, you made a function "frob" and, in some subsequent version, the base class's makers decide to also make a function "frob"?
So it's clear whether you're allowing overriding of a method or forcing hiding of a method (via the new keyword).
Forcing you to add the keyword removes any ambiguity that might be there.
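A minimal sketch of the difference (the class and method names here are made up for illustration): overriding a virtual method changes what callers see through a base-class reference, while the new keyword merely hides the base method.
using System;

class BaseWidget
{
    // Explicitly opted in to overriding.
    public virtual void Draw() => Console.WriteLine("BaseWidget.Draw");

    // Non-virtual: derived classes can only hide it.
    public void Save() => Console.WriteLine("BaseWidget.Save");
}

class FancyWidget : BaseWidget
{
    // Override: callers holding a BaseWidget reference get this version.
    public override void Draw() => Console.WriteLine("FancyWidget.Draw");

    // Hiding: 'new' documents that this is deliberately not an override.
    public new void Save() => Console.WriteLine("FancyWidget.Save");
}

class Program
{
    static void Main()
    {
        BaseWidget w = new FancyWidget();
        w.Draw(); // FancyWidget.Draw (virtual dispatch)
        w.Save(); // BaseWidget.Save (hiding only applies when the static type is FancyWidget)
    }
}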
There are always two approaches when you want to specify that you are allowing or denying something. You can either trust everyone and punish sinners or you can distrust everyone and force them to ask permission.
There are some minor performance problems with virtual methods - can't be inlined, slower to call than non-virtual methods - but that really isn't that important.
More significantly, they pose a threat to your design. It's not about caring what others will do with your class; it's about good object design. When a method is virtual you are saying that it can be plugged out and replaced with a different implementation. As I say, you must treat such a method as an enemy - you can't trust it. You can't rely on any side effects. You have to set up a very strict contract for that method and stick with it.
If you consider that humans are lazy and forgetful creatures, which approach is more prudent?
I have never personally used a virtual method in my designs. If there is some logic that my class uses and I want it to be interchangeable, then I just create an interface for it. That interface then constitutes the above-mentioned contract. There are some situations where you really need virtual methods, but I think those are quite rare.
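A minimal sketch of that interface-based approach, with hypothetical names chosen purely for illustration: the interchangeable logic lives behind an interface and is injected, so the class itself needs no virtual members.
public interface ILineFormatter
{
    string Format(string line);
}

public class UpperCaseFormatter : ILineFormatter
{
    public string Format(string line) => line.ToUpperInvariant();
}

public class ReportWriter
{
    private readonly ILineFormatter _formatter;

    // The contract is the interface passed in, not an override point.
    public ReportWriter(ILineFormatter formatter) => _formatter = formatter;

    public string Write(string line) => _formatter.Format(line);
}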
I believe there is an efficiency issue as well as the reasons others have posted. There is no need to spend the CPU cycles looking for an override if the method is not virtual.
When someone inherits from your class, that gives them the ability to change how any virtual method works when the base class uses it. If you have a method that absolutely needs to perform an action a certain way in the base class, you would have no way to prevent someone from changing that functionality.
Here's one example. Suppose you have a function that you expect not to throw an error. Someone comes in and decides to change it so that on Tuesdays it throws an out-of-range exception. Now the code in the base class fails, because something it depended on has changed.
Because it's not Java
Seriously, it's just a different underlying philosophy: Java wanted extensibility to be the default and encapsulation to be explicit, while C# wanted extensibility to be explicit and encapsulation to be the default.
Actually, that's bad design practice - not caring which methods are overridable and which are not, I mean. You should always think about what should and should not be overridable, just as you should carefully consider what should or shouldn't be public!
Related
When it comes to extension methods, class names seem to do nothing but provide a grouping, which is what namespaces do. As soon as I include the namespace I get all the extension methods in the namespace. So my question comes down to this: is there some value I can get from the extension methods being in the static class?
I realize it is a compiler requirement for them to be put into a static class, but it seems like, from an organizational perspective, it would be reasonable to allow extension methods to be defined in namespaces without classes surrounding them. Rephrasing the above question another way: is there any practical benefit, or help in some scenario, that I get as a developer from having extension methods attached to a class rather than to a namespace?
I'm basically just looking to gain some intuition, confirmation, or insight - I suspect it may be that this was simply the easiest way to implement extension methods and it wasn't worth the time to allow extension methods to exist on their own in namespaces.
Perhaps you will find a satisfactory answer in Eric Lippert's blog post Why Doesn't C# Implement "Top Level" Methods? (in turn prompted by SO question Why C# is not allowing non-member functions like C++), whence (my emphasis):
I am asked "why doesn't C# implement feature X?" all the time. The answer is always the same: because no one ever designed, specified, implemented, tested, documented and shipped that feature. All six of those things are necessary to make a feature happen. All of them cost huge amounts of time, effort and money. Features are not cheap, and we try very hard to make sure that we are only shipping those features which give the best possible benefits to our users given our constrained time, effort and money budgets.
I understand that such a general answer probably does not address the specific question.
In this particular case, the clear user benefit was in the past not large enough to justify the complications to the language which would ensue. By restricting how different language entities nest inside each other we (1) restrict legal programs to be in a common, easily understood style, and (2) make it possible to define "identifier lookup" rules which are comprehensible, specifiable, implementable, testable and documentable.
By restricting method bodies to always be inside a struct or class, we make it easier to reason about the meaning of an unqualified identifier used in an invocation context; such a thing is always an invocable member of the current type (or a base type).
To me putting them in the class is all about grouping related functions inside a class. You may have a number of extension methods in the same namespace. If I wanted to write some extension methods for the DirectoryInfo and FileInfo classes I would create two classes in an IO namespace called DirectoryInfoExtensions and FileInfoExtensions.
You can still call extension methods like you would any other static method. I don't know exactly how the compiler works, but perhaps the output assembly, if compiled for .NET 2.0, can still be used by legacy .NET frameworks. It also means the existing reflection library works and can be used to invoke extension methods without any changes. Again, I am no compiler expert, but I think the "this" keyword in the context of an extension method is just syntactic sugar that allows us to use the methods as though they belong to the object.
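A minimal sketch of that pattern, with a hypothetical namespace and extension method chosen for illustration: the this modifier on the first parameter is what turns an ordinary static method into an extension method, and the plain static call still works.
using System.IO;

namespace MyProject.IO   // hypothetical namespace for illustration
{
    public static class FileInfoExtensions
    {
        // 'this' on the first parameter makes it an extension method.
        public static bool IsReadOnlyFile(this FileInfo file) =>
            (file.Attributes & FileAttributes.ReadOnly) != 0;
    }
}

// Usage, anywhere 'using MyProject.IO;' is in scope:
//   var info = new FileInfo("data.txt");
//   bool a = info.IsReadOnlyFile();                   // extension syntax
//   bool b = FileInfoExtensions.IsReadOnlyFile(info); // ordinary static call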
The .NET Framework requires that every method exist in a class which is within an assembly. A language could allow methods or fields to be declared without an explicitly-specified enclosing class, place all such methods in assembly Fnord into a class called Fnord_TopLevelDefault, and then search the Fnord_TopLevelDefault class of all assemblies when performing method lookup; the CLS specification would have to be extended for this feature to work smoothly for mixed-language projects, however. As with extension methods, such behavior could be CLS compliant if the CLS didn't acknowledge it, since code in a language which didn't use such a feature could use a "free-floating" method Foo in assembly Fnord by spelling it Fnord_TopLevelDefault.Foo, but that would be a bit ugly.
A more interesting question is the extent to which allowing an extension method Foo to be invoked from an arbitrary class, without requiring a clearly visible reference to that class, is less evil than allowing non-extension static methods to be likewise invoked. I don't think Math.Sqrt(x) is really more readable than Sqrt(x); even if one didn't want to import Math everywhere, being able to do so at least locally could in some cases improve code legibility considerably.
They can reference other static class members internally.
You should not only consider the consumer side aspect, but also the code maintenance aspect.
Even though IntelliSense doesn't distinguish with respect to the owner class, the information is still there through tool tips and whatever productivity tools you have added to your IDE. This can easily be used to provide some context for the method in what otherwise would be a flat (and sometimes very long) list.
Consumer wise, bottom line, I do not think it matters much.
In a big project I work for, I am considering recommending other programmers to always seal their classes if they haven't considered how their classes should be subclassed. Often times, less-experienced programmers never consider this.
I find it odd that in Java and C# classes are non-sealed / non-final by default. I think making classes sealed greatly improves readability of the code.
Notice that this is in-house code that we can always change should the rare case occur that we need to subclass.
What are your experiences? I meet quite some resistance to this idea. Are people so lazy that they can't be bothered to type sealed?
Okay, as so many other people have weighed in...
Yes, I think it's entirely reasonable to recommend that classes are sealed by default.
This goes along with the recommendation from Josh Bloch in his excellent book Effective Java, 2nd edition:
Design for inheritance, or prohibit it.
Designing for inheritance is hard, and can make your implementation less flexible, especially if you have virtual methods, one of which calls the other. Maybe they're overloads, maybe they're not. The fact that one calls the other must be documented otherwise you can't override either method safely - you don't know when it'll be called, or whether you're safe to call the other method without risking a stack overflow.
Now if you later want to change which method calls which in a later version, you can't - you'll potentially break subclasses. So in the name of "flexibility" you've actually made the implementation less flexible, and had to document your implementation details more closely. That doesn't sound like a great idea to me.
Next up is immutability - I like immutable types. I find them easier to reason about than mutable types. It's one reason why the Joda Time API is nicer than using Date and Calendar in Java. But an unsealed class can never be known to be immutable. If I accept a parameter of type Foo, I may be able to rely on the properties declared in Foo not to be changed over time, but I can't rely on the object itself not being modified - there could be a mutable property in the subclass. Heaven help me if that property is also used by an override of some virtual method. Wave goodbye to many of the benefits of immutability. (Ironically, Joda Time has very large inheritance hierarchies - often with things saying "subclasses should be immutable". The large inheritance hierarchy of Chronology made it hard to understand when porting to C#.)
Finally, there's the aspect of overuse of inheritance. Personally I favour composition over inheritance where feasible. I love polymorphism for interfaces, and occasionally I use inheritance of implementation - but it's rarely a great fit in my experience. Making classes sealed avoids them being inappropriately derived from where composition would be a better fit.
EDIT: I'd also like to point readers at Eric Lippert's blog post from 2004 on why so many of the framework classes are sealed. There are plenty of places where I wish .NET provided an interface we could work to for testability, but that's a slightly different request...
It is my opinion that architectural design decisions are made to communicate to other developers (including future maintenance developers) something important.
Sealing classes communicates that the implementation should not be overridden. It communicates that the class should not be impersonated. There are good reasons to seal.
If you take the unusual approach of sealing everything (and this is unusual), then your design decisions now communicate things that are really not important - like that the class wasn't intended to be inherited by the original/authoring developer.
But then how would you communicate to other developers that the class should not be inherited because of something? You really can't. You are stuck.
Also, sealing a class doesn't improve readability. I just don't see that. If inheritance is a problem in OOP development, then we have a much larger problem.
I'd like to think that I'm a reasonably-experienced programmer and, if I've learned nothing else, it's that I am remarkably bad at predicting the future.
Typing sealed is not hard, I just don't want to irritate a developer down the road (who could be me!) who discovers that a problem could be easily solved with a little inheritance.
I also have no idea how sealing a class makes it more readable. Are you trying to force people to prefer composition to inheritance?
© Jeffrey Richter
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. However, once a class is unsealed, you can never change it to sealed in the future as this would break all derived classes. In addition, if the unsealed class defines any unsealed virtual methods, ordering of the virtual method calls must be maintained with new versions or there is the potential of breaking derived types in the future.
Performance: As discussed in the previous section, calling a virtual method doesn't perform as well as calling a nonvirtual method because the CLR must look up the type of the object at runtime in order to determine which type defines the method to call. However, if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method nonvirtually. It can do this because it knows there can't possibly be a derived class if the class is sealed.
Security and predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class's state if any data fields or methods that internally manipulate fields are accessible and not private. In addition, a virtual method can be overridden by a derived class, and the derived class can decide whether to call the base class's implementation. By making a method, property, or event virtual, the base class is giving up some control over its behavior and its state. Unless carefully thought out, this can cause the object to behave unpredictably, and it opens up potential security holes.
There shouldn't be anything wrong in inheriting from a class.
You should seal a class only when there's a good reason why it should never be inherited.
Besides, if you seal them all, it will only decrease maintainability. Every time someone wants to inherit from one of your classes, they will see that it's sealed, and then they'll either remove the seal (messing with code they shouldn't have to touch) or, worse, write a poor reimplementation of your class for themselves.
Then you'll have 2 implementations of the same thing, one probably worse than the other, and 2 pieces of code to maintain.
Better just keep it unsealed. No harm in it being unsealed.
Frankly, I think that classes not being sealed by default in C# is kind of weird and out of place with how the rest of the defaults work in the language.
By default, classes are internal.
By default fields are private.
By default members are private.
There seems to be a trend toward the least plausible access by default. It would stand to reason that an unsealed keyword should exist in C# instead of a sealed one.
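A small sketch of those defaults spelled out (the type and member names are made up for illustration); note that the class itself stays open to inheritance unless you opt out with sealed:
class Widget                // no modifier: internal by default
{
    int _count;             // no modifier: private by default

    void Reset()            // no modifier: private by default
    {
        _count = 0;
    }
}

sealed class FinalWidget { }    // inheritance must be closed off explicitly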
Personally I'd rather classes were sealed by default. In most cases, when someone writes a class, they are not designing it with subclassing in mind and all the complexities that come along with it. Designing for future subclassing should be a conscious act, and therefore I'd rather you explicitly had to state it.
"...consider[ing] how their classes should be subclassed..." shouldn't matter.
At least a half dozen times over the past few years I've found myself cursing some open source team or another for a sloppy mix of protected and private, making it impossible to simply extend a class without copying the source of the entire parent class. (In most cases, overriding a particular method required access to private members.)
One example was a JSTL tag that almost did what I wanted. I needed to override one small thing. Nope, sorry - I had to completely copy the source of the parent.
I only seal classes if I am working on a reusable component that I intend to distribute, and I don't want the end user to inherit from it, or as a system architect if I know I don't want another developer on the team to inherit from it. However there is usually some reason for it.
Just because a class isn't being inherited from, I don't think it should automatically be marked sealed. Also, it annoys me to no end when I want to do something tricky in .NET, but then realize MS marks tons of their classes sealed.
This is a very opinionated question that's likely to garner some very opinionated answers ;-)
That said, in my opinion, I strongly prefer NOT making my classes sealed/final, particularly at the beginning. Doing this makes it very difficult to infer the intended extensibility points, and it's nearly impossible to get them right at the beginning. IMHO, overuse of encapsulation is worse than overuse of polymorphism.
Your house, your rule.
You can also have the complementary rule instead: a class that can be subclassed must be annotated; nobody should subclass a class that's not annotated so. This rule is not harder to follow than your rule.
The main purpose of a sealed class is to take away the inheritance feature from the user, so they cannot derive a class from it. Are you sure you want to do that? Or do you want to start with all classes sealed and then, when you need to make one inheritable, change it? Well, that might be OK when everything is in-house and in one team, but if other teams use your DLLs in the future, it will not be possible to recompile the whole source code every time a class needs to be unsealed.
I wouldn't recommend this, but that's just my opinion.
I don't like that way of thinking. Java and C# are designed to be OOP languages, in which a class can have a parent or a child. That's it.
Some people say that we should always start from the most restrictive modifier (private, protected...) and make a member public only when it is actually used externally. Those people are, to me, lazy and don't want to think about good design at the beginning of the project.
My answer is: design your apps well now. Mark a class sealed when it needs to be sealed and a member private when it needs to be private. Don't make them sealed by default.
I find that sealed / final classes are actually pretty rare, in my experience; I would certainly not recommend suggesting all classes be sealed / final by default. That specification makes a certain statement about the state of the code (i.e., that it's complete) that is not necessarily always true during development time.
I'll also add that leaving a class unsealed requires more design / test effort to make sure that the exposed behaviours are well-defined and tested; heavy unit testing is critical, IMO, to achieve a level of confidence in the behaviour of the class that appears to be desired by the proponents of "sealed". But IMO, that increased level of effort translates directly to a high level of confidence and to higher quality code.
What is the best approach for defining interfaces in C# or Java? Should we try to make them complete and generic up front, or add methods as and when the real need arises?
Once an interface is defined, it is intended to not be changed.
You have to be thoughtful about the purpose of the interface and be as complete as possible.
If you find the need, later, to add a method, really you should define a new interface, possibly a _V2 interface, with the additional method.
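A minimal sketch of the _V2 approach (the interface and member names are hypothetical): the new members go into a new interface that extends the shipped one, so existing implementations keep compiling.
// Shipped in version 1; must never change.
public interface IReportGenerator
{
    void Generate();
}

// Added later: existing implementations are untouched,
// and new code can opt in by implementing the V2 interface.
public interface IReportGeneratorV2 : IReportGenerator
{
    void GenerateSummary();
}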
Addendum: Here you will find some good guidelines on interface design in C#, as part of a larger, valuable work on C# design in general. It generally applies to Java as well.
Excerpts:
Although most APIs are best modeled using classes and structs, there are cases in which interfaces are more appropriate or are the only option.
DO provide at least one type that is an implementation of an interface. This helps to validate the design of the interface. For example, System.Collections.ArrayList is an implementation of the System.Collections.IList interface.
DO provide at least one API consuming each interface you define (a method taking the interface as a parameter or a property typed as the interface). This helps to validate the interface design. For example, List<T>.Sort consumes the IComparer<T> interface.
DO NOT add members to an interface that has previously shipped. Doing so would break implementations of the interface. You should create a new interface to avoid versioning problems.
I recommend relying on the broad type design guidelines.
To quote Joshua Bloch:
When in doubt, leave it out.
You can always add to an interface later. Once a member is part of your interface, it is very difficult to change or remove it. Be very conservative in the creation of your interfaces, as they are binding contracts.
As a side note here is an excellent interview with Vance Morrison (of the Microsoft CLR team) in which he mentions the possibility of a future version of the CLR allowing "mixins" or interfaces with default implementations for their members.
If your interface is part of code that is shared with other projects and teams, listen to Cheeso. But if your interface is part of a private project and you have access to all the change points then you probably didn't need interfaces to begin with but go ahead and change them.
If the interface is going to be public, I feel that a good deal of care needs to be put into the design because changes to the interface is going to be difficult if a lot of code is going to suddenly break in the next iteration.
Changes to the interface needs to be taken with care, therefore, it would be ideal if changes wouldn't have to be made after the initial release. This means, that the first iteration will be very important in terms of the design.
However, if changes are required, one way to implement them would be to deprecate the old methods and provide a transition path for old code to use the newly designed features. This does mean that the deprecated methods will still stick around to prevent code using the old methods from breaking - this is not ideal, so it is the price to pay for not getting it right the first time around.
On a related matter, yesterday, I stumbled upon the Google Tech Talk: How to Design a Good API and Why It Matters by Joshua Bloch. He was behind the design and implementation of the Java Collection libraries and such, and is the author of Effective Java.
The video is about an hour long where he goes into details and examples about what makes a good API, why we should be making well-designed APIs, and more. It's a good video to watch to get some ideas and inspiration for certain things to look out for when thinking about designing APIs.
Adding methods later to an interface immediately breaks all implementations of the interface that didn't accidentally implement those methods already. For that reason, make sure your interface specification is complete. I'd propose you start with a (sample) client of the interface - the part that actually uses instances of classes implementing said interface. Whatever the client needs must be part of the interface (obviously). Then make a (sample) implementation of the interface and look at what additional methods are both generally useful and available (in possible other implementations), so they should also be part of the interface. Check for symmetry and completeness (e.g. if there is an "openXYZ", there should also be a "closeXYZ"; if there is an "addFooBar", there should be a "removeFooBar"; etc.), as in the sketch below.
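A tiny sketch of that symmetry check, with a hypothetical interface name and members chosen purely for illustration:
using System;

public interface IConnectionManager
{
    // Symmetric pairs: every "open" has a matching "close",
    // every "add" a matching "remove".
    void OpenConnection(string name);
    void CloseConnection(string name);

    void AddListener(Action<string> listener);
    void RemoveListener(Action<string> listener);
}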
If possible, let a coworker check your specification.
And: Be sure you really want an interface. Maybe an abstract base class is a better fit for your needs.
Well, it really depends on your particular situation. If your team is the sole user/maintainer of that interface, then by all means modify it as you see fit and forget all about that "best practice blabla" kind of stuff. It is YOUR code after all... Never blindly follow best-practice advice without understanding its rationale.
Now, if you're making a public API that other teams or customers will work with (think plugins, extension points or things like that), then you have to be conservative with what you put in the interface. As others mentioned, you may have to add a _V2 kind of interface in these cases. Microsoft did this with several web browser COM interfaces.
The guidelines Microsoft publishes in Framework Design Guidelines are just that: guidelines for PUBLIC interfaces, not for private internal stuff - though many of them still apply. Know what applies or not to your situation.
No rule will make up for lack of common sense.
I know this is a subjective question, but I'm always curious about best-practices in coding style. ReSharper 4.5 is giving me a warning for the keyword "base" before base method calls in implementation classes, i.e.,
base.DoCommonBaseBehaviorThing();
While I appreciate the "less is better" mentality, I also have spent a lot of time debugging/maintaining highly-chained applications, and feel like it might help to know that a member call is to a base object just by looking at it. It's simple enough to change ReSharper's rules, of course, but what do y'all think? Should "base" be used when calling base members?
The only time you should use base.MethodCall(); is when you have an overridden method of the same name in the child class, but you actually want to call the method in the parent.
For all other cases, just use MethodCall();.
Keywords like this and base do not make the code more readable and should be avoided for all cases unless they are necessary--such as in the case I described above.
I am not really sure whether using this is bad practice or not. base, however, is not a matter of good or bad practice but a matter of semantics. Whereas this is polymorphic, meaning that even if the method using it belongs to a base class, it will use the overridden method, base is not. base will always refer to the method defined at the base class of the method calling it, hence it is not polymorphic. This is a huge semantic difference, and base should be used accordingly. If you want that specific method, use base. If you want the call to remain polymorphic, don't use base.
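A minimal sketch of that difference (the class and method names are invented for illustration): an unqualified call dispatches virtually, while base. always lands on the base class's implementation.
using System;

class Animal
{
    public virtual string Describe() => "an animal";

    // Unqualified call: polymorphic, so a derived override wins.
    public string DescribeTwice() => Describe() + " / " + Describe();
}

class Dog : Animal
{
    // base.Describe() is not polymorphic: it always calls Animal's version.
    public override string Describe() => "a dog (" + base.Describe() + ")";
}

class Program
{
    static void Main()
    {
        Animal a = new Dog();
        Console.WriteLine(a.Describe());      // a dog (an animal)
        Console.WriteLine(a.DescribeTwice()); // Dog.Describe is used both times
    }
}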
Another important point to take into consideration: even if you haven't currently overridden a method, that doesn't mean you won't in the future, and if you preface all of your calls with base. you won't pick up the new functionality without doing a find-and-replace on all of those calls.
While prefacing calls with this. does nothing other than decrease or increase readability (ignoring the situation where two variables in scope have the same name), the base. prefix will change the functionality of the code you write in many common scenarios. So I would never add base. unless it is needed.
I think generally you should use base only when overriding previous functionality.
Some languages (though not C#) also provide this functionality by calling the function through its base class name explicitly, like this: Foo.common() (called from somewhere in Bar, of course).
This would allow you to skip upwards in the chain, or pick from multiple implementations -- in the case of multiple-inheritance.
Regardless, I feel base should be used only when needed to explicitly call your parent's functionality because you are or have overridden that functionality in this class.
It's really a matter of personal preference. If you like seeing "base." at the beginning of your members, you can easily turn off the rule (Go to Options>Inspection Severity>Code Redundancies>Redundant 'base.' qualifier). Don't let non-behavioral static code analysis rules affect your preferred coding style.
EDIT
One thing to consider is that the static code analysis in FXCop and R# are there to provide rules for all possible needs. To actually adhere to all of the rules simultaneously is a little onerous. You should define your preferred coding style (if you're working in a team, do it collectively), and stick with it. Modify your rules to match your coding standards, not vice versa.
ReSharper likes to point out multiple functions per ASP.NET page that could be made static. Does it help me if I do make them static? Should I make them static and move them to a utility class?
Performance, namespace pollution etc are all secondary in my view. Ask yourself what is logical. Is the method logically operating on an instance of the type, or is it related to the type itself? If it's the latter, make it a static method. Only move it into a utility class if it's related to a type which isn't under your control.
Sometimes there are methods which logically act on an instance but don't happen to use any of the instance's state yet. For instance, if you were building a file system and you'd got the concept of a directory, but you hadn't implemented it yet, you could write a property returning the kind of the file system object, and it would always be just "file" - but it's logically related to the instance, and so should be an instance method. This is also important if you want to make the method virtual - your particular implementation may need no state, but derived classes might. (For instance, asking a collection whether or not it's read-only - you may not have implemented a read-only form of that collection yet, but it's clearly a property of the collection itself, not the type.)
Static methods versus Instance methods
Static and instance members of the C# Language Specification explains the difference. Generally, static methods can provide a very small performance enhancement over instance methods, but only in somewhat extreme situations (see this answer for some more details on that).
Rule CA1822 in FxCop or Code Analysis states:
"After [marking members as static], the compiler will emit non-virtual call sites to these members which will prevent a check at
runtime for each call that ensures the current object pointer is
non-null. This can result in a measurable performance gain for
performance-sensitive code. In some cases, the failure to access the
current object instance represents a correctness issue."
Utility Class
You shouldn't move them to a utility class unless it makes sense in your design. If the static method relates to a particular type, like a ToRadians(double degrees) method relates to a class representing angles, it makes sense for that method to exist as a static member of that type (note, this is a convoluted example for the purposes of demonstration).
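A minimal sketch of that kind of type-related static member (the Angle type here is hypothetical, following the example above):
using System;

public readonly struct Angle
{
    public double Degrees { get; }

    public Angle(double degrees) => Degrees = degrees;

    // Uses no instance state, so it is naturally static and lives on the
    // type it relates to rather than in a generic utility class.
    public static double ToRadians(double degrees) => degrees * Math.PI / 180.0;

    // Instance member built on top of the static helper.
    public double Radians => ToRadians(Degrees);
}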
Marking a method as static within a class makes it obvious that it doesn't use any instance members, which can be helpful to know when skimming through the code.
You don't necessarily have to move it to another class unless it's meant to be shared by another class that's just as closely associated, concept-wise.
I'm sure this isn't happening in your case, but one "bad smell" I've seen in some code I've had to suffer through maintaining used a heck of a lot of static methods.
Unfortunately, they were static methods that assumed a particular application state. (why sure, we'll only have one user per application! Why not have the User class keep track of that in static variables?) They were glorified ways of accessing global variables. They also had static constructors (!), which are almost always a bad idea. (I know there are a couple of reasonable exceptions).
However, static methods are quite useful when they factor out domain-logic that doesn't actually depend on the state of an instance of the object. They can make your code a lot more readable.
Just be sure you're putting them in the right place. Are the static methods intrusively manipulating the internal state of other objects? Can a good case be made that their behavior belongs to one of those classes instead? If you're not separating concerns properly, you may be in for headaches later.
This is interesting read:
http://thecuttingledge.com/?p=57
ReSharper isn’t actually suggesting you make your method static.
You should ask yourself why that method is in that class as opposed to, say, one of the classes that shows up in its signature...
but here is what the ReSharper documentation says:
http://confluence.jetbrains.net/display/ReSharper/Member+can+be+made+static
Just to add to Jason True's answer: it is important to realise that just putting 'static' on a method doesn't guarantee that the method will be 'pure'. It will be stateless with regard to the class in which it is declared, but it may well access other 'static' objects which have state (application configuration etc.). That may not always be a bad thing, but one of the reasons I personally tend to prefer static methods when I can is that if they are pure, you can test and reason about them in isolation, without having to worry about the surrounding state.
For complex logic within a class, I have found private static methods useful in creating isolated logic, in which the instance inputs are clearly defined in the method signature and no instance side-effects can occur. All outputs must be via return value or out/ref parameters. Breaking down complex logic into side-effect-free code blocks can improve the code's readability and the development team's confidence in it.
On the other hand it can lead to a class polluted by a proliferation of utility methods. As usual, logical naming, documentation, and consistent application of team coding conventions can alleviate this.
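A small sketch of that kind of private static helper (the class and members are hypothetical): everything the helper needs arrives through its signature, so it cannot produce instance side effects.
public class InvoiceCalculator
{
    private readonly decimal _taxRate;

    public InvoiceCalculator(decimal taxRate) => _taxRate = taxRate;

    public decimal Total(decimal subtotal) => ApplyTax(subtotal, _taxRate);

    // Private static helper: inputs and outputs are fully visible in the
    // signature, and no instance state can be read or modified.
    private static decimal ApplyTax(decimal subtotal, decimal taxRate) =>
        subtotal * (1 + taxRate);
}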
You should do what is most readable and intuitive in a given scenario.
The performance argument is not a good one except in the most extreme situations as the only thing that is actually happening is that one extra parameter (this) is getting pushed onto the stack for instance methods.
ReSharper does not check the logic; it only checks whether the method uses instance members.
If the method is private and only called by (maybe just one) instance methods, that is a sign to leave it as an instance method.
I hope you have already understood the difference between static and instance methods. Also, there can be a long answer and a short one. Long answers are already provided by others.
My short answer: Yes, you can convert them to static methods as ReSharper suggests. There is no harm in doing so. Rather, by making the method static, you are actually guarding the method so that you do not unnecessarily slip any instance members into that method. In that way, you can achieve an OOP principle "Minimize the accessibility of classes and members".
When ReSharper suggests that an instance method can be converted to a static one, it is really asking you, "Why is this method sitting in this class when it doesn't actually use any of the class's state?" So it gives you food for thought. Then it is you who can decide whether that method should move to a static utility class or not. According to the SOLID principles, a class should have only one core responsibility, so you can do a better cleanup of your classes that way. Sometimes you do need some helper methods even in your instance class; if that is the case, you may keep them within a #region helper.
If the functions are shared across many pages, you could also put them in a base page class, and then have all asp.net pages using that functionality inherit from it (and the functions could still be static as well).
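A quick sketch of that base-page approach, assuming a classic ASP.NET Web Forms project (the page and method names are made up):
using System.Web.UI;

// Shared base page: every page that inherits it gets these helpers.
public class SiteBasePage : Page
{
    protected static string FormatCurrency(decimal amount) => amount.ToString("C");
}

// An individual page opts in simply by inheriting from the base page.
public partial class OrdersPage : SiteBasePage
{
    // FormatCurrency(...) is available here and in every other page
    // that derives from SiteBasePage.
}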
Making a method static means you can call it from outside the class without first creating an instance of that class. This is helpful when working with third-party vendor objects or add-ons. Imagine if you had to first create a Console object "con" before calling con.WriteLine();
It helps to control namespace pollution.
Just my tuppence: Adding all of the shared static methods to a utility class allows you to add
using static className;
to your using statements, which makes the code faster to type and easier to read. For example, I have a large number of what would be called "global variables" in some code I inherited. Rather than make global variables in a class that was an instance class, I set them all as static properties of a global class. It does the job, if messily, and I can just reference the properties by name because I have the static namespace already referenced.
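A small sketch of that pattern (the class and property names are hypothetical): with a using static directive, the static members can be referenced by name alone.
using static MyApp.GlobalSettings;

namespace MyApp
{
    // Holds what used to be "global variables" as static properties.
    public static class GlobalSettings
    {
        public static string Environment { get; set; } = "Production";
        public static int RetryCount { get; set; } = 3;
    }

    public class Worker
    {
        // Thanks to 'using static', no GlobalSettings. prefix is needed.
        public string Describe() => $"{Environment} (retries: {RetryCount})";
    }
}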
I have no idea if this approach is good practice or not. I have so much to learn about C# 4/5 and so much legacy code to refactor that I am just trying to let the Roslyn tips guide me.