Consider:
class Foo
{
    static Foo()
    {
        // Static initialisation
    }
}
Why are the () required in static Foo() {...}? The static constructor must always be parameterless, so why bother? Are they necessary to avoid some parser ambiguity, or is it just to maintain consistency with regular parameterless constructors?
Since it looks so much like an initialiser block, I often find myself leaving them out by accident and then have to think for a few seconds about what is wrong. It would be nice if they could be elided in the same way.
Because it's a static constructor, so it's static + a normal-looking constructor.
Consistency is key. :-)
I get this sort of question frequently; that is, the question "the compiler could work out that this thing is missing, so why is it required?" Here's another example of this sort of question:
C# using consts in static classes
As I noted in that question, basically we have three choices in that situation. Make the redundant text required, make it optional, or make it illegal.
Each has its own downside.
The downside of making it required is you end up with an unnecessary redundancy in the language.
The downside of making it optional is you confuse people who think there must be a difference between the two forms. Also, you make it harder for the error-recovering parser to do its work; it thrives on redundancy. And you potentially make it harder to add new language features in the future, because more "syntactic area" is already claimed.
The downside of making it illegal is you then make a "gotcha", where the user has to remember that oh, yeah, I'm supposed to put parens here, but not here.
The proposed feature had better have an upside that pays for the downside. The smallest downside seems to me to be the first: make it required. For either of the other options I would want an upside that justifies the downside, and I'm not seeing one here.
I would assume it's for disambiguation: it makes the parser's job easier, since the code block is recognised as a constructor (irrespective of staticness); and conversely it helps ensure that the human author/maintainer is aware of the implications of choosing this particular construct, by forcing them to use a specific method-like syntax.
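For what it's worth, the "initialiser block" lookalike the question mentions is presumably a static field initialiser, which indeed needs no parentheses. A minimal sketch contrasting the two (the field and helper are invented for illustration):

class Foo
{
    // Static field initialiser: no parentheses, runs as part of static initialisation.
    static readonly int[] Table = BuildTable();

    // Static constructor: the parentheses are required, and it runs once before the type is first used.
    static Foo()
    {
        // additional static initialisation
    }

    static int[] BuildTable()
    {
        return new int[16]; // illustrative only
    }
}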
In the comments of this answer it is stated that "checking whether the object has implemented the interface, rampant as it may be, is a bad thing"
Below is what I believe is an example of this practice:
public interface IFoo
{
    void Bar();
}

public void DoSomething(IEnumerable<object> things)
{
    foreach (var o in things)
    {
        if (o is IFoo)
            ((IFoo)o).Bar();
    }
}
With my curiosity piqued as someone who has used variations of this pattern before, I searched for a good example or explanation of why it is a bad thing and was unable to find one.
While it is very possible that I misunderstood the comment, can someone provide me with an example or link to better explain the comment?
It depends on what you're trying to do. Sometimes it can be appropriate - examples could include:
LINQ to Objects, where it's used to optimize operations like Count, which can be performed more efficiently on an IList<T> via its specialized members (see the sketch after this list).
LINQ to XML, where it's used to provide a really friendly API which accepts a wide range of types, iterating over values where appropriate
If you wanted to find all the controls of a certain type under a particular control in Windows Forms, you would want to check whether each control was a container to determine whether or not to recurse.
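As an illustration of the first point, here is a simplified sketch of that kind of optimisation, checking for ICollection<T> (which IList<T> extends and which exposes Count). This is illustrative only, not the actual framework source:

using System.Collections.Generic;

static class EnumerableSketch
{
    public static int Count<TSource>(IEnumerable<TSource> source)
    {
        // If the sequence already knows its length, use it directly.
        ICollection<TSource> collection = source as ICollection<TSource>;
        if (collection != null)
        {
            return collection.Count;
        }

        // Otherwise fall back to walking the whole sequence.
        int count = 0;
        foreach (TSource item in source)
        {
            count++;
        }
        return count;
    }
}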
In other cases it's less appropriate and you should consider whether you can change the parameter type instead. It's definitely a "smell" - normally you shouldn't concern yourself with the implementation details of whatever has been handed to you; you should just use the API provided by the declared parameter type. This is also known as a violation of the Liskov Substitution Principle.
Whatever the dogmatic developers around may say, there are times when you simply do want to check an object's execution time type. It's hard to override object.Equals(object) correctly without using is/as/GetType, for example :) It's not always a bad thing, but it should always make you consider whether there's a better approach. Use sparingly, only where it's genuinely the most appropriate design.
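For instance, a typical Equals override depends on exactly this kind of runtime type check; a minimal sketch (the Point type is invented for the example):

public sealed class Point
{
    private readonly int x;
    private readonly int y;

    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    public override bool Equals(object obj)
    {
        // The runtime type check: reject nulls and anything that isn't a Point
        // before comparing state.
        Point other = obj as Point;
        if (other == null)
        {
            return false;
        }
        return x == other.x && y == other.y;
    }

    public override int GetHashCode()
    {
        return (x * 397) ^ y;
    }
}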
I would personally rather write the code you've shown like this, mind you:
public void DoSomething(IEnumerable<object> things)
{
    foreach (var foo in things.OfType<IFoo>())
    {
        foo.Bar();
    }
}
It accomplishes the same thing, but in a neater way :)
I would expect the method to look like this; it seems much safer:
public void DoSomething(IEnumerable<IFoo> things)
{
    foreach (var o in things)
    {
        o.Bar();
    }
}
To read about the Liskov Substitution Principle violation referred to above: What is the Liskov Substitution Principle?
If you want to know why the commenter made that comment, probably best to ask them to explain.
I would not consider the code you posted to be "bad". A more "genuinely" bad practice is to use interfaces as markers. That is, you're not planning on actually using a method of the interface; rather, you have declared the interface on a class as a way of describing it in some way. Use attributes, not interfaces, as markers on classes.
Marker interfaces are hazardous in a number of ways. A real-world situation I once ran into where an important product made a bad decision on the basis of a marker interface is here: http://blogs.msdn.com/b/ericlippert/archive/2004/04/05/108086.aspx
That said, the C# compiler itself uses a "marker interface" in one situation. Mads tells the story here: http://blogs.msdn.com/b/madst/archive/2006/10/10/what-is-a-collection_3f00_.aspx
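To make the marker-versus-attribute distinction concrete, a small sketch (the names are invented for the example):

using System;

// Marker interface: no members, used only to "tag" a class.
public interface IPersistable
{
}

// Attribute-based alternative: the tag is metadata rather than part of the type hierarchy.
[AttributeUsage(AttributeTargets.Class)]
public sealed class PersistableAttribute : Attribute
{
}

[Persistable]
public class Customer
{
}

// Consumers ask for the metadata instead of doing an interface check:
// bool tagged = Attribute.IsDefined(typeof(Customer), typeof(PersistableAttribute));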
A reason is that there will be a dependency on that interface that is not immediately visible without digging in the code.
The statement "checking whether the object has implemented the interface, rampant as it may be, is a bad thing" is overly dogmatic in my opinion. As other people have answered, you may well be able to pass a collection of IFoo to your method and achieve the same result.
However, interfaces can be useful for adding optional features to classes. For example, the .NET Framework provides the IDataErrorInfo interface*. When this is implemented, it indicates to a consumer that in addition to the class's standard functionality, it can also provide error information.
In this case, the error information is optional. A WPF view model may or may not provide error information. Without querying for interfaces, this optional functionality would not be possible without base classes with huge surface area.
*We'll ignore for the moment the terrible design of the IDataErrorInfo interface.
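A sketch of the kind of optional-capability query being described (the helper is invented for illustration):

using System.ComponentModel;

public static class ValidationHelper
{
    // Returns the error text for a property if the object chooses to provide
    // error information via IDataErrorInfo, and null otherwise.
    public static string GetError(object viewModel, string propertyName)
    {
        IDataErrorInfo errorInfo = viewModel as IDataErrorInfo;
        if (errorInfo != null)
        {
            return errorInfo[propertyName];
        }

        // The object doesn't implement the optional interface, so there is no error info.
        return null;
    }
}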
If your method requires that you inject an instance of an interface, you should treat it the same regardless of the implementation.
In your example you generally wouldn't have a generic list of object, but a list of ISomethings; calling ISomething.Bar() would then dispatch to the concrete type's implementation. If that implementation does nothing, you don't have to do a check at all.
I dislike this whole "switch on type" style of coding for a couple of reasons. (Examples drawn in relation to my industry, game development. Apologies in advance. :) )
First and foremost, I think it's sloppy to have a heterogeneous collection of items. E.g. I could have a collection of "everything everywhere," but then when iterating the collection to apply bullet effects or fire damage or enemy AI, I have to walk this list which is mostly stuff I don't care about. It's much "cleaner" IMHO to have separate collections of bullets, raging fires, and enemies. Note that there's no reason why I can't have a single item in multiple collections; a single burning robotic missile could be referenced in all three of those lists to do parts of its "update" as appropriate for the three types of logic it needs to run. Outside of having "one single collection that references everything," I think a collection containing everything everywhere is not terribly useful; you can't do anything with anything in the list unless you query it for what it can do.
I hate doing unnecessary work. This really ties into the above, but when you create a given thing you know what its capabilities are (or can query them at that point), so you might as well take the opportunity at that time to put it in the right, more specific collections. You have 16ms to process everything in the world: do you want to waste your time dealing with, querying, and selecting from generic things, or do you want to get down to business and operate only on the specific things you care about?
In my experience, transforming a codebase from generic operation on heterogeneous datasets to one that has homogeneous datasets has resulted in not only performance increases but also comprehension increases that come from simpler code doing more obvious work and in general a reduction in the amount of code required to do any given task.
So yeah, it's dogmatic to say that querying interfaces is bad, but it does seem to make things simpler if you can figure out how to avoid needing to query anything. As for my "performance" statements and the counter that "if you don't measure it, you can't say anything about it," it should be obvious that not doing something is faster than doing it. Whether or not this is important to an individual project, programmer, or function is up to the person with the editor, but if I can simplify code and while doing so make it do less work for the same results, I'm going to do it without bothering to measure.
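A rough sketch of the "separate collections, with one object possibly appearing in several" idea (all of the type and member names here are invented for the example):

using System.Collections.Generic;

interface IUpdatable { void Update(float deltaSeconds); }
interface IBurnable { void ApplyFireDamage(float amount); }

class World
{
    // Homogeneous lists: each pass touches only the objects it cares about,
    // with no type checks inside the per-frame loops.
    private readonly List<IUpdatable> updatables = new List<IUpdatable>();
    private readonly List<IBurnable> burnables = new List<IBurnable>();

    public void Register(object entity)
    {
        // The capability check happens once, at registration time...
        IUpdatable updatable = entity as IUpdatable;
        if (updatable != null) updatables.Add(updatable);

        IBurnable burnable = entity as IBurnable;
        if (burnable != null) burnables.Add(burnable);
    }

    public void Update(float deltaSeconds)
    {
        // ...so the update loop stays simple and homogeneous.
        foreach (IUpdatable u in updatables)
        {
            u.Update(deltaSeconds);
        }
    }
}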
I don’t see this as a “bad thing” at all, at least not in itself. The code is merely a literal transcription of “x all of the y in z”, and in a situation where you need to do that, it’s perfectly acceptable. You can of course use things.OfType<IFoo>() for the sake of concision.
The main reason to recommend against it is that, according to OOP theology, interfaces are intended to model the different kinds of “black box” for which an object may substituted. Predicating an algorithm on fulfillment of an interface constitutes moving behaviour to the algorithm that should be in that interface.
Essentially, an interface is a behavioural role. If you think OOP is a good idea, then you should use interfaces only to model behaviours, so that algorithms don’t have to. I don’t think what passes for OOP these days is in fact a good idea, so this is as far as my answer can be useful.
Obviously there can't be an instance member on a static class, since that class could never be instantiated. Why do we need to declare members as static?
I get asked questions like this all the time. Basically the question boils down to "when a fact about a declared member can be deduced by the compiler should the explicit declaration of that fact be (1) required, (2) optional, or (3) forbidden?"
There's no one easy answer. Each one has to be taken on a case-by-case basis. Putting "static" on a member of a static class is required. Putting "new" on a hiding, non-overriding method of a derived class is optional. Putting "static" on a const is forbidden.
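In code, the three cases look something like this (a quick sketch):

static class Utilities
{
    // Required: every member of a static class must be declared static.
    public static void Helper() { }
}

class Base
{
    public void M() { }
}

class Derived : Base
{
    // Optional: "new" documents that this hides Base.M rather than overriding it;
    // leaving it off still compiles, with a warning.
    public new void M() { }
}

class Constants
{
    // Forbidden: a const is already implicitly static, and writing "static const" is a compile error.
    public const int Answer = 42;
}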
Briefly considering your scenario, it seems bizarre to make it forbidden. You have a whole class full of methods marked "static". You decide to make the class static and that means you have to remove all the static modifiers? That's weird.
It seems bizarre to make it optional; suppose you have a static class and two methods, one marked static, one not. Since static is not normally the default, it seems natural to think that there is intended to be a difference between them. Making it optional seems to be potentially confusing.
That leaves making it required, as the least bad of the three options.
See http://blogs.msdn.com/b/ericlippert/archive/2010/06/10/don-t-repeat-yourself-consts-are-already-static.aspx for more thoughts on these sorts of problems.
Because, by definition, all of their members must be static. The designers decided not to provide confusing syntactic sugar.
I would go one step further and ask: why does C# have static classes at all? It seems like a weird concept, a class that's not really a class. It's just a container: you can't use it as the type of any variables, parameters or fields, you can't use it as a type parameter, and of course you can never have an instance of such a class.
I'd rather have modules, like in VB.NET and F#. And then, the static modifier would not be necessary to avoid confusion.
It could be implicit, but that would complicate reading the code and lead to confusion.
Richard,
Hmmmm... I'd guess that the language designers decided that it would be better to be very, very explicit... to avert any possible confusion when a maintainer, who doesn't know the code, jumps into the middle of a static class, and presumes that they are in a "normal" instance context.
But of course, that's just a guess. Most IDEs help you out there anyway, by adding the static modifier "automagically"... or at least highlighting your mistake at "write time", as opposed to "compile time".
It's a good question... unfortunately not one with a "correct" answer... unless someone can turn up a link from a C#-language-designers blog (or similar) discussing this decision. What I can tell you is: "I'd bet $1,000 that it's no accident."
Cheers. Keith.
Explicit coding makes things maintainable
If I want to copy a method from one class to another, so that the code is better organized, I would otherwise have to keep checking whether the destination class is static or not.
By declaring the member as static, you also have a visual indication of what the code is as soon as you see it.
It is also less confusing. Imagine a static class in which some members are marked as static and others are not.
I can see lots of reasons, and many other reasons exist.
One reason I think it is important to explicitly mark a member as static is that in a multi-threaded programming model, static variables are shared by multiple threads. When doing code review or code analysis, it is much easier to pick this up from reading the member itself than to look up the class declaration and work out whether the variables are static or not. It can get pretty confusing when reading a variable during code review if you don't know whether the class is static.
This is because copy-paste would be more complicated.
If you copy a method from a static class to a non-static class then you would have to add the static keyword.
If you copy a method from a non-static class to a static class you would have to remove the static keyword.
Moving methods around is the primary thing developers do ('I need to refactor this code, it will take a week at least'), and by making it easier Eric and his team allowed us to save hours of work.
I'm glad C# doesn't let you access static members 'as though' they were instance members. This avoids a common bug in Java:
Thread t = new Thread(..);
t.sleep(..); //Probably doesn't do what the programmer intended.
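The C# equivalent simply refuses to compile (a minimal sketch):

using System.Threading;

class Program
{
    static void Main()
    {
        Thread t = new Thread(() => { });
        t.Start();

        // Compile-time error: a static member cannot be accessed through an
        // instance reference; you have to qualify it with the type name.
        // t.Sleep(1000);

        Thread.Sleep(1000);
    }
}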
On the other hand, it does let you access static members 'through' derived types. Other than operators (where it saves you from writing casts), I can't think of any cases where this is actually helpful. In fact, it actively encourages mistakes such as:
// Nasty surprises ahead - won't throw; does something unintended:
// creates an HttpWebRequest instead.
var ftpRequest = FtpWebRequest.Create("http://www.stackoverflow.com");

// Something seriously wrong here.
var areRefEqual = Dictionary<string, int>.ReferenceEquals(dict1, dict2);
I personally keep committing similar errors over and over when I am searching my way through unfamiliar APIs (I remember starting off with expression trees; I hit BinaryExpression. in the editor and was wondering why on earth IntelliSense was offering me MakeUnary as an option).
In my (shortsighted) opinion, this feature:
Doesn't reduce verbosity; the programmer has to specify a type-name one way or another (excluding operators and cases when one is accessing inherited static members of the current type).
Encourages bugs / misleading code such as the examples above.
May suggest to the programmer that static methods in C# exhibit some sort of 'polymorphism', when they don't.
(Minor) Introduces 'silent', possibly unintended rebinding possibilities on recompilation.
(IMO, operators are a special case that warrant their own discussion.)
Given that C# is normally a "pit of success" language, why does this feature exist? I can't see its benefits (other than 'discoverability', which could always be solved in the IDE), but I see lots of problems.
I'd agree this is a misfeature. I've lost count of how often someone on Stack Overflow has posted code like:
ASCIIEncoding.ASCII
etc... which, while harmless in terms of execution, is misleading in terms of reading the code.
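Spelled out (the string is just an example):

// Reads as if ASCII were something specific to ASCIIEncoding, but it is the
// same static property declared on System.Text.Encoding, accessed through a derived type.
byte[] bytes = ASCIIEncoding.ASCII.GetBytes("hello");

// Clearer: name the type that actually declares the member.
byte[] sameBytes = Encoding.ASCII.GetBytes("hello");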
Obviously it's too late to remove this "feature" now, although I guess the C# team could introduce a super-verbose warning mode for this and other style issues.
Maybe C#'s successor will improve things...
This is useful in WinForms.
In any control or form, you can write MousePosition, MouseButtons, or ModifierKeys to use the static members inherited from Control.
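For example, something like this (a minimal sketch; the override is just to have somewhere to use them):

using System.Windows.Forms;

public class MyForm : Form
{
    protected override void OnMouseDown(MouseEventArgs e)
    {
        // MousePosition and ModifierKeys are static members of Control,
        // usable here without naming the Control type.
        if (ModifierKeys == Keys.Shift)
        {
            Text = MousePosition.ToString();
        }
        base.OnMouseDown(e);
    }
}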
It's still debatable whether it was a good decision.
ReSharper sometimes hints that I can make some of my random utility methods in my WebForms static. Why would I do this? As far as I can tell, there's no benefit in doing so.. or is there? Am I missing something as far as static members in WebForms goes?
The real reason is not the performance reason -- that will be measured in billionths of a second, if it has any effect at all.
The real reason is that an instance method which makes no use of its instance is logically a design flaw. Suppose I wrote you a method:
class C
{
    public int DoubleIt(int x, string y, Type z)
    {
        return x * 2;
    }
}
Is this a well-designed method? No. It takes in all kinds of information which it then ignores and does not use to compute the result or execute a side effect. Why force the caller to pass in an unnecessary string and type?
Now, notice that this method also takes in a C, in the form of the "this" passed into the call. That is also ignored. This method should be static, and take one parameter.
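That is, the fixed-up version would look something like this:

class C
{
    // Takes exactly what it uses: one int in, one int out.
    public static int DoubleIt(int x)
    {
        return x * 2;
    }
}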
A well-designed method takes in exactly the information it needs to compute its results and execute its side effects, no more, no less. Resharper is telling you that you have a design flaw in your code: you have a method that is taking in information that it is ignoring. Either fix the method so that it starts using that information, or stop passing in useless data by making the method static.
Again, the performance concern is a total red herring; you'll never notice a difference that small unless what you're doing takes on the order of a handful of processor cycles. The reason for the warning is to call your attention to a logical design flaw. Getting the program logic right is far more important than shaving off a nanosecond here and there.
I wouldn't mind any performance improvement, but what you might like is that static methods have no side effects on the instance. So unless you have a lot of static state (do you?), this conveys your intention that the method is similar to a function: it only looks at its parameters and (optionally) returns a result.
For me this is a nice hint when I read someone else's code. I don't worry too much about shared state and can see the flow of information more easily. Declaring it static constrains what it can do, which leaves less for me, the reader, to worry about.
You will get a performance improvement; FxCop rule CA1822 makes the same suggestion.
From MSDN:
"Methods that do not access instance data or call instance methods can be marked as static (Shared in Visual Basic). After you mark the methods as static, the compiler will emit non-virtual call sites to these members. Emitting non-virtual call sites will prevent a check at runtime for each call that ensures that the current object pointer is non-null. This can result in a measurable performance gain for performance-sensitive code. In some cases, the failure to access the current object instance represents a correctness issue."
ReSharper suggests converting methods to static if they don't use any non-static fields or methods of the class.
The benefit could be a minor performance increase (the application will use less memory), and there will be one less ReSharper warning ;)
Possible Duplicate:
What Advantages of Extension Methods have you found?
All right, first of all, I realize this sounds controversial, but I don't mean to be confrontational. I am asking a serious question out of genuine curiosity (or maybe puzzlement is a better word).
Why were extension methods ever introduced to .NET? What benefit do they provide, aside from making things look nice (and by "nice" I mean "deceptively like instance methods")?
To me, any code that uses an extension method like this:
Thing initial = GetThing();
Thing manipulated = initial.SomeExtensionMethod();
is misleading, because it implies that SomeExtensionMethod is an instance member of Thing, which misleads developers into believing (at least as a gut feeling... you may deny it but I've definitely observed this) that (1) SomeExtensionMethod is probably implemented efficiently, and (2) since SomeExtensionMethod actually looks like it's part of the Thing class, surely it will remain valid if Thing is revised at some point in the future (as long as the author of Thing knows what he/she's doing).
But the fact is that extension methods don't have access to protected members or any of the internal workings of the class they're extending, so they're just as prone to breakage as any other static methods.
We all know that the above could easily be:
Thing initial = GetThing();
Thing manipulated = SomeNonExtensionMethod(initial);
To me, this seems a lot more, for lack of a better word, honest.
What am I missing? Why do extension methods exist?
Extension methods were needed to make Linq work in the clean way that it does, with method chaining. If you have to use the "long" form, it causes the function calls and the parameters to become separated from each other, making the code very hard to read. Compare:
IEnumerable<int> r = list.Where(x => x > 10).Take(5);
versus
// What does the 5 do here?
IEnumerable<int> r = Enumerable.Take(Enumerable.Where(list, x => x > 10), 5);
Like anything, they can be abused, but extension methods are really useful when used properly.
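For reference, an extension method is just a static method in a static class whose first parameter carries the this modifier; a minimal sketch, reusing the Thing type from the question:

public static class ThingExtensions
{
    // Callable as initial.SomeExtensionMethod() once the namespace is imported,
    // or as ThingExtensions.SomeExtensionMethod(initial) in the "honest" static
    // form from the question - both compile to the same call.
    public static Thing SomeExtensionMethod(this Thing thing)
    {
        // ... manipulate and return a Thing ...
        return thing;
    }
}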
I think that the main upside is discoverability. Type initial and a dot, and there you have all the stuff that you can do with it. It's a lot harder to find static methods tucked away in some class somewhere else.
First of all, in the Thing manipulated = SomeNonExtensionMethod(initial); case, SomeNonExtensionMethod relies on exactly the same assumptions as in the Thing manipulated = initial.SomeExtensionMethod(); case. Thing can change, SomeExtensionMethod can break. That's life for us programmers.
Second, when I see Thing manipulated = initial.SomeExtensionMethod();, it doesn't tell me exactly where SomeExtensionMethod() is implemented. Thing could inherit it from TheThing, which inherits it from TheOriginalThing. So the "misleading" argument leads to nowhere. I bet the IDE takes care of leading you to the right source, doesn't it?
What's so great about it? It makes code more consistent. If it works on a string, it looks as if it were a member of string. It's ugly to have several MyThing.doThis() methods and several static ThingUtil.doSomethingElse(MyThing thing) methods in another class.
So you can extend someone else's class, not just your own... that's the advantage.
(And when you catch yourself saying "oh, I wish they'd implemented this or that"... you just do it yourself.)
They are great for automatically mixing in functionality based on interfaces that a class implements, without that class having to explicitly re-implement it.
Linq makes use of this a lot.
Great way to decorate classes with extra functionality. Most effective when applied to an Interface rather than a specific class. Still a good way to extend Framework classes though.
It's just convenient syntactic sugar so that you can call a method with the same syntax regardless of whether it's actually part of the class. If party A releases a lib, and party B releases stuff that uses that lib, it's easier to just call everything with class.method(args) than to have to remember what gets called with method(class, args) vs. class.method(args).