I have a method that takes 30 parameters. I took the parameters and put them into one class, so that I could just pass one parameter (the class) into the method. Is it perfectly fine, when refactoring, to pass in an object that encapsulates all the parameters, even if that is all it contains?
That is a great idea. It is typically how data contracts are done in WCF for example.
One advantage of this model is that if you add a new parameter, consumers of the class don't need to change just because the parameter was added.
As David Heffernan mentions, it can also help self-document the code:
FrobRequest frobRequest = new FrobRequest
{
    FrobTarget = "Joe",
    Url = new Uri("http://example.com"),
    Count = 42,
};
FrobResult frobResult = Frob(frobRequest);
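For completeness, the parameter object itself can be a simple class with named properties. A minimal sketch (the real FrobRequest/FrobResult types aren't shown in the question, so these bodies are assumptions):

class FrobRequest
{
    // Just the data the method needs, exposed as named properties.
    public string FrobTarget { get; set; }
    public Uri Url { get; set; }
    public int Count { get; set; }
}

class FrobResult
{
    // Whatever the Frob operation returns (assumption).
    public bool Succeeded { get; set; }
}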
While other answers here correctly point out that passing an instance of a class is better than passing 30 parameters, be aware that a large number of parameters may be a symptom of an underlying issue.
E.g., many times static methods grow in their number of parameters, because they should have been instance methods all along, and you are passing a lot of info that could more easily be maintained in an instance of that class.
Alternatively, look for ways to group the parameters into objects of a higher abstraction level. Dumping a bunch of unrelated parameters into a single class is a last resort IMO.
See How many parameters are too many? for some more ideas on this.
It's a good start. But now that you've got that new class, consider turning your code inside-out. Move the method which takes that class as a parameter into your new class (of course, passing an instance of the original class as the parameter). Now you've got a big method, alone in a class, and it will be easier to tease it apart into smaller, more manageable, testable methods. Some of those methods might move back to the original class, but a fair chunk will probably stay in your new class. You've moved beyond Introduce Parameter Object on to Replace Method with Method Object.
Having a method with thirty parameters is a pretty strong sign that the method is too long and too complicated. Too hard to debug, too hard to test. So you should do something about it, and Introduce Parameter Object is a fine place to start.
Whilst refactoring to a Parameter Object isn't in itself a bad idea, it shouldn't be used to hide the problem that a class needing 30 pieces of data provided from elsewhere could still be something of a code smell. The Introduce Parameter Object refactoring should probably be regarded as a step along the way in a broader refactoring process rather than the end of it.
One of the concerns that it doesn't really address is that of Feature Envy. Does the fact that the class being passed the Parameter Object is so interested in the data of another class not indicate that maybe the methods that operate on that data should be moved to where the data resides? It's really better to identify clusters of methods and data that belong together and group them into classes, thereby increasing encapsulation and making your code more flexible.
After several iterations of splitting off behaviour and the data it operates on into separate units, you should find that you no longer have any classes with enormous numbers of dependencies, which is always a better end result because it'll make your code more supple.
That is an excellent idea and a very common solution to the problem. Methods with more than 2 or 3 parameters get exponentially harder and harder to understand.
Encapsulating all this in a single class makes for much clearer code. Because your properties have names, you can write self-documenting code like this:
drawParams.Height = 42;
drawParams.Width = 666;
obj.DoSomething(drawParams);
Naturally, when you have a lot of parameters, the alternative based on positional identification is simply horrid.
Yet another benefit is that adding extra parameters to the interface contract can be done without forcing changes at all call sites. However, this is not always as trivial as it seems. If different call sites require different values for the new parameter, then it is harder to hunt them down than with the parameter based approach. In the parameter based approach, adding a new parameter forces a change at each call site to supply the new parameter and you can let the compiler do the work of finding them all.
Martin Fowler calls this Introduce Parameter Object in his book Refactoring. With that citation, few would call it a bad idea.
30 parameters is a mess. I think it's way prettier to have a class with the properties. You could even create multiple "parameter classes" for groups of parameters that fit in the same category.
You could also consider using a structure instead of a class.
But what you're trying to do is very common and a great idea!
It can be reasonable to use a Plain Old Data class whether you're refactoring or not. I'm curious as to why you thought it might not be.
Maybe C# 4.0's optional and named parameters would be a good alternative to this?
Anyway, the approach you are describing can also be good for abstracting the program's behavior. For example, you could have one standard SaveImage(ImageSaveParameters saveParams) function in an interface, where ImageSaveParameters is also an interface and can have additional parameters depending on the image format. For example, JpegSaveParameters has a Quality property while PngSaveParameters has a BitDepth property.
This is how the save dialog in Paint.NET does it, so it is a very real-life example.
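A rough sketch of that shape (the type and member names come from the paragraph above; everything else is an assumption, not Paint.NET's actual code):

interface ImageSaveParameters { }

class JpegSaveParameters : ImageSaveParameters
{
    public int Quality { get; set; }     // JPEG-specific setting
}

class PngSaveParameters : ImageSaveParameters
{
    public int BitDepth { get; set; }    // PNG-specific setting
}

interface IImageSaver
{
    // One standard entry point; each concrete saver knows which
    // parameter type it expects.
    void SaveImage(ImageSaveParameters saveParams);
}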
As stated before, it is the right step to take, but consider the following too:
your method might be too complex (you should consider dividing it into more methods, or even turning it into a separate class)
if you create a class for the parameters, make it immutable
if many of the parameters could be null or could have some default value, you might want to use the builder pattern for your class (a sketch follows below).
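For the last point, a minimal builder sketch, assuming defaults live in the builder and the resulting parameter object stays immutable (all names here are invented for illustration):

class ReportOptions
{
    public string Title { get; }
    public int PageSize { get; }

    private ReportOptions(string title, int pageSize)
    {
        Title = title;
        PageSize = pageSize;
    }

    public class Builder
    {
        private string _title = "Untitled";  // default values live in the builder
        private int _pageSize = 50;

        public Builder WithTitle(string value) { _title = value; return this; }
        public Builder WithPageSize(int value) { _pageSize = value; return this; }

        public ReportOptions Build() => new ReportOptions(_title, _pageSize);
    }
}

// Usage: var options = new ReportOptions.Builder().WithPageSize(100).Build();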
So many great answers here. I would like to add my two cents.
A parameter object is a good start. But there is more that could be done. Consider the following (Ruby examples):
/1/ Instead of simply grouping all the parameters, see if there can be meaningful grouping of parameters. You might need more than one parameter object.
def display_line(startPoint, endPoint, option1, option2)
might become
def display_line(line, display_options)
/2/ A parameter object may have fewer properties than the original number of parameters.
def double_click?(cursor_location1, control1, cursor_location2, control2)
might become
def double_click?(first_click_info, second_click_info)
# MouseClickInfo being the parameter object type
# having cursor_location and control_at_click as properties
Such uses will help you discover possibilities for adding meaningful behavior to these parameter objects. You will find that they shake off their initial Data Class smell sooner than you'd expect. :--)
Is it advisable to prefix "Is" or "Has" when creating a method that returns a Boolean? My feeling is that this practice is more suited to defining property names.
Say we have a method like the following that has some logic:
bool IsActivePage()
{
    // Some logic to determine if the page is active...
}
Would it be preferable to rename the method to GetActivePageStatus and then create a Boolean property IsActivePage that returns the result of that method?
What is the .NET standard? All opinions will be appreciated.
The Framework Design Guidelines state that you should "give methods names that are verbs or verb phrases" since "typically methods act on data". Properties, on the other hand, should be named "using a noun, noun phrase, or an adjective" and "you can also prefix Boolean properties with Is, Can, or Has, but only where it adds value".
In this case, you are using a method rather than a property, probably since it is either expensive or has some side effects. I suggest you choose the name that provides the most clarity of what the returned value represents. The important part is that you're being consistent and that you're not confusing other developers with your convention.
I would be using
bool IsActivePage
{
    get
    {
        // some logic
    }
}
if the method has no side effects and is inexpensive.
I see no need to have both a method and a property for the same thing.
I vote for your solution: so YES, for methods, I personally think it's better to have Get..Bla(), because a method, intuitively (at least for me), is not only something that returns a value but also something that performs some calculations or calls other methods inside it; properties, instead, just return a value.
The word "Get", to me personally, means DO SOMETHING + RETURN,
whereas "Is" means: check whether this exists.
I think both are defensible. The key is really to think about how standardized a convention like this should be. In general, you should either decide at your team or company level about how to handle cases like this, and then be consistent after that. As long as code you and your company produce is clear to everyone involved, that's what matters.
I would say yes. All methods should start with an action verb to indicate that they do something. Is and Has are more suited for properties.
First, coding conventions are vitally important in any shared development project, or any project you expect to live beyond the first time you ship the code or set it down for a week.
That said, there are a number of .Net coding standards available on the Internet (Google is still your friend) and you should adhere to those documents as best you can. One exception is in a mixed language environment where different languages have different style conventions and you want to create a more common style that covers all of those languages. In that case, you should create a style document and publish it.
Would it be more preferable to rename the method to GetActivePageStatus and then create a boolean property IsActivePage that returns the result of that method.
I would probably not go this route. IMO either
a) the logic is very simple, and you can just put it in the property getter
b) the logic is not very simple, so you want to put it in a method BUT NOT hide it inside a property, where an unsuspecting caller may incur unneeded overhead by using it inappropriately (i.e. not caching the value if there is significant overhead in calculating it); a sketch follows below
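To illustrate (b), here's a hedged sketch of keeping the expensive logic in a method and caching its result so the cost is explicit and paid once; ComputeActivePageStatus is a hypothetical stand-in for the real logic:

private bool? _activePageStatus;

public bool GetActivePageStatus()
{
    // The expensive check runs at most once; the method name signals
    // to callers that real work may happen here.
    if (_activePageStatus == null)
    {
        _activePageStatus = ComputeActivePageStatus(); // hypothetical expensive logic
    }
    return _activePageStatus.Value;
}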
I never seem to understand why we need delegates.
I know they are immutable reference types that hold a reference to a method, but why can't we just call the method directly, instead of calling it via a delegate?
Thanks
Simple answer: the code needing to perform the action doesn't know the method to call when it's written. You can only call the method directly if you know at compile-time which method to call, right? So if you want to abstract out the idea of "perform action X at the appropriate time" you need some representation of the action, so that the method calling the action doesn't need to know the exact implementation ahead of time.
For example:
Enumerable.Select in LINQ can't know the projection you want to use unless you tell it
The author of Button didn't know what you want the action to be when the user clicks on it
If a new Thread only ever did one thing, it would be pretty boring...
It may help you to think of delegates as being like single-method interfaces, but with a lot of language syntax to make them easy to use, and funky support for asynchronous execution and multicasting.
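A small sketch of that idea using the built-in Func<,> delegate (the ApplyTwice name and numbers are invented for illustration):

using System;

class Demo
{
    // ApplyTwice doesn't know which operation it will run;
    // the caller supplies it as a delegate.
    static int ApplyTwice(int value, Func<int, int> transform)
    {
        return transform(transform(value));
    }

    static void Main()
    {
        Console.WriteLine(ApplyTwice(3, x => x * 2));   // 12
        Console.WriteLine(ApplyTwice(3, x => x + 10));  // 23
    }
}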
Of course you can call a method directly on the object, but consider the following scenarios:
You want to call a series of methods using a single (multicast) delegate, without writing a lot of separate method calls (a sketch of this follows the list).
You want to implement an event-based system elegantly.
You want to call two methods that are the same in signature but reside in different classes.
You want to pass a method as a parameter.
You don't want to write a lot of polymorphic code; in LINQ, for example, you can provide many different implementations to the Select method.
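A hedged sketch of the first scenario, with a single multicast delegate invoking two methods that share a signature (all names invented):

using System;

class Notifier
{
    static void LogToConsole(string message) => Console.WriteLine(message);
    static void LogToFile(string message) =>
        System.IO.File.AppendAllText("log.txt", message + Environment.NewLine);

    static void Main()
    {
        // One delegate, several methods; a single call runs them all.
        Action<string> notify = LogToConsole;
        notify += LogToFile;

        notify("Order saved");
    }
}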
Because you may not have the method written yet, or because you have designed your class in such a way that a user of it can decide which method (one that the user wrote) they want your class to execute.
They also make certain designs cleaner (for example, instead of a switch statement where you call different methods, you call the delegate passed in) and easier to understand and allow for extending your code without changing it (think OCP).
Delegates are also the basis of the eventing system - writing and registering event handlers without delegates would be much harder than it is with them.
See the various Action and Func delegates in LINQ - it would hardly be as useful without them.
Having said that, no one forces you to use delegates.
Delegates support events.
Delegates give your program a way to execute methods without having to know precisely what those methods are at compile time.
Anything that can be done with delegates can be done without them, but delegates provide a much cleaner way of doing them. If one didn't have delegates, one would have to define an interface or abstract base class for every possible function signature containing a function Invoke(appropriate parameters), and define a class for each function which was to be callable by pseudo-delegates. That class would inherit the appropriate interface for the function's signature, would contain a reference to the class containing the function it was supposed to represent, and a method implementing Invoke(appropriate parameters) which would call the appropriate function in the class to which it holds a reference. If class Foo has two methods Foo1 and Foo2, both taking a single parameter, both of which can be called by pseudo-delegates, there would be two extra classes created, one for each method.
Without compiler support for this technique, the source code would have to be pretty heinous. If the compiler could auto-generate the proper nested classes, though, things could be pretty clean. Dispatch speed for pseudo-delegates would probably generally be slower than with conventional delegates, but if pseudo-delegates were an interface rather than an abstract base class, a class which only needs to make a pseudo-delegate for one method of a given signature could implement the appropriate pseudo-delegate interface itself; the class instance could then be passed to any code expecting a pseudo-delegate of that signature, avoiding any need to create an extra object. Further, while the number of classes one would need when using pseudo-delegates would be greater than when using "real" delegates, each pseudo-delegate would only need to hold a single object instance.
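A compressed sketch of what that hand-rolled alternative looks like next to the real delegate (all type names invented):

// The "pseudo-delegate": one interface per signature, one wrapper per method.
interface IIntFunc
{
    int Invoke(int arg);
}

class Foo
{
    public int Foo1(int x) => x + 1;
    public int Foo2(int x) => x * 2;
}

class Foo1Invoker : IIntFunc
{
    private readonly Foo _target;
    public Foo1Invoker(Foo target) { _target = target; }
    public int Invoke(int arg) => _target.Foo1(arg);
}

// With real delegates, all of the boilerplate above collapses to:
// Func<int, int> f = foo.Foo1;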
Think of C/C++ function pointers, and how you treat JavaScript event-handling functions as "data" and pass them around. The Delphi language also has procedural types.
Behind the scenes, C# delegates, lambda expressions, and all those things are essentially the same idea: code as data. And this constitutes the very basis of functional programming.
You asked for an example of why you would pass a function as a parameter; I have a perfect one and thought it might help you understand. It is pretty academic but shows a use. Say you have a ListResults() method and a GetFromCache() method. Rather than have lots of checks for whether the cache is null etc., you can just pass any method to GetFromCache and then only invoke it, inside GetFromCache, if the cache is empty:
// Assumed to live inside a generic caching class, e.g. Cacher<T>, which also declares:
//   public delegate IEnumerable<T> MethodForCache();

_cacher.GetFromCache(delegate { return ListResults(); }, "ListResults");

public IEnumerable<T> GetFromCache(MethodForCache item, string key, int minutesToCache = 5)
{
    // Cast added so the cached object comes back as the expected sequence type.
    var cache = _cacheProvider.GetCachedItem(key) as IEnumerable<T>;
    // You could even have a UseCache bool here for central control.
    if (cache == null)
    {
        // You could put timings, logging etc. here instead of all over your code.
        cache = item.Invoke();
        _cacheProvider.AddCachedItem(cache, key, minutesToCache);
    }
    return cache;
}
You can think of them as a construct similar to pointers to functions in C/C++. But they are more than that in C#. Details.
That is the question. So how big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library. However, I am not a fan, to say the least, and not just for aesthetic reasons: I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway.
So would not using this convention cause confusion?
Are there any C# projects or libraries of note that drop this convention?
Are there any C# projects that mix conventions, as Apache Wicket unfortunately does?
The Java class libraries have existed without this for many years, and I don't feel I have ever struggled to read code without it. Also, shouldn't the interface be the most primitive description? I mean, with IList<T> as an interface for List<T> in C#, is it not better to have List<T> and then LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T>? The class names describe the implementation. I think I get more information there than I do from List<T> in C#.
The difference between Java and C# is that Java allows you to easily distinguish whether you implement an interface or extend a class since it has the corresponding keywords implements and extends.
As C# only has the : to express either an implementation or extension, I recommend following the standard and put an I before an interface's name.
It's bad practice in my opinion too. The reasons why, in addition to yours, are:
The whole purpose of interfaces is to abstract away implementation details. So it shouldn't matter if you call a method with an IParam or a Param.
Sophisticated tools have their own ways of marking interfaces with an icon.
If your eye is searching in an IDE for a name, the most significant part is the beginning of the string. Maybe your classes get sorted alphabetically, and now you have a block of similar names, all starting with I..., together. They look similar, while it would be an advantage to distinguish them easily. It's ergonomically wrong to use an I-prefix.
Even more annoying: ImplList, ImplThat, AFoo for an abstract Foo, AImplFooBar for an abstract Foo, which implements Bar? SSomething as Singleton, or SMath for a static class? Stop it! :)
With respect, in your post you are only considering your needs (I, I, I), and not the needs of the readers of your code. If you are a one-man shop, then fair enough, but if your code is ever read by others, then consider that they will be expecting interfaces to have an I prefix--that is just the way it is in .NET, and too many people are used to it to change now.
Also, it would help if you used more readable names for classes. What is PSec? How can I tell whether IPSec is an interface, when I can't even tell what PSec is? If instead PSec was renamed to e.g., PersonalSecurity, then IPersonalSecurity is much more likely to be an interface.
Using I for interfaces goes against the whole point of an interface, IMO: that it is a connector into which you can plug different concrete implementations for your dependencies.
An object that uses the database needs a DataStore, not an IDataStore, and it should be up to configuration whether that gets a DatabaseDataStore or a FileSystemDataStore or whatever plugged into it (or a MockDataStore for testing).
Read this and move on. If you're using Java, follow the Java naming conventions.
It's not a sin per se, it's best practice. It makes things a lot more readable all in all. Also, think about it. IMyClass is the interface to MyClass. It just makes sense, and stops unnecessary confusion. Also remember the : syntax vs. implements/extends. Lastly, you can bypass all of this by simply checking the tooltips/go to in VS, but for pure readability, the standard is important in my opinion.
Not that I'm aware of, but I'm sure they exist.
Haven't seen any, but I'm sure they exist.
I think the main reason for the I-prefix is not that those using it can see it's an interface, but that those implementing or deriving from existing classes and interfaces can see more easily whether it's an interface or a base class.
Another advantage is that it prevents stupid things like (If my Java memory serves me correctly):
List foo = new List(); // Why does it fail?
The third advantage is refactoring. If you move through your objects and read the code you can see where you forgot to code-by-interface. "A method accepting something with a type not prefixed with I? Fix it!".
I used it even in Java and found it quite useful, but it always depends on the guidelines for your company/team. Follow them, no matter how stupid you may think they are; some day you will be happy they exist.
Ask yourself: If my IDE could give me some hint in the text (e.g different colour, underline, italic...) that the type was an interface would I still bother?
Sounds like you are naming the types like that just so you can tell from the name something about parts of the definition other than the name.
Best practices override convention sometimes, in my opinion. While I may not personally like the convention, not using it goes against the best practice that has been in place for longer than I care to think about.
I would look at it more from the point of how other people do it, in this case. Since 99% of the common world will be prefacing with the "I", that is good enough to keep this best practice. If you have to bring in a contractor or on-board a new developer, you should be able to focus on the code and not have to explain/defend choices that you made.
It has been around long enough, and is ingrained well enough, that I don't expect it to change in my lifetime. It is just one of those "unwritten rules", better defined as an "unwritten best practice", that will probably outlive me.
I would say that not following this convention would get you down to .NET hell. It's a convention that's almost as important to me as using self in instance methods in Python.
I don't see any good reason to do this. 'Extends' vs 'implements' already tells you whether you are dealing with a class or an interface in the cases where it actually matters. In all other cases the whole idea is that you don't care.
In my opinion the biggest reason the "I" prefix is so common is that the IDEs for both Java (Eclipse) and .NET (Visual Studio) do not make it very clear that the type you are looking at is in fact an interface. The package browser in Eclipse shows the same icon until you expand the class file, and the font of an interface declaration is no different from that of a class.
An example would be if I type:
ISomeInterface s = factory.create();
ISomeInterface should at least have some sort of font modification to show that it's an interface (like italics or underline).
The other big reason people in the Java world prefix with "I" is that it makes it easier in Eclipse to do a "Ctrl-Shift-R" and search for only interfaces.
This is important in the Java/Spring world where you need interfaces as your collaborators if you plan on using any AOP magic or some other Dynamic proxies.
Then you have the nasty choice of either prefixing your interface with "I" or suffixing your implementation class with "Impl", like ListImpl. I abhor suffixing classes with "Impl" to make the interface and the concrete class differ in name, so I prefer the "I" prefix.
In general I try to avoid making lots of interfaces.
In my own code I would never prefix with "I". I'm only giving some reasons why people do it, chief among them consistency with old code.
Conventions exist to help all of us. If there is a chance another .NET developer will be working with you, then yes, follow the convention.
One idea is that the "I" part can be followed by a verb, stating what classes that implement the interface does; like ISaveXmlData, forming a nice human language name.
The key thing is consistency - as long you stick to having I prefixed to all interfaces or none at all, it's a matter of preference.
I use the I prefix for interfaces at work since the existing code already uses it for a naming convention for each interface. I find it more intuitive to quickly determine if a class implements an interface or another class simply by looking for the I prefix in the name of the base class.
On the other hand, some of the older projects at work don't use this naming convention and this makes the code slightly less readable, but it might just be that I'm used to the prefix.
Look at the BCL. In the Base Class Libraries you have IList<>, IQueryable, IDisposable.
If you don't prepend it with an 'I', how would people know it's an interface other than by going to the definition?
Anyway, just my 2 cents.
You can choose all the names in your program however you like, but it's a good idea to follow the naming conventions; otherwise you will be the only one able to read the program.
Using interfaces is good not only when you design your own classes and interfaces. In some cases you put different accents in your program by using interfaces. For example, you can write code like
SqlDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
or like
IDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
The last one looks almost the same, but if you use IDataReader instead of SqlDataReader it is easier to move the parts which work with dr into a method that works not only with the SqlDataReader class but also with OleDbDataReader, OracleDataReader, OdbcDataReader, etc. On the other hand, your program keeps working exactly as quickly as before.
Updated (based on questions from comments):
The advantage is, as I wrote before, that you can separate out the parts of your code which work with IDataReader. For example, you can define a delegate T ReadRowFromDataReader<T> (IDataReader dr, ...) and use it inside the while (dr.Read ()) block. That way you write code which is more general than code working with SqlDataReader directly. Inside the while (dr.Read ()) block you call rowReader (dr, ...). Your different implementations of row-reading code can be placed in methods matching the ReadRowFromDataReader<T> signature and passed in as the actual parameter.
This way you can write more independent code for working with the database. At first the usage of a generic delegate probably looks a little complex, but all the code will be really easy to read. I want to stress one more time that you really get the advantages of using interfaces in this case only if you separate some parts of the code into another method. If you don't separate the code, the only advantage you receive is that the code parts are written more independently and you can copy and paste them more easily into another program.
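A rough sketch of that separation (the delegate name follows the answer; the helper class and the mapping lambda are assumptions):

using System.Collections.Generic;
using System.Data;

// Delegate from the answer: turns the current row into a T.
delegate T ReadRowFromDataReader<T>(IDataReader dr);

static class DataHelper
{
    // Works with any IDataReader (SqlDataReader, OleDbDataReader, ...).
    public static List<T> ReadAll<T>(IDataReader dr, ReadRowFromDataReader<T> rowReader)
    {
        var results = new List<T>();
        while (dr.Read())
        {
            results.Add(rowReader(dr));
        }
        return results;
    }
}

// Usage: var names = DataHelper.ReadAll(dr, r => r.GetString(0));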
Using names that start with 'I' makes it easier to understand that we are now working with something more general than a single class.
I stick to the convention only because I have to, if I am to use any interfaces in the BCL and maintain consistency.
I don't like the convention, either.
I cannot believe that so many people hate the 'I' prefix. I love it.
Here is why:
Are abstract and interface different? Yes
Do I care the difference as a developer? Yes, but not always.
When do I need to care?
Design discussions (when I draw on the board, the 'I' prefix clearly tells everyone it's an interface)
Reading existing code (when I see the 'I' prefix, I know immediately that it's an interface. There are exceptions for words that start with 'I', but very few)
Do I always need 'I'? No. But I want consistency, so YES.
A single 'I' prefix avoids a lot of communication overhead.
I think the real question in the case of .NET should be: why do we ever need to distinguish between a class and an interface in client code?
And for C# and .NET there is a shameful answer: because someone invented language support for explicit interface implementations. That is, in my opinion, a complete mess, because it allows you to break the Single Responsibility Principle in a way that is invisible to the caller. Let's assume we have an IList interface and a List class.
It is only by convention that List.Count() does the same thing as IList.Count() does for the class. Normally you can't be so sure. To me, explicit interface implementation is a hidden form of method overloading done in the most wrong way possible. Let's assume, as in old native languages, that the instance reference is the first argument of a called method.
Now we have int Count(IList list) and int Count(List list). From the language point of view these are two separate methods that clearly advertise their responsibility: one can work with the more abstract IList, and the other with the specific implementation List. And that is clearly visible here! No one would expect both methods to return the same value, because the more specific method may consult extra properties, etc. It is, however, not obvious in C#'s explicit interface implementation form, because the caller is not aware of which form is actually used - the compiler knows, but I as a programmer might be unaware.
Unless I know whether I am calling a class method or an interface method! I think this is the source of this somewhat silly convention for interfaces. If you use types named without the "I" prefix - especially in method arguments and return types - you may be unaware of whether you are calling a class instance method or an interface method.
As a good programmer following SOLID principles you should work with interfaces all the time - as long as it is possible, and especially if you are aware of explicit implementations.
This is, in my opinion, the hidden purpose of naming C# interfaces this way: to cover for the bad design of explicit interface implementations. You may not agree, but think twice about it: how could you ever introduce a method overloading feature that is effectively hidden from the call site without expecting that a naming convention would naturally appear in order to manage it?
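To make the mechanics concrete, here is a minimal sketch of an explicit interface implementation diverging from the class's own method (hypothetical types, not the real IList/List):

interface IThing
{
    int Count();
}

class Thing : IThing
{
    public int Count() => 10;     // used when the static type is Thing

    int IThing.Count() => 42;     // used when the static type is IThing
}

class Program
{
    static void Main()
    {
        var thing = new Thing();
        IThing asInterface = thing;

        System.Console.WriteLine(thing.Count());        // 10
        System.Console.WriteLine(asInterface.Count());  // 42
    }
}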
Possible Duplicate:
What Advantages of Extension Methods have you found?
All right, first of all, I realize this sounds controversial, but I don't mean to be confrontational. I am asking a serious question out of genuine curiosity (or maybe puzzlement is a better word).
Why were extension methods ever introduced to .NET? What benefit do they provide, aside from making things look nice (and by "nice" I mean "deceptively like instance methods")?
To me, any code that uses an extension method like this:
Thing initial = GetThing();
Thing manipulated = initial.SomeExtensionMethod();
is misleading, because it implies that SomeExtensionMethod is an instance member of Thing, which misleads developers into believing (at least as a gut feeling... you may deny it but I've definitely observed this) that (1) SomeExtensionMethod is probably implemented efficiently, and (2) since SomeExtensionMethod actually looks like it's part of the Thing class, surely it will remain valid if Thing is revised at some point in the future (as long as the author of Thing knows what he/she's doing).
But the fact is that extension methods don't have access to protected members or any of the internal workings of the class they're extending, so they're just as prone to breakage as any other static methods.
We all know that the above could easily be:
Thing initial = GetThing();
Thing manipulated = SomeNonExtensionMethod(initial);
To me, this seems a lot more, for lack of a better word, honest.
What am I missing? Why do extension methods exist?
Extension methods were needed to make LINQ work in the clean way that it does, with method chaining. If you have to use the "long" form, the function calls and their parameters become separated from each other, making the code very hard to read. Compare:
IEnumerable<int> r = list.Where(x => x > 10).Take(5);
versus
// What does the 5 do here?
IEnumerable<int> r = Enumerable.Take(Enumerable.Where(list, x => x > 10), 5);
Like anything, they can be abused, but extension methods are really useful when used properly.
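For readers who haven't written one, a short sketch of how an extension method is declared so that it can be chained like this (the GreaterThan name is invented; Where and Take are the real LINQ operators):

using System;
using System.Collections.Generic;
using System.Linq;

static class IntSequenceExtensions
{
    // "this" on the first parameter is what turns a static method into
    // an extension method callable as source.GreaterThan(10).
    public static IEnumerable<int> GreaterThan(this IEnumerable<int> source, int threshold)
    {
        return source.Where(x => x > threshold);
    }
}

class Demo
{
    static void Main()
    {
        var list = new List<int> { 5, 12, 7, 30 };
        var r = list.GreaterThan(10).Take(5);        // reads left to right
        Console.WriteLine(string.Join(", ", r));     // 12, 30
    }
}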
I think that the main upside is discoverability. Type initial and a dot, and there you have all the stuff that you can do with it. It's a lot harder to find static methods tucked away in some class somewhere else.
First of all, in the Thing manipulated = SomeNonExtensionMethod(initial); case, SomeNonExtensionMethod is based on exactly the same assumptions as in the Thing manipulated = initial.SomeExtensionMethod(); case. Thing can change, and SomeExtensionMethod can break. That's life for us programmers.
Second, when I see Thing manipulated = initial.SomeExtensionMethod();, it doesn't tell me exactly where SomeExtensionMethod() is implemented. Thing could inherit it from TheThing, which inherits it from TheOriginalThing. So the "misleading" argument leads nowhere. I bet the IDE takes care of leading you to the right source, doesn't it?
What's so great? It makes code more consistent. If it works on a string, it looks as if it were a member of string. It's ugly to have several MyThing.doThis() methods and several static ThingUtil.doSomethingElse(MyThing thing) methods in another class.
So you can extend someone else's class, not just yours... that's the advantage.
(And you can say, "Oh, I wish they'd implemented this or that"... and just do it yourself.)
They are great for automatically mixing in functionality based on interfaces that a class implements, without that class having to explicitly re-implement it.
LINQ makes use of this a lot.
A great way to decorate classes with extra functionality. Most effective when applied to an interface rather than a specific class, but still a good way to extend framework classes.
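A hedged sketch of that interface-targeted flavour: every implementer of the (hypothetical) IShape interface picks up Describe() without writing it.

using System;

interface IShape
{
    double Area();
}

static class ShapeExtensions
{
    // Any IShape gets this behaviour "mixed in" for free.
    public static string Describe(this IShape shape) => $"Area = {shape.Area()}";
}

class Circle : IShape
{
    public double Radius { get; set; }
    public double Area() => Math.PI * Radius * Radius;
}

// Usage: new Circle { Radius = 2 }.Describe() returns "Area = 12.56..."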
It's just convenient syntactic sugar so that you can call a method with the same syntax regardless of whether it's actually part of the class. If party A releases a lib, and party B releases stuff that uses that lib, it's easier to just call everything with class.method(args) than to have to remember what gets called with method(class, args) vs. class.method(args).