That is the question. How big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library. However, I am not a fan, to say the least, and not just for aesthetic reasons: I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway.
So would not using this convention cause confusion?
Are there any C# projects or libraries of note that drop this convention?
Are there any C# projects that mix conventions, as Apache Wicket unfortunately does?
The Java class libraries have existed without this convention for many years, and I don't feel I have ever struggled to read code without it. Also, shouldn't the interface be the most primitive description? I mean, C# has IList<T> as the interface for List<T>; isn't it better to have List<T> as the interface and LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T> as the classes? The class names describe the implementation, and I think I get more information there than I do from C#'s List<T>.
The difference between Java and C# is that Java allows you to easily distinguish whether you implement an interface or extend a class since it has the corresponding keywords implements and extends.
As C# only has the : to express either implementation or extension, I recommend following the standard and putting an I before an interface's name.
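For example (a hypothetical pair of types), the prefix is the only visual cue on the : line:

class ConnectionBase { }

// With only ':' available, the I prefix is what tells a reader at a glance
// that ConnectionBase is the base class and IDisposable is an interface.
class Connection : ConnectionBase, System.IDisposable
{
    public void Dispose() { }
}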
It's bad practice in my opinion too. The reasons, in addition to yours, are:
The whole purpose of interfaces is to abstract away implementation details. So it shouldn't matter whether you call a method with an IParam or a Param.
Sophisticated tools have their own ways to mark interfaces with an icon.
If your eye is searching for a name in an IDE, the most significant part is the beginning of the string. If your classes are sorted alphabetically, you get a block of similar names all starting with I. They look alike, when it would be an advantage to distinguish them easily. It's ergonomically wrong to use an I prefix.
Even more annoying: ImplList, ImplThat, AFoo for an abstract Foo, AImplFooBar for an abstract Foo, which implements Bar? SSomething as Singleton, or SMath for a static class? Stop it! :)
With respect, in your post you are only considering your own needs (I, I, I), and not the needs of the readers of your code. If you are a one-man shop, then fair enough, but if your code is ever read by others, consider that they will be expecting interfaces to have an I prefix; that is just the way it is in .NET, and too many people are used to it to change now.
Also, it would help if you used more readable names for classes. What is PSec? How can I tell whether IPSec is an interface, when I can't even tell what PSec is? If instead PSec was renamed to e.g., PersonalSecurity, then IPersonalSecurity is much more likely to be an interface.
Using I for interfaces goes against the whole point of an interface, IMO: that it is a connector into which you can plug different concrete implementations.
An object that uses the database needs a DataStore, not an IDataStore, and it should be up to configuration whether it gets a DatabaseDataStore, a FileSystemDataStore, or whatever plugged into it (or a MockDataStore for testing).
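A minimal sketch of that idea, using the hypothetical names above and no I prefix (configuration or a DI container would pick the concrete class):

public interface DataStore
{
    void Save(string key, string value);
}

public class FileSystemDataStore : DataStore
{
    public void Save(string key, string value) { /* write to a file */ }
}

public class ReportService
{
    private readonly DataStore store;

    // Whether this is a DatabaseDataStore, a FileSystemDataStore,
    // or a MockDataStore for testing is decided by configuration.
    public ReportService(DataStore store) { this.store = store; }
}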
Read this and move on. If you're using Java, follow the Java naming conventions.
It's not a sin per se; it's best practice. It makes things a lot more readable all in all. Also, think about it: IMyClass is the interface to MyClass. It just makes sense, and stops unnecessary confusion. Also remember the : syntax versus implements/extends. Lastly, you can bypass all of this by simply checking the tooltips or Go To Definition in VS, but for pure readability, the standard is important in my opinion.
Not that I'm aware of, but I'm sure they exist.
Haven't seen any, but I'm sure they exist.
I think the main reason for the I prefix is not that those using the type can see it's an interface, but that those implementing or deriving from existing classes and interfaces can more easily see whether it's an interface or a base class.
Another advantage is that it prevents stupid things like (If my Java memory serves me correctly):
List foo = new List(); // Why does it fail?
The third advantage is refactoring. If you move through your objects and read the code you can see where you forgot to code-by-interface. "A method accepting something with a type not prefixed with I? Fix it!".
I used it even in Java and found it quite useful, but it always depends on the guidelines of your company/team. Follow them, no matter how stupid you may think they are; some day you will be happy they exist.
Ask yourself: If my IDE could give me some hint in the text (e.g different colour, underline, italic...) that the type was an interface would I still bother?
Sounds like you are naming the types that way just so the name can tell you something about the definition beyond the name itself.
Best practices override convention sometimes, in my opinion. While I may not personally like the convention, not using it goes against the best practice that has been in place for longer than I care to think about.
I would look at it more from the point of view of how other people do it, in this case. Since 99% of the .NET world will be prefixing with the "I", that is good enough to keep this best practice. If you have to bring in a contractor or onboard a new developer, you should be able to focus on the code and not have to explain/defend the choices you made.
It has been around long enough, and is ingrained well enough, that I don't expect it to change in my lifetime. It is just one of those "unwritten rules", better defined as an "unwritten best practice", that will probably outlive me.
I would say that not following this convention would get you down to .NET hell. It's a convention that's almost as important to me as using self in instance methods in Python.
I don't see any good reason to do this. 'Extends' vs 'implements' already tells you whether you are dealing with a class or an interface in the cases where it actually matters. In all other cases the whole idea is that you don't care.
In my opinion the biggest reason the "I" prefix is so common is that the IDEs for both Java (Eclipse) and .NET (Visual Studio) do not make it very clear that the type you are looking at is in fact an interface. The package browser in Eclipse shows the same icon until you expand the class file, and the font of an interface declaration is no different from a class.
An Example would be if I type:
ISomeInterface s = factory.create();
ISomeInterface should at least have some sort of font modification to show that it's an interface (like italics or underline).
The other big reason people in the Java world prefix with "I" is that it makes it easier in Eclipse to do a "Ctrl-Shift-R" and search for only interfaces.
This is important in the Java/Spring world where you need interfaces as your collaborators if you plan on using any AOP magic or some other Dynamic proxies.
Then you have the nasty choice of either prefixing your interface with "I" or suffixing your implementation class with "Impl", like ListImpl. I abhor suffixing classes with "Impl" to make the interface and the concrete class differ in name, so I prefer the "I" prefix.
In general I try to avoid making lots of interfaces.
In my own code I would never prefix with "I". I'm only giving some reasons why people do it, such as consistency with old code.
Conventions exist to help all of us. If there is a chance another .NET developer will be working with you, then yes, follow the convention.
One idea is that the "I" can be followed by a verb, stating what classes that implement the interface do, forming a nice human-language name like ISaveXmlData.
The key thing is consistency - as long you stick to having I prefixed to all interfaces or none at all, it's a matter of preference.
I use the I prefix for interfaces at work since the existing code already uses it as the naming convention for each interface. I find it more intuitive to quickly determine whether a class implements an interface or inherits from another class simply by looking for the I prefix in the name of the base type.
On the other hand, some of the older projects at work don't use this naming convention and this makes the code slightly less readable, but it might just be that I'm used to the prefix.
Look at the BCL. In the Base Class Libraries you have IList<>, IQueryable, IDisposable.
If you don't prepend it with an "I", how would people know it's an interface other than by going to the definition?
Anyways, just my 2 cents
You can choose all the names in your program however you like, but it's a good idea to follow the naming convention; otherwise you will be the only one who can read the program.
Using interfaces is good not only when you design your own classes and interfaces. In some cases, using interfaces puts a different emphasis in your program. For example, you can write code like
SqlDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
or like
IDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
The last one looks the same, but if you use IDataReader instead of SqlDataReader, it is easier to move the parts that work with dr into a method that works not only with the SqlDataReader class but also with OleDbDataReader, OracleDataReader, OdbcDataReader, etc. Meanwhile your program keeps working exactly as quickly as before.
Updated (based on questions from comments):
The advantage comes, as I wrote before, if you separate out the parts of your code that work with IDataReader. For example, you can define a delegate T ReadRowFromDataReader<T> (IDataReader dr, ...) and use it inside the while (dr.Read ()) block, where you call rowReader (dr, ...). That way you write code which is more general than code working with SqlDataReader directly. Your different implementations of row-reading code can be placed in methods matching the ReadRowFromDataReader<T> signature and passed as the actual parameter.
This way you can write more independent database code. At first the usage of a generic delegate probably looks a little complex, but all the code will be really easy to read. I want to stress one more time that you really receive the advantages of using interfaces in this case only if you separate parts of the code into another method. If you don't, the only advantage you receive is that the code parts are written more independently, so you can copy and paste them more easily into another program.
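A minimal sketch of what that separation could look like. The delegate follows the shape suggested above, simplified to just the reader parameter (the "..." extra parameters are omitted); ReadAll and the usage line are made up for illustration:

using System.Collections.Generic;
using System.Data;

delegate T ReadRowFromDataReader<T> (IDataReader dr);

static class DataReaderHelper
{
    // Works with any IDataReader: SqlDataReader, OleDbDataReader, OracleDataReader...
    public static List<T> ReadAll<T> (IDataReader dr, ReadRowFromDataReader<T> rowReader)
    {
        var results = new List<T> ();
        while (dr.Read ())
            results.Add (rowReader (dr));
        return results;
    }
}

// Usage: the row-reading logic is supplied as the actual parameter.
// List<string> names = DataReaderHelper.ReadAll (dr, r => r.GetString (0));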
Using names that start with 'I' makes it easier to understand that we are now working with something more general than one specific class.
I stick to the convention only because I have to, if I am to use any interfaces in the BCL and maintain consistency.
I don't like the convention, either.
I can't believe that so many people hate the 'I' prefix. I love the prefix 'I'.
Here is why:
Are abstract and interface different? Yes
Do I care the difference as a developer? Yes, but not always.
When do I need to care?
Design discussion (when I draw on the board, the prefix 'I' clearly tells everyone it's an interface)
Reading existing code (when I see the prefix 'I', I clearly know it's an interface; there are exceptions for words starting with 'I', but very few)
Do I always need 'I'? No. But I want consistency, so YES.
With just one prefix 'I', it avoids so much communication overhead.
I think the real question in the case of .NET should be: why do we ever need to distinguish between a class and an interface in client code?
And for C# and .NET there is a shameful answer: because someone invented language support for explicit interface implementations. In my opinion that feature is a complete mess, because it allows the Single Responsibility Principle to be broken in a way that is invisible to the caller. Let's assume we have an IList interface and a List class.
It is only by convention that List.Count() does the same thing for the class as IList.Count() does. Normally you can't be so sure. To me, explicit interface implementation is a hidden form of method overloading done in the most wrong way possible. Let's assume, as in old native languages, that the instance reference is the first argument of a called method.
Now we have int Count(IList list) and int Count(List list). From the language point of view these are two separate methods that clearly advertise their responsibility: one works with the more abstract IList, the other with the specific implementation List. And here it is clearly visible! No one would expect both methods to return the same value, because the more specific method may consult extra properties, etc. In C#'s explicit interface implementation form, however, it is not obvious, because the caller is unaware of which form is actually used; the compiler knows, but I as a programmer might not.
Unless I know whether I am calling a class method or an interface method! I think this is the source of this somewhat silly convention for interfaces. If you use types named without the "I" prefix, especially in method arguments and return types, you may be unaware of whether you are calling a class instance method or an interface method.
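A contrived, self-contained sketch of the hazard (ICounted and Sneaky are hypothetical names): with explicit interface implementation, the same object answers differently depending on whether the caller holds a class reference or an interface reference.

using System;

interface ICounted
{
    int Count { get; }
}

class Sneaky : ICounted
{
    // The class's own member:
    public int Count { get { return 42; } }

    // The explicit interface implementation can silently diverge:
    int ICounted.Count { get { return 0; } }
}

class Demo
{
    static void Main ()
    {
        var s = new Sneaky ();
        Console.WriteLine (s.Count);             // 42 -- class member
        Console.WriteLine (((ICounted)s).Count); // 0  -- interface member
    }
}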
As a good programmer following SOLID principles you should work with interfaces all the time, as long as it is possible, especially if you are aware of explicit implementations.
This, in my opinion, is a hidden purpose of naming C# interfaces this way: to cover up the bad design of explicit interface implementations. You may not agree, but think twice about it: how could you ever add a method-overloading feature that is effectively hidden from the call site without expecting a naming convention to appear naturally in order to manage it?
Related
I have an interface
interface IFooWidget
{
    IWidget Get(string widgetName);
}
My question is: how do I tell implementers the return semantics for the error case? I mean, if they can't find the requested widget, should they throw or return a null value? This seems a vital part of the interface definition, but I cannot express it.
Seems like documenting it is the only way - it would be much nicer if I could somehow put it in compiler-speak rather than human-speak.
Just wondering if anybody has any solutions or thoughts...
Consensus Answer:
Interfaces define syntax, not semantics. I was allowing myself to be seduced by the English-ness of my method definition. Imagine if it was instead
IWidget Fnargle(string wobbler);
Which is of course how it looks to the compiler. The call semantics were hinted at in the original question because I chose a method name and parameters that were supposedly helpful. But really I need to document all aspects of the semantics of the method; it's not avoidable.
There's no built-in way to specify which exceptions might be thrown from a method; C# does not have checked exceptions like Java's. You could add a parameter for an exception handler if you want to ensure the caller will handle any error:
interface IFooWidget
{
    IWidget Get(string widgetName, Action<ExceptionType> handler);
}
There is no way to formally document or enforce this in C#.
Actually, interfaces do not express a lot of stuff. They do not express pre- and post-conditions on the data passed in or out. They do express the data format (type and number of arguments) but that is a small part of the contract of any given method.
There are some tools trying to help here; Code Contracts come to mind. But they only create a different way to express these conditions; they do not statically enforce them.
Well, I don't think that you can express this in C# (as in most other modern languages).
IMHO an interface is precisely for not expressing any implementation details; if you want your method (the one specified by the interface) to return something only if a particular condition is true, I would tend to say that this is rather an implementation detail.
Methods defined in interfaces are the least common part of all possible implementations, i.e., the method signatures, not any statement on how the method should (or must) be implemented.
I do not believe there is an option to specify this in the interface. One option is to add comments like you suggested. A comment like the one below will show up in IntelliSense for the implementer.
/// <exception cref="Exception"></exception>
If you are using .NET 4.5, you can use the System.Diagnostics.Contracts namespace to specify a post-condition ensuring that the result is not null, using Contract.Ensures. Unfortunately, adding a contract to an interface is a bit of a pain. The Microsoft docs on ContractClassAttribute give the details on how to do this.
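Roughly, the pattern from those docs looks like this (FooWidgetContracts is a name made up for the example; the conditions are only enforced when the Code Contracts rewriter is enabled):

using System.Diagnostics.Contracts;

[ContractClass(typeof(FooWidgetContracts))]
interface IFooWidget
{
    IWidget Get(string widgetName);
}

[ContractClassFor(typeof(IFooWidget))]
abstract class FooWidgetContracts : IFooWidget
{
    public IWidget Get(string widgetName)
    {
        // Post-condition: the result is never null.
        Contract.Ensures(Contract.Result<IWidget>() != null);
        return default(IWidget); // dummy return, never executed
    }
}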
In the comments of this answer it is stated that "checking whether the object has implemented the interface, rampant as it may be, is a bad thing"
Below is what I believe is an example of this practice:
public interface IFoo
{
    void Bar();
}

public void DoSomething(IEnumerable<object> things)
{
    foreach (var o in things)
    {
        if (o is IFoo)
            ((IFoo)o).Bar();
    }
}
With my curiosity piqued as someone who has used variations of this pattern before, I searched for a good example or explanation of why it is a bad thing and was unable to find one.
While it is very possible that I misunderstood the comment, can someone provide me with an example or link to better explain the comment?
It depends on what you're trying to do. Sometimes it can be appropriate - examples could include:
LINQ to Objects, where it's used to optimize operations like Count, which can be performed more efficiently on an IList<T> via the specialized members (see the sketch after this list).
LINQ to XML, where it's used to provide a really friendly API which accepts a wide range of types, iterating over values where appropriate
If you wanted to find all the controls of a certain type under a particular control in Windows Forms, you would want to check whether each control was a container to determine whether or not to recurse.
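As a rough illustration of the LINQ to Objects example, here is a simplified sketch of the kind of type check it performs (the real Count implementation tests for ICollection<T>, among other things):

using System.Collections.Generic;

static class EnumerableSketch
{
    public static int Count<T>(IEnumerable<T> source)
    {
        // If the sequence is really a collection, its Count property is O(1).
        var collection = source as ICollection<T>;
        if (collection != null)
            return collection.Count;

        // Otherwise fall back to walking the whole sequence.
        int count = 0;
        foreach (T item in source)
            count++;
        return count;
    }
}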
In other cases it's less appropriate and you should consider whether you can change the parameter type instead. It's definitely a "smell" - normally you shouldn't concern yourself with the implementation details of whatever has been handed to you; you should just use the API provided by the declared parameter type. This is also known as a violation of the Liskov Substitution Principle.
Whatever the dogmatic developers around may say, there are times when you simply do want to check an object's execution-time type. It's hard to override object.Equals(object) correctly without using is/as/GetType, for example :) It's not always a bad thing, but it should always make you consider whether there's a better approach. Use it sparingly, and only where it's genuinely the most appropriate design.
I would personally rather write the code you've shown like this, mind you:
public void DoSomething(IEnumerable<object> things)
{
    foreach (var foo in things.OfType<IFoo>())
    {
        foo.Bar();
    }
}
It accomplishes the same thing, but in a neater way :)
I would expect the method to look like this, it seems much safer:
public void DoSomething(IEnumerable<IFoo> things)
{
    foreach (var o in things)
    {
        o.Bar();
    }
}
To read about the referred violation of the Liskov Principle: What is the Liskov Substitution Principle?
If you want to know why the commenter made that comment, probably best to ask them to explain.
I would not consider the code you posted to be "bad". A more "genuinely" bad practice is to use interfaces as markers. That is, you're not planning on actually using a method of the interface; rather, you have declared the interface on a class as a way of describing it in some way. Use attributes, not interfaces, as markers on classes.
Marker interfaces are hazardous in a number of ways. A real-world situation I once ran into where an important product made a bad decision on the basis of a marker interface is here: http://blogs.msdn.com/b/ericlippert/archive/2004/04/05/108086.aspx
That said, the C# compiler itself uses a "marker interface" in one situation. Mads tells the story here: http://blogs.msdn.com/b/madst/archive/2006/10/10/what-is-a-collection_3f00_.aspx
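A small sketch of the difference (all names here are hypothetical):

using System;

// A marker interface conveys meaning only by its presence:
interface ITransient { }

// The attribute-based alternative recommended above:
[AttributeUsage(AttributeTargets.Class)]
sealed class TransientAttribute : Attribute { }

[Transient]
class TempCache { }

class MarkerDemo
{
    static void Main()
    {
        object o = new TempCache();
        bool viaMarker = o is ITransient; // false: TempCache never opted in to the interface
        bool viaAttribute = o.GetType().IsDefined(typeof(TransientAttribute), false); // true
        Console.WriteLine("{0} {1}", viaMarker, viaAttribute);
    }
}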
A reason is that there will be a dependency on that interface that is not immediately visible without digging into the code.
The statement "checking whether the object has implemented the interface, rampant as it may be, is a bad thing" is overly dogmatic in my opinion. As other people have answered, you may well be able to pass a collection of IFoo to your method and achieve the same result.
However, interfaces can be useful for adding optional features to classes. For example, the .NET Framework provides the IDataErrorInfo interface*. When this is implemented, it indicates to a consumer that, in addition to the class's standard functionality, it can also provide error information.
In this case, the error information is optional. A WPF view model may or may not provide error information. Without querying for interfaces, this optional functionality would not be possible without base classes with a huge surface area.
*We'll ignore for the moment the terrible design of the IDataErrorInfo interface.
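On the consuming side, the query typically looks something like this (DescribeProblem and ValidationHelper are hypothetical names for the sketch):

using System.ComponentModel;

static class ValidationHelper
{
    public static string DescribeProblem(object viewModel, string propertyName)
    {
        // Query for the optional capability.
        var errorInfo = viewModel as IDataErrorInfo;
        if (errorInfo == null)
            return null; // this object doesn't offer error information

        return errorInfo[propertyName]; // per-property error text, or empty if none
    }
}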
If your method requires that you inject an instance of an interface, you should treat it the same regardless of the implementation.
In your example you generally wouldn't have a generic list of object, but a list of ISomething, and calling ISomething.Bar() would be implemented by the concrete type, therefore calling its implementation. If that implementation is to do nothing, then you don't have to do a check.
I dislike this whole "switch on type" style of coding for a couple of reasons. (Examples drawn in relation to my industry, game development. Apologies in advance. :) )
First and foremost, I think it's sloppy to have a heterogeneous collection of items. E.g. I could have a collection of "everything everywhere," but then when iterating the collection to apply bullet effects or fire damage or enemy AI, I have to walk this list which is mostly stuff I don't care about. It's much "cleaner" IMHO to have separate collections of bullets, raging fires, and enemies. Note that there's no reason why I can't have a single item in multiple collections; a single burning robotic missile could be referenced in all three of those lists to do parts of its "update" as appropriate for the three types of logic it needs to run. Outside of having "one single collection that references everything," I think a collection containing everything everywhere is not terribly useful; you can't do anything with anything in the list unless you query it for what it can do.
I hate doing unnecessary work. This really ties into the above, but when you create a given thing you know what its capabilities are (or can query them at that point), so you might as well take the opportunity at that time to put them in the right more specific collections. You have 16ms to process everything in the world, do you want to waste your time dealing with, querying, and selecting from generic things, or do you want to get down to business and operate only on the specific things you care about?
In my experience, transforming a codebase from generic operation on heterogeneous datasets to one that has homogeneous datasets has resulted in not only performance increases but also comprehension increases that come from simpler code doing more obvious work and in general a reduction in the amount of code required to do any given task.
So yeah, it's dogmatic to say that querying interfaces is bad, but it does seem to make things simpler if you can figure out how to avoid needing to query anything. As for my "performance" statements and the counter that "if you don't measure it, you can't say anything about it," it should be obvious that not doing something is faster than doing it. Whether or not this is important to an individual project, programmer, or function is up to the person with the editor, but if I can simplify code and while doing so make it do less work for the same results, I'm going to do it without bothering to measure.
I don’t see this as a “bad thing” at all, at least not in itself. The code is merely a literal transcription of “x all of the y in z”, and in a situation where you need to do that, it’s perfectly acceptable. You can of course use things.OfType<Foo>() for the sake of concision.
The main reason to recommend against it is that, according to OOP theology, interfaces are intended to model the different kinds of “black box” for which an object may substituted. Predicating an algorithm on fulfillment of an interface constitutes moving behaviour to the algorithm that should be in that interface.
Essentially, an interface is a behavioural role. If you think OOP is a good idea, then you should use interfaces only to model behaviours, so that algorithms don’t have to. I don’t think what passes for OOP these days is in fact a good idea, so this is as far as my answer can be useful.
I'm currently trying to learn Ruby and I'm trying to understand more about what it offers in terms of encapsulation and contracts.
In C# a contract can be defined using an interface. A class which implements the interface must fulfil the terms within the contract by providing an implementation for each method and property (and maybe other things) defined. The individual class that implements an interface can do whatever it needs within the scope of the methods defined by the contract, so long as it accepts the same types of arguments and returns the same type of result.
Is there a way to enforce this kind of thing in Ruby?
Thanks
A simple example of what I mean in C#:
interface IConsole
{
    int MaxControllers { get; }
    void PlayGame(IGame game);
}

class Xbox360 : IConsole
{
    public int MaxControllers
    {
        get { return 4; }
    }

    public void PlayGame(IGame game)
    {
        InsertDisc(game);
        NavigateToMenuItem();
        Click();
    }
}

class NES : IConsole
{
    public int MaxControllers
    {
        get { return 2; }
    }

    public void PlayGame(IGame game)
    {
        InsertCartridge(game);
        TurnOn();
    }
}
There are no interfaces in Ruby, since Ruby is a dynamically typed language. Interfaces are basically used to make different classes interchangeable without breaking type safety. Your code can work with every console as long as it behaves like a console, which in C# means implementing IConsole. "Duck typing" is a keyword you can use to catch up with the dynamic languages' way of dealing with this kind of problem.
Furthermore, you can and should write unit tests to verify the behavior of your code. Every object has a respond_to? method you can use in your assertions.
Ruby has Interfaces just like any other language.
Note that you have to be careful not to conflate the concept of the Interface, which is an abstract specification of the responsibilities, guarantees and protocols of a unit with the concept of the interface which is a keyword in the Java, C# and VB.NET programming languages. In Ruby, we use the former all the time, but the latter simply doesn't exist.
It is very important to distinguish the two. What's important is the Interface, not the interface. The interface tells you pretty much nothing useful. Nothing demonstrates this better than the marker interfaces in Java, which are interfaces that have no members at all: just take a look at java.io.Serializable and java.lang.Cloneable; those two interfaces mean very different things, yet they have the exact same signature.
So, if two interfaces that mean different things, have the same signature, what exactly is the interface even guaranteeing you?
Another good example:
interface ICollection<T> : IEnumerable<T>, IEnumerable
{
    void Add(T item);
}
What is the Interface of System.Collections.Generic.ICollection<T>.Add?
that the length of the collection does not decrease
that all the items that were in the collection before are still there
that item is in the collection
And which of those actually shows up in the interface? None! There is nothing in the interface that says that the Add method must even add at all; it might just as well remove an element from the collection.
This is a perfectly valid implementation of that interface:
class MyCollection<T> : ICollection<T>
{
    public void Add(T item)
    {
        Remove(item);
    }

    // (remaining ICollection<T> members omitted for brevity)
}
Another example: where in java.util.Set<E> does it actually say that it is, you know, a set? Nowhere! Or more precisely, in the documentation. In English.
In pretty much all cases of interfaces, both from Java and .NET, all the relevant information is actually in the docs, not in the types. So, if the types don't tell you anything interesting anyway, why keep them at all? Why not stick just to documentation? And that's exactly what Ruby does.
Note that there are other languages in which the Interface can actually be described in a meaningful way. However, those languages typically don't call the construct which describes the Interface "interface", they call it type. In a dependently-typed programming language, you can for example express the properties that a sort function returns a collection of the same length as the original, that every element which is in the original is also in the sorted collection and that no bigger element appears before a smaller element.
So, in short: Ruby does not have an equivalent to a Java interface. It does, however, have an equivalent to a Java Interface, and it's exactly the same as in Java: documentation.
Also, just like in Java, Acceptance Tests can be used to specify Interfaces as well.
In particular, in Ruby, the Interface of an object is determined by what it can do, not by what class it is or what module it mixes in. Any object that has a << method can be appended to. This is very useful in unit tests, where you can simply pass in an Array or a String instead of a more complicated Logger, even though Array and Logger do not share an explicit interface apart from the fact that they both have a method called <<.
Another example is StringIO, which implements the same Interface as IO and thus a large portion of the Interface of File, but without sharing any common ancestor besides Object.
Interfaces are usually introduced to statically typed OO languages in order to make up for the lack of multiple inheritance. In other words, they are more of a necessary evil than something useful per se.
Ruby, on the other hand:
Is a dynamically typed language with "duck typing", so if you want to call method foo on two objects, they don't need to inherit from the same ancestor class or implement the same interface.
Supports multiple inheritance through the concept of mixins; again, no need for interfaces here.
Ruby doesn't really have them; interfaces and contracts generally live more in the static world, rather than the dynamic.
There is a gem called Handshake that can implement informal contracts, if you really need it.
Ruby uses the concept of Modules as a stand-in (kinda) for interfaces. Design Patterns in Ruby has a lot of really great examples on the differences between the two concepts and why ruby chooses the more flexible alternative to interfaces.
http://www.amazon.com/Design-Patterns-Ruby-Russ-Olsen/dp/0321490452
Jorg has a good point: Ruby has interfaces, just not the keyword. In reading some of the replies, I think this is a negative of dynamic languages. Instead of the language enforcing an interface, you must create unit tests instead of having a compiler catch methods not being implemented. It also makes methods harder to reason about, as you have to hunt down what an object is when you are trying to call methods on it.
Take as an example:
def my_func(options)
  ...
end
If you look at the function, you have no clue what options is and what methods or properties it should have, without hunting for the unit tests, finding other places it is called, and even looking inside the method. Worse yet, the method may not even use those options but pass them on to further methods. Why write unit tests for something that should have been caught by a compiler? The problem is that you must write code differently to express this downside in dynamic languages.
There is one upside to this, though: dynamic programming languages are FAST to write a piece of code in. I don't have to write any interface declaration, and later I can add new methods and parameters without going to the interface to expose them. The trade-off is speed of writing versus maintainability.
I recently started using functions to make casting easier on my fingers. For one instance, I had something like this:
((Dictionary<string, string>)value).Add(key, val);
and converted it to a tiny little helper function so I can do this
ToDictionary(value).Add(key, val);
Is this against the coding standards?
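(For context, the helper is presumably just a thin wrapper around the cast; this is a hypothetical reconstruction, since the original definition isn't shown:)

using System.Collections.Generic;

static class CastHelpers
{
    // Centralizes the cast so call sites stay short.
    public static Dictionary<string, string> ToDictionary(object value)
    {
        return (Dictionary<string, string>)value;
    }
}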
Also, what about simpler examples? For example in my scripting engine I've considered making things like this
((StringVariable)arg).Value="foo";
be
ToStringVar(arg).Value="foo";
I really just dislike how, in order to cast a value and instantly get a property from it, you must enclose it in double parentheses. I have a feeling the last one is much worse than the first one, though.
Ignoring for a moment that you may actually need to do this casting, which I personally doubt, if you really just want to "save your fingers", you can use a using alias directive to shorten the name of your generic types.
At the top of your file, with all the other usings:
using ShorterType = Dictionary<string, Dictionary<int, List<Dictionary<OtherType, ThisIsRidiculous>>>>;
I don't think so. You've also done something nice in that it's a bit easier to read and see what's going on. GLib (in C) provides casting macros for its classes, so this isn't a new concept. Just don't go overkill trying to save your fingers.
In general, I would consider this to be code smell. In most situations where the type of casting you describe is necessary, you could get the same behavior by proper use of interfaces (Java) or virtual inheritance (C++) in addition to generics/templates. It is much safer to leave that responsibility of managing types to the compiler than attempting to manage it yourself.
Without additional context, it is hard to say about the example you have included. There are certainly situations in which the type of casting you describe is unavoidable, but they're the exception rather than the rule. For example, the type of casting (and the associated helper functions/macros) you're describing is extremely commonplace in generic C libraries.
All right, first of all, I realize this sounds controversial, but I don't mean to be confrontational. I am asking a serious question out of genuine curiosity (or maybe puzzlement is a better word).
Why were extension methods ever introduced to .NET? What benefit do they provide, aside from making things look nice (and by "nice" I mean "deceptively like instance methods")?
To me, any code that uses an extension method like this:
Thing initial = GetThing();
Thing manipulated = initial.SomeExtensionMethod();
is misleading, because it implies that SomeExtensionMethod is an instance member of Thing, which misleads developers into believing (at least as a gut feeling... you may deny it but I've definitely observed this) that (1) SomeExtensionMethod is probably implemented efficiently, and (2) since SomeExtensionMethod actually looks like it's part of the Thing class, surely it will remain valid if Thing is revised at some point in the future (as long as the author of Thing knows what he/she's doing).
But the fact is that extension methods don't have access to protected members or any of the internal workings of the class they're extending, so they're just as prone to breakage as any other static methods.
We all know that the above could easily be:
Thing initial = GetThing();
Thing manipulated = SomeNonExtensionMethod(initial);
To me, this seems a lot more, for lack of a better word, honest.
What am I missing? Why do extension methods exist?
Extension methods were needed to make Linq work in the clean way that it does, with method chaining. If you have to use the "long" form, it causes the function calls and the parameters to become separated from each other, making the code very hard to read. Compare:
IEnumerable<int> r = list.Where(x => x > 10).Take(5);
versus
// What does the 5 do here?
IEnumerable<int> r = Enumerable.Take(Enumerable.Where(list, x => x > 10), 5);
Like anything, they can be abused, but extension methods are really useful when used properly.
I think that the main upside is discoverability. Type initial and a dot, and there you have all the stuff that you can do with it. It's a lot harder to find static methods tucked away in some class somewhere else.
First of all, in the Thing manipulated = SomeNonExtensionMethod(initial); case, SomeNonExtensionMethod is based on exactly the same assumptions as in the Thing manipulated = initial.SomeExtensionMethod(); case. Thing can change, and SomeExtensionMethod can break. That's life for us programmers.
Second, when I see Thing manipulated = initial.SomeExtensionMethod();, it doesn't tell me exactly where SomeExtensionMethod() is implemented. Thing could inherit it from TheThing, which inherits it from TheOriginalThing. So the "misleading" argument leads nowhere. I bet the IDE takes care of leading you to the right source, doesn't it?
What's so great about them? They make code more consistent. If something works on a string, it looks as if it were a member of string. It's ugly to have several MyThing.doThis() methods and several static ThingUtil.doSomethingElse(MyThing thing) methods in another class.
So you can extend someone else's class, not just your own; that's the advantage.
(And when you find yourself saying "oh, I wish they had implemented this or that", you can just do it yourself.)
They are great for automatically mixing in functionality based on the interfaces a class implements, without that class having to explicitly re-implement it.
Linq makes use of this a lot.
A great way to decorate classes with extra functionality, most effective when applied to an interface rather than a specific class. Still a good way to extend Framework classes, though.
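A minimal sketch of that (IShape, Area, and IsLargerThan are hypothetical names): a single extension method decorates every implementation of the interface at once.

interface IShape
{
    double Area();
}

static class ShapeExtensions
{
    // Every IShape implementation gets this method without re-implementing it.
    public static bool IsLargerThan(this IShape shape, IShape other)
    {
        return shape.Area() > other.Area();
    }
}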
It's just convenient syntactic sugar so that you can call a method with the same syntax regardless of whether it's actually part of the class. If party A releases a lib, and party B releases stuff that uses that lib, it's easier to just call everything with class.method(args) than to have to remember what gets called with method(class, args) vs. class.method(args).