Code analysis raises error CA1006 ("Do not nest generic types in member signatures") whenever we declare members with nested generic types in an interface contract. What is the best way of handling this so-called design issue? Any deep thoughts on this?
Thanks for taking your valuable time to go through this.
Example:
Task<IList<Employee>> LoadAllEmployeeAsync();
CA1006: Do not nest generic types in member signatures
I think the rule is pretty clear. The reasoning behind it is that whoever uses your class must go through a complex process to instantiate the complex parameter(s), which decreases the adoption rate of new libraries.
However, if we think about it, the rule does not make much sense in this context. First of all, this is a nested generic return type, which is not as bad as a similarly nested parameter. Secondly, I don't think the rule was designed with async methods in mind.
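For instance, a signature like the following (a made-up example, not from the question) is painful for callers, because they must assemble the whole nested structure by hand before the call:
void ProcessReport(IDictionary<string, IList<KeyValuePair<int, string>>> rows);
// Every caller ends up writing something like this first:
var rows = new Dictionary<string, IList<KeyValuePair<int, string>>>();
rows["sales"] = new List<KeyValuePair<int, string>> { new KeyValuePair<int, string>(1, "Q1") };
ProcessReport(rows);
A nested generic return type imposes no such burden; the callee builds the structure for you.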
I suggest suppressing it on the methods that exhibit this return type. Don't abuse it: make sure to place the suppression only on async methods, and only when the return type is complex:
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification="This is an async method.")]
Task<IList<Employee>> LoadAllEmployeeAsync();
I'm currently trying to learn Ruby and I'm trying to understand more about what it offers in terms of encapsulation and contracts.
In C# a contract can be defined using an interface. A class which implements the interface must fulfil the terms within the contract by providing an implementation for each method and property (and maybe other things) defined. The individual class that implements an interface can do whatever it needs within the scope of the methods defined by the contract, so long as it accepts the same types of arguments and returns the same type of result.
Is there a way to enforce this kind of thing in Ruby?
Thanks
A simple example of what I mean in C#:
interface IConsole
{
    int MaxControllers { get; }
    void PlayGame(IGame game);
}

class Xbox360 : IConsole
{
    public int MaxControllers
    {
        get { return 4; }
    }

    public void PlayGame(IGame game)
    {
        InsertDisc(game);
        NavigateToMenuItem();
        Click();
    }
}

class NES : IConsole
{
    public int MaxControllers
    {
        get { return 2; }
    }

    public void PlayGame(IGame game)
    {
        InsertCartridge(game);
        TurnOn();
    }
}
There are no interfaces in Ruby, since Ruby is a dynamically typed language. Interfaces are basically used to make different classes interchangeable without breaking type safety. Your code can work with every console as long as it behaves like a console, which in C# means implementing IConsole. "Duck typing" is the term to look up to catch up with the dynamic languages' way of dealing with this kind of problem.
Furthermore, you can and should write unit tests to verify the behavior of your code. Every object has a respond_to? method you can use in your assertions.
Ruby has Interfaces just like any other language.
Note that you have to be careful not to conflate the concept of the Interface, which is an abstract specification of the responsibilities, guarantees and protocols of a unit, with the concept of the interface, which is a keyword in the Java, C# and VB.NET programming languages. In Ruby, we use the former all the time, but the latter simply doesn't exist.
It is very important to distinguish the two. What's important is the Interface, not the interface. The interface tells you pretty much nothing useful. Nothing demonstrates this better than the marker interfaces in Java, which are interfaces that have no members at all: just take a look at java.io.Serializable and java.lang.Cloneable; those two interfaces mean very different things, yet they have the exact same signature.
So, if two interfaces that mean different things have the same signature, what exactly is the interface even guaranteeing you?
Another good example:
interface ICollection<T> : IEnumerable<T>, IEnumerable
{
    void Add(T item);
}
What is the Interface of System.Collections.Generic.ICollection<T>.Add?
that the length of the collection does not decrease
that all the items that were in the collection before are still there
that item is in the collection
And which of those actually shows up in the interface? None! There is nothing in the interface that says the Add method must even add at all; it might just as well remove an element from the collection.
This is a perfectly valid implementation of that interface:
class MyCollection<T> : ICollection<T>
{
    public void Add(T item)
    {
        Remove(item);
    }

    // remaining ICollection<T> members elided
}
Another example: where in java.util.Set<E> does it actually say that it is, you know, a set? Nowhere! Or more precisely, in the documentation. In English.
In pretty much all cases of interfaces, both from Java and .NET, all the relevant information is actually in the docs, not in the types. So, if the types don't tell you anything interesting anyway, why keep them at all? Why not stick just to documentation? And that's exactly what Ruby does.
Note that there are other languages in which the Interface can actually be described in a meaningful way. However, those languages typically don't call the construct which describes the Interface "interface", they call it type. In a dependently-typed programming language, you can for example express the properties that a sort function returns a collection of the same length as the original, that every element which is in the original is also in the sorted collection and that no bigger element appears before a smaller element.
So, in short: Ruby does not have an equivalent to a Java interface. It does, however, have an equivalent to a Java Interface, and it's exactly the same as in Java: documentation.
Also, just like in Java, Acceptance Tests can be used to specify Interfaces as well.
In particular, in Ruby, the Interface of an object is determined by what it can do, not what class it is or what module it mixes in. Any object that has a << method can be appended to. This is very useful in unit tests, where you can simply pass in an Array or a String instead of a more complicated Logger, even though Array and Logger do not share an explicit interface apart from the fact that they both have a method called <<.
Another example is StringIO, which implements the same Interface as IO and thus a large portion of the Interface of File, but without sharing any common ancestor besides Object.
Interfaces are usually introduced to statically typed OO languages in order to make up for the lack of multiple inheritance. In other words, they are more of a necessary evil than something useful per se.
Ruby, on the other hand:
Is a dynamically typed language with "duck typing", so if you want to call method foo on two objects, they need neither inherit from the same ancestor class nor implement the same interface.
Supports multiple inheritance through the concept of mixins, so again there is no need for interfaces here.
Ruby doesn't really have them; interfaces and contracts generally live more in the static world, rather than the dynamic.
There is a gem called Handshake that can implement informal contracts, if you really need it.
Ruby uses the concept of Modules as a stand-in (kinda) for interfaces. Design Patterns in Ruby has a lot of really great examples of the differences between the two concepts and of why Ruby chooses the more flexible alternative to interfaces.
http://www.amazon.com/Design-Patterns-Ruby-Russ-Olsen/dp/0321490452
Jorg has a good point: Ruby has interfaces, just not the keyword. In reading some of the replies, I think this is a negative in dynamic languages. Instead of having the language enforce an interface, you must create unit tests instead of having a compiler catch unimplemented methods. It also makes methods harder to reason about, as you have to hunt down what an object is when you are trying to call it.
Take as an example:
def my_func(options)
...
end
If you look at the function, you have no clue what options is and what methods or properties it should have without hunting for the unit tests, looking at other places it is called, and even reading the method body. Worse yet, the method may not even use those options but pass them on to further methods. Why write unit tests for something a compiler could have caught? The problem is that you must write code differently to offset this downside of dynamic languages.
There is one upside to this, though: dynamic programming languages are FAST for writing a piece of code. I don't have to write any interface declaration, and later I can add new methods and parameters without going to the interface to expose them. The trade-off is writing speed for maintainability.
That is the question: how big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library. However, I am not a fan, to say the least, and not just for aesthetic reasons; I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway.
So would not using this convention cause confusion?
Are there any C# projects or libraries of note that drop this convention?
Are there any C# projects that mix conventions, as Apache Wicket unfortunately does?
The Java class libraries have existed without this for many years; I don't feel I have ever struggled to read code without it. Also, should the interface not be the most primitive description? I mean, instead of IList<T> as an interface for List<T> as in C#, is it not better to have List<T> as the interface, with LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T> as the classes whose names describe the implementation? I think I get more information there than I do from List<T> in C#.
The difference between Java and C# is that Java allows you to easily distinguish whether you implement an interface or extend a class since it has the corresponding keywords implements and extends.
As C# only has the : to express either an implementation or extension, I recommend following the standard and put an I before an interface's name.
It's bad practice in my opinion too. The reasons, in addition to yours, are:
The whole purpose of interfaces is to abstract away implementation details. So it shouldn't matter whether you call a method with an IParam or a Param.
Sophisticated tools have their own ways to mark interfaces, e.g. with an icon.
If your eye is searching an IDE for a name, the most significant part is the beginning of the string. If your classes get sorted alphabetically, you now have a block of similar names all starting with I. They look alike, while it would be an advantage to distinguish them easily. An I-prefix is ergonomically wrong.
Even more annoying: ImplList, ImplThat, AFoo for an abstract Foo, AImplFooBar for an abstract Foo that implements Bar? SSomething for a singleton, or SMath for a static class? Stop it! :)
With respect, in your post you are only considering your needs (I, I, I), and not the needs of the readers of your code. If you are a one-man shop, then fair enough; but if your code is ever read by others, consider that they will be expecting interfaces to have an I prefix. That is just the way it is in .NET, and too many people are used to it to change now.
Also, it would help if you used more readable names for classes. What is PSec? How can I tell whether IPSec is an interface, when I can't even tell what PSec is? If instead PSec was renamed to e.g., PersonalSecurity, then IPersonalSecurity is much more likely to be an interface.
Using I for interfaces goes against the whole point of an interface, IMO: that it is a connector into which you can plug different concrete implementations of your dependencies.
An object that uses the database needs a DataStore, not an IDataStore, and it should be up to configuration whether that gets a DatabaseDataStore or a FileSystemDataStore or whatever plugged into it (or a MockDataStore for testing).
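A minimal sketch of that style, with entirely hypothetical names (DataStore as an abstract base instead of an IDataStore interface):
abstract class DataStore
{
    public abstract void Save(string key, string value);
}

class DatabaseDataStore : DataStore
{
    public override void Save(string key, string value) { /* write to the database */ }
}

class MockDataStore : DataStore
{
    public override void Save(string key, string value) { /* record the call for test assertions */ }
}

class OrderService
{
    private readonly DataStore _store;

    // Configuration (or a DI container) decides which concrete store gets plugged in.
    public OrderService(DataStore store) { _store = store; }
}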
Read this and move on. If you're using Java, follow the Java naming conventions.
It's not a sin per se; it's best practice. It makes things a lot more readable all in all. Also, think about it: IMyClass is the interface to MyClass. It just makes sense, and stops unnecessary confusion. Also remember the : syntax vs. implements/extends. Lastly, you can bypass all of this by simply checking the tooltips/Go To Definition in VS, but for pure readability, the standard is important in my opinion.
Not that I'm aware of, but I'm sure they exist.
Haven't seen any, but I'm sure they exist.
I think the main reason for the I-prefix is not that those using a type can see it's an interface, but that those implementing or deriving from existing classes and interfaces can more easily see whether it's an interface or a base class.
Another advantage is that it prevents stupid things like (If my Java memory serves me correctly):
List foo = new List(); // Why does it fail?
The third advantage is refactoring. If you move through your objects and read the code you can see where you forgot to code-by-interface. "A method accepting something with a type not prefixed with I? Fix it!".
I used it even in Java and found it quite useful, but it always depends on the guidelines of your company/team. Follow them, no matter how stupid you may think they are; some day you will be happy they exist.
Ask yourself: If my IDE could give me some hint in the text (e.g different colour, underline, italic...) that the type was an interface would I still bother?
Sounds like you are naming the types like that just so you can tell from the name something about parts of the definition other than the name.
Best practices override convention sometimes, in my opinion. While I may not personally like the convention, not using it goes against the best practice that has been in place for longer than I care to think about.
I would look at it more from the point of how other people do it, in this case. Since 99% of the common world will be prefacing with the "I", that is good enough to keep this best practice. If you have to bring in a contractor or on-board a new developer, you should be able to focus on the code and not have to explain/defend choices that you made.
It has been around long enough, and is ingrained well enough, that I don't expect it to change in my lifetime. It is just one of those "unwritten rules", better defined as an "unwritten best practice", that will probably outlive me.
I would say that not following this convention would get you down to .NET hell. It's a convention that's almost as important to me as using self in instance methods in Python.
I don't see any good reason to do this. 'Extends' vs 'implements' already tells you whether you are dealing with a class or an interface in the cases where it actually matters. In all other cases the whole idea is that you don't care.
In my opinion the biggest reason "I" is often prefixed is that the IDEs for both Java (Eclipse) and .NET (Visual Studio) do not make it very clear that the type you are looking at is in fact an interface. The package browser in Eclipse shows the same icon until you expand the class file, and the font of an interface declaration is no different from that of a class.
An Example would be if I type:
ISomeInterface s = factory.create();
ISomeInterface should at least have some sort of font modification to show that it's an interface (like italics or underline).
The other big reason is in the Java world that people prefix with "I" is that it makes it easier in Eclipse to do a "Ctrl-Shift-R" and search for only interfaces.
This is important in the Java/Spring world where you need interfaces as your collaborators if you plan on using any AOP magic or some other Dynamic proxies.
Then you have the nasty choice of either prefixing your interface with "I" or suffixing your implementation class with "Impl", like ListImpl. I abhor suffixing classes with "Impl" to make the interface and the concrete class differ in name, so I prefer the "I" prefix.
In general I try to avoid making lots of interfaces.
In my own code I would never prefix with "I". I'm only giving some reasons why people do it, which mostly come down to consistency with old code.
Conventions exist to help all of us. If there is a chance another .NET developer will be working with you, then yes, follow the convention.
One idea is that the "I" can be followed by a verb stating what classes that implement the interface do, like ISaveXmlData, forming a nice human-language name.
The key thing is consistency: as long as you stick to prefixing all interfaces with I, or none at all, it's a matter of preference.
I use the I prefix for interfaces at work since the existing code already uses it for a naming convention for each interface. I find it more intuitive to quickly determine if a class implements an interface or another class simply by looking for the I prefix in the name of the base class.
On the other hand, some of the older projects at work don't use this naming convention and this makes the code slightly less readable, but it might just be that I'm used to the prefix.
Look at the BCL. In the Base Class Library you have IList<>, IQueryable, and IDisposable.
If you don't prepend the 'I', how would people know it's an interface other than by going to the definition?
Anyways, just my 2 cents
You can choose all the names in your program however you like, but it's a good idea to follow the naming conventions; otherwise you may be the only one able to read the program.
Interfaces are useful not only when you design your own classes and interfaces; in some cases using them shifts the emphasis within your program. For example, you can write code like
SqlDataReader dr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read()) {
    string name = dr.GetString(0);
    // ...
}
or like
IDataReader dr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read()) {
    string name = dr.GetString(0);
    // ...
}
The last one looks almost the same, but if you use IDataReader instead of SqlDataReader, it becomes easier to move the parts that work with dr into a method that works not only with the SqlDataReader class but also with OleDbDataReader, OracleDataReader, OdbcDataReader, etc. On the other hand, your program keeps working exactly as quickly as before.
Updated (based on questions from comments):
The advantage, as I wrote before, comes when you separate the parts of your code that work with IDataReader. For example, you can define a delegate T ReadRowFromDataReader<T>(IDataReader dr, ...) and use it inside the while (dr.Read()) block. That way you write code that is more general than code working with SqlDataReader directly. Inside the while (dr.Read()) block you call rowReader(dr, ...). Your different implementations of row-reading code can be placed in methods matching the ReadRowFromDataReader<T> signature and passed as actual parameters.
This way you can write more database-independent code. At first the usage of a generic delegate may look a little complex, but all the code will be really easy to read. I want to stress one more time that you only gain these advantages from interfaces if you separate parts of the code into another method. If you don't separate the code, the only advantage you receive is that the code parts are written more independently, so you can copy and paste them into another program more easily.
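A sketch of what that separation could look like; the ReadRowFromDataReader<T> delegate comes from the description above (extra parameters dropped for brevity), while ReadAll and the usage line are illustrative names of mine:
public delegate T ReadRowFromDataReader<T>(IDataReader dr);

public static List<T> ReadAll<T>(IDataReader dr, ReadRowFromDataReader<T> rowReader)
{
    var results = new List<T>();
    while (dr.Read())
        results.Add(rowReader(dr)); // provider-agnostic: works for SqlDataReader, OleDbDataReader, ...
    return results;
}

// Usage with any IDataReader:
// List<string> names = ReadAll<string>(dr, r => r.GetString(0));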
Names starting with 'I' make it easier to understand that we are now working with something more general than one specific class.
I stick to the convention only because I have to, if I am to use any interfaces in the BCL and maintain consistency.
I don't like the convention, either.
I can't believe that so many people hate the 'I' prefix. I love it.
Here is why:
Are abstract classes and interfaces different? Yes.
Do I care about the difference as a developer? Yes, but not always.
When do I need to care?
Design discussions (when I draw on the board, the prefix 'I' clearly tells everyone it's an interface).
Reading existing code (when I see the prefix 'I', I immediately know it's an interface; there are exceptions for other words starting with 'I', but very few).
Do I always need the 'I'? No. But I want consistency, so YES.
Just the one prefix 'I' avoids so much communication overhead.
I think the real question in the case of .NET should be: why do we ever need to distinguish between a class and an interface in client code?
And for C# and .NET there is a shameful answer: because someone added language support for explicit interface implementations. That feature is, in my opinion, a complete mess, because it allows breaking the Single Responsibility Principle in a way that is invisible to the caller. Let's assume we have an IList interface and a List class.
It is only by convention that List.Count() does the same thing for the class as IList.Count() does; normally you can't be so sure. As far as I'm concerned, explicit interface implementation is a hidden form of method overloading done in the most wrong way possible. Let's assume, as in old native languages, that the instance reference is the first argument of a called method.
Now we have int Count(IList list) and int Count(List list). From the language's point of view these are two separate methods that clearly advertise their responsibilities: one works with the more abstract IList, the other with the specific implementation List. And there it is clearly visible! No one would expect both methods to return the same value, because the more specific method may consult extra properties, etc. In C#'s explicit interface implementation form, however, it is not obvious, because the caller is not aware of which method is actually used: the compiler knows, but I as a programmer might be unaware.
Unless I know whether I am calling a class method or an interface method! I think this is the source of this somewhat silly convention for interfaces. If you use types named without the "I" prefix, especially in method arguments and return types, you may be unaware of whether you are calling a class instance method or an interface method.
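A contrived sketch of the problem being described, with hypothetical names (ICounter, Sneaky); nothing at the call site reveals which Count body runs:
interface ICounter
{
    int Count();
}

class Sneaky : ICounter
{
    // Called when the variable is declared as Sneaky.
    public int Count() { return 42; }

    // Called when the same object is used through an ICounter reference.
    int ICounter.Count() { return -1; }
}

// Sneaky s = new Sneaky();
// s.Count();             // 42
// ((ICounter)s).Count(); // -1: same object, different method, invisible to the caller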
As a good programmer using SOLID principles, you should work with interfaces all the time, as long as it is possible, especially if you are aware of explicit implementations.
This is, in my opinion, the hidden purpose of naming C# interfaces this way: to cover for the bad design of explicit interface implementations. You may not agree, but think twice about it: how could you ever add a method-overloading feature that is effectively hidden from the calling site without expecting that a naming convention would naturally appear in order to manage it?
I'm currently writing some code for UnconstrainedMelody which has generic methods to do with enums.
Now, I have a static class with a bunch of methods which are only meant to be used with "flags" enums. I can't add this as a constraint... so it's possible that they'll be called with other enum types too. In that case I'd like to throw an exception, but I'm not sure which one to throw.
Just to make this concrete, if I have something like this:
// Returns a value with all bits set by any values
public static T GetBitMask<T>() where T : struct, IEnumConstraint
{
    if (!IsFlags<T>()) // This method doesn't throw
    {
        throw new ???
    }
    // Normal work here
}
What's the best exception to throw? ArgumentException sounds logical, but it's a type argument rather than a normal argument, which could easily confuse things. Should I introduce my own TypeArgumentException class? Use InvalidOperationException? NotSupportedException? Anything else?
I'd rather not create my own exception for this unless it's clearly the right thing to do.
NotSupportedException sounds like it plainly fits, but the documentation clearly states that it should be used for a different purpose. From the MSDN class remarks:
There are methods that are not supported in the base class, with the expectation that these methods will be implemented in the derived classes instead. The derived class might implement only a subset of the methods from the base class, and throw NotSupportedException for the unsupported methods.
Of course, there's a way in which NotSupportedException is obviously good enough, especially given its common-sense meaning. Having said that, I'm not sure if it's just right.
Given the purpose of Unconstrained Melody ...
There are various useful things that can be done with generic methods/classes where there's a type constraint of "T : enum" or "T : delegate" - but unfortunately, those are prohibited in C#.
This utility library works around the prohibitions using ildasm/ilasm ...
... it seems like a new Exception might be in order despite the high burden of proof we justly have to meet before creating custom Exceptions. Something like InvalidTypeParameterException might be useful throughout the library (or maybe not - this is surely an edge case, right?).
Will clients need to be able to distinguish this from BCL Exceptions? When might a client accidentally call this using a vanilla enum? How would you answer the questions posed by the accepted answer to What factors should be taken into consideration when writing a custom exception class?
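If the custom route wins out, a minimal sketch of such an exception, using the InvalidTypeParameterException name suggested above and the standard constructor pattern, might look like:
[Serializable]
public class InvalidTypeParameterException : Exception
{
    public InvalidTypeParameterException() { }
    public InvalidTypeParameterException(string message) : base(message) { }
    public InvalidTypeParameterException(string message, Exception inner) : base(message, inner) { }
}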
I would avoid NotSupportedException. This exception is used in the framework where a method is not implemented and there is a property indicating that this type of operation is not supported. It doesn't fit here.
Shameless self-reference: http://blogs.msdn.com/jaredpar/archive/2008/12/12/notimplementedexception-vs-notsupportedexception.aspx
I think InvalidOperationException is the most appropriate exception you could throw here.
Generic programming should not throw at runtime for invalid type parameters; it should fail to compile, so that you have compile-time enforcement. I don't know what IsFlags<T>() contains, but perhaps you can turn this into a compile-time check, such as trying to create a type that can only be constructed from "flags" enums. Perhaps a traits class can help.
Update
If you must throw, I'd vote for InvalidOperationException. The reasoning is that generic types have parameters and errors related to (method) parameters are centered around the ArgumentException hierarchy. However, the recommendation on ArgumentException states that
if the failure does not involve the arguments themselves, then InvalidOperationException should be used.
There is at least one leap of faith in there: that the recommendations for method parameters also apply to generic type parameters. But there isn't anything better in the SystemException hierarchy, IMHO.
I would use NotSupportedException as that is what you are saying. Other enums than the specific ones are not supported. This would of course be stated more clearly in the exception message.
I'd go with NotSupportedException. While ArgumentException looks fine, it's really expected when an argument passed to a method is unacceptable. A type argument is a defining characteristic for the actual method you want to call, not a real "argument." InvalidOperationException should be thrown when the operation you're performing can be valid in some cases but for the particular situation, it's unacceptable.
NotSupportedException is thrown when an operation is inherently unsupported. For instance, when implementing an interface where a particular member doesn't make sense for a class. This looks like a similar situation.
Apparently, Microsoft uses ArgumentException for that, as demonstrated by the Exceptions sections of, for example, Expression.Lambda<>, Enum.TryParse<>, and Marshal.GetDelegateForFunctionPointer<>. I couldn't find any example indicating otherwise, either (despite searching the local reference source for TDelegate and TEnum).
So I think it's safe to assume that, at least in Microsoft code, it is common practice to use ArgumentException for invalid generic type arguments, not just for ordinary arguments. Given that the exception's description in the docs doesn't discriminate between the two, it's not too much of a stretch, either.
Hopefully that settles the question once and for all.
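Applied to the GetBitMask<T> method from the question, that practice would look something like this sketch (the message text and the placeholder return are mine):
public static T GetBitMask<T>() where T : struct, IEnumConstraint
{
    if (!IsFlags<T>())
    {
        // Follows the BCL precedent of ArgumentException for bad type arguments.
        throw new ArgumentException(typeof(T).Name + " is not a flags enum.");
    }
    return default(T); // placeholder for the normal work
}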
I'd go with NotSupportedException.
Throwing a custom-made exception should always be done in any case where it is questionable. A custom exception will always work, regardless of the API user's needs. The developer could catch either exception type if he does not care, but if he needs special handling he will be SOL otherwise.
I'm always wary of writing custom exceptions, purely on the grounds that they aren't always documented clearly and cause confusion if not named correctly.
In this case I would throw an ArgumentException for the flags check failure. It's all down to preference really. Some coding standards I've seen go as far as to define which types of exceptions should be thrown in scenarios like this.
If the user was trying to pass in something which wasn't an enum then I would throw an InvalidOperationException.
Edit:
The others raise an interesting point that this is not supported. My only concern with a NotSupportedException is that generally those are the exceptions that get thrown when "dark matter" has been introduced to the system, or to put it another way, "This method must go into the system on this interface, but we won't turn it on until version 2.4"
I've also seen NotSupportedExceptions be thrown as a licensing exception "you're running the free version of this software, this function is not supported".
Edit 2:
Another possible one:
System.ComponentModel.InvalidEnumArgumentException
The exception thrown when using invalid arguments that are enumerators.
I'd also vote for InvalidOperationException. I did an (incomplete) flowchart on .NET exception throwing guidelines based on Framework Design Guidelines 2nd Ed. awhile back if anyone's interested.
How about inheriting from NotSupportedException? While I agree with @Mehrdad that it makes the most sense, I hear your point that it doesn't seem to fit perfectly. So inherit from NotSupportedException; that way, people coding against your API can still catch a NotSupportedException.
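For example (FlagsEnumRequiredException is a hypothetical name):
public class FlagsEnumRequiredException : NotSupportedException
{
    public FlagsEnumRequiredException(Type enumType)
        : base(enumType.Name + " is not a [Flags] enum.") { }
}
Callers who don't know about the derived type can still write catch (NotSupportedException) and handle it.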
In a response to this question, runefs suggested that "unless you have a very specific reason for using IList you should consider IEnumerable". Which do you use, and why?
IEnumerable<T> is read-only; you have to reconstruct the collection to make changes to it. On the other hand, IList<T> is read-write. So if you expect a lot of changes to the collection, expose IList<T>; but if it's safe to assume that you won't modify it, go with IEnumerable<T>.
Always use the most restrictive interface that provides the features you need, because that gives you the most flexibility to change the implementation later. So, if IEnumerable<T> is enough, then use that... if you need list-features, use IList<T>.
And preferably use the strongly typed generic versions.
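A small sketch of that guideline, with hypothetical methods:
// Enumeration is all this needs, so it asks for nothing more than IEnumerable<T>.
static int CountPositive(IEnumerable<int> numbers)
{
    int count = 0;
    foreach (int n in numbers)
        if (n > 0) count++;
    return count;
}

// This one needs indexed write access, so it asks for IList<T>.
static void ZeroFirst(IList<int> numbers)
{
    if (numbers.Count > 0) numbers[0] = 0;
}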
The principle I follow is one I read a while back:
"Consume the simplest and expose the
most complex"
(I'm sure this is really common and I'm mis-quoting it, if anyone can knows the source can you leave a comment or edit please...)
(Edited to add - well, here I am a couple of weeks later and I've just ran into this quote in a completely different context, it looks like it started as the Robustness Principle or Postel's Law -
Be conservative in what you do; be liberal in what you accept from others.
The original definition is for network communication over the internet, but I'm sure I've seen it repurposed for defining class contracts in OO.)
Basically, if you're defining a method for external consumption then the parameters should be the most basic type that gives you the functionality you require - in the case of your example this may mean taking in an IEnumerable instead of an IList. This gives client code the most flexibility over what they can pass in. On the other hand if you are exposing a property for external consumption then do so with the most complex type (IList or ICollection instead of IEnumerable) since this gives the client the most flexibility in the way they use the object.
I'm finding the conversation between myself and DrJokepu in the comments fascinating, but I also appreciate that this isn't supposed to be a discussion forum so I'll edit my answer to further outline the reasons behind my choice to buck the trend and suggest that you expose it as an IList (well a List actually, as you'll see). Let's say this is the class we are talking about:
public class Example
{
    private List<int> _internal = new List<int>();

    public /*To be decided*/ External
    {
        get { return _internal; }
    }
}
So first of all let's assume that we are exposing External as an IEnumerable. The reasons I have seen for doing this in the other answers and comments are currently:
IEnumerable is read-only
IEnumerable is the de facto standard
It reduces coupling so you can change the implementation
You don't expose implementation details
While IEnumerable only exposes read-only functionality, that doesn't make the object read-only. The real type you are returning is easily found out by reflection, or simply by pausing a debugger and looking, so it is trivial to cast it back to List<>; the same applies to the FileStream example in the comments below. If you are trying to protect the member, then pretending it is something else isn't the way to do it.
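For instance, with the Example class above, assuming External is typed as IEnumerable<int>, nothing stops a caller from doing this:
var example = new Example();
IEnumerable<int> numbers = example.External;
((List<int>)numbers).Add(42); // compiles and succeeds: the real type is still List<int>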
I don't believe it is the de facto standard. I can't find any code in the .NET 3.5 library where Microsoft had the option to return a concrete collection and returned an interface instead. IEnumerable is more common in 3.5 thanks to LINQ, but that is because those methods can't expose a more derived type, not because they don't want to.
Now, reducing coupling I agree with, to a point. Basically this argument says you are telling the client: while the object I return is a list, I want you to treat it as an IEnumerable, just in case I decide to change the internal code later. Ignoring the problems with this, and the fact that the real type is exposed anyway, it still leaves the question "why stop at IEnumerable?" If you return it as an object type, you'll have complete freedom to change the implementation to anything! This means you must have made a decision about how much functionality the client code requires, so either you are writing the client code, or the decision is based on arbitrary metrics.
Finally, as previously discussed, you can assume that implementation details are always exposed in .NET, especially if you are publicly exposing the object.
So, after that long diatribe, there is one question left: why expose it as a List<>? Well, why not? You haven't gained anything by exposing it as an IEnumerable, so why artificially limit the client code's ability to work with the object? Imagine if Microsoft had decided that the Controls collection of a WinForms Control should appear as IEnumerable instead of ControlCollection: you'd no longer be able to pass it to methods that require an IList, work on items at an arbitrary index, or see if it contains a particular control, unless you cast it. Microsoft wouldn't really have gained anything, and it would just inconvenience you.
When I only need to enumerate the children, I use IEnumerable. If I happen to need Count, I use ICollection. I try to avoid that, though, because it exposes implementation details.
IList<T> is indeed a very beefy interface. I prefer to expose Collection<T>-derived types. This is pretty much in line with what Jeffrey Richter suggests (I don't have the book nearby, so I can't give a page/chapter number): methods should accept the most common types as parameters and return the most "derived" types as return values.
If a read-only collection is exposed via a property, then the conventional way to do it is to expose it as ReadOnlyCollection (or a derived class of that) wrapping whatever you have there. It exposes full capabilities of IList to the client, and yet it makes it very clear that it is read-only.
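A minimal sketch of that convention, reusing the Example class from earlier (ReadOnlyCollection<T> lives in System.Collections.ObjectModel):
public class Example
{
    private readonly List<int> _internal = new List<int>();

    // A read-only wrapper over the live list: callers can see changes but cannot make them.
    public ReadOnlyCollection<int> External
    {
        get { return _internal.AsReadOnly(); }
    }
}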