Are default parameters bad practice in OOP? - c#

Do default parameters for methods violate Encapsulation?
What was the rationale behind not providing default parameters in C#?

I would take this as the "official" answer from Microsoft. However, default (and named) parameters will most definitely be available in C# 4.0.

No, it doesn't affect encapsulation in any way. It simply isn't often necessary. Often, creating an overload that takes fewer arguments is a more flexible and cleaner solution, so C#'s designers simply did not see a reason to add the complexity of default parameters to the language.
Adding "Another way to do the same thing" is always a tradeoff. In some cases it may be convenient. But the more syntax you make legal, the more complex the language becomes to learn, and the more you may wall yourself in, preventing future extension. (Perhaps they'd one day come up with another extension to the language, which uses a similar syntax. Then that'd be impossible to add, because it'd conflict with the feature they added earlier)

As has been noted, default parameters were not a prioritized feature, but they are likely to be added in C# 4.0. However, I believe there were excellent reasons not to include them earlier (in 4.0, as I've understood it, it's mostly to support duck-typing styles of programming, where default parameters increase type compatibility).
I believe excessive parameter lists (certainly more than 4-5 distinct parameters) are a code smell. Default parameters are not evil in themselves, but they risk encouraging poor design, delaying the refactoring into more objects.

To your first question - no, it's exactly the same as providing multiple overloaded constructors. As for the second, I couldn't say.

Default parameters will be included in C# 4.0
Some reading material about it:
click
click
It also seems that the author of this post will publish an article in the near future on why MS chose to implement default params in C#.

Here is an answer to why it's not provided in C#:
http://blogs.msdn.com/csharpfaq/archive/2004/03/07/85556.aspx

One drawback of the default parameter implementation in C# 4.0 is that it creates a dependency on the parameter names. This already existed in VB, which could be one reason why they chose to implement it in 4.0.
Another drawback is that the default value depends on how you cast your object. You can read about it here: http://saftsack.fs.uni-bayreuth.de/~dun3/archives/optional-parameters-conclusion-treat-like-unsafe/216.html
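To make that drawback concrete, here is a minimal sketch (hypothetical IGreeter/Greeter names): the compiler bakes the default value into each call site based on the compile-time type of the reference, so a cast changes which default applies.

using System;

interface IGreeter
{
    void Greet(string name = "interface default");
}

class Greeter : IGreeter
{
    public void Greet(string name = "class default") => Console.WriteLine(name);
}

class CastDefaultDemo
{
    static void Main()
    {
        var greeter = new Greeter();
        greeter.Greet();             // prints "class default"
        ((IGreeter)greeter).Greet(); // prints "interface default"
    }
}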

Related

C++ named arguments, like C# [duplicate]

I've looked at both the Named Parameter Idiom and the Boost::Parameter library. What advantages does each one have over the other? Is there a good reason to always choose one over the other, or might each of them be better than the other in some situations (and if so, what situations)?
Implementing the Named Parameter Idiom is really easy, almost about as easy as using Boost::Parameter, so it kind of boils down to one main point.
Do you already have Boost dependencies? If you don't, Boost.Parameter isn't special enough to merit adding the dependency.
Personally I've never seen Boost.Parameter in production code; 100% of the time it's been a custom implementation of Named Parameters, but that's not necessarily a good thing.
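(The idiom itself is a C++ technique; since the rest of this page uses C#, here is a rough, purely hypothetical C# transliteration just to show the shape: a small parameter object whose chained setters return this, so call sites read like named arguments.)

using System;

class WindowParams
{
    // Defaults live here; each setter returns 'this' so calls can chain.
    public string TitleValue { get; private set; } = "untitled";
    public int WidthValue { get; private set; } = 640;
    public int HeightValue { get; private set; } = 480;

    public WindowParams Title(string title) { TitleValue = title; return this; }
    public WindowParams Width(int width) { WidthValue = width; return this; }
    public WindowParams Height(int height) { HeightValue = height; return this; }
}

class Window
{
    public Window(WindowParams p) =>
        Console.WriteLine(p.TitleValue + ": " + p.WidthValue + "x" + p.HeightValue);
}

class NamedParameterIdiomDemo
{
    static void Main() =>
        new Window(new WindowParams().Title("Demo").Width(800)); // Height stays 480
}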
Normally, I'm a big fan of Boost, but I wouldn't use the Boost.Parameter library for a couple of reasons:
If you don't know what's going on, the call looks like you're assigning a value to a variable in the scope of the calling function before making the call. That can be very confusing.
There is too much boilerplate code necessary to set it up in the first place.
Another point: while I have never used the Named Parameter Idiom, I have used Boost.Parameter for defining up to 20 optional arguments, and my compile times are insane. What used to take a couple of seconds now takes 30 seconds. This adds up if you have a whole library of components that use Boost.Parameter. Of course, I might be implementing it wrongly, but I hope this changes, because other than that, I really like it.
The Named Parameter Idiom is a LOT simpler. I can't see (right now) why we would need the complexity of the Boost.Parameter library. (Even the supposed "feature" of deduced parameters seems like a way to introduce coding errors ;) )
You probably don't want Boost.Parameter for general application logic so much as you would want it for library code that you are developing where it can be quite a time saver for clients of the library.
Never heard of either, but reviewing the links, named parameter is WAY easier and more obvious to understand. I'd pick it in a heartbeat over the boost implementation.

Why does C# allow ambiguous function calls through optional arguments?

I came across this today, and I am surprised that I haven't noticed it before. Given a simple C# program similar to the following:
public class Program
{
    public static void Main(string[] args)
    {
        Method();           // Called the method with no arguments.
        Method("a string"); // Called the method with a string.
        Console.ReadLine();
    }

    public static void Method()
    {
        Console.WriteLine("Called the method with no arguments.");
    }

    public static void Method(string aString = "a string")
    {
        Console.WriteLine("Called the method with a string.");
    }
}
You get the output shown in the comments for each method call.
I understand why the compiler chooses the overloads that it does, but why is this allowed in the first place? I am not asking what the overload resolution rules are, I understand those, but I am asking if there is a technical reason why the compiler allows what are essentially two overloads with the same signature?
As far as I can tell, a function overload with a signature that differs from another overload only through having an additional optional argument offers nothing more than it would if the argument (and all preceding arguments) were simply required.
One thing it does do is make it possible for a programmer (who probably isn't paying enough attention) to think they're calling a different overload from the one they actually are.
I suppose it's a fairly uncommon case, and the answer for why this is allowed may just be because it's simply not worth the complexity to disallow it, but is there another reason why C# allows function overloads to differ from others solely through having one additional optional argument?
His point that Eric Lippert could have an answer led me to this https://meta.stackoverflow.com/a/323382/1880663, which makes it sound like my question will only annoy him. I'll try to rephrase it to make it clearer that I'm asking about the language design, and that I'm not looking for a spec reference.
I appreciate it! I am happy to talk about language design; what annoys me is when I waste time doing so when the questioner is very unclear about what would actually satisfy their request. I think your question was phrased clearly.
The comment to your question posted by Hans is correct. The language design team was well aware of the issue you raise, and this is far from the only potential ambiguity created by optional / named arguments. We considered a great many scenarios for a long time and designed the feature as carefully as possible to mitigate potential problems.
All design processes are the result of compromise between competing design principles. Obviously there were many arguments for the feature that had to be balanced against the significant design, implementation and testing costs, as well as the costs to users in the form of confusion, bugs, and so on, from accidental construction of ambiguities such as the one you point out.
I'm not going to rehash what was dozens of hours of debate; let me just give you the high points.
The primary motivating scenario for the feature was, as Hans notes, popular demand, particularly coming from developers who use C# with Office. (And full disclosure, as a guy on the team that wrote the C# programming model for Word and Excel before I joined the C# team, I was literally the first one asking for it; the irony that I then had to implement this difficult feature a couple years later was not lost on me.) Office object models were designed to be used from Visual Basic, a language that has long had optional / named parameter support.
C# 4 might have seemed like a bit of a "thin" release in terms of obvious features. That's because a lot of the work done in that release was infrastructure for allowing more seamless interoperability with object models that were designed for dynamic languages. The dynamic typing feature is the obvious one, but there were numerous other small features added that combine together to make working with dynamic and legacy COM object models easier. Named / optional arguments was just one of them.
The fact that we had existing languages like VB that had this specific feature for decades and the world hadn't ended yet was further evidence that the feature was both doable and valuable. It's great having an example where you can learn from its successes and failures before designing a new version of the feature.
As for the specific situation you mention: we considered doing things like detecting when there was a possible ambiguity and issuing a warning, but that opens up a whole other can of worms. Warnings have to be for code that is common, plausible and almost certainly wrong, and there should be a clear way to address the problem that causes the warning to go away. Writing an ambiguity detector is a lot of work; believe me, it took way longer to write the ambiguity detection in overload resolution than it took to write the code to handle successful cases. We didn't want to spend a lot of time on adding a warning for a rare scenario that is hard to detect and for which there might be no clear advice on how to eliminate the warning.
Also, frankly, if you write code where you have two methods named the same thing that do something completely different depending on which one you call, you already have a larger design problem on your hands! Fix that problem first, rather than worrying that someone is going to accidentally call the wrong method; make it so that either method is the right one to call.
This behaviour is specified by Microsoft on MSDN. Have a look at Named and Optional Arguments (C# Programming Guide).
If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call. This is a consequence of a general preference in overload resolution for candidates that have fewer parameters.
One reason they may have decided to implement it this way is so that you can add an overload to a method afterwards without having to change all the method calls that are already written.
UPDATE
I'm surprised that even Jon Skeet has no real explanation of why they did it like this.
I think this question basically boils down to how those signatures are represented by the intermediate language. Note that the signatures of both overloads are not equal! The second method has a signature like this:
.method public hidebysig static void Method([opt] string aString) cil managed
{
    .param [1] = string('a string')
    // ...
}
In IL, the signature of the method is different. It takes a string, which is marked as optional. This changes how the parameter gets initialized, but it does not change the presence of the parameter.
The compiler is not able to decide which method you are calling, so it uses the one that fits best, based on the parameters you provide. Since you did not provide any parameters for the first call, it assumes you are calling the overload without any parameters.
In the end it is a question about good code design. As a rule of thumb, I either use optional parameters or overloads, depending on what I want to do: Optional parameters are good, if the logic within the method does not depend on the provided arguments, while overloads are good to provide a different implementation for different sets of arguments. If you ever find yourself checking if a parameter equals a default value in order to decide what to do, you should probably go for an overload. On the other hand, if you find yourself repeating large chunks of code in many overloads, you should try extracting optional parameters.
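As a hypothetical sketch of that rule of thumb (names invented for illustration): if a method's body branches on whether a parameter equals its default, overloads usually express the intent better.

using System;

class Document
{
    // Smell: the body checks the parameter against its default to decide what to do.
    public void SaveSmelly(string path = null)
    {
        if (path == null)
            Console.WriteLine("Saving to default location");
        else
            Console.WriteLine("Saving to " + path);
    }

    // Better: two overloads, each with a single clear implementation.
    public void Save() => Console.WriteLine("Saving to default location");
    public void Save(string path) => Console.WriteLine("Saving to " + path);
}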
There's also a good answer by Chuck Skeet to this question.

Why do .NET methods sometimes return general types instead of using generics and type constraints?

Consider, for example, the method:
public static Attribute GetCustomAttribute(this ParameterInfo element, Type attributeType);
defined in System.Reflection.CustomAttributeExtensions
Wouldn't it make more sense to define instead:
public static T GetCustomAttribute<T>(this ParameterInfo element, T attributeType) where T : Attribute;
And save the casting?
The non-generic method of retrieving custom attributes dates from the old .NET days when generics weren't implemented.
For current and future coding, you can take advantage of CustomAttributeExtensions.GetCustomAttributes<T>, if you're coding against .NET 4.5 or above.
Sadly - or maybe inevitably - software has a sequential evolution. I mean, there was a time when generics weren't with us (.NET 1.0 and 1.1), and there's a lot of code base inherited from those early .NET versions. Because of framework team prioritization, it seems that not every method that would be better off using generic parameter(s) has been updated yet.
About inherited code
@BenRobinson said in a comment below:
The point I was making was that these extension methods were added in .net 4.5 (all of them not just the generic ones) and extension methods were added after generics so non generic extension methods of any kind have nothing to do with backwards compatibility with .net 1.0/1.1.
I'm adding this to avoid confusion: don't read "inherited code" as meaning that Microsoft keeps the non-generic code base unchanged purely for backwards compatibility with third-party code.
Actually, I'm pointing out that the current .NET version itself has a lot of code inherited from early .NET versions, whether or not Microsoft's intention is to maintain backwards compatibility with third-party code.
I assume or guess that the .NET Framework team has prioritized new base class library (BCL) and satellite framework additions, and some members coming from the pre-generics era are still as-is because the change isn't worth the effort. We could discuss whether it's worth the effort, or whether they made design mistakes, but StackOverflow isn't a discussion board, is it?
There is indeed an overload that is equivalent to your example, GetCustomAttribute<T>(ParameterInfo); however, to call this method without nasty reflection, you need to know the type of T at compile time. If you only know the type of T at run time, then the equivalent method is GetCustomAttribute(ParameterInfo, Type).
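For illustration, a brief sketch of that difference (hypothetical demo class; both extension methods come from System.Reflection.CustomAttributeExtensions):

using System;
using System.Reflection;

static class AttributeLookupDemo
{
    static void Show(ParameterInfo parameter, Type attributeType)
    {
        // Compile time: T is fixed in source, so no cast is needed.
        ObsoleteAttribute known = parameter.GetCustomAttribute<ObsoleteAttribute>();

        // Run time: the attribute type arrives as a Type value, so the
        // result can only be typed as the base Attribute.
        Attribute unknown = parameter.GetCustomAttribute(attributeType);

        Console.WriteLine(known);
        Console.WriteLine(unknown);
    }
}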
Generics were added to the C# language in version 2. I believe attributes were in the language at version 1 or 1.1 (can't remember which; I think it was version 1, but I could be wrong).
This means that even though they would have saved a lot of unnecessary casting by changing all methods to use generics, doing so could have broken backwards compatibility. And breaking backwards compatibility is bad™.
Edit:
Also, I just thought about one more reason.
If you are writing reflection code, it's often quite a hassle to call a generic method via reflection (C# has a really stupid API for doing so...), so using the non-generic version is in many cases much easier than using the generic one.
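To make that hassle concrete, here is a minimal sketch (hypothetical Identity method) of invoking a generic method when the type argument is only known at run time:

using System;
using System.Reflection;

class ReflectionHassleDemo
{
    public static T Identity<T>(T value) => value;

    static void Main()
    {
        // Imagine the element type is only known at run time.
        Type runtimeType = typeof(string);

        // Three steps where a direct call would be one:
        MethodInfo open = typeof(ReflectionHassleDemo).GetMethod(nameof(Identity));
        MethodInfo closed = open.MakeGenericMethod(runtimeType);
        object result = closed.Invoke(null, new object[] { "hello" });

        Console.WriteLine(result); // hello
    }
}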
Edit again:
Damn, Ben Robinson beat me to the reflection point by a minute! :)

C# overloading resolution?

From the article Anders Hejlsberg interview: "the way we do overload resolution in C# is different from any other language"
Can somebody provide some examples with C# and Java?
What Anders was getting at here was that the original design team explicitly designed the overload resolution algorithm to have certain properties that worked nicely with versioning scenarios, even though those properties seem backwards or confusing when you consider the scenarios without versioning.
Probably the most common example of that is the rule in C# that if any method on a more-derived class is an applicable candidate, it is automatically better than any method on a less-derived class, even if the less-derived method has a better signature match. This rule is not, to my knowledge, found in other languages that have overload resolution. It seems counterintuitive; if there's a method that is a better signature match, why not choose it? The reason is because the method that is a better signature match might have been added in a later version and thereby be introducing a "brittle base class" failure.
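As a short, self-contained sketch of that rule (hypothetical Base/Derived names): the derived method wins even though the base method is an exact signature match.

using System;

class Base
{
    public void Print(int value) => Console.WriteLine("Base.Print(int)");
}

class Derived : Base
{
    public void Print(double value) => Console.WriteLine("Derived.Print(double)");
}

class OverloadDemo
{
    static void Main()
    {
        // Prints "Derived.Print(double)": an applicable method exists on the
        // more-derived class, so the exact int match on Base is never considered.
        // If a later version of Base added an even better match, this call would
        // still bind to the Derived method, avoiding the brittle base class problem.
        new Derived().Print(42);
    }
}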
For more thoughts on how various languages handle brittle base class failures, see
Link
and for more thoughts on overload resolution, see
Link
The way that C# handles overloading from an internal perspective is what's different.
The complete quote from Anders:
I have always described myself as a pragmatic guy. It's funny, because versioning ended up being one of the pillars of our language design. It shows up in how you override virtual methods in C#. Also, the way we do overload resolution in C# is different from any other language I know of, for reasons of versioning. Whenever we looked at designing a particular feature, we would always cross check with versioning. We would ask, "How does versioning change this? How does this function from a versioning perspective?" It turns out that most language design before has given very little thought to that.

Convention for Filenames of Generic Classes [closed]

I want to be able to distinguish between a generic and a regular (non-generic) version of a class, much like the .NET Framework does with its generic and non-generic versions of several of its interfaces and collection classes (Queue, Queue<T>).
I generally like to follow the convention of one class per file (as in Java). Is there a common convention for naming files containing a single generic class? I'm mostly interested in Windows (NTFS specifically) but it seems like a good convention would be (at least a little) portable.
At Microsoft, they use ClassNameOfT.cs.
Just found this question after looking for what conventions other people use for generic class filenames.
Lately I've been using ClassName[T].cs. I really like this convention, and I think it's superior to the others for the following reasons:
The type parameters jump out at you a little more than they do with the Microsoft convention (e.g., ClassNameOfT.cs).
It allows you to have multiple type parameters without too much confusion: Dictionary[TKey, TValue].cs
It doesn't require you to create any special folders, or to have your generic classes in a special namespace. If you only have a few generic classes, having a special namespace dedicated to them just isn't practical.
I borrowed this convention from Boo's generic syntax, albeit slightly modified (Boo uses ClassName[of T]).
Some developers seem to have a phobia of filenames that contain anything but letters and underscores, but once you can get past that this convention seems to work extremely well.
I see that this topic was abandoned more than a year ago, but I would still like to share my view on this convention.
First of all, having multiple classes that have the same name but differ only in the number of type parameters isn't always a matter of backwards compatibility. Surely, you don't see it very often, but the new Action and Func classes of .NET were designed just this way, and I'm currently implementing something similar.
For clarity and distinguishability, I use the following convention that only specifies the number of generic arguments for a given type:
MyClass.cs
MyClass.T1.cs
MyClass.T2.cs
This way, my filenames stay short and simple while still clearly communicating the class name and the number of type parameters, at the cost of a simple extra dot (which is, in my experience, a commonly accepted thing in a filename and looks much better than commas and other non-alphanumeric characters, but this is just a matter of taste, I guess). Putting the names (or acronyms) of the type parameters in would just lengthen the filenames, while at this level I'm not really interested in the actual names of the type parameters anyway...
Don't use the grave accent ` in your generic file names if you're running Visual Studio 2008. There's a known issue with them that causes breakpoints to fail:
http://connect.microsoft.com/VisualStudio/feedback/details/343042/grave-accent-in-filename-causes-failure-to-recognize-target-language-breakpoints-fail
Personally I wouldn't use the grave accent notation:
Foo.cs
Foo`1.cs
For the simple reason that I am scared of the grave accent. Not only does it have a scary name 👻😨😱, but I am unsure how it will be handled by different file systems, version control systems and in URLs. Hence, I would prefer to stick to common alphanumeric characters.
NameOfT.cs seems to be used in ASP.NET Core according to a search on GitHub. 40 results. Reference.
Also used in the .NET Core runtime. 36 results. Reference.
Example:
Foo.cs
FooOfT.cs
Sometimes I also see ClassName{T}.cs, but it is common to name it ClassNameOfT.cs (as mentioned before, Microsoft uses it).
The EntityFrameworkCore project (also Microsoft's) uses ClassName`.cs.
All new Microsoft classes use generics. Queue and ArrayList were there before generics came out. Generics are the way forward.
The convention for one class per file is to name the file after the class (whether generic or not). For MyClass, you'll have MyClass.cs. For every new namespace you'll need to create a new folder. This is how Visual Studio also works.
How about:
Type.cs
and
TypeGeneric.cs
Whenever I have done this in the past I have always put both types in one file with the non-generic type as the file name. I think that this makes things pretty clear as .NET has no conventions/restrictions on one type per file like Java does.
But if you must then I would suggest something like I have above, and using a suffix will make the files show up together in any alphabetized list (Solution Explorer, Windows Explorer, etc.).
Here is another idea:
Type`1.cs
This would allow you to break out different generic types by the number of generic type parameters they accept. It's just a thought though, as I still think it would be simpler to just put all the types in one file.
I would probably put them in folders and use the namespace mechanism instead. You can compare with System.Collections vs. System.Collections.Generic. On the other hand, if it's more common than not that the classes use generics, perhaps it's better to point out those that are not. That is if you really want to separate the generic classes from other classes. Personally I usually don't bother to do that, since I don't really see a practical benefit from it.
From the responses so far it seems there isn't a consensus.
Using the same filename in a sub-namespace (and sub-folder) "Generic" (like System.Collections.Generic) is an option. But it's not always desirable to create a new namespace.
For example, in an existing namespace with non-generic classes that are maintained for backwards compatibility, but marked with ObsoleteAttribute, it's probably better to keep the generic versions in the same namespace.
I think a suffix is a reasonable way to go. I've adopted a convention of using the type parameters as a suffix (so: MyClassT for MyClass<T>, or MyDictionaryKV for MyDictionary<K,V>).
I'd probably have two folders in the project, something like Generic and NonGeneric. They can still be in the same namespace, and then they can both have the same file name. Just a thought...
