Why is the default decision in C++, C#, and Ada 95 to use static method binding rather than dynamic method binding?
Is the gain in implementation speed worth the loss in abstraction and reusability?
In general, you can consider that you have to design the base class for extensibility. If a member function (to use the C++ vocabulary) isn't designed to be overridden, there is a good chance that overriding it will in practice not be possible, and it certainly won't be possible without knowledge of what the class designer considers implementation details, which may change without prior notice.
Some additional considerations for two languages (I don't know C# enough to write about it):
Ada 95 would have had compatibility issues with Ada 83 if the choice had been different. And considering the whole object model of Ada 95, doing it differently would have made no sense (but you can consider that compatibility was a factor in the choice of the object model).
For C++, performance was certainly a factor. The "you don't pay for what you don't use" principle and the possibility of using C++ as just a better C were quite instrumental in its success.
The obvious answer is that most functions shouldn't be virtual. As AProgrammer points out, unless a function has been designed explicitly to be overridden, you probably can't override it (virtual or not) without breaking class invariants. (When I work in Java, for example, I end up declaring most functions final, as a matter of good engineering.) C++ and Ada make the right decision: the author must explicitly state that the function is designed to be overridden.
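For illustration, a minimal C# sketch of that opt-in model (the class names are invented); C# made the same choice as C++ and Ada, so a method binds statically unless its author marks it virtual:

class Base
{
    // Non-virtual: calls through a Base reference always bind to this method,
    // so it can safely enforce class invariants.
    public string Invariant() { return "enforced by Base"; }

    // Overriding is possible only because the author designed it in.
    public virtual string Extension() { return "Base"; }
}

class Derived : Base
{
    public override string Extension() { return "Derived"; }
}

// Base b = new Derived();
// b.Invariant()  -> "enforced by Base"   (static binding)
// b.Extension()  -> "Derived"            (dynamic binding)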
Also, C++ and (I think) Ada support value semantics, and value semantics doesn't work well with polymorphism; in Java, classes like java.lang.String are final in order to simulate value semantics for them. Far too many application programmers, however, don't bother, since it's not the default. (In a similar manner, far too many C++ programmers neglect to inhibit copy and assignment when the class is polymorphic.)
Finally, even when a class is polymorphic, and designed for inheritance, the contract is still specified, and in so far as is reasonable, enforced, in the base class. In C++, typically, this means that public functions are not virtual, since it is the public functions which define and enforce the contract.
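In C# terms that idea is often rendered as the non-virtual interface pattern: the public non-virtual method defines and enforces the contract, and subclasses override only a protected extension point. A minimal sketch (Report and BuildBody are invented names, not from the question):

using System;

public abstract class Report
{
    // Public and non-virtual: defines and enforces the contract.
    public string Render()
    {
        string body = BuildBody();
        if (string.IsNullOrEmpty(body))
            throw new InvalidOperationException("BuildBody must produce content.");
        return "<report>" + body + "</report>";
    }

    // The one designated extension point for derived classes.
    protected abstract string BuildBody();
}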
I can't speak about Ada, but for C++ two important goals for the design of C++ were:
backwards compatibility with C
you should pay nothing (to the extent possible) for features that you don't use
While neither of these would necessarily dictate that dynamic binding couldn't have been chosen to be the default, having static method binding (I assume you mean non-virtual member functions) does seem to 'fit' better with these design goals.
I'll give one of the other two thirds of Michael Burr's answer.
For Ada it was an important design goal that the language be suitable for systems programming and for use on small real-time embedded devices (e.g. missile and bomb CPUs). Perhaps there are now techniques that would allow dynamic languages to do such things well, but there certainly weren't back in the late 70's and early 80's when the language was first being designed. Ada 95, of course, could not radically deviate from the original language's basic underlying design, any more than C++ could from C.
That being said, both Ada and C++ (and certainly C# as well?) provide a way to do dynamic method binding ("dynamic dispatch") if you really want it. In both it is accessed via pointers, which IMHO are kind of error-prone. It can also make things a bit of a pain to debug, as it is tough to tell from the sources alone exactly what is getting called. So I avoid it unless I really need it.
In C++, you can write code like this:
template<class T>
T Add(T lhs, T rhs)
{
    return lhs + rhs;
}
But, you can't do something like this in C#:
public static T Add<T>(T x, T y) where T : operator+
{
    return x + y;
}
Is there any reason why? I know it could be accomplished through reflection (generic Add with objects and then run type checking over it all), but that's inefficient and doesn't scale well. So, again, why?
There is no inherent reason this could not exist. The way generic type constraints are implemented is through interface calls. If there were an interface that provided an operator+, this would work.
This interface would be required for all relevant types, though, to be as general as the C++ template-based analog.
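A sketch of that interface-based route, assuming you control the types involved (IAddable is an invented interface, not part of the BCL):

public interface IAddable<T>
{
    T Plus(T other);
}

public static class Calculator
{
    // The constraint is expressible because Plus is an ordinary interface call.
    public static T Add<T>(T x, T y) where T : IAddable<T>
    {
        return x.Plus(y);
    }
}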
Another issue would be that .NET has no multiple dispatch. The interface call would be asymmetric: a.Plus(b) could mean something different than b.Plus(a). Equals has the same problem, btw.
So this feature probably did not meet the "usefulness" bar or the "cost/utility" bar. This is not a matter of impossibility, but of practical concerns.
Proof that it is possible: ((dynamic)a) + ((dynamic)b).
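A minimal sketch of that workaround (it requires .NET 4 and a reference to Microsoft.CSharp); operator resolution is deferred to runtime, and a RuntimeBinderException is thrown if T has no suitable operator+:

public static class DynamicCalculator
{
    // Works for int, double, string, decimal, and types with a user-defined operator+.
    public static T Add<T>(T x, T y)
    {
        return (dynamic)x + (dynamic)y;
    }
}

// DynamicCalculator.Add(2, 3)      -> 5
// DynamicCalculator.Add("a", "b")  -> "ab"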
The CLR doesn't natively support such constraints, and the C# design team evidently decided to support the same set of constraints as the CLR. Presumably both the CLR team and the C# team felt that the benefits of implementing such a feature didn't outweigh the costs of speccing, implementing, and testing it.
If you want to use a .NET language that does support such constraints, consider taking a look at F#.
There are several possible ways to implement operator constraints and all of them are either non-trivial (and would probably require a change of the CLR) or have significant drawbacks (e.g. they would be slow, much slower than adding two integers).
And this is something that is relatively easy to work around: either the slow, general, and unsafe way of using dynamic (sketched above), or the fast, type-specific, and safe way of having a bajillion overloads (sketched below). Because of that, such a feature is probably considered "nice to have" but far from important enough to warrant such changes.
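For completeness, a sketch of the overload alternative mentioned above: verbose, but fast and checked at compile time:

public static class Adders
{
    public static int Add(int x, int y) { return x + y; }
    public static long Add(long x, long y) { return x + y; }
    public static double Add(double x, double y) { return x + y; }
    public static decimal Add(decimal x, decimal y) { return x + y; }
    // ... and so on for every type you need.
}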
I know this might be a stupid question, but here it goes.
I always wrote my private members like privateMember, and I've been reading a lot about naming conventions in C# because I noticed that a lot of the automatically generated code in Visual Studio uses _variableName for private members. Everywhere I read, even in Microsoft documents, it says that you should use privateMember.
So, my question is: if good practice says that I should write privateMember, as I do now, why the heck does Visual Studio generate classes whose private members use an underscore (_privateMember)?
Microsoft Code Conventions actually recommend against using underscores altogether. It is really personal preference. I would not use generated code as inspiration for my coding convention standard.
Do not use underscores, hyphens, or any other nonalphanumeric characters.
Maybe it's because it's generated code and not intended to be read by humans. ;-)
Not so long ago, when C# was coming to the market, there was a convention that private fields should be prefixed with _. The convention was not well accepted by the community, since in pure C a leading _ denotes system variables/functions and a leading __ denotes metadata. So after a few years its use came to be discouraged, but you will still find believers who use this notation, not out of fanaticism but because a lot of old C# applications contain this convention.
Why is this in Visual Studio?
This might be related to the time when it was designed. Back then this approach was suggested by the language designers, so it is probable that no one has changed it in the configuration for the latest version.
Naming conventions aren't 100% agreed upon. This is one of those that some people like, some people are indifferent to, and some people hate. Certain people consider it better for instance variables to stand out, via their name, and this is one way to do that. Other people use this.instanceVariable rather than instanceVariable all of the time so that instance variables stand out, other people prepend something other than a '_' character, and some people just don't go out of their way to use any special distinction.
At the end of the day what's important is that you, and the other members of your team agree on a standard and are consistent with it. What the rest of the world chooses to do doesn't need to affect you.
It's also worth mentioning that the code snippets generated by Visual Studio, in most cases, can be configured to be in line with your team's coding practices.
It's just a convention they use, I do it too. You can ultimately name your private fields whatever you want. Prefixing it with an underscore just makes it easier to read IMO.
By convention, private fields are often written with a leading underscore, e.g. string _name;
This link will give you more info on MS naming convention guidelines: http://msdn.microsoft.com/en-us/library/ms229045.aspx
It's just a C# convention so that in a constructor you can use _variable instead of this.variable when the parameter and field names are the same.
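For example (Person is a made-up class):

public class Person
{
    private readonly string _name;

    public Person(string name)
    {
        // The underscore disambiguates the field from the parameter,
        // so there is no need to write this.name = name.
        _name = name;
    }
}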
All the C# naming conventions are listed at
http://msdn.microsoft.com/en-us/library/ms229002.aspx
It's up to you whether you follow the convention of the generated code. Besides the recommendations, many programmers use the same convention as the generated code, and some refactoring tools also suggest that you follow that naming convention for field names.
The leading underscore is VS's way of showing that something is a private member. We keep the underscore at the beginning as a rule, but it is really a personal preference as to what naming convention you use. Just pick one and stick with it so you don't confuse yourself or anyone else who might look at your code.
The method List<T>.IndexOf() returns the zero-based index of the first occurrence of the item within the entire List, if found; otherwise, -1.
I'm seeing a parallel between that, and something I just read in Code Complete, which is telling me to "avoid variables with hidden meanings".
For example:
The value in the variable pageCount might represent the number of pages printed, unless it equals -1, in which case it indicates that an error has occurred.
Well, I don't know if the meaning is "hidden", because it's documented clearly enough, but null seems to convey better meaning to me than -1, and .HasValue reads like a much better check than > -1. As far as I can tell, List and nullable types were both introduced in C# 2.0, so I don't think the reason for returning an int has to do with backwards compatibility. So, do you know if there was a reason, or if this was just something that someone forgot to implement, and we now have to live with that mistake forever?
List was released with the 2.0 version of the runtime, as was Nullable<T>. But List implements IList, which existed in the 1.0 version of the runtime, which not only didn't have nullables, it didn't support generics. To meet the contract of the IList interface, an implementer must return -1 when IndexOf fails. Code Complete also maintains that you must meet the contracts you agree to, and therefore List.IndexOf must return -1.
Answer to Comment:
By the time the 2.0 runtime shipped with generics, there were already thousands of applications written against the non-generic version. That code could keep working, and be migrated most effectively, if the generic interfaces were an extension of the non-generic ones. Also, imagine having two classes with the same name and 90% the same usage, but with totally different semantics for a couple of methods.
System.Collections.Generic.List may have been introduced in .NET 2.0, along with nullable, but the usage of -1 predates that considerably. ArrayList.IndexOf(), Array.IndexOf(), String.IndexOf(), SelectedIndex of a listbox, etc. So List<T> most likely did the same for consistency with the existing library. (And in fact even classic VB has this meaning, so it even predates .NET 1.0)
As #rerun points out, it isn't just stylistic consistency, it's actually part of the interface contract. So +1 to him.
While List<T> didn't have to follow in the footsteps of ArrayList and didn't have to implement IList, doing so made it much more useful to programmers already familiar with those.
Just a guess, but I would think it has to do with historical reasons and readability. Even though -1 is a "special value", it will never be returned in any other case. The rule you quoted is, in my opinion, primarily there to keep you from returning values that could have more than one meaning. -1 is standard in C++ and similar languages, so it has become somewhat of an idiom across languages. Also, while HasValue might be more readable, I think it would be best to have the same style consistently across the framework, which would require starting from the ground up. Finally, IMHO, in my limited experience, because nullable types have to be unwrapped in order to use them elsewhere, they can be more trouble than they're worth, but that's just me.
I think it's for backwards compatibility: in Framework 1.1 there were no nullables, and the API was created back then.
The reason may be the simple fact that Nullable<T> (the actual type used when you write T?) involves unwanted complexity for something as simple as an index search. For T? x, writing x == null is really only syntactic sugar for !x.HasValue, and when it turns out it's not null you'd still have to access the actual value through x.Value.
You don't really gain anything in comparison to just returning a negative value, which is not a valid index either. In fact, you make things just more complex.
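To make that trade-off concrete, here is a hypothetical nullable-returning wrapper (IndexOfOrNull is an invented extension method, not a BCL API); it reads well at the call site, but every caller still has to unwrap the value:

using System.Collections.Generic;

public static class ListExtensions
{
    // Translates the -1 sentinel into a Nullable<int>.
    public static int? IndexOfOrNull<T>(this List<T> list, T item)
    {
        int index = list.IndexOf(item);
        return index >= 0 ? index : (int?)null;
    }
}

// int? i = list.IndexOfOrNull("x");
// if (i.HasValue) { /* still need i.Value to actually use the index */ }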
This follows the same pattern as String.IndexOf, and many other implementations and variations of IndexOf methods, not only in the .NET libraries.
Using -1 as a return value is a magic number, and those should normally be avoided. However, this pattern is well known, which is clearly an advantage when you implement a library method.
I am currently learning C# and I have a situation where I have a class that contains an ISet. I don't wish clients to modify this set directly, and most clients only Add and Remove, so I provide accessors through my class to do this.
However, I have one client that wishes to know more about this set and its contents. I don't really want to muddy the wrapper class itself with lots of methods for this one client, so I would prefer to be able to return the set itself in an immutable way.
I found I can't - well, not really. The only options I seem to have are:
Return an IEnumerable (No: restrictive functionality);
ReadOnlyCollection (No: It's a LIST);
Return a copy (No: bad form IMHO; it allows clients to modify the returned collection, perhaps unaware that it's not going to change the real object, plus it has a performance overhead);
Implement my own ReadOnlySet (No: it would need to implement ISet and thus the mutators, probably throwing exceptions; I would rather have compile-time errors, not runtime ones).
Am I missing something? Am I being unreasonable? Is my only option to provide the full set of accessors on my wrapper? Am I incorrect in my original intent to keep the wrapper clean for the vast majority of clients?
So two questions:
Why isn't there a standard C# immutable Collection interface? It seems like a fairly reasonable requirement.
Why is ReadOnlyCollection annoyingly called ReadOnlyCollection when it is really a ReadOnlyList? I was going to bite the bullet and use that until I found out it was a List (and I use a Set).
Why isn't there a standard C# immutable interface? It seems like a fairly reasonable requirement?
A standard C# immutable¹ interface already exists: it's called IEnumerable, and all containers implement it.
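For instance, the common pattern of exposing a private set through that interface (the Article class and its members are invented for illustration):

using System.Collections.Generic;

public class Article
{
    private readonly ISet<string> _tags = new HashSet<string>();

    public void AddTag(string tag) { _tags.Add(tag); }

    // Callers see only enumeration; no mutators are in view
    // (although a determined caller could still cast back to ISet<string>).
    public IEnumerable<string> Tags
    {
        get { return _tags; }
    }
}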
More powerful immutable interfaces are problematic, because there are many kinds of immutability. If the BCL team decided to pick one definition of immutability and elevate it to blessed status, it's certain that down the road people looking for a different kind of immutability would complain about the choice.
Satisfying everyone would mean not only sorting out the whole immutability mess, but also creating lots of interfaces (good luck picking good names for them, too) and baking all these immutability concepts into the language well enough to make immutability a first-class citizen. Remember that there are no second chances here: once you ship a public class, its public interface is immutable forever (pun intended). While all of this might be good to have, I'm really skeptical about the cost/benefit ratio.
It's not difficult to define IReadOnlyList, IReadOnlySet and such if you do require them. I assume that they do not already exist because, again, every feature starts out at minus 100 points.
ReadOnlyCollection is IMHO either a concession, or a class that was required internally for the BCL and exposed to the world because, hey, free functionality at really low cost for the BCL team (since it would have to be implemented, documented, and tested anyway). In any case, I don't think it's by chance that it does not live in the glamorous System.Collections.Generic neighborhood.
Why is ReadOnlyCollection annoyingly called ReadOnlyCollection when it is really a ReadOnlyList? I was going to bite the bullet and use that until I found out it was a List (and I use a Set).
I'm sure the BCL team would love to be able to go back in time and fix that, because it's almost certainly one of those little inconsistencies that unavoidably sneak into any library of comparable scope. Since ReadOnlyCollection implements IList, it should definitely have been called ReadOnlyList.
However, given that a "list" offers more functionality than a "collection", I don't see how this would stop you. Neither is a Set, so you would have to build set-related functionality on top of them in any case (which is not a good idea; just build read-only semantics on top of Set).
¹ We're tossing around "immutable" a lot here, but that word does not have a singular meaning. I think it would be more appropriate to use "read-only", but I'll go with your choice of word for consistency.
This may help,
http://blogs.msdn.com/b/jaredpar/archive/2008/04/22/api-design-readonlycollection-t.aspx
I think the only way for you to provide a read-only 'copy' of the set without actually copying the data into another instance of the same or a different structure, is to go with the wrapper and implement all the item-adding-and-removing methods to throw an exception.
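A minimal sketch of such a wrapper (ReadOnlySet is an invented name): queries delegate to the wrapped set, while every mutator throws NotSupportedException at runtime, since compile-time prevention is not possible here:

using System;
using System.Collections;
using System.Collections.Generic;

public sealed class ReadOnlySet<T> : ISet<T>
{
    private readonly ISet<T> _inner;

    public ReadOnlySet(ISet<T> inner) { _inner = inner; }

    // Queries delegate to the wrapped set.
    public int Count { get { return _inner.Count; } }
    public bool IsReadOnly { get { return true; } }
    public bool Contains(T item) { return _inner.Contains(item); }
    public void CopyTo(T[] array, int arrayIndex) { _inner.CopyTo(array, arrayIndex); }
    public bool IsSubsetOf(IEnumerable<T> other) { return _inner.IsSubsetOf(other); }
    public bool IsSupersetOf(IEnumerable<T> other) { return _inner.IsSupersetOf(other); }
    public bool IsProperSubsetOf(IEnumerable<T> other) { return _inner.IsProperSubsetOf(other); }
    public bool IsProperSupersetOf(IEnumerable<T> other) { return _inner.IsProperSupersetOf(other); }
    public bool Overlaps(IEnumerable<T> other) { return _inner.Overlaps(other); }
    public bool SetEquals(IEnumerable<T> other) { return _inner.SetEquals(other); }
    public IEnumerator<T> GetEnumerator() { return _inner.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

    // Mutators fail at runtime, not at compile time.
    bool ISet<T>.Add(T item) { throw new NotSupportedException(); }
    void ICollection<T>.Add(T item) { throw new NotSupportedException(); }
    void ISet<T>.UnionWith(IEnumerable<T> other) { throw new NotSupportedException(); }
    void ISet<T>.IntersectWith(IEnumerable<T> other) { throw new NotSupportedException(); }
    void ISet<T>.ExceptWith(IEnumerable<T> other) { throw new NotSupportedException(); }
    void ISet<T>.SymmetricExceptWith(IEnumerable<T> other) { throw new NotSupportedException(); }
    public void Clear() { throw new NotSupportedException(); }
    public bool Remove(T item) { throw new NotSupportedException(); }
}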
If your set is exposed only as an ISet anyway, consumers are only going to see the members defined on the interface, no matter what your wrapper contains - that doesn't seem like it's a bad thing.
I agree it would be nice if there were better support in .net for both immutability and read-only wrappers, though I think it's important to note that there is a huge difference between the concepts. A read-only wrapper promises its creator that consumers of it won't be able to change the underlying object, but makes no promise to consumers that the underlying object itself won't change. By contrast, an immutable object promises its creator and consumers that its values won't change.
I'm not sure why the notion that there are many different kinds of immutability should be a problem. If I have a generic ImmutableList<T> which takes an unqualified T, my expectation would be that it will always contain the same T's as it did when it was created. The collection could in no way affect whether any of the properties of the T's could change, and thus it shouldn't be expected to.
If I had my druthers, most of the collection-related interfaces would include readable, mutable, and immutable variants (mutable and immutable would both extend from readable). I'd also add a write-only contravariant IAppendable interface, as well as an IImmutableEnumerable derived from IEnumerable (I'd add a ToImmutable method to IEnumerable and IImmutableEnumerable; an implementation could construct an immutable collection, but in some cases that might not be the best approach). For example, a mutable object might implement IEnumerable by returning a mutable number of copies of a mutable element. If the number of copies is large, converting to a simple collection could be very wasteful.
One day I was ranting about a particular Telerik control to a friend of mine. I told him that it took several seconds to generate a control tree, and after profiling I found out that it was using string concatenation in a loop instead of a StringBuilder. After rewriting, it worked almost instantaneously.
So my friend heard that and seemed to be surprised that the C# compiler didn't do that conversion automatically like the Java compiler does. Reading many of Eric Lippert's answers I realize that this feature didn't make it because it wasn't deemed worthy enough. But if, hypothetically, costs were small to implement it, what rationale would stop one from doing it?
But if, hypothetically, costs were small to implement it, what rationale would stop one from doing it?
It sounds like you're proposing a bit of a tautology: if there is no reason to not do X, then is there a reason to not do X? No.
I see little value in knowing the answers to hypothetical, counterfactual questions. Perhaps a better question to ask would be a question about the real world:
Are there programming languages that use this optimization?
Yes. In JScript.NET, we detect string concatenations in loops and the compiler turns them into calls to a string builder.
That might then be followed up with:
What are some of the differences between JScript .NET and C# that justify the optimization in the one language but not in the other?
A core assumption of JScript.NET is that its programmers are mostly going to be JavaScript programmers, and many of them will have already built libraries that must run in any implementation of ECMAScript. Those programmers might not know the .NET framework well, and even if they do, they might not be able to use StringBuilder without making their library code non-portable. It is also reasonable to assume that JavaScript programmers may be either novice programmers, or programmers who came to programming via their line of business rather than a course of study in computer science.
C# programmers are far more likely to know the .NET framework well, to write libraries that work with the framework, and to be experienced programmers who understand why looped string concatenation is O(n²) in the naive implementation. They need this optimization generated by the compiler less, because they can just do it themselves if they deem it necessary.
In short: compiler features are about spending our budget to add value for the customer; you get more "bang for buck" adding the feature to JScript.NET than you do adding it to C#.
The C# compiler does better than that.
a + b + c is compiled to String.Concat(a, b, c), which is faster than using a StringBuilder.
"a" + "b" is compiled directly to "ab" (useful for multi-line literals).
The only place to use StringBuilder is when concatenating repetitively inside a loop; the compiler cannot easily optimize that.
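A sketch of that loop case, where the quadratic cost shows up and a StringBuilder is the straightforward fix (JoinParts is an invented helper):

using System.Text;

static string JoinParts(string[] parts)
{
    // Naive version: string s = ""; foreach (var p in parts) s += p;
    // That allocates a new string per iteration, O(n²) overall.
    StringBuilder sb = new StringBuilder();
    foreach (string part in parts)
    {
        sb.Append(part);
    }
    return sb.ToString();
}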