I've recently been learning C# with a strong C++ background, and there is something about C# that I don't quite understand, given my experience with C++.
In C++, people care a lot about uniformity; without it, writing generic code using template meta-programming would be impossible. In C#, however, people seem to care little about uniformity. For example, array types have a Length property while List<T> uses Count, and IndexOf, LastIndexOf, and the like are static methods for arrays but instance methods on List<T>. This gives me the impression that instead of being uniform, C# is actually trying hard to be non-uniform, and that doesn't make sense to me. Since C# doesn't support template meta-programming, uniformity is not as important as it is in C++, but being uniform would still be beneficial in many other ways. For example, it would be easier for humans to learn and master: when things are highly uniform, you master one and you've mastered them all. Please note that I'm not a C++ fanatic or diehard; I just don't really understand.
You've got a conceptual issue here.
List<T>, and the other collection classes with it, aren't C# constructs. They are classes in the BCL. Essentially, you can use any BCL class from any .NET language, not just C#. If you're asking why the BCL classes differ in certain ways, it's not because the designers disregarded or didn't want uniformity. It's probably for one of (at least) two reasons:
1) The BCL and FCL evolved over time. You're likely to see significant differences between classes that were introduced before and after generics were added. One example, DataColumnCollection, is an IEnumerable but not an IEnumerable<DataColumn>, which forces you to cast in order to perform some operations.
2) There's a subtle difference in the meaning of the two members. .Length, I believe, is meant to imply that there's a fixed number stored somewhere, whereas .Count implies that some work might be done to get the number of items in the list.
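As a quick illustration of the difference (just a sketch): the spelling differs between arrays and List<T>, but both implement ICollection<T>, so generic code still has one uniform way to ask for the size.

```csharp
using System;
using System.Collections.Generic;

class LengthVsCount
{
    // A generic helper constrained to ICollection<T> sees one uniform property.
    static int CountOf<T>(ICollection<T> items) => items.Count;

    static void Main()
    {
        int[] array = { 1, 2, 3 };
        var list = new List<int> { 1, 2, 3 };

        Console.WriteLine(array.Length);  // arrays expose Length
        Console.WriteLine(list.Count);    // List<T> exposes Count

        // Both implement ICollection<T>, so generic code can treat them uniformly.
        Console.WriteLine(CountOf(array));
        Console.WriteLine(CountOf(list));
    }
}
```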
First, please no spamming because I am not necessarily an OOP devotee. That said, I have been a programmer on and off for almost 30 years and have created a lot of pretty cool production systems and solutions in several industries. I've also done my share of break/fix work, database development, etc., including about 10 years as a web programmer (not developer), so I am not so much a newbie as someone trying to get an answer about something that frankly is eluding me.
I started as a "C" programmer in the early 1980s, and "C" served me well into the early 2000s (even today, most scripting and higher-level languages use "C" syntactical elements).
That said, overloading seems to violate every principle of what I was taught were "good coding practices": it increases ambiguity, creating the opportunity for the code intended for a given condition to be skipped, or for a routine you didn't expect to run because some condition falls through the cracks. It also generally seems to create LOTS of confusion for learners.
I am not saying overloading is bad per se; I just want to better understand its practical application to real problems, other than simply as a way to provide input validation, or perhaps to handle inputs from sources you have no control over in an API or whose type you don't necessarily know (again, I'm not clear on how or why that could actually happen either). C# has plenty of parsing and try/catch to handle this, as do most OOP languages.
In over a decade, I have yet to get a straight, non-judgmental, and dare I say unsnarky answer to this question. Surely there is someone who can offer a reasonable explanation of why it is used.
So I pose the question to you, the Stack Overflow gurus: is having a method/function that is potentially callable in multiple different ways, with multiple exclusive code segments, really a good thing, or does it just suggest a lack of good planning when designing software? Again, I'm not knocking, judging, or disparaging; I just don't get it... please enlighten me!
I'd say std::to_string is a pretty good example of overloading used well. Why would you want different functions for converting different types to std::string? You don't. You just want one - std::to_string - and you want it to behave sensibly whatever type of argument you give it, and it does just that. Using overloading keeps the client code simple.
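C# overloading buys you the same thing. A rough sketch (the Stringify name below is made up purely for illustration): the call site stays uniform and overload resolution picks the right body.

```csharp
using System;

static class Formatting
{
    // Overloads let callers write one name; the compiler picks the match.
    public static string Stringify(int value)    => value.ToString();
    public static string Stringify(double value) => value.ToString("G17");
    public static string Stringify(bool value)   => value ? "yes" : "no";
}

class Demo
{
    static void Main()
    {
        // Client code looks the same regardless of the argument type.
        Console.WriteLine(Formatting.Stringify(42));
        Console.WriteLine(Formatting.Stringify(3.14));
        Console.WriteLine(Formatting.Stringify(true));
    }
}
```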
This article spells out some reasons F#'s performance is occasionally better than C#'s. Its "Firstly" section says that only F# generates tail calls.
What exactly does that mean? And why is it a performance boost? This one thing may actually be the deciding factor between F# and C# for my chess app, which uses a ton of recursion.
Performance will depend more on the way you implement your program than on the language. F# may generate better IL for some things, while the C# compiler will be better for others. When choosing a language, you should consider other things besides just performance.
If you're writing your chess program to learn F#, give it a try; it's an awesome language. Just don't expect super blazing fast programs just because you're using a functional language.
Edit to answer the new question:
The F# compiler does indeed generate IL that uses the tail. prefix, whereas the C# compiler doesn't. That by itself doesn't make F# faster or more performant than C#, as you can see in my original answer above, but it can indeed make a difference in your specific chess app, since you state that recursion is heavily used.
As a side note, the CLR may apply some simple tail-call optimizations at runtime, so for simple functions in an x64 environment, even IL generated by the C# compiler may end up with tail calls optimized.
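To see why it matters for deep recursion, here is a small C# sketch (illustrative only, not a benchmark): the recursive version is tail-recursive in shape, but because the C# compiler doesn't emit the tail. prefix, a large enough argument can still blow the stack, so in C# you typically rewrite the tail call as a loop.

```csharp
using System;

class TailCallDemo
{
    // Tail-recursive in shape, but the C# compiler emits an ordinary call,
    // so a large enough n can still throw StackOverflowException
    // (unless the JIT happens to optimize it anyway, e.g. in some x64 cases).
    static long SumTo(long n, long acc) =>
        n == 0 ? acc : SumTo(n - 1, acc + n);

    // The safe, equivalent C# version: rewrite the tail call as a loop.
    static long SumToLoop(long n)
    {
        long acc = 0;
        while (n > 0) { acc += n; n--; }
        return acc;
    }

    static void Main()
    {
        Console.WriteLine(SumTo(10_000, 0));
        Console.WriteLine(SumToLoop(10_000));
    }
}
```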
Statics and primitives are not part of OOP. I have read that statics and primitives are not allowed in Scala class definitions. If this is true, then why were statics and primitives allowed in Java, C#, and a few other languages?
They are not part of "pure" Object Oriented, but sometimes "pure" gets in the way of "getting the job done".
Using primitives can make mathematical operations (in particular) much faster, and statics enable a lot of useful design patterns.
C#, C++, and Java are general-purpose programming languages. You can find, for example, elements of duck typing in C# 4.0, elements of functional programming, and many more useful constructs.
Not every program, and not every part of a program, has to be object-oriented. Use OOP when it's needed and when it serves a purpose; in C#, C++, and Java you can use other, simpler constructs whenever you feel OOP would be firing at a mosquito with a cannon.
It's an engineering decision. Like most engineering decisions, it involves carefully balancing lots of different forces on the design - in this case, the design of the language. Different languages are intended to solve different problems and have different objectives, so it's not surprising that different languages come to different conclusions.
In this case there are costs and benefits to having primitives.
Primitives can be faster (a rough sketch of why is below).
Primitives can make it easier to do certain low-level things (e.g. writing individual bits to a register on a microcontroller in an embedded system).
On the other hand, primitives can make your programming language more complicated, as primitive types have different syntax, etc., to operate on them. Some newer languages are trying to have their cake and eat it by using primitive types under the covers while making them look like objects in terms of syntax.
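Here is a rough C# sketch of the "primitives can be faster" point (illustrative only; actual numbers depend on the JIT): treating every int as an object forces boxing and unboxing, which plain primitive arithmetic avoids.

```csharp
using System;
using System.Diagnostics;

class BoxingDemo
{
    static void Main()
    {
        const int iterations = 10_000_000;
        var sw = Stopwatch.StartNew();

        // Primitive arithmetic: values stay on the stack or in registers.
        long sum = 0;
        for (int i = 0; i < iterations; i++)
            sum += i;
        Console.WriteLine($"primitive: {sw.ElapsedMilliseconds} ms, sum={sum}");

        sw.Restart();

        // Treating each int as an object forces a heap allocation (boxing)
        // and an unbox on the way back out.
        long boxedSum = 0;
        for (int i = 0; i < iterations; i++)
        {
            object boxed = i;        // box
            boxedSum += (int)boxed;  // unbox
        }
        Console.WriteLine($"boxed: {sw.ElapsedMilliseconds} ms, sum={boxedSum}");
    }
}
```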
I recently started learning C# and I want to know whether C# is purely object-oriented, with the reasoning for either case (yes or no).
I'm not 100% sure exactly what "pure" object-oriented means, but my answer is YES.
From Smalltalk's Wikipedia page:
Smalltalk is a "pure" object-oriented programming language, meaning that, unlike Java and C++, there is no difference between values which are objects and values which are primitive types. In Smalltalk, primitive values such as integers, booleans and characters are also objects.
That is the same as in C#.
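As a quick sketch of that point: in C#, even an int is usable as an object, since System.Int32 ultimately derives from System.Object, so you can call object methods on plain values and box them when needed.

```csharp
using System;

class PrimitivesAreObjects
{
    static void Main()
    {
        int answer = 42;

        // int (System.Int32) ultimately derives from System.Object,
        // so object methods work directly on a value.
        Console.WriteLine(answer.ToString());
        Console.WriteLine(42.GetType());      // System.Int32

        // Assigning to object boxes the value; it still behaves as an object.
        object boxed = answer;
        Console.WriteLine(boxed.Equals(42));  // True
    }
}
```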
I found an interesting article called Wyvern: A Simple, Typed, and Pure Object-Oriented Language
1.1 What Makes an Object-Oriented Model Pure?
From these sources, we extract three key requirements that we wish to satisfy in coming up with a typed, pure object-oriented model:
Uniform access principle. Following Meyer, Cook, and Kay, it should be possible to access objects only by invoking their methods.
Interoperability and uniform treatment. Different implementations of the same object oriented interface should interoperate by default, and it should be easy to treat them uniformly at run time (e.g., by storing different implementations of the same interface within a single run-time data structure).
State encapsulation. All mutable state should be encapsulated within objects.
Can anyone tell me what would be more efficient: a large program written in Visual C++ years ago is now intended to be moved to C#. What would be better, rewriting the whole Visual C++ code base in C#, or writing C++ DLLs to be used from the C# program via DllImport?
I guess it depends on how data-centric your code is. If you can easily separate out the functionality that does not require an interface, then you'd most likely be better off writing a DLL to utilize this functionality, and then re-writing the interface in C#.
If the program is rather interface heavy, and you do not want to go through separating out all of the data functions, then I'd just go ahead and re-write the whole thing in C#, although I'd expect to lose some performance.
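If you do go the DLL route, the C# side is mostly P/Invoke declarations. A minimal sketch, assuming the old C++ code gets compiled into a native DLL (the LegacyEngine.dll name and ComputeScore function are made up here for illustration):

```csharp
using System;
using System.Runtime.InteropServices;

class Interop
{
    // Hypothetical export: assumes the C++ side builds LegacyEngine.dll with
    //   extern "C" __declspec(dllexport) int ComputeScore(int input);
    [DllImport("LegacyEngine.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern int ComputeScore(int input);

    static void Main()
    {
        // The C# side calls it like a normal static method.
        Console.WriteLine(ComputeScore(10));
    }
}
```

For anything beyond simple value parameters (strings, structs, callbacks), the marshalling gets more involved, which is part of the cost to weigh against a full rewrite.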
Visual C++ is still a very widely used language - is this your only reason for wanting to move to C# (i.e. finding it hard to recruit people, lacking the skills to continue development)?
There is only a single answer to this: "it depends". We cannot possibly know this, it's something you must decide.
Check what you need in terms of time and other resources for both. Check what benefit you gain from each. Weigh cost against benefit. Decide.