Is C# a pure object-oriented programming language? [closed]

I recently started to learn C# and I want to know whether C# is a pure object-oriented language, with reasons in both cases (yes or no).

I'm not 100% sure exactly what "pure" object-oriented means, but my answer is YES.
From Smalltalk's Wikipedia page:
Smalltalk is a "pure" object-oriented programming language, meaning that, unlike Java and C++, there is no difference between values which are objects and values which are primitive types. In Smalltalk, primitive values such as integers, booleans and characters are also objects.
That is the same as in C#.
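To see this concretely, here is a minimal sketch (standard library only; nothing assumed beyond plain C#) showing that a primitive int value has methods and can be treated uniformly as an object, because int is an alias for the System.Int32 struct, which derives from System.Object:

    using System;

    class PrimitivesAreObjects
    {
        static void Main()
        {
            int i = 42;

            // A primitive value has methods, like any other object:
            Console.WriteLine(i.ToString());     // "42"
            Console.WriteLine(i.GetType().Name); // "Int32"

            // It can also be treated uniformly as an object (boxing):
            object o = i;
            Console.WriteLine(o.Equals(42));     // True
        }
    }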
I found an interesting article called Wyvern: A Simple, Typed, and Pure Object-Oriented Language
1.1 What Makes an Object-Oriented Model Pure?
From these sources, we extract three key requirements that we wish to satisfy in coming up with a typed, pure object-oriented model:
Uniform access principle. Following Meyer, Cook, and Kay, it should be possible to access objects only by invoking their methods.
Interoperability and uniform treatment. Different implementations of the same object-oriented interface should interoperate by default, and it should be easy to treat them uniformly at run time (e.g., by storing different implementations of the same interface within a single run-time data structure).
State encapsulation. All mutable state should be encapsulated within objects.
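To illustrate how C# relates to these requirements, here is a small sketch (the Temperature class and its members are hypothetical, invented for this example): a property is accessed with field-like syntax, but reads and writes go through accessor methods, which is C#'s take on the uniform access principle, and the mutable state stays encapsulated in the object:

    using System;

    public class Temperature
    {
        private double _celsius; // encapsulated mutable state

        // Callers use field-like syntax, but access goes through methods:
        public double Celsius
        {
            get => _celsius;
            set => _celsius = value;
        }

        // A derived value is accessed exactly the same way:
        public double Fahrenheit => _celsius * 9 / 5 + 32;
    }

    class Demo
    {
        static void Main()
        {
            var t = new Temperature { Celsius = 20 };
            Console.WriteLine(t.Fahrenheit); // 68
        }
    }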

Related

Why are C#'s format specifiers strings? [closed]

Why are format specifiers to base types in C# of type string instead of a less error-prone and more readable type, like enum? While I know many of the specifiers by now, I almost always have to double-check the documentation to avoid bugs or weird edge-cases. enum types and values could've easily provided that information in their comments.
C# version 1.0 was released in 2002, and object.ToString() has been a feature since the earliest versions. It is an old feature, and I understand that the development process and goals don't necessarily look the same now as they did then. However, I cannot understand the reason for not providing type-safe, well-defined, easy-to-document behavior through language features such as enums or classes instead of strings.
Of course, most older languages such as C use string format specifiers, so perhaps it's just convention? If so, why did the designers feel the need to follow that convention? (Besides, C uses the % character for specifiers, so C# already made up its own conventions.)
It's because the designers expected formats to vary a lot across cultures, countries, systems, and users (some systems let users choose their own preferred format). An enum covering all the possibilities would end up enormous, and the .NET Framework could not realistically maintain such a huge set of formats.
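A small illustration of why a fixed enum would struggle (standard library only): format strings compose standard specifiers with free-form custom patterns, and the same specifier produces different output per culture, so the set of possibilities is open-ended:

    using System;
    using System.Globalization;

    class FormatDemo
    {
        static void Main()
        {
            double value = 1234.5678;

            // Standard specifier: "N2" = number, two decimal places.
            Console.WriteLine(value.ToString("N2", CultureInfo.InvariantCulture)); // 1,234.57

            // Custom pattern with a literal: not something an enum could enumerate.
            Console.WriteLine(value.ToString("0.000 'units'", CultureInfo.InvariantCulture)); // 1234.568 units

            // The same specifier renders differently in another culture:
            Console.WriteLine(value.ToString("N2", new CultureInfo("de-DE"))); // 1.234,57
        }
    }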

Why is there no floating-point type smaller in byte size than 'float' in C#? [closed]

Hopefully this is the first question of its type on SO. C# has integral types in four sizes:
byte: 1 byte
short: 2 bytes
int: 4 bytes
long: 8 bytes
But just two floating-point types: float and double. Why did the creators not feel a need to come up with a tiny float? There is a Wikipedia article about minifloats.
To provide an attempt at an answer:
Most floating point value systems are based on the IEEE 754 standard; the systems need to be standardized so that hardware manufacturers can provide interchangeable hardware-based implementations of the formats. So, for starters, you're not going to find an implementation unless it is standardized, and the smallest format that IEEE 754 defines is binary16.
https://en.wikipedia.org/wiki/IEEE_754#Representation_and_encoding_in_memory
binary16 is smaller than the smallest type that C# provides, so we're still left with "why doesn't C# implement binary16?".
The answer is likely a combination of reasons:
C# designers might have felt that it would be an uncommonly-used type (most other languages don't implement it, after all), and providing support for a rarely used type would be an unwise choice.
C# designers might have felt that it would add complication that they would've rather not had to deal with when it came time to implement C# on other platforms, such as ARM.
Additionally, the most common platforms that C# would ostensibly run on - x86-32/64 and ARM - don't have much hardware support for binary16, or didn't have any for a long while (see F16C).
Finally, it looks like the folks behind .Net/C# are indeed considering adding support for it, along with other types like binary128:
https://github.com/dotnet/corefx/issues/17267
The software development landscape has changed a lot since C# was created, 18 years ago. Nowadays, we are starting to run a lot more code on GPUs, where we do in fact find and use binary16 types. However, when C# was first created, that might not have been seen as a viable use case. Now that C#-on-GPU is becoming bigger and more viable, it makes sense that the language designers would re-evaluate the design and evolve as the language's use evolves. And it seems that they're doing exactly that.
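As a follow-up to that last point: support did eventually land. .NET 5 introduced System.Half, an IEEE 754 binary16 type (1 sign bit, 5 exponent bits, 10 significand bits). A minimal sketch of how it behaves, assuming .NET 5 or later:

    using System;
    using System.Runtime.CompilerServices;

    class HalfDemo
    {
        static void Main()
        {
            Half h = (Half)1.5f;                      // explicit conversion from float
            Console.WriteLine(h);                     // 1.5
            Console.WriteLine(Unsafe.SizeOf<Half>()); // 2 (bytes)

            // Precision is limited to roughly three decimal digits,
            // so 1/3 prints only as a short approximation:
            Half third = (Half)(1f / 3f);
            Console.WriteLine(third);
        }
    }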

Why doesn't C# seem to care about uniformity? [closed]

I've recently been learning C#, coming from a strong C++ background, and there is something about C# that I don't quite understand, given my understanding of and experiences with C++.
In C++, people care a lot about uniformity; otherwise it would be impossible to write generic code using template meta-programming. In C#, however, people seem to care little about uniformity. For example, while array types have a Length property, List<T> uses Count. While IndexOf, LastIndexOf, and the like are static methods for array types, their counterparts for List<T> are not. This gives me the impression that instead of being uniform, C# is actually trying hard to be nonuniform. This doesn't make sense to me. Since C# doesn't support template meta-programming, uniformity is not as important as it is in C++. But still, being uniform can be beneficial in many other ways. For example, it would be easier for humans to learn and master: when things are highly uniform, you master one, and you master them all. Please note that I'm not a C++ fanatic or diehard. I just don't really understand.
You've got a conceptual issue here.
List<T>, and the other collection classes with it, aren't C# constructs. They are classes in the BCL. Essentially, you can use any BCL class in any .NET language, not just C#. If you're asking why the BCL classes differ in certain ways, it's not because the designers disrespected uniformity or didn't want it. It's probably for one of (at least) two reasons:
1) The BCL and FCL evolved over time. You're likely to see very significant differences between classes that were introduced before and after generics were added. One example, DataColumnCollection, is an IEnumerable (but not an IEnumerable<DataColumn>). That means you need to cast to perform some operations.
2) There's a subtle difference in the meaning of the two members. .Length, I believe, is meant to imply that there's a fixed number stored somewhere, whereas .Count implies that some operation might be done to get the number of items in the list.
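A short sketch of the specific differences the question raises (standard library only):

    using System;
    using System.Collections.Generic;

    class UniformityDemo
    {
        static void Main()
        {
            int[] array = { 3, 1, 4 };
            var list = new List<int> { 3, 1, 4 };

            // Arrays expose Length; List<T> exposes Count.
            Console.WriteLine(array.Length); // 3
            Console.WriteLine(list.Count);   // 3

            // Array searching is a static method; List<T>'s is an instance method.
            Console.WriteLine(Array.IndexOf(array, 4)); // 2
            Console.WriteLine(list.IndexOf(4));         // 2
        }
    }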

Clarification on Static and Primitives [closed]

Static members and primitives are not part of OOP. I have read that statics and primitives are not allowed in Scala class definitions. If this is true, then why were statics and primitives allowed in Java, C#, and a few other languages?
They are not part of "pure" Object Oriented, but sometimes "pure" gets in the way of "getting the job done".
Using primitives can make mathematical operations (in particular) much faster, and statics enable a lot of useful design patterns.
C#, C++ and Java are general-purpose programming languages. You can find, for example, elements of duck typing in C# 4.0, elements of functional programming, and many more useful constructs.
Not every program, and not every part of a program, has to be object-oriented. Use OOP when it's needed and when it serves a purpose; in C#, C++ and Java you can use other or simpler constructs whenever you feel OOP is 'firing at a mosquito with a cannon'.
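To make the "statics enable a lot of useful design patterns" point above concrete, here is a minimal sketch of one such pattern, a static factory method (the Connection class and its members are hypothetical, invented for this example):

    using System;

    public class Connection
    {
        private readonly string _host;

        // The constructor is private; construction is controlled centrally.
        private Connection(string host) => _host = host;

        // The static member is the pattern: callers need no instance to use it,
        // and validation, pooling, or caching could all live here.
        public static Connection Open(string host) => new Connection(host);

        public override string ToString() => $"Connection to {_host}";
    }

    class Demo
    {
        static void Main() => Console.WriteLine(Connection.Open("example.org"));
    }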
It's an engineering decision. Like most engineering decisions it involves carefully balancing lots of different forces on the design; in this case, the design of the language. Different languages are intended to solve different problems and have different objectives, so it's not surprising that different languages come to different conclusions.
In this case there are costs and benefits to having primitives.
Primitives can be faster.
Primitives can make it easier to do certain low-level things (e.g. writing individual bits to a register on a microcontroller in an embedded system).
On the other hand, primitives can make your programming language more complicated, since your primitive types have different syntax etc. to operate on them. Some newer languages are trying to have their cake and eat it by using primitive types under the covers but making them look like objects in terms of syntax.

How come there's no C# equivalent of Python's doctest feature? [closed]

Seems like it would be a good way to introduce some people to unit testing.
Well for one thing, the documentation for doctest talks about "interactive Python sessions". There's no equivalent of that in C#... so how would the output be represented? How would you perform all the necessary setup?
I dare say such a thing would be possible, but personally I think that at least for C#, it's clearer to have unit tests as unit tests, where you have all the benefits of the fact that you're writing code rather than comments. The code can be checked for syntactic correctness at compile-time, you have IntelliSense, syntax highlighting, debugger support etc.
If you're writing code, why not represent that as code? Admittedly it's reasonably common to include sample code in XML documentation, but that's rarely in the form of tests - and without an equivalent of an "interactive session" it would require an artificial construct to represent the output in a testable form.
I'm not saying this is a bad feature in Python - just that it's one which I don't believe maps over to C# particularly well. Languages have their own styles, and not every feature in language X will make sense in language Y.
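For contrast, here is a minimal sketch of the C# idiom the answer describes: sample usage lives in an XML <example> doc comment (not executed by anything), while the actual verification lives in a unit test. The MathUtil class is hypothetical; the test uses NUnit's real [TestFixture]/[Test] attributes:

    using NUnit.Framework;

    public static class MathUtil
    {
        /// <summary>Adds two integers.</summary>
        /// <example>
        /// <code>
        /// int sum = MathUtil.Add(2, 3); // 5
        /// </code>
        /// </example>
        public static int Add(int a, int b) => a + b;
    }

    [TestFixture]
    public class MathUtilTests
    {
        // The compiler checks this code; the doc comment above is only prose.
        [Test]
        public void Add_ReturnsSum() => Assert.That(MathUtil.Add(2, 3), Is.EqualTo(5));
    }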
