Why are C#'s format specifiers strings? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Why are format specifiers to base types in C# of type string instead of a less error-prone and more readable type, like enum? While I know many of the specifiers by now, I almost always have to double-check the documentation to avoid bugs or weird edge-cases. enum types and values could've easily provided that information in their comments.
C# 1.0 was released in 2002, and object.ToString() has been a feature since the very first version. It is an old feature, and I understand that the development process and goals don't necessarily look the same now as they did then. However, I cannot understand the reason for not providing a type-safe, well-defined, and easy-to-document alternative through language features such as enums or classes instead of strings.
Of course, most older languages such as C use string-based format specifiers, so perhaps it's just convention? If so, why did the designers feel the need to follow that convention? (Besides, C uses the % character for its specifiers, so C# already made up its own conventions anyway.)

It's because the expected formats vary a lot across cultures, countries, systems, and users (some systems let users choose their own preferred format). An enum covering all the possibilities would end up enormous, and the .NET Framework couldn't reasonably maintain such a huge number of formats.
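To make the scale of the problem concrete, here is a small illustrative sketch (values and culture names chosen arbitrarily): the standard specifiers like "C" are only the surface, while custom format strings such as "#,##0.000" or "yyyy-MM-dd" form an open-ended mini-language that a fixed enum could never enumerate.

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        double price = 1234.5678;
        DateTime date = new DateTime(2002, 2, 13);

        // Standard format specifiers: a small fixed set, but their output still
        // depends on the culture in effect.
        Console.WriteLine(price.ToString("C", CultureInfo.GetCultureInfo("en-US"))); // $1,234.57
        Console.WriteLine(price.ToString("C", CultureInfo.GetCultureInfo("fr-FR"))); // roughly 1 234,57 €

        // Custom format strings: an open-ended grammar, not a finite list of options.
        Console.WriteLine(price.ToString("#,##0.000"));                  // 1,234.568 (current culture)
        Console.WriteLine(date.ToString("yyyy-MM-dd"));                  // 2002-02-13
        Console.WriteLine(date.ToString("dddd, dd MMMM yyyy",
                          CultureInfo.GetCultureInfo("pt-BR")));         // e.g. quarta-feira, 13 fevereiro 2002
    }
}
```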

Related

Why is there no floating point type smaller in byte-size than 'float' in C#? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Hopefully this is the first question of its kind on SO. C# has four commonly used integral types:
byte: 1 byte
short: 2 bytes
int: 4 bytes
long: 8 bytes
But it has just two floating-point types: float and double. Why did the creators not feel a need to come up with a tiny float? There is a Wikipedia article about minifloats.
To provide an attempt at answer -
Most floating point value systems are based on the IEEE 754 standard; the systems need to be standardized so that hardware manufacturers can provide interchangeable hardware-based implementations of the formats. So, for starters, you're not going to find an implementation unless it is standardized, and the smallest format that IEEE 754 defines is binary16.
https://en.wikipedia.org/wiki/IEEE_754#Representation_and_encoding_in_memory
binary16 is smaller than the smallest type that C# provides, so we're still left with "why doesn't C# implement binary16?".
The answer is likely a combination of reasons:
C# designers might have felt that it would be an uncommonly used type (most other languages don't implement it, after all), and that providing support for a rarely used type would be an unwise investment.
C# designers might have felt that it would add complications they would rather not have to deal with when it came time to implement C# on other platforms, such as ARM.
Additionally, the most common platforms that C# would ostensibly run on - x86-32/64 and ARM - don't have much hardware support for binary16, or didn't have any for a long while (see F16c).
Finally, it looks like the folks behind .NET/C# are indeed considering adding support for it, along with other types like binary128:
https://github.com/dotnet/corefx/issues/17267
The software development landscape has changed a lot since C# was created, 18 years ago. Nowadays, we are starting to run a lot more code on GPUs, where we do in fact find and use binary16 types. However, when C# was first created, that might not have been seen as a viable use case. Now that C#-on-GPU is becoming bigger and more viable, it makes sense that the language designers would re-evaluate the design and evolve the language as its use evolves. And it seems that they're doing exactly that.
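For what it's worth, .NET did eventually ship a binary16 type, System.Half, starting with .NET 5. A minimal sketch of what using it looks like, assuming a .NET 5+ runtime (printed values are approximate):

```csharp
using System;

class HalfDemo
{
    static void Main()
    {
        // System.Half is the IEEE 754 binary16 type. It is a library struct, not a
        // C# keyword, so there is no literal suffix; convert from float explicitly.
        Half h = (Half)3.14159f;

        Console.WriteLine(h);             // rounded to binary16 precision (~3 significant decimal digits)
        Console.WriteLine(Half.MaxValue); // largest finite binary16 value, 65504

        // Early versions define no arithmetic operators on Half, so widen to float to compute.
        float widened = (float)h * 2.0f;
        Console.WriteLine((Half)widened);
    }
}
```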

Is using non standard English characters in c# names a bad practice? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
English is not my native language. I would like to use non-English characters in my code, for instance: "ç, ã, é ..."
Is it a bad practice to use those characters for class, variable, or method declarations?
I'm not asking which characters are technically available in C#; my question is whether or not it is a good idea.
There are no technical issues with using non-English characters: C# allows a very large number of Unicode symbols in variable names. The real question is whether or not it is a good idea.
To answer that, you really have to ask yourself a question of "who is my audience?" If your code will only be looked at by French speakers typing on a French layout keyboard, ç is probably a very valid character. However, if you intend your code to be modified by others whose keyboard layout is not French, you may find that that symbol is very hard for them to type. This will mean that, if they want to use your variable name, they'll have to cut/paste it in place because they can't type it directly. This would be a death sentence for any development.
So figure out who your audience is, and limit yourself to their keyboard layout.
It's supported. See here: http://rosettacode.org/wiki/Unicode_variable_names#C.23
Whether it's bad practice or not, it's hard to tell. If it works, it works. Personally, I'd just choose a language all the possible contributors understand.
I just tried the three characters you listed and the code compiled when I used them as variable names, so I assume they won't cause issues in your code.
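For illustration, a minimal sketch (the names are invented) showing that accented identifiers compile and behave like any others:

```csharp
using System;

class Demonstração
{
    // Accented characters are ordinary Unicode letters to the compiler, so they are
    // legal in class, method, parameter, and variable names alike.
    static double LimitarRotação(double ângulo, double máximo)
    {
        return Math.Min(ângulo, máximo);
    }

    static void Main()
    {
        double rotação = LimitarRotação(270.0, 180.0);
        Console.WriteLine(rotação); // 180
    }
}
```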

Why doesn't C# seem to care about uniformity? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I'm learning C# recently with a strong C++ background, and there is something about C# that I don't quite understand, given my understanding of and experiences with C++.
In C++, people care a lot about uniformity; otherwise it would be impossible to write generic code using template meta-programming. In C#, however, people seem to care little about uniformity. For example, while array types have a Length property, List<T> uses Count. While IndexOf, LastIndexOf, and the like are static methods for array types, their counterparts on List<T> are not. This gives me the impression that instead of being uniform, C# is actually trying hard to be non-uniform. This doesn't make sense to me. Since C# doesn't support template meta-programming, uniformity is not as important as it is in C++. But still, being uniform can be beneficial in many other ways. For example, it would be easier for humans to learn and master: when things are highly uniform, you master one and you've mastered them all. Please note that I'm not a C++ fanatic or diehard; I just don't really understand.
You've got a conceptual issue here.
List<T>, and the other collection classes alongside it, aren't C# constructs. They are classes in the BCL. Essentially, you can use any BCL class from any .NET language, not just C#. If you're asking why the BCL classes differ in certain ways, it's not because the designers disregarded or didn't want uniformity. It's probably for one of at least two reasons:
1) The BCL and FCL evolved over time. You're likely to see very significant differences between classes that were introduced before and after generics were added. One example, DataColumnCollection, is an IEnumerable (but not an IEnumerable<DataColumn>), which forces you to cast in order to perform some operations.
2) There's a subtle difference in the meaning of the members. .Length, I believe, is meant to imply that a fixed number is stored somewhere, whereas .Count implies that some operation might have to be done to get the number of items in the list.
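A small sketch of the asymmetry the question describes (values invented): arrays use a Length property and static search methods on System.Array, while List<T> uses Count and instance methods.

```csharp
using System;
using System.Collections.Generic;

class UniformityDemo
{
    static void Main()
    {
        int[] array = { 3, 1, 4, 1, 5 };
        var list = new List<int> { 3, 1, 4, 1, 5 };

        // Arrays expose Length; List<T> exposes Count.
        Console.WriteLine(array.Length); // 5
        Console.WriteLine(list.Count);   // 5

        // Searching an array goes through static methods on System.Array...
        Console.WriteLine(Array.IndexOf(array, 4));     // 2
        Console.WriteLine(Array.LastIndexOf(array, 1)); // 3

        // ...while List<T> offers the same operations as instance methods.
        Console.WriteLine(list.IndexOf(4));     // 2
        Console.WriteLine(list.LastIndexOf(1)); // 3
    }
}
```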

Should I avoid using Unicode characters in variable names? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The C# language specification says we can use Unicode characters in identifiers (class and variable names, etc.).
I went a long way doing this in my code. Since I live in Brazil, this includes lots of accented characters, with variable names such as rotação, ângulo, máximo, etc.
Every time a more "experienced" developers catches this, I am strongly advised to avoid it and change everything back. Otherwise a lot of kittens will die.
I then went quite a long way undoing it, but today I found some variables still named with accents, in methods written long ago, and no kitten has died so far (at least not because of that).
Since my language (Portuguese) is accented, it would make a lot of sense for our codebase to have those characters, and C# explicitly allows it.
Is there any sound technical reason not to use Unicode characters in C#/Visual Studio codebases?
What if you had to take over code written in Cyrillic? Most developers are comfortable with standard Latin character sets. They're easy to type on any keyboard.
I would recommend sticking to the simple set.
A few reasons not to use Unicode variable names:
They are hard for many people to type
Some Unicode characters look very similar to one another, especially to non-native speakers (see the case of the Turkish I, illustrated in the sketch below)
Some editors might not display them correctly
https://twitter.com/Stephan007/status/481001490463866880/photo/1
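As a contrived sketch of the lookalike problem mentioned above (identifiers invented; the second name uses the Turkish dotless ı, U+0131, instead of the ASCII i):

```csharp
using System;

class LookalikeDemo
{
    static void Main()
    {
        // Two distinct identifiers that render almost identically in many fonts.
        int fiyat = 1;  // plain ASCII 'i'
        int fıyat = 2;  // Turkish dotless 'ı' (U+0131)

        Console.WriteLine(fiyat + fıyat); // 3 -- but good luck telling the two apart in a code review
    }
}
```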

How come there's no C# equivalent of python's doctest feature? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Seems like it would be a good way to introduce some people to unit testing.
Well for one thing, the documentation for doctest talks about "interactive Python sessions". There's no equivalent of that in C#... so how would the output be represented? How would you perform all the necessary setup?
I dare say such a thing would be possible, but personally I think that, at least for C#, it's clearer to have unit tests as unit tests, where you get all the benefits of writing code rather than comments: the code can be checked for syntactic correctness at compile time, and you have IntelliSense, syntax highlighting, debugger support, etc.
If you're writing code, why not represent that as code? Admittedly it's reasonably common to include sample code in XML documentation, but that's rarely in the form of tests - and without an equivalent of an "interactive session" it would require an artificial construct to represent the output in a testable form.
I'm not saying this is a bad feature in Python - just that it's one which I don't believe maps over to C# particularly well. Languages have their own styles, and not every feature in language X will make sense in language Y.
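For comparison, here is a hedged sketch of how that intent is usually split in C#: illustrative sample code goes into an XML documentation <example> tag (which is never executed), while the verifiable behaviour lives in a separate unit test (assuming the xUnit framework; the method and test names are invented):

```csharp
using Xunit;

public static class MathUtil
{
    /// <summary>Adds two integers.</summary>
    /// <example>
    /// Documentation-only sample; unlike a doctest, nothing ever runs or checks it:
    /// <code>
    /// int sum = MathUtil.Add(2, 3); // 5
    /// </code>
    /// </example>
    public static int Add(int a, int b) => a + b;
}

public class MathUtilTests
{
    // The executable equivalent of a doctest: compiled, run by the test runner,
    // and failing loudly if the behaviour ever changes.
    [Fact]
    public void Add_ReturnsSumOfOperands()
    {
        Assert.Equal(5, MathUtil.Add(2, 3));
    }
}
```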
