Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The C# language specification says we can use Unicode characters in identifiers (class and variable names, etc.).
I have gone quite a long way doing this in my code. Since I live in Brazil, this means lots of accented characters, with variable names such as rotação, ângulo, máximo, etc.
Every time a more "experienced" developer catches this, I am strongly advised to avoid it and change everything back. Otherwise a lot of kittens will die.
I then went quite a long way undoing it, but today I found some variables still named with accents, in methods written long ago, and no kittens have died so far (at least not because of that).
Since my language (Portuguese) is accented, it would make a lot of sense for our codebase to contain those characters, given that C# explicitly allows it.
Is there any sound technical reason not to use Unicode characters in C#/Visual Studio codebases?
What if you had to take over code written in Cyrillic? Most developers are comfortable with standard Latin character sets. They're easy to type on any keyboard.
I would recommend sticking to the simple set.
A few reasons not to use Unicode variable names:
They are hard for many people to type
Some Unicode characters look very similar to each other, especially to non-native speakers (see the case of the Turkish I)
Some editors might not display them correctly
https://twitter.com/Stephan007/status/481001490463866880/photo/1
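A minimal sketch illustrating both sides (the identifiers are invented): accented names compile fine in C#, but visually similar characters silently denote distinct variables.

```csharp
using System;

class Exemplo
{
    static void Main()
    {
        // Accented identifiers are perfectly legal C#:
        double rotação = 90.0;
        double ângulo = rotação / 2;
        Console.WriteLine(ângulo); // 45

        // The homoglyph trap: Latin 'i' and Turkish dotless 'ı' (U+0131)
        // are different characters, so these are two distinct variables:
        int fix = 1;
        int fıx = 2;
        Console.WriteLine(fix + fıx); // 3, even though the names look alike
    }
}
```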
Closed 2 years ago.
Why are format specifiers for the base types in C# of type string, instead of a less error-prone and more readable type such as an enum? While I know many of the specifiers by now, I almost always have to double-check the documentation to avoid bugs or weird edge cases. Enum types and values could easily have provided that information in their doc comments.
C# 1.0 was released in 2002, and object.ToString() has been a feature since the earliest versions. It is an old feature, and I understand that the development process and goals don't necessarily look the same now as they did then. However, I cannot understand the reason for not using a type-safe, well-defined and easy-to-document mechanism, built on language features such as enums or classes, instead of strings.
Of course, most older languages such as C use string-typed format specifiers, so perhaps it's just convention? If so, why did they feel the need to follow that convention? (Besides, C uses the % character for specifiers, so C# already made up its own conventions.)
It's because formats vary a lot across cultures, countries, systems and users (some systems let users choose their own preferred formats). An enum covering every possibility would end up enormous, and the .NET Framework couldn't reasonably maintain such a huge set of formats.
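For concreteness, a small sketch of the string-based specifiers in question (pinned to the invariant culture here, since numeric output is otherwise culture-dependent):

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        var inv = CultureInfo.InvariantCulture;

        // Standard numeric format strings — compact, but just strings,
        // so a typo like "N@" only fails at run time:
        Console.WriteLine(1234.5.ToString("N2", inv)); // 1,234.50
        Console.WriteLine(255.ToString("X", inv));     // FF

        // Custom format strings compose freely — something a closed enum
        // could not express:
        Console.WriteLine(new DateTime(2002, 1, 1).ToString("yyyy-MM-dd", inv)); // 2002-01-01
    }
}
```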
Closed 6 years ago.
English is not my native language. I would like to use non-English characters in my code, for instance: "ç, ã, é ..."
Is it bad practice to use those characters in class, variable or method declarations?
I'm not asking which characters are technically allowed in C#; my question is whether or not it is a good idea.
There are no technical issues with using non-English characters. C# allows a very large number of Unicode symbols in identifiers. The real question is whether or not it is a good idea.
To answer that, you really have to ask yourself: who is my audience? If your code will only be looked at by French speakers typing on a French keyboard layout, ç is probably a perfectly valid character. However, if you intend your code to be modified by others whose keyboard layout is not French, you may find that the symbol is very hard for them to type. This means that, if they want to use your variable name, they'll have to copy/paste it into place because they can't type it directly. That would be a death sentence for any development.
So figure out who your audience is, and limit yourself to their keyboard layout.
It's supported. See here: http://rosettacode.org/wiki/Unicode_variable_names#C.23
Whether it's bad practice or not, it's hard to tell. If it works, it works. Personally, I'd just choose a language all the possible contributors understand.
I just tried the 3 characters you listed there and it compiled when I used them as variable names, so I assume that means they won't cause issues in your code.
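A minimal check (the variable names are invented) confirming that the characters from the question are accepted by the compiler:

```csharp
using System;

class Teste
{
    static void Main()
    {
        // ç, ã and é all belong to Unicode letter categories,
        // so they are valid in C# identifiers:
        string endereço = "Rua A";
        int situação = 1;
        bool é_válido = true;
        Console.WriteLine($"{endereço} {situação} {é_válido}");
    }
}
```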
Closed 6 years ago.
I'm learning C# with a strong C++ background, and there is something about C# that I don't quite understand, given my experience with C++.
In C++, people care a lot about uniformity; otherwise it would be impossible to write generic code using template metaprogramming. In C#, however, people seem to care little about uniformity. For example, while array types have a Length property, List<T> uses Count. While IndexOf, LastIndexOf and the like are static methods for array types, their counterparts on List<T> are not.
This gives me the impression that instead of being uniform, C# is actually trying hard to be nonuniform, which doesn't make sense to me. Since C# doesn't support template metaprogramming, uniformity is not as important as it is in C++. Still, being uniform can be beneficial in many other ways: for example, it makes the language easier for humans to learn and master. When things are highly uniform, you master one and you've mastered them all. Please note that I'm not a C++ fanatic or diehard; I just don't really understand.
You've got a conceptual issue here.
List<T>, and the other collection classes alongside it, aren't C# constructs. They are classes in the BCL. Essentially, you can use any BCL class from any .NET language, not just C#. If you're asking why the BCL classes differ in certain ways, it's not because the designers disregarded or didn't want uniformity. It's probably for one of (at least) two reasons:
1) The BCL and FCL evolved over time. You're likely to see very significant differences between classes introduced before and after generics were added. One example, DataColumnCollection, is an IEnumerable (but not an IEnumerable<DataColumn>), which forces you to cast in order to perform some operations.
2) There's a subtle difference in the meaning of the members. .Length, I believe, implies that a fixed number is stored somewhere, whereas .Count implies that some operation might be performed to get the number of items in the list.
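The nonuniformity from the question, plus the LINQ extension method that papers over part of it, can be seen in a short sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class UniformityDemo
{
    static void Main()
    {
        int[] array = { 1, 2, 3 };
        var list = new List<int> { 1, 2, 3 };

        Console.WriteLine(array.Length); // 3 — arrays expose Length
        Console.WriteLine(list.Count);   // 3 — ICollection<T> exposes Count

        // IndexOf is static for arrays but an instance method on List<T>:
        Console.WriteLine(Array.IndexOf(array, 2)); // 1
        Console.WriteLine(list.IndexOf(2));         // 1

        // LINQ's Count() works uniformly on any IEnumerable<T>:
        Console.WriteLine(array.Count()); // 3
        Console.WriteLine(list.Count());  // 3
    }
}
```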
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
How can I best split one big, complicated method into several smaller methods in C#? Is there any established guidance or tooling for this?
Assuming you are using Visual Studio to edit your C# code, there is built-in functionality for what you are trying to do: Extract Method Refactoring. Note that this is still a manual process - there is no automatic tool which knows how to take your entire method apart.
What to keep in mind while refactoring? Taken from Robert C. Martin's Clean Code:
The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that.
Functions should do one thing. They should do it well. They should do it only.
We want the code to read like a top-down narrative.
Use descriptive names.
The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification.
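A hedged sketch of what Extract Method produces (the names and logic are invented): one long method becomes a top-down narrative of small, single-purpose methods.

```csharp
using System;

class ReportPrinter
{
    // Before: parsing, computing and printing all lived in one long method.
    // After: each step is its own small, descriptively named method, and
    // Main reads like a top-down narrative.
    static void Main()
    {
        int[] values = ParseValues("3,1,2");
        double average = ComputeAverage(values);
        PrintReport(average);
    }

    static int[] ParseValues(string csv) =>
        Array.ConvertAll(csv.Split(','), int.Parse);

    static double ComputeAverage(int[] values)
    {
        int sum = 0;
        foreach (int v in values) sum += v;
        return (double)sum / values.Length;
    }

    static void PrintReport(double average) =>
        Console.WriteLine($"Average: {average}");
}
```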
Closed 9 years ago.
Can anyone tell me what would be more efficient? A large program was written in Visual C++ years ago and is now intended to be rewritten in C#. What would be better: rewriting the whole Visual C++ codebase in C#, or writing C++ DLLs to be used from the C# program via DllImport?
I guess it depends on how data-centric your code is. If you can easily separate out the functionality that does not require an interface, then you'd most likely be better off writing a DLL to utilize this functionality, and then re-writing the interface in C#.
If the program is rather interface heavy, and you do not want to go through separating out all of the data functions, then I'd just go ahead and re-write the whole thing in C#, although I'd expect to lose some performance.
Visual C++ is still a very widely used language. Is that your only reason for wanting to move to C# (e.g. finding it hard to recruit people, or lacking the skills to continue development)?
There is only a single answer to this: "it depends". We cannot possibly know this, it's something you must decide.
Check what you need in terms of time and other resources for both options. Check what benefit you gain from each. Weigh cost against benefit. Then decide.
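If you go the interop route, here is a hedged sketch of what the C# side looks like (the DLL name and exported function are hypothetical; the native side must export a matching extern "C" function with a compatible calling convention):

```csharp
using System;
using System.Runtime.InteropServices;

class Interop
{
    // Hypothetical export from the legacy C++ code, compiled as:
    //   extern "C" int ComputeChecksum(const unsigned char* data, int length);
    [DllImport("legacy.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern int ComputeChecksum(byte[] data, int length);

    static void Main()
    {
        byte[] payload = { 1, 2, 3 };
        try
        {
            Console.WriteLine(ComputeChecksum(payload, payload.Length));
        }
        catch (DllNotFoundException)
        {
            // The declaration compiles regardless; the DLL is only
            // resolved when the method is first called.
            Console.WriteLine("legacy.dll not present");
        }
    }
}
```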